Towards Surgical Context Inference and Translation to Gestures
Kay Hutchinson, Zongyu Li, Ian Reyes, Homa Alemzadeh
Manual labeling of gestures in robot-assisted surgery is labor-intensive, prone to errors, and requires expertise or training. We propose a method for the automated and explainable generation of gesture transcripts that leverages the abundance of data available for image segmentation. Surgical context is detected from segmentation masks by examining the distances and intersections between the tools and objects...
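As a rough illustration of the mask-based idea described above, the sketch below checks whether two binary segmentation masks (e.g., a tool and an object) intersect and, if not, computes the minimum pixel distance between them. The function name and interface are hypothetical, assumed for illustration only, and are not the authors' implementation.

```python
import numpy as np

def mask_contact(tool_mask, object_mask):
    """Hypothetical helper: return (intersects, min_distance_px)
    between two binary masks, sketching how surgical context
    (e.g., tool-object contact) could be inferred from segmentation.
    """
    tool = np.asarray(tool_mask, dtype=bool)
    obj = np.asarray(object_mask, dtype=bool)
    # If either mask is empty, there is no meaningful distance.
    if not tool.any() or not obj.any():
        return False, float("inf")
    # Overlapping pixels mean the masks intersect (distance 0).
    if np.logical_and(tool, obj).any():
        return True, 0.0
    # Otherwise, take the minimum pairwise Euclidean distance
    # between tool pixels and object pixels via broadcasting.
    tool_pts = np.argwhere(tool)
    obj_pts = np.argwhere(obj)
    diffs = tool_pts[:, None, :] - obj_pts[None, :, :]
    min_d = float(np.sqrt((diffs ** 2).sum(axis=-1)).min())
    return False, min_d
```

A real pipeline would likely threshold this distance per tool-object pair to produce discrete context labels before translating them to gestures.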


