Temporal segmentation and recognition of actions performed throughout a video have numerous applications in robotics, medical science, surveillance, and other fields. They play a crucial role in Minimally Invasive Robotic Surgery (MIRS), where the results can help identify skill deficiencies, predict the most probable future gesture, and improve the quality of feedback provided during surgical training. The current state-of-the-art techniques for MIRS are based on kinematic data; however, recent works have found video data to be equally discriminative. In this work, video-based action segmentation is performed using a bidirectional Long Short-Term Memory (LSTM) network originally designed for kinematic data only. The model was further improved to make predictions from both kinematic and video data. Using both modalities, our model achieves competitive performance on the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS). Further, the user is provided with the top three most likely gesture predictions along with an estimate of the model's confidence in each. Additionally, the model was evaluated on a new surgical activity dataset, the MIRO dataset, collected using DLR's MiroSurge system.
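The abstract describes a bidirectional LSTM that consumes per-frame kinematic and video features and reports the top three gesture predictions with confidences. Below is a minimal sketch of that idea, not the thesis's actual implementation: the feature dimensions, hidden size, number of gesture classes, and the simple feature-concatenation fusion are all illustrative assumptions.

```python
# Minimal sketch (assumed architecture, not the author's code) of a
# bidirectional LSTM gesture segmenter over kinematic + video features.
import torch
import torch.nn as nn

class BiLSTMGestureSegmenter(nn.Module):
    def __init__(self, kin_dim=76, vid_dim=128, hidden=256, num_gestures=15):
        # kin_dim/vid_dim/hidden/num_gestures are illustrative assumptions.
        super().__init__()
        self.lstm = nn.LSTM(kin_dim + vid_dim, hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_gestures)

    def forward(self, kin, vid):
        # kin: (batch, time, kin_dim), vid: (batch, time, vid_dim)
        x = torch.cat([kin, vid], dim=-1)      # simple per-frame fusion
        out, _ = self.lstm(x)                  # (batch, time, 2*hidden)
        return self.head(out)                  # per-frame gesture logits

model = BiLSTMGestureSegmenter()
kin = torch.randn(1, 200, 76)                  # 200 frames of kinematic data
vid = torch.randn(1, 200, 128)                 # matching visual features
probs = torch.softmax(model(kin, vid), dim=-1)
conf, top3 = probs.topk(3, dim=-1)             # top-3 gestures and confidences per frame
```

The top-3 predictions and their softmax scores correspond to the per-frame confidence estimates mentioned in the abstract, under the assumption that confidence is read directly from the output distribution.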
Recognition and Segmentation of Surgical Gestures
2019-12-11
Miscellaneous
Electronic Resource
English
Exemplar-Based Tracking and Recognition of Arm Gestures | British Library Conference Proceedings | 2003
Exemplar-based tracking and recognition of arm gestures | IEEE | 2003
Visual recognition of pointing gestures for human-robot interaction | British Library Online Contents | 2007