Segmentation-robust Representations, Matching, and Modeling for Sign Language Recognition
published: Aug. 24, 2011, recorded: June 2011, views: 3704
Distinguishing true signs from the transitional, extraneous movements a signer makes while moving from one sign to the next is a serious hurdle in the design of continuous Sign Language recognition systems. The problem is further compounded by ambiguity in segmentation and by occlusions, which propagate errors to higher levels of processing. This talk describes our experience with representations and matching methods, particularly those that can handle errors in low-level segmentation and uncertainty in the segmentation of signs within sentences. We have formulated a novel framework that addresses both problems: (i) a nested level-building dynamic programming approach for when training data is scarce, and (ii) an HMM-based approach, generalized to handle multiple possible observations, for when we have statistical models of signs. We will also discuss an automated approach that extracts and learns models for signs from continuous sentences in a weakly unsupervised manner, which can help build training data for the recognition process. Our publications on these topics can be found at http://marathon.csee.usf.edu/ASL/.
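To make the level-building idea concrete, here is a minimal sketch of classic level-building dynamic programming for explaining a continuous feature stream as a concatenation of sign templates. This is an illustration of the general technique only, not the authors' nested formulation; the function names (`match_cost`, `level_building`) and the resampling-based segment cost are hypothetical stand-ins for the DTW/HMM scores used in the actual work.

```python
import numpy as np

def match_cost(frames, template):
    # Hypothetical per-segment cost: resample the candidate segment to the
    # template's length and take the mean Euclidean frame distance.
    # (A stand-in for the DTW or HMM likelihood scores in the real system.)
    idx = np.linspace(0, len(frames) - 1, len(template)).astype(int)
    return float(np.mean(np.linalg.norm(frames[idx] - template, axis=1)))

def level_building(frames, templates, max_levels):
    """Explain frames[0:T] as a concatenation of up to max_levels signs.

    best[l][t] holds (cost, labels): the cheapest way to cover the first
    t frames with exactly l signs drawn from `templates`.
    """
    T = len(frames)
    INF = float("inf")
    best = [[(INF, []) for _ in range(T + 1)] for _ in range(max_levels + 1)]
    best[0][0] = (0.0, [])  # zero frames, zero signs, zero cost

    for level in range(1, max_levels + 1):
        for t in range(1, T + 1):
            for s in range(t):  # s..t is the candidate segment for this level
                prev_cost, prev_labels = best[level - 1][s]
                if prev_cost == INF:
                    continue
                for name, tpl in templates.items():
                    cost = prev_cost + match_cost(frames[s:t], tpl)
                    if cost < best[level][t][0]:
                        best[level][t] = (cost, prev_labels + [name])

    # Best full explanation over all sentence lengths (number of signs).
    return min((best[l][T] for l in range(1, max_levels + 1)),
               key=lambda x: x[0])
```

The point of the nesting in the talk's formulation is that each segment boundary `s..t` is itself uncertain under low-level segmentation errors; the sketch above assumes clean frame boundaries for simplicity.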
Disclaimer: There may be mistakes or omissions in the interpretation as the interpreters are not experts in the field of interest and performed a simultaneous translation without comprehensive preparation.
Download slides: gesturerecognition2011_sarkar_segmentation_01.pdf (2.1 MB)