Automatic and Efficient Long Term Arm and Hand Tracking for Continuous Sign Language TV Broadcasts

Published on Oct 09, 2012 · 3742 views

We present a fully automatic arm and hand tracker that detects joint positions over continuous sign language video sequences of more than an hour in length. Our framework replicates the state-of-the…

Chapter list

Automatic and Efficient Long Term Arm and Hand Tracking for Continuous Sign Language TV Broadcasts 00:00
Motivation 00:13
Objective 02:01
Difficulties 03:09
Overview - 1 04:13
Related work 05:00
Our work – automatic and fast! 06:40
Overview - 2 07:10
The problem 07:20
One solution: depth data (e.g. Kinect) 07:39
Constancies 08:02
Co-segmentation 08:41
Co-segmentation – overview 09:28
Backgrounds 10:23
Foreground colour model 11:26
Qualitative co-segmentation results 12:23
Overview - 3 13:05
Colour model 13:18
Overview - 4 14:48
Joint position estimation 15:04
Random Forests 16:37
Evaluation: comparison to Buehler et al. - 1 17:59
Evaluation: comparison to Buehler et al. - 2 18:26
Evaluation: quantitative results 18:52
Evaluation: problem cases 19:21
Evaluation: generalisation to new signers 19:37
Conclusion 19:55