From activity to language: learning to recognise the meaning of motion

Published on Aug 24, 2011 · 3753 views

Whether the task is recognising an atomic action of an individual or their implied activity, handling the continuous multichannel nature of sign language, or reading the appearance of words on the lips, all of these problems come down to learning to recognise the meaning of motion.


Chapter list

From Activity to Language (00:00)
Overview (00:12)
Activity Recognition (00:42)
Action/Activity Recognition - 1 (00:44)
Action/Activity Recognition - 2 (04:01)
KTH Action Recognition (04:55)
Hollywood Action Recognition (05:09)
Video Mining and Grouping (06:32)
Results – YouTube dataset (07:24)
Sign Recognition (08:39)
Sign Language Recognition (08:41)
Sign Language (10:01)
Mining Signs (10:45)
Sign Language Recognition (11:43)
HamNoSys (13:26)
HamNoSys Example (13:40)
Motion Features (14:50)
Mapping Hands to HamNoSys (14:57)
Handshape demonstrator (15:32)
Motion Features (16:13)
Dictionary Overview (16:27)
Results (17:13)
Live Demo (18:11)
Kinect Demo (18:31)
Moving to 3D features (19:25)
Scene Particle approach (19:44)
Scene Particles (20:00)
3D Tracking (20:17)
Kinect Data Set (20:41)
3D Kinect Results (20:55)
Facial Feature Tracking - 1 (21:49)
Facial Feature Tracking - 2 (21:51)
Linear Predictors - 1 (22:06)
Linear Predictors - 2 (23:02)
Linear Predictors - 3 (23:04)
Tracking lips with Linear Predictors (23:17)
Facial Feature Tracking (24:06)
Sequential Patterns - 1 (24:44)
Sequential Patterns - 2 (24:45)
Sequential Patterns - 3 (24:46)
Sequential Patterns - 4 (24:46)
Sequential Patterns - 5 (24:51)
Sequential Patterns - 6 (24:52)
Sequential Patterns - 7 (24:53)
Sequential Patterns - 8 (24:54)
Sequential Patterns - 9 (25:24)
Sequential Patterns - 10 (25:24)
Sequential Patterns - 11 (25:27)
Video (25:27)
Conclusions (26:06)