VACE Multimodal Meeting Corpus

Published on Feb 25, 2007 · 3665 Views

In this paper, we report on the infrastructure we have developed to support our research on multimodal cues for understanding meetings. With our focus on multimodality, we investigate the interaction …

Chapter list

00:06 VACE Multimodal Meeting Corpus (Lei Chen, Travis Rose, Fey Parrill, Xu Han, Jilin Tu, Zhongqiang Huang, Mary Harper, Francis Quek, David McNeill, Ronald Tuttle, and Thomas Huang)
00:58 Corpus Rationale
01:54 Why Multimodal Language Analysis?
02:46 Multimodal Language Example
04:36 Embodied Communicative Behavior
06:36 In a Nutshell
09:07 ARDA/VACE Program
10:04 From Video to Information: Cross-Modal Analysis for Planning Meetings
12:10 Team
12:57 Overarching Approach
13:49 Scenarios
14:52 Scenarios (cont'd)
15:01 Scenario Development
15:10 Meulaboh, Indonesia
15:52 Corpus Assembly
15:55 Data Acquisition & Processing
16:04 Meeting Room and Camera Configuration
16:09 Cam1
16:15 Global & Pairwise Camera Calibration
16:33 VICON Motion Capture
16:54 VICON Motion Capture
17:00 Speech Processing Tasks
17:08 Audio Processing
17:37 VACE Metadata Approach
18:20 Data Collection Status
18:24 Some Multimodal Meeting Room Results
18:27 Gaze - NIST July 29, 2003 Data
20:01 Gaze - AFIT data
20:28 F-formation analysis
20:29 Summary