About
AMI (Augmented Multiparty Interaction, http://www.amiproject.org) is a newly launched (January 2004) European Integrated Project (IP) funded under Framework FP6 as part of its IST program. AMI targets computer-enhanced multimodal interaction in the context of meetings. The project aims to substantially advance the state of the art in important underpinning technologies such as human-human communication modeling, speech recognition, computer vision, and multimedia indexing and retrieval. It will also produce tools for off-line and on-line browsing of multimodal meeting data, including meeting structure analysis and summarization functions. The project will also make recorded and annotated multimodal meeting data widely available to the European research community, thereby contributing to the research infrastructure in the field.
PASCAL (Pattern Analysis, Statistical Modelling and Computational Learning, http://www.pascal-network.org) is a newly launched (December 2003) European Network of Excellence (NoE) funded as part of the IST program. The NoE brings together experts from basic research areas such as statistics, optimisation and computational learning, and from a number of application areas, with the objective of integrating research agendas and improving the state of the art in all the fields concerned.
IM2 (Interactive Multimodal Information Management, http://www.im2.ch) is a Swiss National Center of Competence in Research (NCCR) aiming at the advancement of research, and the development of prototypes, in the field of man-machine interaction. IM2 is particularly concerned with technologies that coordinate natural input modes (such as speech, image, pen, touch, hand gestures, head and/or body movements, and even physiological sensors) with multimedia system outputs, such as speech, sounds, images, 3D graphics and animation. Among other applications, IM2 targets research and development in the context of smart meeting rooms.
M4 (Multi-Modal Meeting Manager, http://www.m4project.org) is an EU IST project launched in March 2002 concerned with the construction of a demonstration system to enable structuring, browsing and querying of an archive of automatically analysed meetings. The archived meetings will have taken place in a room equipped with multimodal sensors.
Given the multiple links between AMI, PASCAL, IM2 and M4, it was decided to organize a joint workshop to bring together researchers from these communities around the common theme of advanced machine learning algorithms for processing and structuring multimodal human interaction in meetings.
Uploaded videos (all uploaded Feb 25, 2007):
- An Integrated framework for the management of video collection
- Automatic pedestrian tracking using discrete choice models and image correlation...
- Mountains, Exploration, Education, Rich Media and Design
- The NITE XML Toolkit meets the ICSI Meeting Corpus: import, annotation, and brow...
- Tandem Connectionist Feature Extraction for Conversational Speech Recognition
- An Efficient Online Algorithm for Hierarchical Phoneme Classification
- Immersive Conferencing Directions at FX Palo Alto Laboratory
- Zakim - A multimodal software system for large-scale teleconferencing
- A Programming Model for Next Generation Multimodal Applications
- Recognition of Isolated Complex Mono- and Bi-Manual 3D Hand Gestures using Discr...
- Using Static Documents as Structured and Thematic Interfaces to Multimedia Meeti...
- On the Adequacy of Baseform Pronunciations and Pronunciation Variants
- Towards Computer Understanding of Human Interactions
- Mixture of SVMs for Face Class Modeling
- S-SEER: A Multimodal Office Activity Recognition System with Selective Perceptio...
- Meeting Modelling
- A Mixed-lingual Phonological Component in Polyglot TTS Synthesis
- Artificial Companions
- Accessing Multimodal Meeting Data: Systems, Problems and Possibilities
- EU research initiatives in multimodal interaction
- Confidence Measures in Speech Recognition
- Browsing Recorded Meetings With Ferret