Multi-modal Authoring Tool for Populating a Database of Emotional Reactive Animations

author: Alejandra Rojas García, Virtual Reality Lab, École Polytechnique Fédérale de Lausanne
published: Feb. 25, 2007; recorded: June 2005; views: 3508




We aim to create a model of emotionally reactive virtual humans. A large set of pre-recorded animations will be used to obtain such a model. We have defined a knowledge-based system that stores animations of reflex movements, taking personality and emotional state into account. Populating such a database is a complex task. In this paper we describe a multimodal authoring tool that provides a solution to this problem. Our multimodal tool makes use of motion-capture equipment, a handheld device, and a large projection screen.
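To make the idea of a knowledge-based animation store concrete, the sketch below shows one minimal way such a database could be organized: pre-recorded reflex clips indexed by a (personality, emotional state) key. All names here (`AnimationClip`, `ReflexDatabase`, the trait and emotion labels) are illustrative assumptions, not the authors' actual schema.

```python
from dataclasses import dataclass

# Hypothetical sketch of a store for pre-recorded reflex animations,
# indexed by personality trait and emotional state as the abstract
# describes. Names and fields are assumptions for illustration only.

@dataclass(frozen=True)
class AnimationClip:
    name: str          # e.g. a motion-capture take identifier
    duration_s: float  # clip length in seconds

class ReflexDatabase:
    """Maps (personality, emotion) keys to candidate reflex animations."""

    def __init__(self):
        self._store = {}

    def add(self, personality: str, emotion: str, clip: AnimationClip):
        # Several clips may share a key, so each key holds a list.
        self._store.setdefault((personality, emotion), []).append(clip)

    def lookup(self, personality: str, emotion: str):
        return self._store.get((personality, emotion), [])

db = ReflexDatabase()
db.add("extrovert", "fear", AnimationClip("mocap_take_012", 1.8))
db.add("introvert", "fear", AnimationClip("mocap_take_031", 2.4))

print([c.name for c in db.lookup("extrovert", "fear")])
```

A real authoring tool would attach richer metadata (body part, intensity, blending constraints) to each clip, but the key design point is the same: retrieval is driven by the character's personality and current emotional state.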


Download slides: mlmi04uk_garcia_mmatp_01.pdf (618.1 KB)

