
Multi-modal Authoring Tool for Populating a Database of Emotional Reactive Animations
author: Alejandra Rojas García,
Virtual Reality Lab, École Polytechnique Fédérale de Lausanne
published: Feb. 25, 2007, recorded: June 2005, views: 3508
Description
We aim to create a model of emotional reactive virtual humans. A large set of pre-recorded animations will be used to obtain such a model. We have defined a knowledge-based system that stores animations of reflex movements, taking personality and emotional state into account. Populating such a database is a complex task. In this paper we describe a multimodal authoring tool that provides a solution to this problem. Our multimodal tool makes use of motion capture equipment, a handheld device and a large projection screen.
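The knowledge-based system described above stores pre-recorded reflex animations indexed by personality and emotional state. A minimal sketch of such a keyed store might look as follows; all class and field names here are hypothetical illustrations, not the authors' actual implementation:

```python
from dataclasses import dataclass

# Hypothetical key: a reflex animation is retrieved by the combination
# of personality trait, emotional state, and reflex type.
@dataclass(frozen=True)
class AnimationKey:
    personality: str   # e.g. "extrovert"
    emotion: str       # e.g. "fear"
    reflex: str        # e.g. "startle"

@dataclass
class AnimationClip:
    name: str
    frames: int        # length of the pre-recorded motion-capture clip

class AnimationDatabase:
    """Illustrative store mapping (personality, emotion, reflex) to clips."""

    def __init__(self) -> None:
        self._clips: dict[AnimationKey, list[AnimationClip]] = {}

    def add(self, key: AnimationKey, clip: AnimationClip) -> None:
        # Several recorded clips may share one key; keep them all.
        self._clips.setdefault(key, []).append(clip)

    def lookup(self, key: AnimationKey) -> list[AnimationClip]:
        # Return every clip matching the requested state, or an empty list.
        return self._clips.get(key, [])

db = AnimationDatabase()
key = AnimationKey("extrovert", "fear", "startle")
db.add(key, AnimationClip("startle_fast", frames=120))
print(len(db.lookup(key)))  # 1
```

The authoring tool populates exactly this kind of mapping: each motion-capture session yields clips that are filed under the personality and emotion labels chosen by the author.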