Designing Frameworks for Automatic Affect Prediction and Classification in Dimensional Space
published: Aug. 24, 2011, recorded: June 2011, views: 3382
A widely accepted prediction is that computing will move into the background, weaving itself into the fabric of our everyday living spaces and projecting the human user into the foreground. To realize this prediction, next-generation computing should develop anticipatory, human-centred user interfaces: built for humans and based on naturally occurring multimodal human behaviour such as affective and social signalling.
Facial behaviour is our preeminent means of communicating affective and social signals. This talk discusses a number of components of human facial behaviour, how they can be automatically sensed and analysed by computer, the past research in this field conducted by the iBUG group at Imperial College London, and how far we are from enabling computers to understand human facial behaviour.
Download slides: gesturerecognition2011_pantic_designing_01.pdf (12.2 MB)