About
Unconsciously, humans evaluate situations based on environmental and social parameters when recognizing emotions in social interactions. Without context, even humans may misunderstand observed facial, vocal, or bodily behavior. Contextual information, such as the ongoing task, the identity and natural expressiveness of the individual, and the intra- and inter-personal context, helps us interpret and respond to social interactions. These considerations suggest that attention to context information can deepen our understanding of affect communication and support reliable real-world affect-related applications.
Building upon the success of previous CBAR workshops, the key aim of this third workshop is to explore how computer vision can address the challenging task of automatic extraction and recognition of context information in real-world applications. Specifically, we wish to exploit advances in computer vision and machine learning for real-time scene analysis and understanding, including tracking and recognition of human actions, gender recognition, age estimation, and object recognition and tracking, for real-time context-based visual, vocal, or audiovisual affect recognition.
The key aim of the workshop is to explore the challenges, benefits, and drawbacks of integrating context information into affect production, interpretation, and recognition. We wish to investigate cutting-edge methods and methodologies in computer vision and machine learning that can be applied to (1) detect and interpret context information in social interaction and/or human-machine interaction, and (2) train and validate classifiers to achieve fully automatic multimodal, context-based affect recognition.
The workshop is relevant to FG given the challenging nature of research on context-based affect recognition and its wide range of applications, including but not limited to intelligent video surveillance, human-computer interaction, intelligent humanoid robots, and clinical diagnosis (e.g., pain and depression assessment). The workshop focuses on making affect recognition more robust and useful in real-world settings (e.g., work, home, school, and healthcare environments). We solicit high-quality papers from a variety of fields, such as computer vision, pattern recognition, and behavioral science, that use innovative and promising approaches to extract, interpret, and/or include contextual information in audiovisual affect recognition, and that show how such approaches can improve existing frameworks for human-centered affect recognition.
For its third year, the workshop aims to bring together scientists working in related areas of machine learning and computer vision, scene analysis, ambient computing, and smart environments to share their expertise and achievements in the emerging field of automatic, context-based visual, audio, and multimodal affect analysis and recognition.
Uploaded videos:
The engines of emotion: towards a shared understanding of the work they do (Jul 02, 2015 · 1267 views)
The influence of context on emotion recognition in humans (Jul 02, 2015 · 1202 views)
Person-specific Behavioral Features for Automatic Stress Detection (Jul 02, 2015 · 1416 views)