Toward Adaptive Information Fusion in Multimodal Systems
published: April 14, 2007, recorded: July 2005, views: 3916
Techniques for information fusion are at the heart of multimodal system design. In this talk, I'll summarize recent work on predictive modeling of users' multimodal integration patterns, including the following findings:
(1) there are large individual differences in users' dominant speech and pen multimodal integration patterns;
(2) these patterns can be identified almost immediately and remain highly consistent for individual users over time;
(3) they are highly resistant to change, even when users are given strong selective reinforcement or explicit instructions to switch patterns;
(4) these distinct patterns appear to derive from enduring differences among users in cognitive style.
I'll also discuss findings on the systematic entrenchment of users' dominant multimodal integration pattern under load, including as task difficulty increases and during error handling. I'll conclude by highlighting work we are now pursuing that combines predictive user modeling with machine learning techniques to accelerate, generalize, and improve the reliability of information fusion during multimodal system processing. Finally, I'll discuss the implications of this research for the design of adaptive multimodal systems with substantially improved performance characteristics.
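As an illustration of what identifying a user's dominant integration pattern might look like, here is a minimal sketch (not from the talk) that classifies each speech-and-pen command as "simultaneous" (the two signals overlap in time) or "sequential" (one follows the other) and takes a majority vote over the user's command history. The two-way pattern taxonomy, the `Command` structure, and the timing thresholds are all illustrative assumptions, not the speakers' method.

```python
# Illustrative sketch: classifying a user's dominant multimodal
# integration pattern from speech/pen signal timing. The two pattern
# classes ("simultaneous" vs. "sequential") and the data layout are
# assumptions for this example, not the system described in the talk.
from dataclasses import dataclass

@dataclass
class Command:
    """Onset/offset times (seconds) of the speech and pen signals
    in one multimodal command."""
    speech_on: float
    speech_off: float
    pen_on: float
    pen_off: float

    def is_simultaneous(self) -> bool:
        # The two signals overlap if each starts before the other ends.
        return self.speech_on < self.pen_off and self.pen_on < self.speech_off

def dominant_pattern(commands: list[Command]) -> str:
    """Return the user's dominant pattern by majority vote over commands."""
    simultaneous = sum(c.is_simultaneous() for c in commands)
    return "simultaneous" if simultaneous * 2 > len(commands) else "sequential"

# Example history: two overlapping commands, one where the pen stroke
# follows the speech -> dominant pattern is "simultaneous".
history = [
    Command(0.0, 1.2, 0.5, 1.0),   # speech and pen overlap
    Command(2.0, 3.0, 2.4, 2.9),   # speech and pen overlap
    Command(5.0, 5.8, 6.1, 6.5),   # pen stroke follows speech
]
print(dominant_pattern(history))   # -> simultaneous
```

A fusion engine could use such a classification to adapt its temporal thresholds per user, consistent with the talk's point that patterns are identifiable almost immediately and stable over time.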