Human-Centered Machine Learning in a Social Interaction Assistant for Individuals with Visual Impairments
published: Jan. 19, 2010, recorded: December 2009, views: 8387
Over the last couple of decades, the increasing focus on accessibility has resulted in the design and development of several assistive technologies that aid people with visual impairments in their daily activities. Most of these devices have centered on enhancing a blind or visually impaired user's interaction with objects and environments, such as a computer monitor, personal digital assistant, cellphone, road traffic, or a grocery store. Although these efforts are essential to the quality of life of these individuals, there is also a need, so far not seriously considered, to enrich the interactions of individuals who are blind with other individuals.
Non-verbal cues (including prosody, elements of the physical environment, the appearance of communicators, and physical movements) account for as much as 65% of the information communicated during social interactions. However, more than 1.1 million individuals in the US who are legally blind (and 37 million worldwide) have only limited access to this fundamental dimension of social interaction, and they continue to face fundamental challenges in coping with everyday interactions in their social lives. The work described in this paper concerns the design and development of a Social Interaction Assistant intended to enrich the experience of social interactions for individuals who are blind by providing real-time access to information about individuals and their surroundings. Realizing such a device involves solving several challenging problems in pattern analysis and machine intelligence, such as person recognition and tracking, head and body pose estimation, gesture recognition, and expression recognition, on a wearable real-time platform. Our initial focus group studies, conducted with 27 individuals who are blind or visually impaired, identified eight significant daily challenges faced by these individuals; each raises unique machine learning problems that need to be addressed.