Toward Text-to-Picture Synthesis
published: Jan. 19, 2010, recorded: December 2009, views: 104
It is estimated that more than 2 million people in the United States have significant communication impairments and therefore rely on methods other than natural speech alone to communicate. One commonly used type of augmentative and alternative communication (AAC) system is pictorial communication software such as SymWriter, which uses a lookup table to transliterate each word (or common phrase) in a sentence into an icon. This is an example of converting information between modalities. However, the resulting sequence of icons can be difficult to understand. We have been developing general-purpose Text-to-Picture (TTP) synthesis algorithms [10, 5] that use machine learning to improve understandability. Our goal is to help users with special needs, such as the elderly or those with disabilities, rapidly browse documents through pictorial summaries (e.g., Figure 5). Our TTP system targets general English. This distinguishes it from other pictorial conversion systems that require hand-crafted narrative descriptions of a scene [1, 9], 3D models, or special domains. Instead, we use a concatenative, or "collage," approach. In this talk, we discuss how machine learning enables the key components of our TTP system.
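The lookup-table transliteration used by AAC software of the kind described above can be sketched in a few lines. This is an illustrative toy, not SymWriter's actual implementation: the table entries, icon filenames, and the policy of skipping unmapped words are all assumptions made for the example.

```python
# Hypothetical word-to-icon lookup table; real AAC systems ship
# with tables covering thousands of words and common phrases.
ICON_TABLE = {
    "dog": "dog.png",
    "run": "run.png",
    "park": "park.png",
}

def transliterate(sentence: str) -> list[str]:
    """Map each word in the sentence to an icon via the lookup table.

    Words with no table entry (e.g., function words) are simply
    skipped here, which is one reason the resulting icon sequence
    can be hard to read back as the original sentence.
    """
    icons = []
    for word in sentence.lower().split():
        icon = ICON_TABLE.get(word.strip(".,!?"))
        if icon is not None:
            icons.append(icon)
    return icons

print(transliterate("The dog can run in the park."))
# → ['dog.png', 'run.png', 'park.png']
```

Because the mapping is word-by-word, grammatical structure and dropped words are lost, which motivates the learned TTP approach discussed in the talk.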