Audio-Visual Speech Analysis & Recognition
published: Feb. 14, 2008, recorded: February 2008, views: 947
Human speech production and perception mechanisms are essentially bimodal. Interesting evidence for this audiovisual nature of speech is provided by the so-called McGurk effect. To properly account for the complementary visual modality, we propose a unified framework for analysing speech and present related findings in applications such as audiovisual speech inversion and recognition. The speaker's face is analysed by means of Active Appearance Modelling, and the extracted visual features are integrated with simultaneously extracted acoustic features either to recover the underlying articulatory properties, e.g., the movement of the speaker's tongue tip, or to recognize the recorded utterance, e.g., the sequence of digits uttered. Possible asynchrony between the audio and visual streams is also taken into account. For recognition, we further exploit the feature uncertainty reported by the corresponding front-ends to achieve adaptive fusion. Experimental results are presented on the QSMT, MOCHA and CUAVE audiovisual databases.
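The abstract does not specify the exact adaptive fusion rule used; below is a minimal illustrative sketch of one common uncertainty-driven scheme, inverse-variance (precision-weighted) combination of the audio and visual feature streams. The function name `fuse_streams` and the toy data are hypothetical, not taken from the lecture.

```python
import numpy as np

def fuse_streams(audio_feat, audio_var, visual_feat, visual_var):
    """Precision-weighted fusion of two feature streams (illustrative sketch).

    Each stream supplies a per-frame feature estimate and an uncertainty
    (variance) reported by its front-end. Frames where one stream is
    unreliable (large variance) automatically contribute less to the fused
    estimate.
    """
    w_a = 1.0 / audio_var            # precision of the audio stream
    w_v = 1.0 / visual_var           # precision of the visual stream
    fused = (w_a * audio_feat + w_v * visual_feat) / (w_a + w_v)
    fused_var = 1.0 / (w_a + w_v)    # uncertainty of the fused estimate
    return fused, fused_var

# Toy example: 3 frames x 2 feature dimensions; frame 2 of the audio
# stream is marked as noisy, so the visual stream dominates there.
audio = np.array([[1.0, 0.20], [0.9, 0.10], [1.1, 0.30]])
audio_var = np.array([[0.05, 0.05], [0.50, 0.50], [0.05, 0.05]])
visual = np.array([[0.8, 0.25], [1.0, 0.15], [0.7, 0.35]])
visual_var = np.full_like(visual, 0.10)

fused, fused_var = fuse_streams(audio, audio_var, visual, visual_var)
print(fused)
```

In such a scheme the fusion weights adapt per frame: when the visual front-end is degraded (e.g., occlusion), its variance grows and the acoustic features dominate, and vice versa under acoustic noise.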