Building Watson: An Overview of DeepQA for the Jeopardy! Challenge

author: David Ferrucci, IBM Thomas J. Watson Research Center
published: Sept. 28, 2011,   recorded: August 2011



Computer systems that can directly and accurately answer people's questions over a broad domain of human knowledge have been envisioned by scientists and writers since the advent of computers themselves. Open-domain question answering holds tremendous promise for facilitating informed decision making over vast volumes of natural language content. Applications in business intelligence, healthcare, customer support, enterprise knowledge management, social computing, science and government would all benefit from deep language processing. The DeepQA project is aimed at exploring how advancing and integrating natural language processing, information retrieval, machine learning, massively parallel computation, and knowledge representation and reasoning can greatly advance open-domain automatic question answering. An exciting proof point in this challenge is to develop a computer system that can successfully compete against top human players at the Jeopardy! quiz show. Attaining champion-level performance on Jeopardy! requires a computer system to rapidly and accurately answer rich open-domain questions, and to predict its own performance on any given category/question. The system must deliver high degrees of precision and confidence over a very broad range of knowledge and natural language content with a 3-second response time. To do this, DeepQA evidences and evaluates many competing hypotheses. A key to success is automatically learning and combining accurate confidences across an array of complex algorithms and over different dimensions of evidence. Accurate confidences are needed to know when to “buzz in” against your competitors and how much to bet. High precision and accurate confidence computations are just as critical for providing real value in business settings, where helping users focus on the right content sooner and with greater confidence can make all the difference.
The need for speed and high precision demands a massively parallel computing platform capable of generating, evaluating and combining thousands of hypotheses and their associated evidence. In this talk I will introduce the audience to the Jeopardy! Challenge and how we tackled it using DeepQA.
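The confidence-combination idea sketched in the abstract — scoring competing candidate answers along several evidence dimensions, merging those scores into a single learned confidence, and buzzing in only when that confidence clears a threshold — can be illustrated with a toy logistic combiner. This is a minimal sketch, not IBM's actual implementation: the evidence dimensions, weights, threshold, and candidate answers below are all assumed for the example.

```python
import math

def confidence(evidence_scores, weights, bias=0.0):
    """Combine per-dimension evidence scores into a confidence in [0, 1]
    via a logistic function (weights would be learned from training data)."""
    z = bias + sum(w * s for w, s in zip(weights, evidence_scores))
    return 1.0 / (1.0 + math.exp(-z))

def should_buzz(conf, threshold=0.5):
    """Buzz in only when the top answer's confidence clears a threshold."""
    return conf >= threshold

# Assumed weights over three hypothetical evidence dimensions,
# e.g. passage support, answer-type match, source reliability.
weights = [2.0, 1.5, 0.5]

# Two competing candidate answers with their (assumed) evidence scores.
candidates = {
    "Toronto": [0.2, 0.1, 0.9],
    "Chicago": [0.9, 0.8, 0.6],
}

# Rank hypotheses by combined confidence; keep the best one.
ranked = sorted(candidates.items(),
                key=lambda kv: confidence(kv[1], weights),
                reverse=True)
best, best_scores = ranked[0]
conf = confidence(best_scores, weights)
```

In this toy setup "Chicago" outranks "Toronto" because its evidence scores are stronger on the heavily weighted dimensions; the real system combines many more scorers, but the shape of the decision — rank hypotheses by learned confidence, then act only above a threshold — is the same.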

Download slides: aaai2011_ferrucci_building_01.pdf (4.0 MB)
