Bayesian models of human inductive learning

author: Joshua B. Tenenbaum, Center for Future Civic Media, Massachusetts Institute of Technology (MIT)
published: June 22, 2007,   recorded: June 2007,   views: 4226
Description

In everyday learning and reasoning, people routinely draw successful generalizations from very limited evidence. Even young children can infer the meanings of words, hidden properties of objects, or the existence of causal relations from just one or a few relevant observations -- far outstripping the capabilities of conventional learning machines. How do they do it? And how can we bring machines closer to these human-like learning abilities? I will argue that people's everyday inductive leaps can be understood as approximations to Bayesian computations operating over structured representations of the world, what cognitive scientists have called "intuitive theories" or "schemas". For each of several everyday learning tasks, I will consider how appropriate knowledge representations are structured and used, and how these representations could themselves be learned via Bayesian methods. The key challenge is to balance the need for strongly constrained inductive biases -- critical for generalization from very few examples -- with the flexibility to learn about the structure of new domains, and to learn new inductive biases suitable for environments in which we could not have been pre-programmed to perform. The models I discuss will connect to several directions in contemporary machine learning, such as semi-supervised learning, structure learning in graphical models, hierarchical Bayesian modeling, and nonparametric Bayes.
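To make the abstract's central idea concrete -- Bayesian generalization from just a few examples over a structured hypothesis space -- here is a minimal sketch in the spirit of Tenenbaum's well-known "number game". The hypothesis space, prior, and example data below are illustrative assumptions, not taken from the lecture itself; the point is only how the size principle (smaller consistent hypotheses get higher likelihood) lets a handful of observations pick out a strong generalization.

```python
# Illustrative sketch (assumed example, not from the lecture): Bayesian
# concept learning over a hand-built hypothesis space of number concepts.
# Likelihood uses the "size principle": each example is assumed to be
# sampled uniformly from the true concept, so smaller hypotheses that
# still cover the data are favored -- increasingly so with more examples.

# Hypothetical hypothesis space over the integers 1..100.
hypotheses = {
    "even":            {n for n in range(1, 101) if n % 2 == 0},
    "odd":             {n for n in range(1, 101) if n % 2 == 1},
    "powers of 2":     {2 ** k for k in range(1, 7)},   # 2, 4, ..., 64
    "multiples of 10": set(range(10, 101, 10)),
}

def posterior(data):
    """P(h | data) under a uniform prior and size-principle likelihood."""
    scores = {}
    for name, h in hypotheses.items():
        if all(x in h for x in data):
            # P(data | h) = (1 / |h|)^n for n independent uniform draws.
            scores[name] = (1.0 / len(h)) ** len(data)
        else:
            scores[name] = 0.0  # hypothesis inconsistent with the data
    z = sum(scores.values())
    return {name: s / z for name, s in scores.items()}

post = posterior([16, 8, 2])
```

After seeing just 16, 8, and 2, both "even" and "powers of 2" remain consistent, but the much smaller "powers of 2" hypothesis dominates the posterior -- a toy version of the few-example generalization the abstract describes.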


Download slides: icml07_tenenbaum_bmhi.ppt (13.3 MB)



Reviews and comments:

Comment1 Lazar, July 30, 2007 at 7:26 a.m.:

I enjoyed every minute of this great lecture. This should be a mandatory AI educational piece, and I will recommend it to everyone I know who is interested in the subject.


Comment2 AIier, August 1, 2007 at 9:07 a.m.:

Truly amazing lecture, highly recommended. People who like old-school AI's "structured" representations will get to know how to achieve that using mainstream "statistical" approaches. Thanks Josh!


Comment3 Eugen Hotwagner, December 13, 2007 at 1:14 p.m.:

Impressive lecture. Coming from a psychology background myself, I found this the most informative and insightful presentation of the subject I have seen. In my opinion, the lack of rigor and practical orientation renders psychological approaches useless most of the time.
