Bayesian models of human inductive learning
published: June 22, 2007, recorded: June 2007, views: 27344
In everyday learning and reasoning, people routinely draw successful generalizations from very limited evidence. Even young children can infer the meanings of words, hidden properties of objects, or the existence of causal relations from just one or a few relevant observations -- far outstripping the capabilities of conventional learning machines. How do they do it? And how can we bring machines closer to these human-like learning abilities? I will argue that people's everyday inductive leaps can be understood as approximations to Bayesian computations operating over structured representations of the world, what cognitive scientists have called "intuitive theories" or "schemas". For each of several everyday learning tasks, I will consider how appropriate knowledge representations are structured and used, and how these representations could themselves be learned via Bayesian methods. The key challenge is to balance the need for strongly constrained inductive biases -- critical for generalization from very few examples -- with the flexibility to learn about the structure of new domains, to learn new inductive biases suitable for environments which we could not have been pre-programmed to perform in. The models I discuss will connect to several directions in contemporary machine learning, such as semi-supervised learning, structure learning in graphical models, hierarchical Bayesian modeling, and nonparametric Bayes.
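The core idea from the abstract, that a Bayesian learner with a structured hypothesis space can generalize sharply from just a few positive examples, can be sketched in code. The toy number domain, the hypothesis names, and the priors below are illustrative assumptions, not material from the lecture itself; the likelihood uses the "size principle" (smaller, more specific hypotheses gain weight as consistent examples accumulate).

```python
# Minimal sketch of Bayesian concept learning from a few positive examples.
# The domain (numbers 1..100), hypotheses, and priors are made-up assumptions
# for illustration only.

# Hypothesis space: each candidate concept is a set of numbers.
hypotheses = {
    "even":        set(range(2, 101, 2)),
    "powers_of_2": {2 ** k for k in range(1, 7)},   # 2..64
    "mult_of_10":  set(range(10, 101, 10)),
}
prior = {"even": 0.5, "powers_of_2": 0.25, "mult_of_10": 0.25}

def posterior(data):
    """Posterior over hypotheses given observed positive examples.

    Size principle: p(data | h) = (1/|h|)^n if every example lies in h,
    else 0. Specific hypotheses are rapidly favored by consistent data.
    """
    scores = {}
    for name, extension in hypotheses.items():
        if all(x in extension for x in data):
            scores[name] = prior[name] * (1.0 / len(extension)) ** len(data)
        else:
            scores[name] = 0.0
    z = sum(scores.values())
    return {name: s / z for name, s in scores.items()}

def p_in_concept(y, data):
    """Probability that a new item y belongs to the concept,
    averaging over hypotheses weighted by their posterior."""
    post = posterior(data)
    return sum(p for name, p in post.items() if y in hypotheses[name])
```

After a single example such as 16, several hypotheses remain plausible; after three examples (16, 8, 2), the posterior concentrates on "powers of 2", mirroring the kind of one-shot or few-shot generalization the talk attributes to human learners.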
Download slides: icml07_tenenbaum_bmhi.ppt (13.3 MB)
Reviews and comments:
I enjoyed every minute of this great lecture. This should be a mandatory AI educational piece, and I will recommend it to everyone I know who is interested in the subject.
Truly amazing lecture, highly recommended. People who like old-school AI with "structured" representations will get to know how to achieve that using mainstream "statistical" approaches. Thanks Josh!
Impressive lecture. Coming from a psychology background myself, this was the most informative and insightful presentation of the subject I have seen. In my opinion, the lack of rigor and practical orientation renders the psychological approaches useless most of the time.