How to Grow a Mind: Statistics, Structure and Abstraction
published: Aug. 17, 2012, recorded: July 2012, views: 6146
The fields of cognitive science and artificial intelligence grew up together, with the twin goals of understanding human minds and making machines smarter in more humanlike ways. Yet since the 1980s they have mostly grown apart, as cognitive scientists came to see AI as too focused on applications and technical engineering issues rather than big questions of intelligence, while AI researchers came to see cognitive science as too informal and concerned with peculiarities of human minds and brains rather than general principles. Just in the last few years, however, these fields appear poised to reconverge in exciting and deep ways. Cognitive scientists have begun to adopt the toolkit of modern probabilistic AI as a unifying framework for modeling natural intelligence, while many AI researchers are looking beyond immediate applications to some of the big picture questions that originally motivated the field, and both communities are increasingly aware of and even informed by the other's moves in these directions.
This talk will describe recent work at the center of the convergence: computational accounts of human intelligence that both draw on and advance state-of-the-art AI. I will focus on capacities for which even young children still far surpass machines: learning from very few examples, and common-sense reasoning about the physical and social world. These abilities can be explained as approximate forms of probabilistic (Bayesian) inference over richly structured models — probabilistic models built on top of knowledge representations familiar from earlier, classic AI days, such as graphs, grammars, schemas, predicate logic, and functional programs. In many cases, sampling-based approximate inference with these models can be surprisingly tractable and can predict human judgments with high quantitative accuracy. Extended in a hierarchical nonparametric Bayesian framework, these models can explain how children learn to learn, bootstrapping adult-like intelligence from more primitive foundations. Using probabilistic programming languages, these models can be integrated into a unified cognitive architecture. Throughout the talk I will present concrete examples, along with a few more speculative predictions, of how these cognitive modeling efforts can inform the development of more intelligent machine systems.
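To give a concrete flavor of "learning from very few examples" via Bayesian inference over a structured hypothesis space, here is a minimal illustrative sketch (not taken from the talk itself; the hypothesis names and ranges are assumptions chosen for illustration). It uses the "size principle" likelihood associated with this line of work: each observed example is treated as drawn uniformly from a concept's extension, so smaller hypotheses that still cover the data are rapidly favored.

```python
# Illustrative sketch of Bayesian concept learning from a few examples.
# Hypothesis space, priors, and number range are assumptions for this demo.

# Hypotheses: candidate number concepts, each a set of integers in 1..100.
hypotheses = {
    "even":            {n for n in range(1, 101) if n % 2 == 0},
    "odd":             {n for n in range(1, 101) if n % 2 == 1},
    "powers_of_two":   {2, 4, 8, 16, 32, 64},
    "multiples_of_10": {n for n in range(10, 101, 10)},
}

def posterior(examples, prior=None):
    """Exact Bayesian posterior P(h | examples).

    The likelihood follows the size principle: each example is assumed
    to be sampled uniformly from the concept's extension, so hypotheses
    with smaller extensions that still cover the data score higher.
    """
    prior = prior or {h: 1.0 / len(hypotheses) for h in hypotheses}
    scores = {}
    for h, ext in hypotheses.items():
        if all(x in ext for x in examples):
            scores[h] = prior[h] * (1.0 / len(ext)) ** len(examples)
        else:
            scores[h] = 0.0  # hypothesis inconsistent with the data
    z = sum(scores.values())
    return {h: s / z for h, s in scores.items()}

# After just three examples, the small "powers_of_two" hypothesis
# dominates the much larger (but also consistent) "even" hypothesis.
post = posterior([2, 8, 64])
```

With only three data points the posterior concentrates almost entirely on `powers_of_two`, mirroring the sharp few-shot generalization in humans that the talk contrasts with machine learners.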