Reinforcement Learning in Humans and Other Animals

author: Nathaniel Daw, Center for Neural Science, New York University (NYU)
published: Jan. 12, 2011,   recorded: December 2010



Watch videos:

Part 1 (52:27)
Part 2 (48:30)


Algorithms from computer science can serve as detailed, process-level hypotheses for how the brain might approach difficult information-processing problems. This tutorial reviews how ideas from the computational study of reinforcement learning (RL) have been used in biology to conceptualize the brain's mechanisms for trial-and-error decision making, drawing on evidence from neuroscience, psychology, and behavioral economics. We begin with the much-debated relationship between temporal-difference learning and the neuromodulator dopamine, and then consider how more sophisticated methods and concepts from RL -- including partial observability, hierarchical RL, function approximation, and various model-based approaches -- can provide frameworks for understanding additional issues in the biology of adaptive behavior.
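The link between temporal-difference learning and dopamine centers on the reward prediction error, the quantity that phasic dopamine responses are hypothesized to report. A minimal sketch of this idea (not from the lecture; all state names and parameter values here are illustrative) is a TD(0) learner whose error signal migrates from the reward to the predictive cue as learning proceeds:

```python
# Illustrative sketch of TD(0) learning; the prediction error `delta`
# is the quantity often compared to phasic dopamine responses.
# State names ("cue", "reward_state") and parameters are hypothetical.

def td_update(V, s, s_next, r, alpha=0.1, gamma=0.95):
    """One TD(0) step: return an updated value table and the prediction error."""
    delta = r + gamma * V.get(s_next, 0.0) - V.get(s, 0.0)  # reward prediction error
    V = dict(V)  # copy so each call returns a fresh table
    V[s] = V.get(s, 0.0) + alpha * delta
    return V, delta

# Repeatedly pair a cue with a later reward. Early on, the error occurs
# at reward delivery; once the cue's value is learned, the error at the
# (now fully predicted) reward shrinks toward zero.
V = {}
for _ in range(100):
    V, d_cue = td_update(V, "cue", "reward_state", r=0.0)
    V, d_rew = td_update(V, "reward_state", "terminal", r=1.0)
```

After training, `V["reward_state"]` approaches the reward magnitude and the error at reward delivery (`d_rew`) approaches zero, mirroring the classic shift of dopamine responses from rewards to reward-predicting cues.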

In addition to helping to organize and conceptualize data from many different levels, computational models can be employed more quantitatively in the analysis of experimental data. The second aim of this tutorial is to review and demonstrate, again using the example of reinforcement learning, recent methodological advances in analyzing experimental data using computational models. An RL algorithm can be viewed as a generative model for raw, trial-by-trial experimental data, such as a subject's choices or a dopaminergic neuron's spiking responses; the problems of estimating model parameters or comparing candidate models then reduce to familiar problems in Bayesian inference. Viewed this way, the analysis of neuroscientific data is ripe for the application of many of the same sorts of inferential and machine learning techniques well studied by the NIPS community in other problem domains.
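The generative-model view can be sketched concretely: a softmax Q-learner assigns a likelihood to each observed choice, so candidate parameters can be scored by the (negative log) likelihood of a choice sequence. The sketch below uses simulated data and a simple grid search; the model, parameter names, and task are illustrative assumptions, not the specific methods covered in the tutorial:

```python
# Hedged sketch: a Q-learning model with softmax choice, treated as a
# generative model of trial-by-trial choices. The two-armed bandit task,
# parameter names, and grid-search fit are illustrative, not the
# tutorial's actual analyses.
import math
import random

def neg_log_likelihood(choices, rewards, alpha, beta, n_actions=2):
    """NLL of an observed choice sequence under softmax Q-learning."""
    Q = [0.0] * n_actions
    nll = 0.0
    for c, r in zip(choices, rewards):
        logits = [beta * q for q in Q]
        m = max(logits)  # stabilize log-sum-exp
        logZ = m + math.log(sum(math.exp(l - m) for l in logits))
        nll += logZ - logits[c]        # -log P(choice | Q, beta)
        Q[c] += alpha * (r - Q[c])     # prediction-error update
    return nll

def simulate(n_trials, alpha, beta, p_reward=(0.8, 0.2), seed=0):
    """Simulate a subject with known parameters on a two-armed bandit."""
    rng = random.Random(seed)
    Q, choices, rewards = [0.0, 0.0], [], []
    for _ in range(n_trials):
        logits = [beta * q for q in Q]
        m = max(logits)
        probs = [math.exp(l - m) for l in logits]
        c = 0 if rng.random() < probs[0] / sum(probs) else 1
        r = 1.0 if rng.random() < p_reward[c] else 0.0
        Q[c] += alpha * (r - Q[c])
        choices.append(c)
        rewards.append(r)
    return choices, rewards

# Fit by maximizing likelihood over a coarse parameter grid.
choices, rewards = simulate(500, alpha=0.3, beta=3.0)
best_nll, best_alpha, best_beta = min(
    (neg_log_likelihood(choices, rewards, a, b), a, b)
    for a in (0.1, 0.3, 0.5) for b in (1.0, 3.0, 5.0))
```

The same likelihood is the building block for the Bayesian treatments mentioned above: adding priors over `alpha` and `beta` turns parameter estimation into posterior inference, and comparing integrated likelihoods across candidate models gives model comparison.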


Download slides: nips2010_daw_rlh.pdf (3.2 MB)

