Reinforcement Learning in Humans and Other Animals
published: Jan. 12, 2011, recorded: December 2010, views: 5907
Algorithms from computer science can serve as detailed process-level hypotheses for how the brain might approach difficult information processing problems. This tutorial reviews how ideas from the computational study of reinforcement learning (RL) have been used in biology to conceptualize the brain's mechanisms for trial-and-error decision making, drawing on evidence from neuroscience, psychology, and behavioral economics. We begin with the much-debated relationship between temporal-difference learning and the neuromodulator dopamine, and then consider how more sophisticated methods and concepts from RL -- including partial observability, hierarchical RL, function approximation, and various model-based approaches -- can provide frameworks for understanding additional issues in the biology of adaptive behavior.
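To make the temporal-difference idea concrete, here is a minimal sketch of TD(0) prediction learning on a short chain of states ending in reward. The task structure (three states, reward only at the final state) and the parameter values are illustrative assumptions, not details from the tutorial; the point is that the TD error at reward delivery shrinks as the reward becomes predicted, mirroring the reported decline of dopamine responses to fully predicted rewards.

```python
import numpy as np

alpha, gamma = 0.1, 1.0   # learning rate and discount factor (assumed values)
n_states = 3              # cue -> delay -> reward delivery (hypothetical chain)
V = np.zeros(n_states)    # learned state values

reward_errors = []        # TD error recorded at the reward state, per trial
for trial in range(200):
    for t in range(n_states):
        r = 1.0 if t == n_states - 1 else 0.0      # reward only at final state
        v_next = V[t + 1] if t + 1 < n_states else 0.0  # terminal value is 0
        delta = r + gamma * v_next - V[t]          # TD prediction error
        V[t] += alpha * delta                      # value update
        if t == n_states - 1:
            reward_errors.append(delta)

# Early trials: large error at reward; late trials: near zero, since the
# reward is now predicted by the learned values of the preceding states.
print(reward_errors[0], reward_errors[-1], V)
```

The `reward_errors` trace is the quantity most often compared, qualitatively, to phasic dopaminergic firing at reward delivery.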
In addition to helping to organize and conceptualize data from many different levels, computational models can be employed more quantitatively in the analysis of experimental data. The second aim of this tutorial is to review and demonstrate, again using the example of reinforcement learning, recent methodological advances in analyzing experimental data using computational models. An RL algorithm can be viewed as a generative model for raw, trial-by-trial experimental data such as a subject's choices or a dopaminergic neuron's spiking responses; the problems of estimating model parameters or comparing candidate models then reduce to familiar problems in Bayesian inference. Viewed this way, the analysis of neuroscientific data is ripe for the application of many of the same sorts of inferential and machine learning techniques well studied by the NIPS community in other problem domains.
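The generative-model view can be sketched as follows: a softmax Q-learner is simulated on a two-armed bandit, and its learning rate is then recovered by maximizing the likelihood of the simulated choice sequence. Everything task-specific here (reward probabilities, the true `alpha` and `beta`, the grid-search fit) is an assumption for illustration; in practice one would use gradient-based optimization or fully Bayesian estimation rather than a grid.

```python
import numpy as np

rng = np.random.default_rng(0)

def neg_log_lik(alpha, beta, choices, rewards, n_arms=2):
    """Negative log-likelihood of a choice sequence under softmax Q-learning."""
    Q = np.zeros(n_arms)
    nll = 0.0
    for c, r in zip(choices, rewards):
        p = np.exp(beta * Q) / np.exp(beta * Q).sum()  # softmax choice policy
        nll -= np.log(p[c])
        Q[c] += alpha * (r - Q[c])                     # delta-rule value update
    return nll

# Simulate a "subject" with known parameters on a two-armed bandit
# (reward probabilities and parameter values are hypothetical).
true_alpha, beta = 0.3, 3.0
p_reward = [0.8, 0.2]
Q = np.zeros(2)
choices, rewards = [], []
for _ in range(500):
    p = np.exp(beta * Q) / np.exp(beta * Q).sum()
    c = rng.choice(2, p=p)
    r = float(rng.random() < p_reward[c])
    Q[c] += true_alpha * (r - Q[c])
    choices.append(c)
    rewards.append(r)

# Maximum-likelihood estimate of the learning rate by grid search.
grid = np.linspace(0.01, 0.99, 99)
nlls = [neg_log_lik(a, beta, choices, rewards) for a in grid]
best_alpha = grid[int(np.argmin(nlls))]
print(f"true alpha = {true_alpha}, ML estimate = {best_alpha:.2f}")
```

The same likelihood function is what a Bayesian treatment would combine with a prior over parameters, and comparing candidate models reduces to comparing their (marginal) likelihoods on the same choice data.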