CORL: A Continuous-state Offset-dynamics Reinforcement Learner

author: Emma Brunskill, Computer Science Department, Carnegie Mellon University
published: July 30, 2008, recorded: July 2008, views: 3287


Continuous state spaces and stochastic, switching dynamics characterize a number of rich, real-world domains, such as robot navigation across varying terrain. We describe a reinforcement learning algorithm for learning in these domains and prove that, for certain environments, the algorithm is probably approximately correct (PAC), with a sample complexity that scales polynomially with the state-space dimension. Unfortunately, no optimal planning techniques exist in general for such problems; instead we use fitted value iteration to solve the learned MDP, and we include the error due to approximate planning in our bounds. Finally, we report an experiment using a robotic car driving over varying terrain to demonstrate that these dynamics representations adequately capture real-world dynamics and that our algorithm can efficiently solve such problems.
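The planning step mentioned in the abstract, fitted value iteration over a continuous state space with offset-style dynamics (next state = current state + an action-dependent offset + noise), can be sketched roughly as follows. Everything here is an illustrative assumption rather than the paper's actual setup: a 1-D toy domain, a Gaussian-bump reward, and linear interpolation over sampled states as the function approximator.

```python
import numpy as np

# Hypothetical 1-D offset-dynamics domain (not from the paper):
# s' = s + offset(a) + Gaussian noise, reward peaks at s = 0.
rng = np.random.default_rng(0)
offsets = {0: -0.5, 1: 0.5}      # per-action mean offset (assumed)
noise_std = 0.1                  # dynamics noise (assumed)
gamma = 0.9                      # discount factor

def reward(s):
    return np.exp(-s ** 2)       # highest near the origin

# Sample states acting as the fitted-VI support set.
states = np.linspace(-3.0, 3.0, 61)
V = np.zeros_like(states)

def v_hat(s):
    # Function approximator: linear interpolation over sample states.
    return np.interp(s, states, V)

n_samples = 20                   # Monte Carlo next-state samples per backup
for _ in range(50):              # fitted value iteration sweeps
    new_V = np.empty_like(V)
    for i, s in enumerate(states):
        q_values = []
        for a, off in offsets.items():
            nxt = s + off + noise_std * rng.standard_normal(n_samples)
            q_values.append(reward(s) + gamma * v_hat(nxt).mean())
        new_V[i] = max(q_values)  # Bellman backup, then refit
    V = new_V

def greedy_action(s):
    # One-step lookahead against the fitted value function.
    q = {a: v_hat(s + off + noise_std * rng.standard_normal(n_samples)).mean()
         for a, off in offsets.items()}
    return max(q, key=q.get)
```

From a state left of the origin the greedy policy steps right, and vice versa, since the fitted value function peaks near the reward maximum. CORL's actual model additionally handles switching between multiple offset modes (e.g. different terrain types), which this single-mode sketch omits.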

Download slides: uai08_brunskill_corl.pdf (542.9 KB)

Download slides: uai08_brunskill_corl_01.ppt (4.0 MB)

Download subtitles: TT/XML, RT, SRT
