RL Glue and Codecs Glue
published: Dec. 20, 2008, recorded: December 2008
RL-Glue is a protocol and software implementation for evaluating reinforcement learning algorithms. Our system facilitates the comparison of alternative algorithms and can greatly accelerate research progress, much as the UCI database has accelerated progress in supervised machine learning. Creating a comparable benchmarking resource for reinforcement learning is challenging because of the temporal nature of reinforcement learning. Reinforcement learning agents interact with a dynamic process (the environment) which generates observations and rewards. The observations and rewards received by the learning agent depend on the agent's actions; training data cannot simply be stored in a file as it is in supervised learning. Instead, the reinforcement learning agent and environment must be interacting programs. RL-Glue agents and environments can be written in Java, C/C++, Matlab, Python, and Lisp, and can all run on one machine or connect across the Internet. In this seminar, we will introduce the design principles that helped shape RL-Glue and demonstrate some of the interesting extensions that have been created by the reinforcement learning community.
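The core idea described above — that the agent and environment are separate programs exchanging observations, rewards, and actions through a neutral "glue" layer — can be sketched as follows. This is an illustrative sketch, not the actual RL-Glue API; the class and method names here are hypothetical stand-ins for the agent, environment, and experiment roles the protocol separates.

```python
# Illustrative sketch (NOT the real RL-Glue interface): agent and environment
# never call each other directly; a glue loop mediates every exchange of
# observations, rewards, and actions, so either side could be swapped out
# or run in a different language or process.

class Environment:
    """Toy corridor: states 0..4; reaching state 4 ends the episode."""

    def env_start(self):
        self.state = 0
        return self.state  # first observation

    def env_step(self, action):
        # action is +1 (move right) or -1 (move left)
        self.state = max(0, min(4, self.state + action))
        reward = 1.0 if self.state == 4 else 0.0
        terminal = (self.state == 4)
        return reward, self.state, terminal


class Agent:
    """Trivial agent that always moves right, regardless of observation."""

    def agent_start(self, observation):
        return +1

    def agent_step(self, reward, observation):
        return +1


def run_episode(env, agent, max_steps=100):
    """The 'glue': drives the interaction loop between agent and environment."""
    observation = env.env_start()
    action = agent.agent_start(observation)
    total_reward, steps = 0.0, 0
    for _ in range(max_steps):
        reward, observation, terminal = env.env_step(action)
        total_reward += reward
        steps += 1
        if terminal:
            break
        action = agent.agent_step(reward, observation)
    return total_reward, steps


total, steps = run_episode(Environment(), Agent())
print(total, steps)  # the agent reaches the goal in 4 steps with reward 1.0
```

Because only the glue loop knows about both sides, the same separation lets real RL-Glue run the agent and environment as distinct processes connected over a socket, which is how cross-language and cross-machine experiments become possible.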