Poster Spotlights: Situation Dependent Spatial Abstraction in Reinforcement Learning Based on Structural Knowledge
published: Aug. 26, 2009, recorded: June 2009, views: 3328
State space abstraction reduces the size of a representation by factoring out details that are not relevant to the task at hand. But even in abstract representations, not every detail is relevant in every situation. In cases where the structure of the environment allows for only one particular action, all information that does not relate to that structure can be omitted. We present a method that identifies such cases in a reinforcement learning setting and abstracts from non-structural details when appropriate, shrinking the state space and allowing for knowledge reuse. A significant performance improvement of this approach is demonstrated in a goal-directed robot navigation task.
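The core idea above can be illustrated with a small, hypothetical sketch (not the authors' implementation): in a one-dimensional corridor, every interior cell structurally admits only one useful action (moving toward the goal), so all interior cells can be collapsed into a single abstract state. The environment, the `abstract_state` mapping, and all parameter values below are illustrative assumptions; the learner is plain tabular Q-learning applied over the abstracted states.

```python
import random

# Hypothetical 1-D corridor gridworld: cells 0..N-1, goal at N-1.
N = 8
ACTIONS = ["left", "right"]

def step(s, a):
    """Environment transition: reward 1.0 on reaching the goal, else 0."""
    s2 = max(0, s - 1) if a == "left" else min(N - 1, s + 1)
    return s2, (1.0 if s2 == N - 1 else 0.0), s2 == N - 1

def abstract_state(s):
    """Collapse all interior corridor cells into one abstract state.

    Inside the corridor the structure leaves only one sensible action,
    so the exact cell index is a non-structural detail that can be
    dropped; the learned values then transfer across every interior cell.
    """
    return "corridor" if 0 < s < N - 1 else s

def q_learn(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning over the abstracted state space."""
    rng = random.Random(seed)
    q = {}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            z = abstract_state(s)
            q.setdefault(z, {a: 0.0 for a in ACTIONS})
            if rng.random() < eps:                      # epsilon-greedy
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[z][x])
            s2, r, done = step(s, a)
            z2 = abstract_state(s2)
            q.setdefault(z2, {a2: 0.0 for a2 in ACTIONS})
            q[z][a] += alpha * (r + gamma * max(q[z2].values()) - q[z][a])
            s = s2
    return q

q = q_learn()
```

After learning, the Q-table contains only three abstract states (start, "corridor", goal) instead of eight concrete cells, and the single corridor entry prefers the structurally forced action "right". The paper's contribution is deciding *when* such a collapse is safe; here that decision is hard-coded for illustration.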