The Qualitative Learner of Action and Perception, QLAP
author: Benjamin Kuipers, Department of Electrical Engineering and Computer Science, University of Michigan
published: Sept. 1, 2010, recorded: June 2010, views: 7989
This video presents an introduction to the Qualitative Learner of Action and Perception (QLAP). QLAP autonomously learns a useful state abstraction and a set of hierarchical actions in continuous environments. Learning in QLAP is unsupervised. The agent begins with a very broad discretization of the world: it can only tell whether the values of variables are increasing or decreasing. Using this discretization, QLAP creates a set of predictive models. Initially, these models are not very reliable, but for each one QLAP can find new discretizations that improve it. These new discretizations in turn lead to more models, creating a perceptual loop that yields progressively more accurate models and a finer discretization. The models are then converted into a set of hierarchical actions.
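The refinement process described above can be illustrated with a small sketch. This is not the authors' code; it is a minimal, hypothetical example of the idea that an agent first tracks only the direction of change of a continuous variable and later refines its abstraction by inserting learned "landmark" values. All names and thresholds here are illustrative assumptions.

```python
def direction(prev, curr, eps=1e-6):
    """Coarse qualitative value: direction of change only."""
    if curr - prev > eps:
        return "+"   # increasing
    if prev - curr > eps:
        return "-"   # decreasing
    return "0"       # steady

def magnitude_bin(value, landmarks):
    """Finer qualitative value: index of the interval (between
    sorted landmark values) that the magnitude falls into."""
    bin_idx = 0
    for lm in sorted(landmarks):
        if value < lm:
            break
        bin_idx += 1
    return bin_idx

# A trajectory of some sensed variable:
xs = [0.0, 0.2, 0.5, 0.5, 0.3]

# Initial broad discretization: increasing / decreasing / steady.
dirs = [direction(a, b) for a, b in zip(xs, xs[1:])]
print(dirs)                                     # ['+', '+', '0', '-']

# A learned landmark at 0.4 refines the abstraction into two bins:
print([magnitude_bin(x, [0.4]) for x in xs])    # [0, 0, 1, 1, 0]
```

In the full system, such landmarks would be chosen because they make a predictive model more reliable, and the refined abstraction then supports learning further models.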