Hierarchical POMDP Controller Optimization by Likelihood Maximization
published: July 30, 2008, recorded: July 2008, views: 6861
Planning can often be simplified by decomposing the task into smaller subtasks arranged hierarchically. Charlin et al. recently showed that the hierarchy discovery problem can be framed as a non-convex optimization problem. However, the inherent computational difficulty of solving such an optimization problem makes it hard to scale to real-world problems. In another line of research, Toussaint et al. developed a method to solve planning problems by maximum likelihood estimation. In this paper, we show how the hierarchy discovery problem in partially observable domains can be tackled using a similar maximum likelihood approach. Our technique first transforms the problem into a dynamic Bayesian network, through which a hierarchical structure can naturally be discovered while optimizing the policy. Experimental results demonstrate that this approach scales better than previous techniques based on non-convex optimization.
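The planning-as-inference idea the abstract builds on can be illustrated on a fully observable toy problem. The sketch below (a minimal, hypothetical example, not code from the talk) uses the standard trick of scaling rewards into [0, 1] so that r(s, a) reads as P(R = 1 | s, a); maximizing the likelihood of the "reward observed" event in the resulting dynamic Bayesian network then yields an EM loop whose M-step is the multiplicative update pi <- pi * Q (renormalized per state). The MDP numbers are invented for illustration.

```python
import numpy as np

# Toy 2-state, 2-action MDP (hypothetical numbers for illustration).
# Rewards are scaled to [0, 1] so r(s, a) can be read as P(R = 1 | s, a):
# the "reward as likelihood" trick behind planning by likelihood maximization.
nS, nA, gamma = 2, 2, 0.9
T = np.zeros((nS, nA, nS))
T[0, 0] = [0.9, 0.1]   # action 0 in state 0 mostly stays put
T[0, 1] = [0.1, 0.9]   # action 1 moves toward state 1
T[1, 0] = [0.8, 0.2]
T[1, 1] = [0.1, 0.9]
r = np.array([[0.1, 0.2],
              [0.3, 1.0]])  # state 1 under action 1 is the rewarding choice

pi = np.full((nS, nA), 0.5)  # uniform initial stochastic policy

def q_values(pi):
    """Exact policy evaluation: solve V = r_pi + gamma * T_pi V, then form Q."""
    r_pi = (pi * r).sum(axis=1)
    T_pi = np.einsum('sa,san->sn', pi, T)
    V = np.linalg.solve(np.eye(nS) - gamma * T_pi, r_pi)
    return r + gamma * T @ V  # shape (nS, nA)

# EM loop: the E-step's posterior over (s, a) pairs weights each action by
# its Q-value under the current policy; the M-step renormalizes, giving the
# multiplicative update pi <- pi * Q / normalizer.
for _ in range(50):
    Q = q_values(pi)
    pi = pi * Q
    pi /= pi.sum(axis=1, keepdims=True)

print(np.round(pi, 3))    # policy concentrates on the greedy actions
print(pi.argmax(axis=1))  # best action per state
```

The update never decreases expected discounted reward, so the policy drifts toward a deterministic optimum. The paper's contribution extends this style of EM to partially observable domains, where the controller nodes of the DBN additionally expose a hierarchical structure that is discovered alongside the policy.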