A path integral approach to stochastic optimal control
published: Feb. 25, 2007, recorded: January 2005
Many problems in machine learning use a probabilistic description; examples include pattern recognition methods and graphical models. As a consequence of this uniform description, one can apply generic approximation methods such as mean field theory and sampling methods. Another important class of machine learning problems is reinforcement learning, also known as optimal control. Here a probabilistic description is also used, but so far no efficient mean field approximations have been obtained. In this presentation, I consider control of an arbitrary dynamical system with quadratic control cost and show that for this class of stochastic control problems the non-linear Hamilton-Jacobi-Bellman equation can be transformed into a linear equation. The transformation is similar to the one used to relate the Schrödinger equation to the Hamilton-Jacobi formalism. The computation can be performed efficiently by means of a forward diffusion process, which can be evaluated by stochastic integration or described by a path integral. For this path integral, it is expected that a variational mean field approximation can be derived.
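The forward-diffusion computation described above can be sketched in a few lines. Under the log transform, the optimal cost-to-go J(x, t) = -λ log Ψ(x, t) is obtained from a "desirability" function Ψ that satisfies a linear equation, and Ψ can be estimated by simulating the *uncontrolled* noisy dynamics forward and averaging exp(-cost/λ) over sample paths. The sketch below is a minimal one-dimensional illustration, not the lecture's own code: the dynamics dx = σ dW, the terminal cost φ(x) = x², and the parameter values are all illustrative assumptions.

```python
import numpy as np

def psi_forward_sampling(x0, t0, T, dt, sigma, lam, phi, n_samples=10000, seed=0):
    """Monte Carlo estimate of the desirability
        Psi(x0, t0) = E[ exp(-phi(x_T) / lam) ]
    where the expectation is over the uncontrolled diffusion dx = sigma dW
    started at x0. This is the forward stochastic integration mentioned
    in the abstract (illustrative 1-D case, no running state cost)."""
    rng = np.random.default_rng(seed)
    n_steps = int(round((T - t0) / dt))
    x = np.full(n_samples, float(x0))
    for _ in range(n_steps):
        # Euler-Maruyama step of the uncontrolled dynamics
        x += sigma * np.sqrt(dt) * rng.standard_normal(n_samples)
    return np.exp(-phi(x) / lam).mean()

# Illustrative terminal cost pulling the state toward the origin.
phi = lambda x: x ** 2

psi = psi_forward_sampling(x0=1.0, t0=0.0, T=1.0, dt=0.01,
                           sigma=1.0, lam=1.0, phi=phi)
# Log transform back to the optimal cost-to-go.
J = -1.0 * np.log(psi)
```

Because the averaged quantity is an exponential of the path cost, this sample average is exactly a Monte Carlo evaluation of the path integral; a variational mean field approximation would replace this sampling by an analytic bound.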