From linearly-solvable optimal control to trajectory optimization, and (hopefully) back
published: Oct. 16, 2012, recorded: September 2012, views: 297
We have identified a general class of stochastic optimal control problems which are inherently linear, in the sense that the exponentiated optimal value function satisfies a linear equation. These problems have a number of unique properties which enable more efficient numerical methods than generic formulations. However, after several attempts to go beyond the simple numerical examples characteristic of this literature and scale to real-world problems (particularly in robotics), we realized that the curse of dimensionality is still a curse. We then took a detour, and developed trajectory optimization methods that can synthesize remarkably complex behaviors fully automatically. Thanks to the parallel processing capabilities of modern computers, some of these methods work in real time in model-predictive-control (MPC) mode, giving rise to implicitly defined feedback control laws. But not all problems can be solved in this way, and furthermore it would be nice to somehow re-use the local solutions that MPC generates. The next step is to combine the strengths of these two approaches: using trajectory optimization to identify the regions of state space where the optimally-controlled stochastic system is likely to spend its time, and then applying linearly-solvable optimal control restricted to these regions.
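The "inherently linear" property mentioned above can be illustrated with a tiny first-exit linearly-solvable MDP. The sketch below is an illustrative assumption, not code from the talk: it assumes a 5-state chain with an unbiased random walk as the passive dynamics, a constant running cost, and an absorbing goal state. The desirability function z = exp(-v) then satisfies a linear equation, so the optimal value function is recovered by solving one linear system rather than iterating a nonlinear Bellman backup.

```python
import numpy as np

# Minimal first-exit linearly-solvable MDP (LMDP) sketch.
# States 0..4 on a chain; state 4 is the absorbing goal.
N = 5

# Passive dynamics p(x'|x): unbiased random walk, reflecting at state 0.
P = np.zeros((N, N))
for s in range(N - 1):                  # interior (non-absorbing) states
    P[s, max(s - 1, 0)] += 0.5
    P[s, s + 1] += 0.5

q = np.full(N, 0.1)                     # running state cost (assumed value)
q[N - 1] = 0.0                          # zero terminal cost at the goal

# The desirability z(x) = exp(-v(x)) satisfies the *linear* equation
#   z(x) = exp(-q(x)) * sum_x' p(x'|x) z(x')   for interior x,
# with z fixed at exp(-terminal cost) on the absorbing boundary.
interior = np.arange(N - 1)
zB = np.exp(-q[N - 1])                  # boundary desirability
G = np.diag(np.exp(-q[interior]))
A = np.eye(N - 1) - G @ P[np.ix_(interior, interior)]
b = G @ P[interior, N - 1] * zB
z = np.empty(N)
z[interior] = np.linalg.solve(A, b)     # one linear solve, no value iteration
z[N - 1] = zB

v = -np.log(z)                          # optimal cost-to-go
# Optimal controlled transitions: u*(x'|x) proportional to p(x'|x) z(x').
print(np.round(v, 3))
```

As expected, v is zero at the goal and increases monotonically with distance from it; the optimal policy tilts the passive random walk toward states of high desirability.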
Download slides: cyberstat2012_todorov_optimal_control_01.pdf (1.4 MB)