Probabilistic Approaches

10 Lectures · Dec 11, 2009

About

Probabilistic Approaches for Robotics and Control

During the last decade, many areas of Bayesian machine learning have reached a high level of maturity. This has resulted in a variety of theoretically sound and efficient algorithms for learning and inference in the presence of uncertainty. In the context of control, robotics, and reinforcement learning, however, uncertainty has not yet been treated with comparable rigor, despite its central role in risk-sensitive, sensorimotor, robust, and cautious control. A consistent treatment of uncertainty is also essential when dealing with stochastic policies, incomplete state information, and exploration strategies.

A typical situation where uncertainty comes into play is when the exact state transition dynamics are unknown and little or no expert knowledge is available or affordable. One option is to learn a model from data. However, if the learned model is too far off, this approach can result in arbitrarily bad solutions. This model bias can be sidestepped by flexible model-free methods, but at a price: model-free methods do not generalize and often make less efficient use of data, so they typically need more trials than are feasible on a real-world system. A probabilistic model can combine the data efficiency of model-based methods with reduced model bias by explicitly representing and incorporating uncertainty. Probabilistic approaches in turn require (approximate) inference algorithms, which is where Bayesian machine learning comes into play.

Although probabilistic modeling and inference conceptually fit this context, they are not yet widespread in robotics, control, and reinforcement learning. Hence, this workshop aims to bring researchers together to discuss the need for, the theoretical properties of, and the practical implications of probabilistic methods in control, robotics, and reinforcement learning.
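The trade-off above, data efficiency with explicit uncertainty, is often made concrete with a Gaussian process model of the transition dynamics: the GP posterior mean plays the role of the learned model, while the posterior variance flags regions where that model should not be trusted. A minimal sketch in plain NumPy, using an invented one-dimensional dynamics function in place of real transition data:

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel k(a, b) = s^2 exp(-(a-b)^2 / (2 l^2))."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-2):
    """GP posterior mean and pointwise variance at x_test."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_test)
    Kss = rbf_kernel(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks.T @ alpha
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.diag(cov)

# Hypothetical 1-D dynamics x_{t+1} = f(x_t): observe a few transitions.
x_t = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
x_next = np.sin(x_t)                 # the "true" dynamics, unknown to the learner
mean, var = gp_predict(x_t, x_next, np.array([0.5, 3.0]))
# Near the data (x = 0.5) the predictive variance is small; far from it
# (x = 3.0) the variance grows, marking where the model is unreliable.
```

Propagating this predictive uncertainty through planning, rather than acting on the mean alone, is precisely what distinguishes the probabilistic model-based approach from a point-estimate one.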
One particular focus will be on probabilistic reinforcement learning approaches that profit from recent developments in optimal control, which show that the problem can be substantially simplified if certain structure is imposed. The simplifications include the linearity of the (Hamilton-Jacobi) Bellman equation. The resulting duality with Bayesian estimation allows for analytical computation of the optimal control laws and closed-form expressions for the optimal value functions.
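For a finite-state first-exit problem, the linearity mentioned above can be shown directly: substituting the desirability z(i) = exp(-v(i)) into the Bellman equation yields the linear relation z_i = exp(-q_i) Σ_j p(i, j) z_j, so the optimal cost-to-go follows from a single linear solve. A small sketch on an invented four-state chain (the passive dynamics P and state costs q are chosen purely for illustration):

```python
import numpy as np

# States 0..3 on a chain; state 3 is the absorbing goal.
P = np.array([
    [0.5, 0.5, 0.0, 0.0],   # state 0 (reflecting left edge)
    [0.5, 0.0, 0.5, 0.0],
    [0.0, 0.5, 0.0, 0.5],
    [0.0, 0.0, 0.0, 1.0],   # goal: absorbing
])
q = np.array([1.0, 1.0, 1.0, 0.0])   # state costs; the goal is free

interior = [0, 1, 2]
z = np.ones(4)
z[3] = np.exp(-q[3])                 # boundary condition at the goal

# Linear Bellman equation z_i = exp(-q_i) * sum_j P[i, j] z[j];
# for interior states: (I - Q P_II) z_I = Q P_IB z_B with Q = diag(exp(-q_I)).
Q = np.diag(np.exp(-q[interior]))
P_II = P[np.ix_(interior, interior)]
P_IB = P[np.ix_(interior, [3])]
z[interior] = np.linalg.solve(np.eye(3) - Q @ P_II, Q @ P_IB @ z[[3]])

v = -np.log(z)                       # optimal cost-to-go, decreasing toward the goal
# Optimal controlled transitions: u*(j|i) proportional to P[i, j] * z[j].
u = P * z[None, :]
u /= u.sum(axis=1, keepdims=True)
```

The same substitution underlies the duality with Bayesian estimation: the optimal controlled dynamics u* are the passive dynamics reweighted by the desirability, exactly as a prior is reweighted by a likelihood.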


The workshop homepage can be found at http://mlg.eng.cam.ac.uk/marc/nipsWS09.

Uploaded videos:

- Planning under Uncertainty Using Distributions over Posteriors · Nicholas Roy · 23:51 · Jan 19, 2010 · 4150 views
- GP-BayesFilters: Gaussian Process Regression for Bayesian Filtering · Dieter Fox · 21:09 · Jan 19, 2010 · 5901 views
- Imitation Learning and Purposeful Prediction: Probabilistic and Non-probabilistic... · Drew Bagnell · 19:12 · Jan 19, 2010 · 4523 views
- Probabilistic Control in Human Computer Interaction · Roderick Murray-Smith · 23:42 · Jan 19, 2010 · 4262 views
- Estimating the Sources of Motor Errors · Konrad Körding · 18:09 · Jan 19, 2010 · 3557 views
- Linear Bellman Equations: Theory and Applications · Emanuel Todorov · 23:02 · Jan 19, 2010 · 7567 views
- KL Control Theory and Decision Making under Uncertainty · Bert Kappen · 20:08 · Jan 19, 2010 · 8041 views
- Linear Bellman Combination for Simulation of Human Motion · Jovan Popović · 29:08 · Jan 19, 2010 · 3580 views
- Probabilistic Design: Promises and Prospects · Miroslav Kárný · 24:52 · Jan 19, 2010 · 3177 views
- Approximate Inference Control · Marc Toussaint · 26:24 · Jan 19, 2010 · 4772 views