Sample-Based Planning for Continuous Action Markov Decision Processes

Published on Jul 21, 2011 · 4,206 views

In this paper, we present a new algorithm that integrates recent advances in solving continuous bandit problems with sample-based rollout methods for planning in Markov Decision Processes (MDPs). Our algorithm, Hierarchical Optimistic Optimization applied to Trees (HOOT), addresses planning in continuous-action MDPs by replacing the discrete UCB action selection used in UCT with the HOO bandit strategy over the action space.
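
As a rough illustration of the idea (not the authors' implementation), the sketch below embeds a simplified HOO tree over a one-dimensional action interval inside Monte-Carlo rollouts from the current state. The class and helper names (HOONode, hoo_select, plan) and the toy reward/transition model are assumptions made for this example, and the B-value rule omits the min-with-children refinement of the full HOO algorithm.

import math
import random


class HOONode:
    """One node of the HOO tree; covers the action interval [lo, hi)."""
    def __init__(self, lo, hi, depth=0):
        self.lo, self.hi, self.depth = lo, hi, depth
        self.count = 0          # times this node was visited
        self.mean = 0.0         # empirical mean return of visits
        self.children = None    # two children once the node is split

    def b_value(self, total, rho=0.5, nu1=1.0):
        """Optimistic upper bound on returns inside this interval."""
        if self.count == 0:
            return float("inf")
        ucb = self.mean + math.sqrt(2.0 * math.log(total) / self.count)
        return ucb + nu1 * (rho ** self.depth)


def hoo_select(root, total):
    """Descend by B-values, split the reached leaf, return (action, path)."""
    path, node = [root], root
    while node.children is not None:
        node = max(node.children, key=lambda c: c.b_value(total))
        path.append(node)
    mid = 0.5 * (node.lo + node.hi)
    node.children = [HOONode(node.lo, mid, node.depth + 1),
                     HOONode(mid, node.hi, node.depth + 1)]
    return random.uniform(node.lo, node.hi), path


def hoo_update(path, ret):
    """Propagate the observed rollout return up the selected path."""
    for node in path:
        node.count += 1
        node.mean += (ret - node.mean) / node.count


def rollout_return(state, action, depth, gamma=0.95):
    """Toy 1-D generative model (an assumption): reward peaks at action == state."""
    ret, discount = 0.0, 1.0
    for _ in range(depth):
        reward = -abs(state - action)
        state = min(1.0, max(0.0, state + random.gauss(0.0, 0.05)))
        ret += discount * reward
        discount *= gamma
        action = random.random()   # random actions after the first step
    return ret


def plan(state, n_rollouts=2000, depth=10):
    """Run HOO over the continuous action space using sampled rollout returns."""
    root = HOONode(0.0, 1.0)
    for t in range(1, n_rollouts + 1):
        action, path = hoo_select(root, t)
        hoo_update(path, rollout_return(state, action, depth))
    # Greedy recommendation: follow the most-visited intervals to a leaf.
    node = root
    while node.children is not None:
        node = max(node.children, key=lambda c: c.count)
    return 0.5 * (node.lo + node.hi)


if __name__ == "__main__":
    print("recommended action near 0.7:", plan(0.7))

In HOOT, this kind of HOO selection takes the place of UCB's discrete arm choice at each depth of a UCT-style planning tree, which is what lets the planner handle continuous action spaces without a fixed discretization.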

Chapter list

00:00  Sample-Based Methods for Continuous Action Markov Decision Processes
00:44  From Learning to Planning - 1
01:23  From Learning to Planning - 2
01:46  From Learning to Planning - 3
02:19  Sparse Sampling
03:41  Ideas
04:08  UCB
05:30  UCT
05:59  UCT, cont...
06:45  HOO
08:14  HOO, cont... - 1
08:33  HOO, cont... - 2
08:59  HOO, cont... - 3
09:47  UCB vs HOO
10:24  HOOT
10:56  Empirical Results - 1
13:05  Empirical Results - 2
13:57  Future Work
14:42  Summary
15:04  Thanks!