
Model Selection in Markovian Processes

Published on Jan 25, 2012 · 4224 views

We address the problem of how to use a sample of trajectories to choose from a candidate set of possible state spaces in different types of Markov processes. Standard approaches to solving this problem …

Chapter list

Model Selection in Markovian Processes (00:00)
What matters in policies? - 01 (00:42)
What matters in policies? - 02 (01:08)
What matters in policies? - 03 (01:34)
What matters in policies? - 04 (02:24)
What matters in policies? - 05 (02:48)
Types of uncertainty - 01 (03:32)
Types of uncertainty - 02 (04:14)
Types of uncertainty - 03 (04:47)
Types of uncertainty - 04 (05:37)
Motivation I - 01 (05:57)
Motivation I - 02 (07:03)
Motivation I - 03 (07:51)
Motivation II - 01 (09:23)
Motivation II - 02 (09:58)
Motivation II - 03 (10:09)
Motivation II - 04 (10:39)
Motivation II - 05 (11:20)
Motivation III - 01 (11:34)
Motivation III - 02 (12:05)
Motivation III - 03 (13:01)
Motivation III - 04 (13:48)
Common to the problems - 01 (14:54)
Common to the problems - 02 (15:30)
Common to the problems - 03 (15:30)
Common to the problems - 04 (15:59)
Common to the problems - 05 (16:38)
Common to the problems - 06 (18:10)
Two important questions (18:42)
The Model Selection Problem (19:13)
The Identification Problem (24:08)
Penalized Likelihood Criteria (24:50)
Impossibility result (26:24)
Identifying Markov Reward Processes (30:14)
Reward Aggregation (31:38)
Reward Aggregation Score (34:06)
Hierarchical model selection (36:05)
Experiments with artificial data - 01 (37:40)
Experiments with artificial data - 02 (38:23)
Experiments with real data (38:52)
Real data (40:39)
Conclusion (42:17)
Outlook - 01 (43:42)
Outlook - 02 (44:24)
Outlook - 03 (44:36)
Outlook - 04 (45:15)
Outlook - 05 (45:38)