Deep Robotic Learning

Published on May 27, 2016 · 12,837 views

The problem of building an autonomous robot has traditionally been viewed as one of integration: connecting together modular components, each one designed to handle some portion of the perception and…
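
For illustration only, a minimal sketch of the contrast the talk builds on: a traditional pipeline of hand-designed modules chained together versus a single end-to-end neural network policy that maps raw observations directly to motor commands, the kind of general-purpose policy discussed in the chapters below. None of this code is from the talk; every function, variable, and dimension here is a hypothetical placeholder.

    # Illustrative sketch only; nothing here is taken from the talk's code, and all
    # function and variable names are hypothetical.
    import numpy as np

    # --- Traditional modular pipeline: hand-designed components chained together ---
    def perceive(image):
        # e.g. estimate the object position from a camera image (stubbed out here)
        return np.zeros(3)

    def plan(object_pos, goal_pos):
        # e.g. a straight-line end-effector path toward the goal
        return np.linspace(object_pos, goal_pos, num=10)

    def control(path, ee_pos):
        # e.g. a simple proportional correction toward the first waypoint
        return 0.5 * (path[0] - ee_pos)

    # --- End-to-end alternative: one network maps raw observations to motor commands ---
    rng = np.random.default_rng(0)
    W1 = rng.normal(scale=0.1, size=(64, 10))   # toy weights; in practice they are
    W2 = rng.normal(scale=0.1, size=(7, 64))    # trained, e.g. with guided policy search

    def neural_policy(observation):
        # observation: raw inputs concatenated into one vector (here 10-dimensional)
        hidden = np.tanh(W1 @ observation)
        return W2 @ hidden                       # e.g. 7 joint torques

    image = np.zeros((64, 64))
    ee_pos = np.zeros(3)
    joint_state = np.zeros(7)
    goal = np.array([0.3, 0.0, 0.2])

    torques_modular = control(plan(perceive(image), goal), ee_pos)
    torques_learned = neural_policy(np.concatenate([perceive(image), joint_state]))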

Chapter list

Deep Robotic Learning (00:00)
Perception / Action Cycle - 1 (00:03)
Perception / Action Cycle - 2 (00:56)
Perception / Action Cycle - 3 (00:59)
Perception / Action Cycle - 4 (01:06)
Perception / Action Cycle - 5 (01:09)
Perception / Action Cycle - 6 (01:16)
Example - 1 (01:40)
Example - 2 (01:46)
Example - 3 (02:18)
Example - 4 (02:20)
Example - 5 (02:28)
Example - 6 (02:34)
Example - 7 (02:38)
Example - 8 (02:41)
KAIST's DRC-HUBO opening a door (03:08)
No direct supervision / Actions have consequences (05:15)
Overview - 1 (05:31)
Overview - 2 (05:51)
General-purpose neural network policy - 1 (05:52)
General-purpose neural network policy - 2 (06:09)
General-purpose neural network policy - 3 (06:18)
General-purpose neural network policy - 4 (06:28)
General-purpose neural network policy - 5 (06:34)
General-purpose neural network policy - 6 (06:49)
General-purpose neural network policy - 7 (07:03)
General-purpose neural network policy - 8 (07:07)
General-purpose neural network policy - 9 (07:10)
General-purpose neural network policy - 10 (07:18)
General-purpose neural network policy - 11 (07:28)
General-purpose neural network policy - 12 (07:58)
General-purpose neural network policy - 13 (08:07)
General-purpose neural network policy - 14 (08:13)
General-purpose neural network policy - 15 (08:19)
General-purpose neural network policy - 16 (08:28)
General-purpose neural network policy - 17 (08:37)
General-purpose neural network policy - 18 (08:42)
General-purpose neural network policy - 19 (08:46)
General-purpose neural network policy - 20 (08:54)
General-purpose neural network policy - 21 (09:03)
General-purpose neural network policy - 22 (09:06)
General-purpose neural network policy - 23 (09:15)
Break up the task: separately solve N different task instances (09:24)
Guided Policy Search (12:12)
Solve using Bregman ADMM (BADMM), a type of dual decomposition method - 1 (14:28)
Solve using Bregman ADMM (BADMM), a type of dual decomposition method - 2 (16:17)
Learning on PR2 (17:16)
Training time / Test time (18:14)
Experimental Tasks - 1 (19:00)
Experimental Tasks - 2 (19:18)
Generalization Experiments (19:51)
Comparisons - 1 (20:46)
Comparisons - 2 (22:14)
Guided Policy Search Applications (23:59)
Overview - 3 (25:12)
Ingredients for success in learning - 1 (25:18)
Grasping with Learned Hand-Eye Coordination (26:51)
Using Grasp Success Prediction - 1 (28:06)
Using Grasp Success Prediction - 2 (30:47)
Open-Loop vs. Closed-Loop Grasping - 1 (31:24)
Open-Loop vs. Closed-Loop Grasping - 2 (32:07)
Open-Loop vs. Closed-Loop Grasping - 3 (32:19)
Grasping Experiments (32:52)
Overview - 4 (34:18)
Ingredients for success in learning - 2 (34:23)
Acknowledgements (35:16)
Questions? (35:27)