
Machine Learning for Robotics

Published on Oct 29, 2012 · 14504 views

Robots are typically far less capable in autonomous mode than in tele-operated mode. The few exceptions tend to stem from long days (and more often weeks, or even years) of expert engineering for a specific …


Chapter list

00:00 Machine Learning for Robotics
00:27 Outline - 1
00:55 Challenges in helicopter control
01:32 Many success stories in hover and forward flight regime
02:07 Example result - 1
02:27 Example result - 2
03:46 One of our first attempts at autonomous flips
05:13 Aggressive, non-stationary regimes
06:13 Stationary vs. aggressive flight
07:58 Learning to perform dynamic maneuvers: outline
08:00 Target trajectory
08:46 Expert demonstrations: Airshow
09:39 Learning Trajectory - 1
10:56 Learning Trajectory - 2
12:03 Results: Time-aligned demonstrations
12:39 Results: Loops
13:17 Learning to perform dynamic maneuvers: outline
13:20 Baseline dynamics model
14:22 Empirical evaluation of standard modeling approach
15:19 ecmlpkdd2012_abbeel_learning_roboti.jpg
15:35 Key observation - 1
15:58 Key observation - 2
16:54 Trajectory-specific local models
17:45 Learning to perform dynamic maneuvers: outline
17:48 Experimental Setup
18:56 Experimental procedure
23:14 Results: Autonomous airshow
25:56 Results: Flight accuracy
26:34 Thus far
27:29 Surgical knot tie - 1
28:57 Surgical knot tie - 2
30:00 Surgical knot tie - 3
30:47 Generalizing Trajectories
31:17 Cartoon Problem Setting - 1
32:03 Cartoon Problem Setting - 2
32:39 Cartoon Problem Setting - 3
32:44 Cartoon Problem Setting - 4
33:04 Cartoon Problem Setting - 5
33:42 Learning f : R^3 -> R^3 from samples - 1
35:20 Learning f : R^3 -> R^3 from samples - 2
36:03 Experiments: Plate Pick-Up
36:58 Experiments: Scooping
37:07 Experiment: Knot-Tie
37:12 Autonomous tying of a knot for a previously unseen situation
38:21 Outline - 2
38:30 Problem Structure
39:41 Inverse RL History
40:26 Inverse RL Examples
40:34 Inverse RL Examples (ctd)
40:36 Quadruped
42:14 Experimental setup
42:56 Without learning
43:47 With learned reward function
44:35 Safe exploration
45:37 Safe exploration – towards
46:36 Safe exploration – Key idea
47:40 Perception and clothes manipulation
49:26 Conclusion
50:14 Thank you