Exploratory Learning of Grasp Affordances
published: Nov. 8, 2010, recorded: June 2010, views: 3353
Grasping known objects is a capability fundamental to many important applications of autonomous robotics. Here, active learning holds great promise given the complexities of the real world and the uncertainties associated with physical manipulation. To this end, we have developed learnable object representations for interaction. Objects and associated action parameters are jointly represented by Markov networks whose edge potentials encode pairwise spatial relationships between local features in 3D. Local features typically correspond to visual signatures, but may also represent action-relevant parameters such as object-relative gripper poses useful for grasping the object. Thus, detecting, recognizing, and synthesizing grasps for known objects are unified within a single probabilistic inference procedure. Learning these representations is a two-step procedure. First, visual object models are learned through play-like, autonomous, exploratory interaction of a robot with its environment. Second, object-specific grasping skills are incrementally acquired, again through play-like interaction. The result is a system that autonomously acquires knowledge about objects and how to detect, recognize, and grasp them.
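The core idea of the abstract can be illustrated with a minimal sketch: a toy pairwise Markov network whose nodes are local features (one node standing in for an object-relative gripper pose) and whose edge potentials score pairwise spatial relations in 3D, so that detecting the object and selecting a grasp fall out of the same joint MAP inference. All names, distances, and the Gaussian potential below are illustrative assumptions, not details from the talk.

```python
import itertools
import math

# Illustrative model: learned pairwise distances between local features.
# "grip" stands in for an object-relative gripper pose treated as just
# another node in the network (an assumption for this sketch).
MODEL_DIST = {
    ("f1", "f2"): 1.0,
    ("f2", "grip"): 0.5,
    ("f1", "grip"): 1.2,
}

def edge_potential(pair, pa, pb, sigma=0.1):
    """Gaussian potential on deviation from the model's pairwise distance."""
    d = math.dist(pa, pb)
    return math.exp(-((d - MODEL_DIST[pair]) ** 2) / (2 * sigma ** 2))

def map_inference(candidates):
    """Brute-force MAP over all joint assignments (fine for a toy network)."""
    names = list(candidates)
    best, best_score = None, -1.0
    for combo in itertools.product(*(candidates[n] for n in names)):
        pos = dict(zip(names, combo))
        score = 1.0
        for pair in MODEL_DIST:
            a, b = pair
            score *= edge_potential(pair, pos[a], pos[b])
        if score > best_score:
            best, best_score = pos, score
    return best, best_score

# Candidate 3D detections in a scene; the second f2 candidate is spurious,
# and the second grip candidate is geometrically inconsistent with the model.
candidates = {
    "f1":   [(0.0, 0.0, 0.0)],
    "f2":   [(1.0, 0.0, 0.0), (3.0, 0.0, 0.0)],
    "grip": [(1.0, 0.5, 0.0), (2.0, 2.0, 0.0)],
}

assignment, score = map_inference(candidates)
# The consistent feature detections and the consistent gripper pose win jointly.
print(assignment["f2"], assignment["grip"])
```

Because the gripper pose is just another node, the same inference that recognizes the object also proposes where to grasp it, which is the unification the abstract describes. A real system would use approximate inference (e.g. belief propagation) over full 6-DOF relations rather than brute-force search over distances.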
Download slides: rss2010_piater_elg_01.pdf (9.1 MB)
Download rss2010_piater_elg_01.mp4 (Video - generic video source 197.1 MB)
Download rss2010_piater_elg_01.flv (Video 93.2 MB)
Download rss2010_piater_elg_01.wmv (Video 105.2 MB)