STAIR: The STanford Artificial Intelligence Robot Project

author: Andrew Ng, Computer Science Department, Stanford University
published: July 22, 2009,   recorded: July 2009
Description

This talk will describe the STAIR home assistant robot project and the satellite projects that led to key STAIR components, such as (1) robotic grasping of previously unknown objects, (2) depth perception from a single still image, (3) practical object recognition using multimodal sensors, and (4) a software architecture for integrative AI.

Since its birth in 1956, the AI dream has been to build systems that exhibit broad-spectrum competence and intelligence. STAIR revisits this dream and seeks to integrate onto a single robot platform tools drawn from all areas of AI, including learning, vision, navigation, manipulation, planning, and speech and NLP. This is in distinct contrast to, and also represents an attempt to reverse, the 30-year-old trend of working on fragmented AI sub-fields. STAIR’s goal is a useful home assistant robot, and over the long term, we envision a single robot that can perform tasks such as tidying up a room, using a dishwasher, fetching and delivering items, and preparing meals.

In this talk, Ng will describe his group’s progress on having the STAIR robot fetch items from around the office, and on having STAIR take inventory of office items. Specifically, he’ll describe learning to grasp previously unseen objects (including unloading items from a dishwasher); probabilistic multi-resolution maps, which enable the robot to open or use doors; and a robotic foveal plus peripheral vision system for object recognition and tracking. Ng will also outline some of the main technical ideas - such as learning 3-D reconstructions from a single still image, and reinforcement learning algorithms for robotic control - that played key roles in enabling these STAIR components.
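The grasping work mentioned above casts grasp-point detection as supervised learning: image patches are mapped to feature vectors, and a classifier predicts whether each patch contains a good grasp point. As a minimal illustrative sketch only (this is not STAIR's actual code; the two-dimensional features and synthetic labels here are invented for demonstration), a logistic-regression classifier trained by gradient descent captures the basic idea:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_grasp_classifier(X, y, lr=0.1, epochs=500):
    """Train a logistic-regression classifier mapping patch feature
    vectors to P(patch contains a good grasp point)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)           # predicted probabilities
        grad_w = X.T @ (p - y) / len(y)  # gradient of the log-loss
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Synthetic stand-in data: 2-D patch features; label 1 when their sum is large.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X.sum(axis=1) > 0).astype(float)

w, b = train_grasp_classifier(X, y)
preds = (sigmoid(X @ w + b) > 0.5).astype(float)
accuracy = np.mean(preds == y)
```

In the real system, the features would come from filter responses over image patches and the labels from annotated (or synthetically rendered) grasp points; the sketch shows only the learning step.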
