Bridging the gap between machines and people
published: Dec. 1, 2010, recorded: November 2010, views: 4290
In the last few years, how robots operate in the world has advanced considerably. Examples include the autonomous vehicles of the DARPA Grand Challenges and Urban Challenge, the considerable progress in robot mapping, and the growing interest in home and service robots. However, these technologies and systems are still mostly restricted to research prototypes. One obstacle to more widely useful robots is that the way robots reason about their world is still quite different from the way people reason. Robots think in terms of point features, dense occupancy grids, and action cost maps. People think in terms of landmarks, segmented objects, and tasks (among other representations). There are good reasons why these differ, and robots are unlikely ever to reason about the world in the same way people do. However, there has been recent work on bridging the gap between low-level geometry and control and higher-level semantic representations. I will talk about how machine learning is being used to develop more capable robots that can operate in populated environments and perform complex tasks. I will discuss the state of the art, the open challenges, and the potential impact of solving them.
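To make the contrast concrete, here is a minimal sketch (a hypothetical illustration, not material from the talk) of the two kinds of representation mentioned above: a robot-style dense occupancy grid versus a person-style map of named landmarks. All names and values are invented for illustration.

```python
# Robot view: a dense occupancy grid, one cell per patch of floor.
# 1 = occupied, 0 = free. (Hypothetical toy map, not from the talk.)
occupancy_grid = [
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 1, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
]

# Human view: a handful of semantic landmarks with rough grid positions.
landmarks = {
    "doorway": (1, 2),
    "table": (2, 2),
}

def is_occupied(grid, row, col):
    """Robot-style query: is this cell blocked?"""
    return grid[row][col] == 1

def locate(landmark_map, name):
    """Human-style query: where is the named thing? None if unknown."""
    return landmark_map.get(name)

# The gap: the grid knows the table's cell is occupied but not that it
# is a table; the landmark map knows the name but nothing about the
# surrounding free space. Bridging the two is the semantic-mapping problem.
assert is_occupied(occupancy_grid, *landmarks["table"])
```

Each representation answers questions the other cannot, which is why the talk frames semantic mapping as connecting them rather than replacing one with the other.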