Where's What? - Towards Semantic Mapping of Urban Environments

author: Ingmar Posner, University of Oxford
published: Jan. 19, 2010,   recorded: December 2009
Description

The availability of continuous streams of data from multiple modalities covering the same workspace has long been recognised as a privilege by robotics researchers. Data fusion has a successful track record in the field, leading to the now-routine generation of high-quality, large-scale metric and topological maps of unstructured environments. With this success, however, comes the realisation that prominent applications in robotics -- such as action selection and human-machine interaction -- require information beyond mere metric or topological representations. As a result, researchers throughout the community are becoming increasingly interested in adding higher-order, semantic information to the maps obtained. In this context, the availability of a rich set of data from complementary modalities once again comes into its own. In this talk we provide a snapshot of ongoing work aiming to enrich standard metric or topological maps, as provided by a mobile robot, with higher-order semantic information. Environmental cues are considered for classification at different scales. The first stage considers local scene properties using a probabilistic bag-of-words classifier. The second stage incorporates contextual information across a given scene (spatial context) and across several consecutive scenes (temporal context) via a Markov Random Field (MRF). Our approach is driven by data from an onboard camera and 3D laser scanner and uses a combination of visual and geometric features. We demonstrate the virtue of considering such spatial and temporal context during the classification task and analyse the performance of our technique on data gathered over 17 km of track through a city.
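The two-stage pipeline described above can be illustrated with a minimal sketch. This is not the authors' implementation: the naive-Bayes form of the bag-of-words stage, the Potts-style pairwise term, the ICM inference, and all function names and parameters (`bag_of_words_posteriors`, `mrf_smooth`, `pairwise_weight`) are illustrative assumptions. Stage 1 classifies each scene segment independently from its histogram of visual/geometric words; stage 2 smooths those per-segment beliefs over an MRF whose edges link spatially adjacent segments within a scene and temporally adjacent segments across consecutive scenes.

```python
# Hypothetical two-stage semantic labelling sketch.
# Stage 1: probabilistic bag-of-words (here, naive Bayes over word counts).
# Stage 2: pairwise MRF over segments, solved approximately with ICM.
import numpy as np

def bag_of_words_posteriors(histograms, word_likelihoods, priors):
    """histograms: (n_segments, n_words) counts of visual/geometric words.
    word_likelihoods: (n_classes, n_words) P(word | class).
    priors: (n_classes,) class priors.
    Returns (n_segments, n_classes) posterior probabilities."""
    log_post = histograms @ np.log(word_likelihoods).T + np.log(priors)
    log_post -= log_post.max(axis=1, keepdims=True)   # numerical stability
    post = np.exp(log_post)
    return post / post.sum(axis=1, keepdims=True)

def mrf_smooth(posteriors, edges, pairwise_weight=1.0, n_iters=10):
    """edges: (i, j) pairs linking spatially or temporally adjacent
    segments. A Potts-style term rewards neighbours sharing a label."""
    unary = np.log(posteriors + 1e-12)    # data term from stage 1
    labels = unary.argmax(axis=1)          # initialise from stage 1
    neighbours = {i: [] for i in range(len(labels))}
    for i, j in edges:
        neighbours[i].append(j)
        neighbours[j].append(i)
    for _ in range(n_iters):               # iterated conditional modes
        changed = False
        for i in range(len(labels)):
            scores = unary[i].copy()
            for j in neighbours[i]:        # agreement with neighbours
                scores[labels[j]] += pairwise_weight
            new = scores.argmax()
            if new != labels[i]:
                labels[i] = new
                changed = True
        if not changed:
            break
    return labels
```

The design point the sketch makes is the one the talk emphasises: a segment whose local appearance is ambiguous can be corrected by context, because the MRF's pairwise term lets confident neighbouring (spatial or temporal) segments outvote a weak unary belief.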

Download slides: nipsworkshops09_posner_wwts_01.pdf (6.6 MB)

