Action Recognition with Exemplar Based 2.5D Graph Matching
chairman: Michael J. Black, Max Planck Institute for Intelligent Systems
author: Bangpeng Yao, Computer Science Department, Stanford University
published: Nov. 12, 2012, recorded: October 2012, views: 330
This paper deals with recognizing human actions in still images. We make two key contributions. (1) We propose a novel 2.5D representation of action images that considers both view-independent pose information and rich appearance information. A 2.5D graph of an action image consists of a set of nodes that are keypoints of the human body, together with a set of edges encoding spatial relationships between the nodes. Each keypoint is represented by a view-independent 3D position and local 2D appearance features. The similarity between two action images can then be measured by matching their corresponding 2.5D graphs. (2) We use an exemplar-based action classification approach, in which a set of representative images is selected for each action class. The selected images cover large within-class variations and carry information that discriminates each class from the others. This exemplar-based representation of action classes further makes our approach robust to pose variations and occlusions. We test our method on two publicly available datasets and show that it achieves very promising performance.
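The abstract's two ideas can be illustrated with a minimal sketch. This is not the authors' method: the affinity functions, the Gaussian weighting, the `sigma` parameter, and the assumption that nodes are already aligned by body-joint identity (so no combinatorial graph matching is needed) are all simplifications introduced here for illustration. Each node pairs a 3D joint position with a 2D appearance descriptor; graph similarity sums node and edge affinities, and classification picks the label of the most similar exemplar.

```python
# Hedged toy sketch of a "2.5D graph" match, NOT the paper's algorithm:
# nodes carry a view-independent 3D position plus a 2D appearance
# descriptor; edges compare 3D displacements between body joints.
import math

def node_affinity(a, b, sigma=1.0):
    """Gaussian affinity from squared differences of 3D position
    and appearance descriptor (both stored as plain tuples)."""
    pos = sum((x - y) ** 2 for x, y in zip(a["pos3d"], b["pos3d"]))
    app = sum((x - y) ** 2 for x, y in zip(a["app2d"], b["app2d"]))
    return math.exp(-(pos + app) / (2 * sigma ** 2))

def edge_affinity(g1, g2, edge, sigma=1.0):
    """Compare the 3D displacement along the same edge in both graphs."""
    i, j = edge
    d1 = [x - y for x, y in zip(g1[i]["pos3d"], g1[j]["pos3d"])]
    d2 = [x - y for x, y in zip(g2[i]["pos3d"], g2[j]["pos3d"])]
    diff = sum((x - y) ** 2 for x, y in zip(d1, d2))
    return math.exp(-diff / (2 * sigma ** 2))

def graph_similarity(g1, g2, edges):
    """Nodes are assumed pre-aligned by joint name (a simplification:
    no assignment problem is solved in this toy version)."""
    score = sum(node_affinity(g1[k], g2[k]) for k in g1)
    score += sum(edge_affinity(g1, g2, e) for e in edges)
    return score

def classify(query, exemplars, edges):
    """Exemplar-based decision: label of the most similar exemplar set."""
    return max(exemplars, key=lambda label: max(
        graph_similarity(query, g, edges) for g in exemplars[label]))
```

In this sketch the exemplar set plays the role the abstract describes: each class is represented by a few stored graphs, and a query is labeled by its best match, which keeps the decision robust when some joints are occluded or posed unusually in any single exemplar.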
Download slides: eccv2012_yao_action_01.pdf (2.5 MB)