Cost-Sensitive Top-Down/Bottom-Up Inference for Multiscale Activity Recognition
chairman: Michael J. Black, Max Planck Institute for Intelligent Systems
chairman: Ivan Laptev, INRIA - The French National Institute for Research in Computer Science and Control
published: Nov. 12, 2012, recorded: October 2012
This paper addresses a new problem, that of multiscale activity recognition. Our goal is to detect and localize a wide range of activities, including individual actions and group activities, which may simultaneously co-occur in high-resolution video. The video resolution allows for digital zoom-in (or zoom-out) for examining fine details (or coarser scales), as needed for recognition. The key challenge is how to avoid running a multitude of detectors at all spatiotemporal scales, and yet arrive at a holistically consistent video interpretation. To this end, we use a three-layered AND-OR graph to jointly model group activities, individual actions, and participating objects. The AND-OR graph allows a principled formulation of efficient, cost-sensitive inference via an explore-exploit strategy. Our inference optimally schedules the following computational processes: 1) direct application of activity detectors, called the α process; 2) bottom-up inference based on detecting activity parts, called the β process; and 3) top-down inference based on detecting activity context, called the γ process. The scheduling iteratively maximizes the log-posteriors of the resulting parse graphs. For evaluation, we have compiled and benchmarked a new dataset of high-resolution videos of group and individual activities co-occurring in a courtyard of the UCLA campus.
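To make the scheduling idea concrete, here is a minimal sketch of cost-sensitive explore-exploit scheduling: at each step it greedily runs whichever α/β/γ move offers the best expected log-posterior gain per unit compute cost. The `Move` class, the costs, and the gains below are illustrative assumptions for this sketch, not the authors' implementation.

```python
# Hypothetical sketch of cost-sensitive alpha/beta/gamma scheduling.
# alpha: apply an activity detector directly; beta: infer an activity
# bottom-up from detected parts; gamma: predict parts top-down from
# detected context. All names, costs, and gains are illustrative.
import heapq
from dataclasses import dataclass, field
from typing import Callable


@dataclass(order=True)
class Move:
    neg_utility: float                               # -(expected gain / cost); heapq pops the minimum
    kind: str = field(compare=False)                 # "alpha", "beta", or "gamma"
    cost: float = field(compare=False)               # compute cost of running this process
    run: Callable[[], float] = field(compare=False)  # executes the process, returns realized gain


def greedy_inference(moves: list[Move], budget: float) -> float:
    """Greedily schedule moves by expected gain per unit cost until the
    compute budget runs out; returns the accumulated log-posterior gain
    of the resulting parse graph (relative to the empty interpretation)."""
    heap = list(moves)
    heapq.heapify(heap)
    log_posterior = 0.0
    while heap:
        move = heapq.heappop(heap)
        if move.cost > budget:
            continue                                 # cannot afford this move; try the next best
        budget -= move.cost
        log_posterior += move.run()
    return log_posterior


if __name__ == "__main__":
    # Toy moves with made-up costs and gains: direct detection (alpha) is
    # expensive but reliable; part-based (beta) and context-based (gamma)
    # inference reuse earlier detections and are cheap.
    moves = [
        Move(-(2.0 / 5.0), "alpha", 5.0, lambda: 2.0),
        Move(-(1.2 / 1.0), "beta", 1.0, lambda: 1.2),
        Move(-(0.8 / 1.0), "gamma", 1.0, lambda: 0.8),
    ]
    print("log-posterior gain:", greedy_inference(moves, budget=6.0))
```

Greedy ranking by gain-per-cost is only the simplest instance of explore-exploit; in the paper's full setting one would expect each executed move to enqueue new β/γ moves as detections arrive, with the scheduler iteratively re-maximizing the parse-graph log-posterior.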