Unsupervised Structure Learning: Hierarchical Recursive Composition, Suspicious Coincidence and Competitive Exclusion

author: Alan L. Yuille, Department of Statistics, University of California, Los Angeles (UCLA)
published: Aug. 26, 2009,   recorded: June 2009,   views: 4476
Description

We describe a new method for unsupervised structure learning of a hierarchical compositional model (HCM) for deformable objects. The learning is unsupervised in the sense that we are given a training set of images containing the object in cluttered backgrounds, but we do not know the position or boundary of the object. The structure learning is performed by a bottom-up and top-down process. The bottom-up process is a novel form of hierarchical clustering which recursively composes proposals for simple structures to generate proposals for more complex structures. We combine standard clustering with the suspicious coincidence principle and the competitive exclusion principle to prune the number of proposals to a practical number and avoid an exponential explosion of possible structures. The hierarchical clustering stops automatically when it fails to generate new proposals, and outputs a proposal for the object model. The top-down process validates the proposals and fills in missing elements. We tested our approach by using it to learn a hierarchical compositional model for parsing and segmenting horses on the Weizmann dataset. We show that the resulting model is comparable with (or better than) alternative methods. The versatility of our approach is demonstrated by learning models for other objects (e.g., faces, pianos, butterflies, monitors, etc.). It is worth noting that the low levels of the object hierarchies automatically learn generic image features, while the higher levels learn object-specific features. We then describe more recent work which uses similar principles to learn hierarchies for many objects simultaneously.
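The bottom-up loop described above can be sketched in a few dozen lines. The sketch below is purely illustrative (the function names, the score formula, and the thresholds are assumptions, not taken from the paper): "structures" are sets of primitive element ids, a suspicious-coincidence score compares how often two parts co-occur against what chance would predict, and only the top-scoring compositions survive each round, a crude stand-in for competitive exclusion. Composition repeats until no new proposals are generated, as in the talk's stopping criterion.

```python
import itertools

# Hypothetical sketch of the bottom-up proposal loop. Each "structure" is a
# frozenset of primitive element ids observed in training images; all names
# and thresholds here are illustrative choices, not the authors' values.

def suspicious_coincidence(joint, count_a, count_b, n_images):
    """Ratio of the joint frequency of two parts to the product of their
    marginal frequencies; values well above 1 suggest the co-occurrence
    is a 'suspicious coincidence' rather than chance."""
    p_ab = joint / n_images
    p_a, p_b = count_a / n_images, count_b / n_images
    return p_ab / (p_a * p_b) if p_a * p_b > 0 else 0.0

def compose_level(proposals, images, sc_threshold=1.5, max_keep=10):
    """One round of composition: pair up current proposals, score each
    candidate, and keep only the top scorers (competitive exclusion,
    crudely: one winner per composed element set)."""
    n = len(images)
    counts = {p: sum(p <= img for img in images) for p in proposals}
    candidates = {}
    for a, b in itertools.combinations(proposals, 2):
        merged = a | b
        joint = sum(merged <= img for img in images)
        if joint == 0:
            continue
        score = suspicious_coincidence(joint, counts[a], counts[b], n)
        if score >= sc_threshold:
            candidates[merged] = max(candidates.get(merged, 0.0), score)
    ranked = sorted(candidates, key=candidates.get, reverse=True)
    return ranked[:max_keep]

def learn_hierarchy(primitives, images):
    """Recursively compose proposals level by level; stop automatically
    when a round generates no new proposals."""
    levels = [list(primitives)]
    while True:
        new_level = compose_level(levels[-1], images)
        if not new_level:
            return levels
        levels.append(new_level)
```

For example, given images in which primitives 1 and 2 always appear together while 3 appears separately, the first round composes {1, 2} into a single proposal and the second round terminates, yielding a two-level hierarchy.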

This talk is based on two research projects. The full authors for these projects are:

  • Project 1 (ECCV 2008): L. Zhu (UCLA), C. Lin (Microsoft Beijing), H. Huang (Microsoft Beijing), Y. Chen (USTC), and A. L. Yuille (UCLA).
  • Project 2: L. Zhu (MIT), Y. Chen (USTC), W. Freeman (MIT), A. Torralba (MIT), and A. L. Yuille (UCLA).
