## Learning Sparse Representations of High Dimensional Data on Large Scale Dictionaries

author: Zhen James Xiang, Princeton University
published: Jan. 25, 2012,   recorded: December 2011,   views: 886

# Slides

Talk outline (from the slides):

- Sparse Representations: Why?
- Sparse Representations: How?
- Examples
- Solving One Lasso Problem
- Screening Tests
- Speed Up Lasso
- Fast and Online
- The Math Behind the Tests
- Tree Structured Dictionaries
- Random Projections
- Evaluation: Time and Quality
- Conclusion
- Acknowledgements


# Description

Learning sparse representations on data-adaptive dictionaries is a state-of-the-art method for modeling data, but when the dictionary is large and the data dimension is high, the learning problem becomes computationally challenging. We explore three aspects of the problem. First, we derive new, greatly improved screening tests that quickly identify codewords that are guaranteed to have zero weights. Second, we study the properties of random projections in the context of learning sparse representations. Finally, we develop a hierarchical framework that uses incremental random projections and screening to learn, in small stages, a hierarchically structured dictionary for sparse representations. Empirical results show that our framework can learn informative hierarchical sparse representations more efficiently.
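To illustrate the idea of a screening test, here is a minimal sketch of the classic SAFE rule of El Ghaoui et al. for the lasso; the tests derived in this talk are stronger refinements of this kind of rule, and the variable names below are illustrative, not taken from the paper:

```python
import numpy as np

def safe_screen(X, y, lam):
    """Basic SAFE screening test for the lasso
        min_w 0.5 * ||y - X w||^2 + lam * ||w||_1.
    Returns a boolean mask over columns (codewords):
    True means the codeword's weight is guaranteed to be
    zero at the optimum, so it can be discarded up front.
    """
    c = np.abs(X.T @ y)                 # correlations |x_j^T y|
    lam_max = c.max()                   # smallest lam for which w = 0
    col_norms = np.linalg.norm(X, axis=0)
    # SAFE rule: discard j if |x_j^T y| < lam - ||x_j|| ||y|| (lam_max - lam)/lam_max
    thresh = lam - col_norms * np.linalg.norm(y) * (lam_max - lam) / lam_max
    return c < thresh

# Toy example: 5-dimensional data, 50 unit-norm codewords.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 50))
X /= np.linalg.norm(X, axis=0)
y = rng.standard_normal(5)
lam = 0.9 * np.abs(X.T @ y).max()       # fairly strong regularization
mask = safe_screen(X, y, lam)
print(mask.sum(), "of 50 codewords screened out")
```

The practical payoff is that the lasso then only needs to be solved over the surviving columns, which is what makes large dictionaries tractable.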
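The random-projection component rests on the Johnson-Lindenstrauss phenomenon: a Gaussian projection to a lower dimension approximately preserves inner products, and hence the Gram matrix that sparse coding depends on. A small sketch (dimensions chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, n = 1000, 100, 20                 # original dim, projected dim, #points
X = rng.standard_normal((d, n))

# Scaling by 1/sqrt(k) makes E[(Rx)^T (Ry)] = x^T y.
R = rng.standard_normal((k, d)) / np.sqrt(k)
Xp = R @ X                              # projected data, shape (k, n)

G = X.T @ X                             # exact Gram matrix
Gp = Xp.T @ Xp                          # Gram matrix after projection
err = np.abs(G - Gp).max() / np.abs(G).max()
print(f"max relative Gram error: {err:.3f}")
```

Because the projection is linear, it can also be applied incrementally, which is what the hierarchical framework in the talk exploits.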