Differentiable Sparse Coding

author: David Bradley, Robotics Institute, School of Computer Science, Carnegie Mellon University
published: Jan. 15, 2009,   recorded: November 2008,   views: 620

Slides

- Differentiable Sparse Coding
- 100,000 ft View
- 10,000 ft View
- Sparse Coding as a combination of factors
- Sparse coding uses optimization
- Sparse vectors
- Example: X = Handwritten Digits
- Optimization vs. Projection (parts 1-2)
- Generative Model
- Sparse Approximation (parts 1-2)
- Example: Squared Loss + L1
- L1 Sparse Coding
- Differentiable Sparse Coding
- L1 Regularization is Not Differentiable
- Why is this unsatisfying?
- Problem #1: Instability
- Problem #2: No closed-form Equation
- Solution: Implicit Differentiation
- Example: Squared Loss, KL prior
- Handwritten Digit Recognition (parts 1-4)
- KL Maintains Sparsity
- KL adds Stability
- Performance vs. Prior
- Classifier Comparison
- Comparison to other algorithms
- Transfer to English Characters (parts 1-4)
- Text Application (parts 1-3)
- Movie Review Sentiment
- Future Work
- Future Work: Convex Sparse Coding
- Questions


Description

Prior work has shown that sparse coding with a sparsity-promoting prior such as the Laplacian (L1) yields features that are both biologically plausible and empirically useful. We show how smoother priors can preserve the benefits of these sparse priors while adding stability to the Maximum A Posteriori (MAP) estimate, making it more useful for prediction problems. Additionally, we show how to calculate the derivative of the MAP estimate efficiently via implicit differentiation. One prior that can be differentiated this way is KL regularization. We demonstrate its effectiveness on a wide variety of applications, and find that online optimization of the parameters of the KL-regularized model can significantly improve prediction performance.
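The central trick in the abstract — differentiating the MAP estimate via implicit differentiation — can be sketched as follows. For illustration only, this uses a smooth Gaussian (L2) prior rather than the talk's KL prior, because the MAP estimate then has a closed form against which the implicit-differentiation result can be checked exactly; the mechanics (solve a linear system with the Hessian of the objective) are the same for any smooth prior, and all dimensions and values here are arbitrary:

```python
import numpy as np

# MAP objective: f(w) = 0.5*||x - B w||^2 + 0.5*lam*||w||^2  (Gaussian prior,
# chosen so w* is available in closed form; the KL prior is handled identically
# but needs an iterative solver for w*).
rng = rng = np.random.default_rng(1)
B = rng.standard_normal((6, 10))
x = rng.standard_normal(6)
lam = 0.5

# Stationarity condition: g(w*, x) = B^T (B w* - x) + lam * w* = 0.
H = B.T @ B + lam * np.eye(10)         # Hessian dg/dw (positive definite)
w_star = np.linalg.solve(H, B.T @ x)   # MAP estimate (closed form here)

# Implicit function theorem:  dw*/dx = -H^{-1} (dg/dx) = H^{-1} B^T.
J_implicit = np.linalg.solve(H, B.T)

# Check against central finite differences of the re-solved MAP estimate.
eps = 1e-6
J_fd = np.zeros((10, 6))
for j in range(6):
    e = np.zeros(6)
    e[j] = eps
    w_plus = np.linalg.solve(H, B.T @ (x + e))
    w_minus = np.linalg.solve(H, B.T @ (x - e))
    J_fd[:, j] = (w_plus - w_minus) / (2 * eps)

print("max abs error:", np.max(np.abs(J_implicit - J_fd)))
```

The point of computing the Jacobian this way is efficiency: one linear solve with the Hessian replaces one full re-optimization per input coordinate, which is what makes end-to-end training through the sparse-coding layer practical.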