Seeking Interpretable Models for High Dimensional Data

author: Bin Yu, Department of Statistics, UC Berkeley
published: July 30, 2009,   recorded: June 2009,   views: 775
Description

Extracting useful information from high-dimensional data is a central focus of today's statistical research and practice. After the broad success of statistical machine learning on prediction through regularization, interpretability is gaining attention, and sparsity has been used as its proxy. Combining the virtues of regularization and sparsity, the Lasso (L1-penalized L2 minimization) has become very popular. In this talk, I would like to discuss the theory and practice of sparse modeling. First, I will give an overview of recent research on sparsity and explain what useful insights have been learned from theoretical analyses of the Lasso. Second, I will present collaborative research with the Gallant Lab at Berkeley on building sparse models (linear, nonlinear, and graphical) that describe fMRI responses in primary visual cortex area V1 to natural images.
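As a concrete illustration of the Lasso mentioned in the abstract (not from the talk itself), the sketch below solves the L1-penalized least-squares problem by coordinate descent with soft-thresholding. All names and parameter choices here are illustrative assumptions, not material from the lecture:

```python
import numpy as np

def soft_threshold(z, t):
    # Soft-thresholding operator: the proximal map of the L1 penalty.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Minimize (1/2n)||y - Xb||^2 + lam*||b||_1 by cyclic coordinate descent."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n  # per-coordinate curvature
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual that excludes feature j's current contribution.
            r_j = y - X @ b + X[:, j] * b[j]
            rho = X[:, j] @ r_j / n
            b[j] = soft_threshold(rho, lam) / col_sq[j]
    return b

# Hypothetical demo: a sparse ground truth is (approximately) recovered,
# with irrelevant coefficients driven exactly to zero.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
beta = np.zeros(10)
beta[0], beta[1] = 3.0, -2.0
y = X @ beta + 0.1 * rng.standard_normal(200)
b = lasso_cd(X, y, lam=0.1)
```

The L1 penalty shrinks the active coefficients slightly (here, `b[0]` lands a bit below 3), which is the bias-for-sparsity trade-off whose theoretical consequences the first part of the talk addresses.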

Download slides: mlss09us_yu_simhdd_01.ppt (9.6 MB)

