Seeking Interpretable Models for High Dimensional Data
published: July 30, 2009, recorded: June 2009, views: 775
Extracting useful information from high-dimensional data is a central focus of today's statistical research and practice. After the broad success of statistical machine learning at prediction through regularization, interpretability is gaining attention, and sparsity has been used as its proxy. Combining the virtues of both regularization and sparsity, the Lasso (L1-penalized L2 minimization) has become very popular in recent years. In this talk, I would like to discuss the theory and practice of sparse modeling. First, I will give an overview of recent research on sparsity and explain what useful insights have been learned from theoretical analyses of the Lasso. Second, I will present collaborative research with the Gallant Lab at Berkeley on building sparse models (linear, nonlinear, and graphical) that describe fMRI responses in primary visual cortex area V1 to natural images.
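To make the Lasso objective mentioned above concrete, here is a minimal sketch of L1-penalized least squares solved by coordinate descent with soft-thresholding. The synthetic data, the penalty level `alpha`, and the iteration count are illustrative assumptions, not values from the talk; they simply show how the L1 penalty drives most coefficients exactly to zero, which is why sparsity serves as a proxy for interpretability.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator, the proximal map of the L1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, alpha, n_iter=200):
    """Coordinate descent for (1/2n)||y - Xw||^2 + alpha * ||w||_1."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n   # per-coordinate curvature
    r = y - X @ w                        # current residual
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * w[j]          # remove coordinate j's contribution
            rho = X[:, j] @ r / n
            w[j] = soft_threshold(rho, alpha) / col_sq[j]
            r -= X[:, j] * w[j]          # add back the updated contribution
    return w

# Illustrative synthetic problem: only 3 of 20 features are truly active.
rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.standard_normal((n, p))
true_w = np.zeros(p)
true_w[:3] = [2.0, -1.5, 1.0]
y = X @ true_w + 0.1 * rng.standard_normal(n)

w = lasso_cd(X, y, alpha=0.1)
print(np.count_nonzero(w))  # far fewer than p: most coefficients are exactly 0
```

The key design point is that the L1 penalty's proximal map (soft-thresholding) sets small coefficients exactly to zero, unlike the L2 (ridge) penalty, which only shrinks them toward zero.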