Sparse Exponential Weighting and Langevin Monte-Carlo
published: May 6, 2009, recorded: April 2009, views: 286
The performance of statistical estimators in several scenarios, such as adaptive nonparametric estimation, aggregation of estimators, and estimation under a sparsity constraint, can be assessed in terms of sparsity oracle inequalities (SOI) for the prediction risk. One of the challenges is to find estimators that attain the sharpest SOI under minimal assumptions on the dictionary. Methods of estimation adapted to the sparsity scenario, like the Lasso, the Dantzig selector, or their modifications, can be easily realized for very large dimensions of the problem, but their performance is conditioned by severe restrictions on the dictionary. Such methods fail when the elements of the dictionary are not approximately non-correlated. This is somewhat unsatisfactory, since it is known that the BIC method enjoys better SOI without any assumption on the dictionary. However, the BIC method is NP-hard.

This talk will focus on Sparse Exponential Weighting, a new technique of sparse recovery aiming to realize a compromise between theoretical optimality and computational efficiency. The method is based on aggregation with exponential weights using a heavy-tailed, sparsity-favoring prior. The theoretical performance of Sparse Exponential Weighting in terms of SOI is comparable with that of the BIC and is even better in some aspects. No assumption on the dictionary is needed. At the same time, we show that the method is computationally feasible for relatively large dimensions of the problem. We prove that Langevin Monte-Carlo (LMC) algorithms can be successfully used for computing Sparse Exponential Weighting estimators. Numerical experiments confirm fast convergence properties of the LMC and demonstrate nice performance of the resulting estimators. This is a joint work with Arnak Dalalyan.
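The following is a minimal sketch, not the authors' implementation, of how an exponentially weighted estimate with a heavy-tailed sparsity prior can be approximated by an unadjusted Langevin Monte-Carlo chain. All concrete choices here are illustrative assumptions: a Student-type prior proportional to the product of `1/(tau^2 + theta_j^2)^2`, a Gaussian-residual likelihood tempered by `temp`, and the step size, chain length, and burn-in values.

```python
import numpy as np

def lmc_sparse_ew(X, y, tau=0.1, temp=1.0, step=1e-4, n_iter=20000, burn=5000, seed=0):
    """Sketch of Langevin Monte-Carlo for a sparse exponentially
    weighted estimate: the posterior mean of theta under
        p(theta) ∝ exp(-||y - X theta||^2 / temp) * prod_j 1/(tau^2 + theta_j^2)^2,
    approximated by averaging an unadjusted Langevin chain.
    The prior, temperature, and step-size choices are illustrative."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    theta = np.zeros(p)
    total = np.zeros(p)
    kept = 0
    for k in range(n_iter):
        # Gradient of the tempered log-likelihood: -(2/temp) X^T (X theta - y)
        grad_ll = -2.0 * X.T @ (X @ theta - y) / temp
        # Gradient of the log of the heavy-tailed sparsity prior:
        #   log pi(theta) = -2 * sum_j log(tau^2 + theta_j^2)  (up to a constant)
        grad_prior = -4.0 * theta / (tau**2 + theta**2)
        # Langevin update: gradient step plus Gaussian noise of variance 2*step
        theta = theta + step * (grad_ll + grad_prior) \
                + np.sqrt(2.0 * step) * rng.standard_normal(p)
        if k >= burn:
            total += theta
            kept += 1
    return total / kept
```

On a well-conditioned synthetic problem (e.g. `X` with i.i.d. Gaussian entries and a 2-sparse target), the averaged chain lands near the true coefficients while the prior keeps the inactive coordinates close to zero; for genuinely large or ill-conditioned dictionaries the step size and chain length would need tuning.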