Connections between the Lasso and Support Vector Machines
published: Aug. 26, 2013, recorded: July 2013, views: 555
We investigate the relation between two fundamental tools in machine learning and signal processing: the support vector machine (SVM) for classification, and the Lasso technique used in regression. We show that the resulting optimization problems are equivalent in the following sense: given any instance of one of the two problems, we construct an instance of the other having the same optimal solution.
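To make the claimed equivalence concrete, the two problems can be written in their standard forms (a sketch; the paper's exact parametrization and scaling may differ):

```latex
% Lasso in constrained form: least squares over the l1-ball
\min_{w \in \mathbb{R}^d} \;\; \|Aw - b\|_2^2
\quad \text{s.t.} \quad \|w\|_1 \le 1

% Hard-margin SVM, written as the minimum-norm point of the convex
% hull of the (mapped) data points z_1, \dots, z_n, i.e. an
% optimization over the probability simplex \Delta_n
\min_{x \in \Delta_n} \;\; \|Zx\|_2^2,
\qquad
\Delta_n = \{\, x \in \mathbb{R}^n : x \ge 0,\; \mathbf{1}^\top x = 1 \,\}
```

Both are convex quadratics over simple polytopes (the $\ell_1$-ball and the simplex, respectively), which is what makes a reduction in either direction plausible: the $\ell_1$-ball is the convex hull of the signed coordinate vectors, just as the simplex is the convex hull of the unit coordinate vectors.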
Consequently, many existing optimization algorithms for SVMs can be applied to Lasso instances and vice versa. The equivalence also allows many known theoretical insights for the SVM and the Lasso to be translated between the two settings. One such implication is a simple kernelized version of the Lasso, analogous to the kernels used in the SVM setting. Another consequence is that the sparsity of a Lasso solution equals the number of support vectors of the corresponding SVM instance, so screening rules can be used to prune the set of support vectors. Furthermore, we can relate sublinear-time algorithms for the two problems, and give a new such algorithm variant for the Lasso.
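The claim that algorithms transfer between the two problems can be illustrated numerically. The sketch below (a hypothetical reconstruction, not the paper's exact construction) reduces an equality-constrained Lasso to a minimum-norm-point problem over the simplex via the substitution w = u − v with u, v ≥ 0 and Σu + Σv = 1, then solves it with plain Frank-Wolfe, an algorithm natural to the SVM/simplex setting:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 8
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

# Lasso with ||w||_1 constrained to the unit l1-ball: min ||Aw - b||^2.
# Writing w = u - v (u, v >= 0, sum(u) + sum(v) = 1) gives
#   Aw - b = Z x,  x = (u, v) in the simplex,
# where the columns of Z are (a_i - b) and (-a_i - b).
# This mirrors the kind of reduction the abstract describes.
Z = np.hstack([A - b[:, None], -A - b[:, None]])

# Frank-Wolfe over the simplex: min_x ||Z x||^2.
x = np.full(2 * d, 1.0 / (2 * d))
for k in range(5000):
    grad = 2.0 * Z.T @ (Z @ x)
    s = np.zeros_like(x)
    s[np.argmin(grad)] = 1.0      # linear minimizer over the simplex
    gamma = 2.0 / (k + 2.0)       # standard step-size schedule
    x = (1.0 - gamma) * x + gamma * s

# Recover the Lasso variable; it is feasible (||w||_1 <= 1) by construction.
w = x[:d] - x[d:]
print("||w||_1 =", np.sum(np.abs(w)))
print("Lasso objective:", np.linalg.norm(A @ w - b) ** 2)
print("simplex objective:", np.linalg.norm(Z @ x) ** 2)
```

The two printed objectives coincide exactly, because Aw − b = Zx holds as an identity under the substitution; the optimizer never needs to know which of the two problems it is solving.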