The Regularization Frontier in Machine Learning
published: Oct. 10, 2008, recorded: September 2008, views: 884
Machine learning algorithms often involve the joint optimization of several objective functions to achieve good generalization performance. Well-known examples are Support Vector Machines for regression, classification, and novelty detection, and the Lasso problem, where one objective function measures how well the model fits the data and the second enforces desirable properties of the target model, such as smoothness or sparsity. Since these two goals are antagonistic, a trade-off must be struck, and the learning process can therefore be cast as a multi-objective optimization problem. The aim of this tutorial is to bridge the gap between the multi-objective optimization literature and the machine learning community by providing insight into the Pareto frontier and its efficient computation using regularization path algorithms. The connection between these algorithms and parametric optimization problems will be highlighted, as well as issues related to sparsity, model selection, and numerical implementation.
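The Lasso trade-off mentioned above can be illustrated with a minimal sketch (plain NumPy coordinate descent; the data, function names, and λ values here are illustrative assumptions, not material from the lecture). Sweeping the regularization weight λ traces a coarse regularization path: for large λ the sparsity objective dominates and few coefficients survive, while for small λ the data-fit objective dominates and the model becomes dense.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the l1 norm: shrinks z toward zero by t.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    # Coordinate descent for: min_w 0.5 * ||y - X w||^2 + lam * ||w||_1
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(d):
            # Residual with coordinate j's contribution removed.
            r = y - X @ w + X[:, j] * w[j]
            w[j] = soft_threshold(X[:, j] @ r, lam) / col_sq[j]
    return w

# Toy data: only the first two of eight features matter.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.standard_normal(100)

# Coarse path: larger lam => stronger sparsity, fewer active coefficients.
nnz = []
for lam in [100.0, 10.0, 1.0]:
    w = lasso_cd(X, y, lam)
    nnz.append(int(np.sum(np.abs(w) > 1e-6)))
    print(f"lambda={lam:6.1f}  nonzero coefficients: {nnz[-1]}")
```

Path-following algorithms of the kind the tutorial discusses refine this idea: rather than re-solving at a grid of λ values, they exploit the piecewise-linear dependence of the Lasso solution on λ to compute the entire frontier efficiently.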
Download slides: ecmlpkdd08_gasso_trfim_01.pdf (5.5 MB)
Download slides: ecmlpkdd08_gasso_trfim.pdf (5.5 MB)