Survey of Boosting from an Optimization Perspective
author: S.V.N. Vishwanathan, Department of Computer Science, University of California Santa Cruz
published: Aug. 26, 2009, recorded: June 2009, views: 4872
Description
Boosting has become a well-known ensemble method. The algorithm maintains a distribution on the ±1-labeled examples, and in each iteration a new base learner is added in a greedy fashion. The goal is to obtain a small linear combination of base learners that clearly separates the examples. We focus on a recent view of Boosting in which the update of the distribution on the examples is characterized by a minimization problem that uses a relative entropy as regularization. The best-known boosting algorithm is AdaBoost, which approximately maximizes the hard margin when the data is separable. We focus on recent algorithms that provably maximize the soft margin when the data is noisy. We will teach the new algorithms, give a unified and versatile view of Boosting in terms of relative entropy regularization, and show how to solve large-scale problems using state-of-the-art optimization techniques. Our goal is to motivate people to mimic the recent successes of the SVM community in scaling up the solvable problem size. This goal is challenging because in Boosting the regularizer (relative entropy) is more complicated than the one used for SVMs (the squared Euclidean distance). Nevertheless, we can solve dense problems with 200K examples in less than a minute on a laptop.
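As a sketch of the entropy-regularized view the description alludes to (the notation below is our assumption, following the standard presentation of this line of work; it is not defined on this page): writing $u^t_n = y_n h_t(x_n)$ for the margin of base learner $h_t$ on example $n$, the distribution update can be posed as a relative-entropy-regularized minimization over the probability simplex $\Delta_N$,

$$d^{t+1} \;=\; \operatorname*{argmin}_{d \in \Delta_N} \; \Delta(d, d^t) + \eta \sum_{n=1}^N d_n u^t_n, \qquad \Delta(d, d^t) = \sum_{n=1}^N d_n \ln \frac{d_n}{d^t_n},$$

whose closed-form solution is the familiar multiplicative update $d^{t+1}_n \propto d^t_n \, e^{-\eta u^t_n}$. In this view, examples that the current base learner gets right lose weight and hard examples gain weight, while the relative entropy term keeps the new distribution close to the old one.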
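To make the greedy scheme in the description concrete, here is a minimal AdaBoost sketch in Python. This is our own illustrative code, not the tutorial's; the decision-stump base learner, the function names, and the toy usage at the end are all assumptions.

import numpy as np

def adaboost(X, y, n_rounds=50):
    """Minimal AdaBoost sketch with decision-stump base learners.

    X: (n_samples, n_features) array; y: labels in {-1, +1}.
    Returns a list of (alpha, feature, threshold, sign) stumps whose
    weighted vote is the final linear combination of base learners.
    """
    n = len(y)
    d = np.full(n, 1.0 / n)          # distribution on the examples
    ensemble = []
    for _ in range(n_rounds):
        # Greedy step: pick the stump with the smallest weighted error.
        best = None
        for j in range(X.shape[1]):
            for thr in np.unique(X[:, j]):
                for s in (+1, -1):
                    pred = s * np.where(X[:, j] <= thr, 1, -1)
                    err = d[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, s)
        err, j, thr, s = best
        if err >= 0.5:                # no stump beats random guessing
            break
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        pred = s * np.where(X[:, j] <= thr, 1, -1)
        # Multiplicative reweighting: the closed-form solution of the
        # relative-entropy-regularized update sketched above.
        d *= np.exp(-alpha * y * pred)
        d /= d.sum()
        ensemble.append((alpha, j, thr, s))
    return ensemble

def predict(ensemble, X):
    """Sign of the weighted vote of all stumps."""
    score = np.zeros(len(X))
    for alpha, j, thr, s in ensemble:
        score += alpha * s * np.where(X[:, j] <= thr, 1, -1)
    return np.sign(score)

# Toy usage (hypothetical data): linearly separable 2-D points.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
model = adaboost(X, y, n_rounds=20)
print("training accuracy:", (predict(model, X) == y).mean())

This corresponds to the hard-margin setting; the soft-margin algorithms the tutorial covers modify the optimization (for instance by constraining how much weight any single example may receive) so that noisy examples cannot dominate the distribution.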