Empirical Bernstein boosting
published: June 3, 2010, recorded: May 2010, views: 189
Concentration inequalities that incorporate variance information (such as Bernstein's or Bennett's inequality) are often significantly tighter than counterparts (such as Hoeffding's inequality) that disregard variance. Nevertheless, many state-of-the-art machine learning algorithms for classification, such as AdaBoost and support vector machines (SVMs), rely extensively on Hoeffding's inequality to justify empirical risk minimization and its variants. This article proposes a novel boosting algorithm based on a recently introduced principle, sample variance penalization, which is motivated by an empirical version of Bernstein's inequality. This framework leads to an efficient algorithm that is as easy to implement as AdaBoost while strictly generalizing it. Experiments on a large number of datasets show significant performance gains over AdaBoost. This paper shows that sample variance penalization could be a viable alternative to empirical risk minimization.
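To illustrate why variance information tightens the bound, here is a minimal sketch comparing the deviation term of Hoeffding's inequality with that of the empirical Bernstein inequality (Maurer & Pontil, 2009) for samples bounded in [0, 1]. The function names and the toy numbers in the usage example are illustrative, not from the paper.

```python
import math

def hoeffding_deviation(n, delta):
    # One-sided Hoeffding deviation for n i.i.d. samples in [0, 1]:
    # with probability >= 1 - delta, E[X] <= mean + sqrt(ln(1/delta) / (2n)).
    # Note: this term is independent of the sample variance.
    return math.sqrt(math.log(1.0 / delta) / (2.0 * n))

def empirical_bernstein_deviation(sample_var, n, delta):
    # Empirical Bernstein deviation (Maurer & Pontil, 2009) for samples
    # in [0, 1], using the *observed* sample variance Var_n:
    # sqrt(2 * Var_n * ln(2/delta) / n) + 7 * ln(2/delta) / (3 * (n - 1)).
    log_term = math.log(2.0 / delta)
    return (math.sqrt(2.0 * sample_var * log_term / n)
            + 7.0 * log_term / (3.0 * (n - 1)))

# When the sample variance is small, the variance-sensitive bound is
# much tighter than the variance-free one (illustrative values):
n, delta, sample_var = 1000, 0.05, 0.01
print(hoeffding_deviation(n, delta))                    # variance-free
print(empirical_bernstein_deviation(sample_var, n, delta))
```

With low-variance data the empirical Bernstein deviation shrinks roughly like sqrt(Var_n / n) rather than sqrt(1 / n), which is the gap that sample variance penalization exploits.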