Bounding Excess Risk in Machine Learning
published: July 30, 2009, recorded: June 2009, views: 7201
We will discuss a general approach to the problem of bounding the excess risk of learning algorithms based on empirical risk minimization (possibly penalized). This approach has been developed in recent years by several authors (among others: Massart; Bartlett, Bousquet and Mendelson; Koltchinskii). It is based on powerful concentration inequalities due to Talagrand as well as on a variety of tools from empirical process theory (comparison inequalities, entropy and generic chaining bounds on Gaussian, empirical and Rademacher processes, etc.). It provides a way to obtain sharp excess risk bounds in a number of problems such as regression, density estimation and classification, and for many different classes of learning methods (kernel machines, ensemble methods, sparse recovery). It also provides a general way to construct sharp data-dependent bounds on excess risk that can be used in model selection and adaptation problems.
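As a point of reference, the standard setup behind such bounds can be summarized as follows (the notation here is a common convention, not taken verbatim from the talk):

\[
\hat f_n \in \arg\min_{f \in \mathcal{F}} \frac{1}{n}\sum_{i=1}^n \ell(f, Z_i),
\qquad
\mathcal{E}(\hat f_n) = \mathbb{E}\,\ell(\hat f_n, Z) - \inf_{f \in \mathcal{F}} \mathbb{E}\,\ell(f, Z),
\]

where Z_1, ..., Z_n are i.i.d. training examples, \ell is a loss function, and \mathcal{F} is the class of candidate predictors. The excess risk \mathcal{E}(\hat f_n) measures how much the empirical risk minimizer \hat f_n loses relative to the best predictor in the class; the tools listed above (Talagrand's concentration inequalities, chaining bounds on empirical and Rademacher processes) are what allow this quantity to be controlled with high probability.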