Convex Risk Minimization and Conditional Probability Estimation
published: Sept. 9, 2015, recorded: July 2015, views: 131
This manuscript strengthens the link between convex risk minimization and conditional probability estimation, a connection already notable for establishing consistency results (Friedman et al., 2000; Zhang, 2004b; Bartlett et al., 2006). Specifically, the manuscript first shows that a loss function, a linear space of predictors, and a probability measure together determine a unique optimal conditional probability model, and moreover one attainable by standard convex risk minimization. This result is proved in infinite dimensions, and thus provides a concrete convergence target for unregularized methods such as boosting, whose underlying risk minimization can fail to have minimizers. Second, this convergence result is refined in finitely many dimensions to hold for empirical risk minimization. The resulting uniform convergence guarantee exhibits no dependence on the norms of the predictors, and thus helps justify the practical effectiveness of minimally-regularized optimization schemes.
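To make the abstract's central link concrete: for the logistic loss, the score attaining the minimal conditional risk at a point x is tied to Pr[Y = +1 | X = x] through the sigmoid link, so minimizing the convex risk implicitly estimates conditional probabilities. The following is a minimal illustrative sketch, not taken from the paper or the slides: it runs plain unregularized gradient descent on the empirical logistic risk of a linear predictor over synthetic data (all names and data-generation choices here are assumptions for illustration), then maps the learned scores back to probability estimates via the link.

```python
# Minimal sketch (illustrative, not the paper's method): unregularized
# convex risk minimization with the logistic loss, whose link function
# turns the learned linear predictor into conditional probability estimates.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data with a known conditional probability eta(x) = sigmoid(2x),
# so the recovered probabilities can be checked against the truth.
n = 5000
X = rng.normal(size=(n, 1))
eta = 1.0 / (1.0 + np.exp(-2.0 * X[:, 0]))   # true Pr[Y = +1 | X = x]
y = np.where(rng.uniform(size=n) < eta, 1.0, -1.0)

def logistic_risk_grad(w, X, y):
    """Gradient of the empirical logistic risk (1/n) sum ln(1 + exp(-y <w, x>))."""
    margins = y * (X @ w)
    coeff = -1.0 / (1.0 + np.exp(margins))   # d/dz ln(1 + e^{-z}) = -sigmoid(-z)
    return (X.T @ (coeff * y)) / len(y)

# Plain gradient descent with no regularization: an analogue of the
# minimally-regularized schemes the abstract refers to.
w = np.zeros(X.shape[1])
for _ in range(2000):
    w -= 0.5 * logistic_risk_grad(w, X, y)

# The logistic link maps scores to conditional probability estimates.
probs = 1.0 / (1.0 + np.exp(-(X @ w)))
print("learned weight:", w)                  # should approach 2.0
print("mean |probs - eta|:", np.abs(probs - eta).mean())
```

Under this setup the learned weight approaches the data-generating value, and the sigmoid of the scores approximates the true conditional probabilities, illustrating the convergence target the abstract describes.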
Download slides: colt2015_telgarsky_conditional_probability_01.pdf (637.7 KB)