published: Oct. 6, 2014, recorded: December 2013, views: 1700
Worst-case sample complexity bounds generally scale quadratically with the inverse of the excess error, i.e. as 1/ε². However, when the target error is small, the dependence on the excess error is closer to linear (1/ε) than quadratic. In this talk I will discuss when and how such optimistic rates are possible, in particular in the non-parametric, scale-sensitive case, and when they are not; argue that the "optimistic" regime better captures the regime relevant to learning; and show examples of how an analysis based on such optimistic rates is necessary in order to understand various learning phenomena.
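To make the gap between the two regimes concrete, here is a small illustrative sketch (not from the talk) comparing a generic worst-case bound of the form m = O((d + log(1/δ))/ε²) with an optimistic bound of the form m = O((d + log(1/δ))/ε); the constant-free form, the dimension parameter `d`, and the confidence parameter `delta` are illustrative assumptions, not the talk's actual bounds.

```python
import math

def worst_case_samples(eps, delta=0.05, d=10):
    # Worst-case (agnostic-style) rate: samples grow like 1/eps^2.
    # d and delta stand in for complexity and confidence terms (illustrative).
    return math.ceil((d + math.log(1 / delta)) / eps**2)

def optimistic_samples(eps, delta=0.05, d=10):
    # "Optimistic" rate (e.g. when the attainable error is small):
    # samples grow like 1/eps -- linear rather than quadratic.
    return math.ceil((d + math.log(1 / delta)) / eps)

# The gap between the two rates widens as the excess error eps shrinks.
for eps in (0.1, 0.01, 0.001):
    print(eps, worst_case_samples(eps), optimistic_samples(eps))
```

For ε = 0.01 the worst-case sketch already requires roughly 100× as many samples as the optimistic one, which is why the distinction matters precisely in the small-error regime the talk focuses on.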