Regularization and Computations: Early stopping for Online Learning
published: Oct. 6, 2014, recorded: December 2013, views: 1683
Download slides: nipsworkshops2013_rosasco_regularization_01.pdf (4.0 MB)
Early stopping is one of the most appealing heuristics when dealing with big data, since the computational resources required for learning are directly linked to the desired generalization properties. Interestingly, the theoretical foundations of learning with early stopping have only recently been developed, and only for the case of classical batch gradient descent.
In this talk, we discuss and analyze the potential impact of early stopping for online learning in a stochastic setting. More precisely, we study the estimator defined by the incremental gradient descent of the (unregularized) empirical risk, and show that it is universally consistent when provided with a universal step size and a suitable early stopping rule. Our results shed light on the need to consider several passes over the data (epochs) in online learning.
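The idea can be sketched in a few lines of code. The following is an illustrative example, not the talk's actual algorithm: incremental (one-sample-at-a-time) gradient descent on the unregularized least-squares empirical risk, run over several epochs, with a simple validation-based early stopping rule standing in for the theoretically grounded rule studied in the talk. The step size, patience parameter, and function name are all assumptions made for the demo.

```python
import numpy as np

def incremental_gd_early_stop(X, y, X_val, y_val,
                              step=0.01, max_epochs=100, patience=5):
    """Incremental gradient descent on the unregularized squared loss.

    One epoch = one full incremental pass over the training data.
    Training stops when the validation error has not improved for
    `patience` consecutive epochs (a common stand-in stopping rule;
    the talk studies a theoretically motivated rule instead).
    """
    n, d = X.shape
    w = np.zeros(d)
    best_w, best_err, stall = w.copy(), np.inf, 0
    for epoch in range(max_epochs):
        for i in range(n):
            # Gradient of the squared loss on the single sample (X[i], y[i])
            residual = X[i] @ w - y[i]
            w -= step * residual * X[i]
        val_err = np.mean((X_val @ w - y_val) ** 2)
        if val_err < best_err:
            best_err, best_w, stall = val_err, w.copy(), 0
        else:
            stall += 1
            if stall >= patience:
                break  # early stopping: validation error stopped improving
    return best_w, best_err

# Synthetic linear-regression demo
rng = np.random.default_rng(0)
w_true = rng.standard_normal(5)
X = rng.standard_normal((200, 5))
y = X @ w_true + 0.1 * rng.standard_normal(200)
w_hat, err = incremental_gd_early_stop(X[:150], y[:150], X[150:], y[150:])
```

Note that the outer loop over epochs is essential: a single incremental pass over the data is generally not enough for the iterates to approach the empirical risk minimizer, which is exactly the point the talk makes about multiple passes in online learning.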