The Sample-Computational Tradeoff
published: Jan. 16, 2013, recorded: December 2012, views: 3907
When analyzing the error of a learning algorithm, it is common to decompose the error into approximation error (measuring how well the hypothesis class fits the problem) and estimation error (arising because we only receive a finite training set). In practice, we usually incur an additional error, called optimization error, because we have limited computational power. I will describe this triple tradeoff and demonstrate how more training examples can lead to more efficient learning algorithms.
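For concreteness, the decomposition in the abstract can be written as a telescoping sum. The following is a minimal sketch in standard notation; the symbols L, h*, h_S, and the hatted hypothesis are my own choices and are not taken from the slides.

% Error decomposition sketch (standard notation; the symbols below are
% assumptions, not taken from the lecture slides).
% L(h)    : true risk of hypothesis h
% L^*     : Bayes risk, the minimum of L over all predictors
% h^*     : best hypothesis in the class H, argmin_{h in H} L(h)
% h_S     : empirical risk minimizer on the finite sample S
% \hat{h} : hypothesis actually returned under a limited compute budget
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{align*}
L(\hat{h}) - L^{*}
  &= \underbrace{L(h^{*}) - L^{*}}_{\text{approximation error}}
   + \underbrace{L(h_S) - L(h^{*})}_{\text{estimation error}}
   + \underbrace{L(\hat{h}) - L(h_S)}_{\text{optimization error}}
\end{align*}
\end{document}

Shrinking the estimation term costs samples, while shrinking the optimization term costs computation; the talk's claim is that extra training examples can sometimes be spent to relax the optimization burden, yielding more efficient algorithms.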
Download slides: nipsworkshops2012_shalev_shwartz_tradeoff_01.pdf (614.9 KB)