Trading off Mistakes and Don't-Know Predictions
published: March 25, 2011, recorded: December 2010, views: 3466
We discuss an online learning framework in which the agent is allowed to say "I don't know", in addition to making incorrect predictions, on given examples. We analyze the trade-off between saying "I don't know" and making mistakes. If the number of don't-know predictions is forced to be zero, the model reduces to the well-known mistake-bound model introduced by Littlestone [Lit88]. On the other hand, if no mistakes are allowed, the model reduces to the KWIK framework introduced by Li et al. [LLW08]. We propose a general, though inefficient, algorithm for arbitrary finite concept classes that minimizes the number of don't-know predictions when a certain number of mistakes is allowed. We then present specific polynomial-time algorithms for the concept classes of monotone disjunctions and linear separators.
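The trade-off described above can be illustrated with a minimal sketch (this is an assumption-laden toy, not the paper's algorithm): a version-space learner over a finite concept class that predicts by majority vote (halving-style) while it still has a mistake budget, and falls back to "don't know" once the budget is spent and the surviving concepts disagree.

```python
# Illustrative sketch, NOT the algorithm from the paper: a version-space
# learner over a finite concept class with a mistake budget. Concepts are
# Boolean functions; the environment reveals the true label after each round.

def make_learner(concepts, mistake_budget):
    version_space = list(concepts)   # concepts still consistent with all feedback
    state = {"mistakes": 0}

    def predict(x):
        votes = [c(x) for c in version_space]
        ones = sum(votes)
        if ones == len(votes) or ones == 0:
            return votes[0]          # all surviving concepts agree: safe to commit
        if state["mistakes"] < mistake_budget:
            # Majority vote: each mistake eliminates at least half the version space
            return ones * 2 >= len(votes)
        return None                  # budget spent and concepts disagree: "don't know"

    def update(x, y):
        nonlocal version_space
        pred = predict(x)
        if pred is not None and pred != y:
            state["mistakes"] += 1
        version_space = [c for c in version_space if c(x) == y]
        return pred

    return predict, update
```

With `mistake_budget` equal to log2 of the class size, the learner behaves like the halving algorithm and never needs to say "don't know"; with a budget of zero it only answers when every consistent concept agrees, mirroring the KWIK regime.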
Download slides: nips2010_blum_tom_01.pdf (242.2 KB)
Download article: nips2010_1297.pdf (108.8 KB)