Activized Learning: Transforming Passive to Active with Improved Label Complexity
published: Jan. 15, 2009, recorded: October 2008, views: 532
In active learning, a learning algorithm is given access to a large pool of unlabeled examples and may interactively request the labels of particular examples in that pool. In empirically driven research, one of the most common techniques for designing new active learning algorithms is to use an existing passive learning algorithm as a subroutine and to actively construct a training set for it by carefully choosing informative examples to label. The resulting active learning algorithms inherit the tried-and-true learning bias of the underlying passive algorithm, while often requiring significantly fewer labels than random sampling to achieve a given accuracy.
This naturally raises the theoretical question of whether every passive learning algorithm can be "activized", or transformed into an active learning algorithm that uses a smaller number of labels to achieve a given accuracy. In this talk, I will address precisely this question. In particular, I will explain how to use any passive learning algorithm as a subroutine to construct an active learning algorithm that provably achieves a strictly superior asymptotic label complexity. Along the way, I will also describe many of the recent advances in the formal study of the potential benefits of active learning in general.
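The pool-based strategy described in the abstract, a passive learner used as a subroutine and fed labels for carefully chosen informative examples, can be sketched as follows. This is an illustrative toy, not the activizer construction from the talk: it assumes a 1-D threshold hypothesis class, a midpoint-fitting passive subroutine, and an uncertainty-style query rule, and all function names here are invented for the example.

```python
def fit_threshold(labeled):
    # Passive subroutine: fit a 1-D threshold classifier by taking the
    # midpoint between the largest negative and smallest positive example.
    neg = [x for x, y in labeled if y == 0]
    pos = [x for x, y in labeled if y == 1]
    lo = max(neg) if neg else 0.0
    hi = min(pos) if pos else 1.0
    return (lo + hi) / 2.0


def active_learn(pool, oracle, budget):
    # Active wrapper: seed with the two extreme pool points, then
    # repeatedly query the unlabeled point closest to the current
    # threshold (the point the hypothesis is least certain about)
    # and refit the passive subroutine on all labels gathered so far.
    unlabeled = sorted(pool)
    labeled = [(unlabeled[0], oracle(unlabeled[0])),
               (unlabeled[-1], oracle(unlabeled[-1]))]
    unlabeled = unlabeled[1:-1]
    t = fit_threshold(labeled)
    for _ in range(max(0, budget - 2)):
        if not unlabeled:
            break
        x = min(unlabeled, key=lambda u: abs(u - t))
        unlabeled.remove(x)
        labeled.append((x, oracle(x)))
        t = fit_threshold(labeled)
    return t
```

For threshold classifiers this query rule behaves like binary search on the decision boundary, the classic case in which active learning yields an exponential improvement in label complexity over labeling randomly sampled points.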
Download slides: cmulls08_hanneke_altp_01.pdf (1.2 MB)
Download slides: cmulls08_hanneke_altp_01.ppt (2.9 MB)