Faster Rates via Active Learning
published: Feb. 25, 2007, recorded: October 2005, views: 3733
Traditional sampling and statistical learning theories deal with data collection processes that are completely independent of the target function to be estimated, aside from possible a priori specifications reflecting assumed properties of the target. We refer to such processes as passive learning methods. Alternatively, one can envision sequential, adaptive data collection procedures that use information gleaned from previous observations to guide the process. We refer to such feedback-driven processes as active learning methods. While there have been many successful practical applications of active learning, there is scant theoretical evidence to support the effectiveness of active over passive learning. This talk covers some of the most encouraging theoretical results to date, and focuses on new results regarding the capabilities of active methods for learning (nonparametric) smooth and piecewise smooth functions. Significantly faster rates of error convergence are achieved by active learning compared to passive learning in cases involving functions whose complexity is highly concentrated within small regions of their domain (e.g., functions that are smoothly varying apart from highly localized abrupt changes such as jumps or edges). This is joint work with Rui Castro and Rebecca Willett. Please see our online technical report for further details: http://www.ece.wisc.edu/~nowak/ECE-05-03.pdf
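The intuition behind the faster rates can be illustrated with a minimal sketch (a hypothetical toy example, not the algorithm from the talk or the technical report): when a function's complexity is concentrated at a single jump, a two-stage active strategy can spend a coarse first pass locating the interval that brackets the jump and then concentrate the remaining sample budget inside that interval, whereas a passive strategy must spread its entire budget uniformly. The target function `f`, the jump location 0.3, and the budget split are arbitrary choices for illustration.

```python
import numpy as np

def f(x):
    # Piecewise-constant target with a single jump (edge) at x = 0.3.
    return np.where(x < 0.3, 0.0, 1.0)

def passive_estimate_jump(n, rng):
    # Passive: n samples drawn uniformly at random, independent of the target.
    x = np.sort(rng.uniform(0.0, 1.0, n))
    y = f(x)
    i = np.argmax(np.abs(np.diff(y)))      # largest observed change
    return 0.5 * (x[i] + x[i + 1])          # midpoint of the bracketing pair

def active_estimate_jump(n, rng):
    # Stage 1: spend half the budget on a coarse uniform grid.
    x = np.linspace(0.0, 1.0, n // 2)
    y = f(x)
    i = np.argmax(np.abs(np.diff(y)))
    lo, hi = x[i], x[i + 1]                  # interval bracketing the jump
    # Stage 2: refine within that interval with the remaining budget.
    x2 = np.linspace(lo, hi, n - n // 2)
    y2 = f(x2)
    j = np.argmax(np.abs(np.diff(y2)))
    return 0.5 * (x2[j] + x2[j + 1])

rng = np.random.default_rng(0)
n = 40
passive_err = abs(passive_estimate_jump(n, rng) - 0.3)
active_err = abs(active_estimate_jump(n, rng) - 0.3)
print(f"passive error: {passive_err:.5f}")
print(f"active  error: {active_err:.5f}")
```

With the same budget of n samples, the active estimate's error is on the order of the stage-2 grid spacing, roughly 1/n² here, while the passive estimate's error scales like the typical 1/n gap between uniform samples — a toy analogue of the rate improvement the talk establishes for piecewise smooth functions.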