Hybrid Stochastic-Adversarial On-Line Learning
published: Oct. 20, 2009, recorded: September 2009, views: 3219
Most of the research in online learning has focused either on the problem of adversarial classification (i.e., both inputs and labels are arbitrarily chosen by an adversary) or on the traditional supervised learning problem in which samples are i.i.d. according to a probability distribution. Nonetheless, in a number of domains the relationship between inputs and labels may be adversarial, while the inputs themselves are generated according to a fixed distribution. This scenario can be formalized as a hybrid classification problem in which inputs are i.i.d. and labels are adversarial. In this paper we introduce the hybrid stochastic-adversarial problem, propose an online learning algorithm for its solution, and analyze its performance. In particular, we show that, given a hypothesis space H with finite VC dimension, it is possible to incrementally build a suitable finite set of hypotheses that can be used as input for an exponentially weighted forecaster, achieving a cumulative regret over n rounds of order O(√(n·VC(H)·log n)) with overwhelming probability.
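The core prediction component mentioned in the abstract, the exponentially weighted forecaster over a finite set of hypotheses, can be sketched as follows. This is a generic Hedge-style implementation, not the paper's specific construction: the hypothesis set, the learning rate eta, and the 0-1 loss are illustrative assumptions; the paper's contribution of incrementally building the finite hypothesis set from a VC class is not reproduced here.

```python
import math

def exponentially_weighted_forecaster(hypotheses, stream, eta):
    """Run an exponentially weighted average forecaster over a finite
    hypothesis set on a stream of (input, label) pairs with 0/1 labels.

    Returns the forecaster's cumulative 0-1 loss and the cumulative
    loss of the best single hypothesis in hindsight.
    """
    weights = [1.0] * len(hypotheses)
    forecaster_loss = 0.0
    expert_losses = [0.0] * len(hypotheses)
    for x, y in stream:
        total = sum(weights)
        # Weighted vote of the hypotheses' predictions; predict the
        # label carrying at least half of the total weight.
        vote_for_one = sum(w for w, h in zip(weights, hypotheses) if h(x) == 1)
        prediction = 1 if vote_for_one / total >= 0.5 else 0
        forecaster_loss += 1 if prediction != y else 0
        # Multiplicative update: each hypothesis is penalized
        # exponentially in its own 0-1 loss on this round.
        for i, h in enumerate(hypotheses):
            loss = 1 if h(x) != y else 0
            expert_losses[i] += loss
            weights[i] *= math.exp(-eta * loss)
    return forecaster_loss, min(expert_losses)

# Illustrative usage: threshold classifiers on [0, 1] as the finite
# hypothesis set, labels generated by the threshold-0.5 hypothesis,
# so the best expert suffers zero loss.
hyps = [lambda x, t=t: 1 if x >= t else 0 for t in [0.0, 0.25, 0.5, 0.75]]
stream = [(0.3, 0), (0.1, 0), (0.6, 1), (0.9, 1), (0.2, 0), (0.7, 1)]
fc_loss, best_loss = exponentially_weighted_forecaster(hyps, stream, eta=1.0)
```

The regret is fc_loss − best_loss; with a learning rate tuned as in the standard analysis, it grows as O(√(n log N)) for N experts, which combined with a hypothesis set of size polynomial in n yields a bound of the O(√(n·VC(H)·log n)) flavor stated in the abstract.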