published: July 15, 2014, recorded: June 2014, views: 3662
Herding is an algorithm of recent interest in the machine learning community, motivated by inference in Markov random fields. It solves the following sampling problem: given a set X ⊂ R^d with mean μ, construct an infinite sequence of points from X such that, for every t ≥ 1, the mean of the first t points in that sequence lies within Euclidean distance O(1/t) of μ. The classic Perceptron boundedness theorem implies that such a result actually holds for a wide class of algorithms, although the factors suppressed by the O(1/t) notation are exponential in d. Thus, to establish a non-trivial result for the sampling problem, one must carefully analyze the factors suppressed by the O(1/t) error bound.
This paper studies the best error that can be achieved for the sampling problem. Known analyses of the Herding algorithm give an error bound that depends on geometric properties of X but, even under favorable conditions, this bound depends linearly on d. We present a new polynomial-time algorithm that solves the sampling problem with error O(√d log^2.5|X| / t), assuming that X is finite. Our algorithm is based on recent algorithmic results in discrepancy theory. We also show that any algorithm for the sampling problem must have error Ω(√d / t). This implies that our algorithm is optimal to within logarithmic factors.
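For readers unfamiliar with the baseline, the sketch below illustrates the classic greedy herding update (pick the point most aligned with a running weight vector, then subtract it off), not the discrepancy-based algorithm of the paper. The NumPy setup, the `herding` function name, and the random test set are illustrative assumptions.

```python
import numpy as np

def herding(X, num_steps):
    """Classic herding: greedily pick points of X so that the running mean
    of the selected points tracks the mean of X.

    X: (n, d) array whose rows are the candidate points.
    Returns the index of the chosen point at each step.
    """
    mu = X.mean(axis=0)   # target mean
    w = mu.copy()         # herding weight vector
    chosen = []
    for _ in range(num_steps):
        i = int(np.argmax(X @ w))   # point most aligned with current weights
        chosen.append(i)
        w += mu - X[i]              # standard herding update
    return chosen

# The running mean of the selected points approaches mu at rate roughly 1/t.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
idx = herding(X, 1000)
running_mean = np.cumsum(X[idx], axis=0) / np.arange(1, len(idx) + 1)[:, None]
errors = np.linalg.norm(running_mean - X.mean(axis=0), axis=1)
print(errors[[9, 99, 999]])   # error after t = 10, 100, 1000 steps
```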