Optimal Distributed Online Prediction Using Mini-Batches
published: Jan. 13, 2011, recorded: December 2010, views: 4612
Online prediction methods are typically presented as serial algorithms running on a single processor. However, in the age of web-scale prediction problems, it is increasingly common to encounter situations where a single processor cannot keep up with the high rate at which inputs arrive. In this work we present the distributed mini-batch algorithm, a method of converting any serial gradient-based online prediction algorithm into a distributed algorithm that scales nicely to multiple cores, clusters, and grids. We prove a regret bound for this method that is asymptotically optimal for smooth convex loss functions and stochastic inputs. Moreover, our regret analysis explicitly takes into account communication latencies that occur on the network. Our method can also be adapted to optimally solve the closely-related distributed stochastic optimization problem.
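The core idea in the abstract can be illustrated with a small simulation. The sketch below is not the authors' code; it assumes a toy scalar squared loss and simulates k workers by slicing each mini-batch, averaging the workers' local gradients, and applying one serial gradient step, which is the conversion pattern the abstract describes. All names (`grad_squared_loss`, `distributed_minibatch_step`, the step size `eta`) are hypothetical choices for illustration.

```python
import random

# Hedged sketch of the distributed mini-batch idea: each of k (simulated)
# workers processes a slice of a size-b mini-batch, computes the average
# gradient of a smooth convex loss over its slice, and the network-averaged
# gradient drives a single serial update (plain gradient descent here).

def grad_squared_loss(w, x, y):
    """Gradient of the squared loss (w*x - y)^2 / 2 with respect to scalar w."""
    return (w * x - y) * x

def distributed_minibatch_step(w, batch, k, eta):
    """One update: split the mini-batch across k workers, average their
    local average gradients, then take one gradient step."""
    slices = [batch[i::k] for i in range(k)]   # worker i's share of inputs
    local_avgs = [
        sum(grad_squared_loss(w, x, y) for x, y in s) / len(s)
        for s in slices if s
    ]
    g = sum(local_avgs) / len(local_avgs)      # averaged across workers
    return w - eta * g

# Toy input stream: pairs (x, y) with y = 2*x, so the optimum is w = 2.
random.seed(0)
w = 0.0
for _ in range(200):
    batch = [(x, 2 * x) for x in (random.uniform(0.5, 1.5) for _ in range(8))]
    w = distributed_minibatch_step(w, batch, k=4, eta=0.3)
```

The point of the pattern is that the serial update rule is untouched; only the gradient computation is parallelized, which is why any gradient-based online algorithm can be plugged in.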
Download slides: nipsworkshops2010_xiao_odo_01.pdf (187.6 KB)