Distributed Exploration in Multi-Armed Bandits

author: Tomer Koren, Technion - Israel Institute of Technology
published: Nov. 7, 2014,   recorded: January 2014



We study exploration in Multi-Armed Bandits (MAB) in a setting where k players collaborate in order to identify an ϵ-optimal arm. Our motivation comes from the recent employment of MAB algorithms in computationally intensive, large-scale applications. Our results demonstrate a non-trivial tradeoff between the number of arm pulls required by each of the players and the amount of communication between them. In particular, our main result shows that by allowing the k players to communicate only once, they are able to learn √k times faster than a single player. That is, distributing learning to k players gives rise to a factor-√k parallel speed-up. We complement this result with a lower bound showing that this is in general the best possible. On the other extreme, we present an algorithm that achieves the ideal factor-k speed-up in learning performance, with communication only logarithmic in 1/ϵ.
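To make the communicate-once setting concrete, here is a minimal simulation sketch (not the paper's algorithm, whose exploration schedule is more refined in order to obtain the √k speed-up): each of k players independently pulls every Bernoulli arm some number of times, reports its empirically best arm, and a single round of communication resolves the answer by majority vote. The function name `explore_once` and all parameters are illustrative assumptions.

```python
import random

def explore_once(arm_means, k, pulls_per_arm, rng):
    """Illustrative communicate-once protocol: each of k players
    independently estimates every arm's mean, reports its empirically
    best arm, and the final answer is a majority vote over reports.
    (A simplified sketch, not the algorithm from the talk.)"""
    votes = []
    for _ in range(k):
        # Each player pulls every arm pulls_per_arm times (Bernoulli rewards).
        estimates = [
            sum(rng.random() < mu for _ in range(pulls_per_arm)) / pulls_per_arm
            for mu in arm_means
        ]
        votes.append(max(range(len(arm_means)), key=estimates.__getitem__))
    # One round of communication: collect the k votes, return the mode.
    return max(set(votes), key=votes.count)

rng = random.Random(0)
# Arm 0 has a clear gap, so the vote identifies it with high probability.
best = explore_once([0.9, 0.5, 0.5, 0.5], k=9, pulls_per_arm=2000, rng=rng)
print(best)
```

With a clear gap between the best arm and the rest, each player's individual vote is already correct with high probability, and the majority vote drives the failure probability down exponentially in k; the interesting regime in the talk is when per-player budgets are too small for individual success.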

Slides: machine_koren_distributed_exploration_01.pdf (265.5 KB)
