
Distributed Exploration in Multi-Armed Bandits
Published on 2014-11-07 · 1794 views
We study exploration in Multi-Armed Bandits (MAB) in a setting where k players collaborate in order to identify an ϵ-optimal arm. Our motivation comes from recent employment of MAB algorithms in computationally intensive, large-scale applications.
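To make the setting concrete, here is a minimal sketch (in Python, with hypothetical function names and a Bernoulli reward model) of a one-round collaborative exploration protocol matching the "1 transmission" intuition from the talk: each of the k players explores all arms independently, then transmits a single message with its candidate arm, and a coordinator aggregates the reports. The aggregation rule below is illustrative only and is not the algorithm from the talk.

```python
import random

def pull(p):
    """Sample a Bernoulli arm with success probability p."""
    return 1.0 if random.random() < p else 0.0

def player_explore(means, pulls_per_arm):
    """One player: sample every arm uniformly, return its empirically
    best arm together with that arm's empirical mean reward."""
    estimates = []
    for p in means:
        total = sum(pull(p) for _ in range(pulls_per_arm))
        estimates.append(total / pulls_per_arm)
    best = max(range(len(means)), key=lambda i: estimates[i])
    return best, estimates[best]

def one_round_protocol(means, k, pulls_per_arm):
    """k players explore independently, then each transmits one message
    (candidate arm, estimate); the coordinator outputs the candidate
    backed by the highest reported estimate (an illustrative rule)."""
    reports = [player_explore(means, pulls_per_arm) for _ in range(k)]
    best_arm, _ = max(reports, key=lambda r: r[1])
    return best_arm

if __name__ == "__main__":
    arm_means = [0.40, 0.50, 0.45, 0.55]  # arm 3 is the best arm
    print(one_round_protocol(arm_means, k=8, pulls_per_arm=200))
```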
Presentation
00:00 Distributed Exploration in Multi-Armed Bandits
00:17 Distributed MAB setup
01:47 Intuition (1 transmission)
03:17 Our results (for k players)