Distributed Exploration in Multi-Armed Bandits

Published on Nov 07, 2014 · 1789 views

We study exploration in Multi-Armed Bandits (MAB) in a setting where k players collaborate in order to identify an ϵ-optimal arm. Our motivation comes from the recent employment of MAB algorithms in computationally intensive, large-scale applications.
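To make the setting concrete, below is a minimal sketch of a one-transmission collaborative protocol in the spirit of the talk's "Intuition (1 transmission)" chapter; it is not the paper's algorithm. Assumed for illustration: Bernoulli arms, k players who each explore independently with a fixed pull budget, and a coordinator that takes a plurality vote over the single arm index each player transmits. All function names and parameters are hypothetical.

```python
import random

def player_explore(means, budget, rng):
    """One player: pull each arm equally often, report the empirical best arm."""
    n = len(means)
    pulls = budget // n
    estimates = [
        sum(rng.random() < means[i] for _ in range(pulls)) / max(pulls, 1)
        for i in range(n)
    ]
    return max(range(n), key=lambda i: estimates[i])

def collaborative_best_arm(means, k=8, budget_per_player=2000, seed=0):
    """k players explore independently; one communication round (each player
    transmits a single arm index) selects the plurality winner."""
    rng = random.Random(seed)
    votes = [player_explore(means, budget_per_player, rng) for _ in range(k)]
    return max(set(votes), key=votes.count)

if __name__ == "__main__":
    arm_means = [0.40, 0.50, 0.60, 0.55]        # arm 2 is optimal
    print(collaborative_best_arm(arm_means))    # typically prints 2
```

The point of the sketch is the trade-off the talk studies: each player spends only its own budget, and the single round of communication aggregates their independent guesses, trading communication for parallel speedup in identifying a near-optimal arm.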


Chapter list

00:00  Distributed Exploration in Multi-Armed Bandits
00:17  Distributed MAB setup
01:47  Intuition (1 transmission)
03:17  Our results (for k players)