Thompson Sampling: a provably good Bayesian heuristic for bandit problems
published: Nov. 7, 2013, recorded: September 2013, views: 6327
The multi-armed bandit problem is a basic model for managing the exploration/exploitation trade-off that arises in many situations. Thompson Sampling [Thompson 1933] is one of the earliest heuristics for the multi-armed bandit problem, and it has recently seen a surge of interest due to its elegance, flexibility, efficiency, and promising empirical performance. In this talk, I will discuss recent results showing that Thompson Sampling achieves near-optimal regret for several popular variants of the multi-armed bandit problem, including linear contextual bandits. Interestingly, these works provide a prior-free, frequentist-style analysis of a Bayesian heuristic, and thereby rigorous support for the intuition that once you acquire enough data, it does not matter what prior you started from, because your posterior will be accurate enough.
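To make the heuristic concrete, here is a minimal sketch of Thompson Sampling for the Bernoulli bandit, the simplest setting the abstract refers to. It assumes independent Beta(1,1) (uniform) priors on each arm's success probability; the function name, seeding, and return values are illustrative choices, not from the talk itself.

```python
import random

def thompson_sampling(true_means, horizon, seed=0):
    """Thompson Sampling for Bernoulli bandits with Beta(1,1) priors (sketch).

    true_means: unknown success probability of each arm (used only to
    simulate rewards); horizon: number of rounds to play.
    """
    rng = random.Random(seed)
    k = len(true_means)
    successes = [0] * k  # posterior for arm i is Beta(successes[i]+1, failures[i]+1)
    failures = [0] * k
    total_reward = 0
    for _ in range(horizon):
        # Sample a plausible mean for each arm from its current posterior...
        samples = [rng.betavariate(successes[i] + 1, failures[i] + 1)
                   for i in range(k)]
        # ...and play the arm whose sample is largest.
        arm = max(range(k), key=lambda i: samples[i])
        # Observe a Bernoulli reward and update that arm's posterior counts.
        reward = 1 if rng.random() < true_means[arm] else 0
        if reward:
            successes[arm] += 1
        else:
            failures[arm] += 1
        total_reward += reward
    return total_reward, successes, failures
```

Because posterior sampling naturally balances exploration and exploitation, after enough rounds the arm with the higher true mean is pulled far more often, which is the "prior washes out" intuition the analysis makes rigorous.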
Download slides: lsoldm2013_agrawal_thompson_sampling_01.pdf (809.7 KB)