Maximum Likelihood vs. Sequential Normalized Maximum Likelihood in On-line Density Estimation
published: Aug. 2, 2011, recorded: July 2011, views: 3760
The paper considers sequential prediction of individual sequences with log loss (online density estimation) using an exponential family of distributions. We first analyze the regret of the maximum likelihood ("follow the leader") strategy. We find that this strategy is (1) suboptimal and (2) requires an additional boundedness assumption on the data sequence. We then show that both problems can be addressed by adding the currently predicted outcome to the maximum likelihood calculation and then normalizing the resulting distribution. The strategy obtained in this way is known in the literature as the sequential normalized maximum likelihood or last-step minimax strategy. We show for the first time that for general exponential families the regret is bounded by the familiar (k/2) log n and is thus optimal up to O(1). We also show the relationship to the Bayes strategy with Jeffreys' prior.
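To make the construction concrete, here is a minimal sketch of the SNML/last-step-minimax predictor for the Bernoulli model, the simplest one-parameter exponential family; the choice of family and the function names (ml_joint, snml_predict_one) are our own illustration, not taken from the paper. To predict the next outcome, each candidate value is hypothetically appended to the observed sequence, the maximum likelihood is recomputed for the extended sequence, and the maximized likelihoods are normalized into a distribution:

```python
def ml_joint(s: int, n: int) -> float:
    """Maximized likelihood of a Bernoulli sequence of length n containing s ones."""
    if n == 0:
        return 1.0
    p = s / n  # maximum likelihood estimate
    # Python's 0.0 ** 0 == 1.0 handles the boundary estimates p in {0, 1}.
    return p ** s * (1 - p) ** (n - s)

def snml_predict_one(s: int, n: int) -> float:
    """SNML probability that the next outcome is 1, given s ones in n trials."""
    num1 = ml_joint(s + 1, n + 1)  # append a hypothetical 1, refit the ML
    num0 = ml_joint(s, n + 1)      # append a hypothetical 0, refit the ML
    return num1 / (num0 + num1)    # normalize over the candidate outcomes

# Unlike plain maximum likelihood ("follow the leader"), which assigns
# probability 0 to an outcome it has never seen (incurring infinite log loss),
# SNML stays strictly inside the probability simplex:
print(snml_predict_one(0, 0))  # 0.5 before any data is seen
print(snml_predict_one(5, 5))  # about 0.937, < 1 even after five ones in a row
```

The sketch shows why the normalization step matters: the extra hypothetical outcome keeps every prediction bounded away from 0 and 1, which is exactly how the strategy avoids the boundedness assumption that plain maximum likelihood requires.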