Suboptimality of MDL and Bayes in Classification under Misspecification

author: Peter Grünwald, Centrum Wiskunde & Informatica (CWI)
published: Feb. 25, 2007,   recorded: October 2005

Description

We show that forms of Bayesian and MDL learning that are often applied to classification problems can be *statistically inconsistent*. We present a large family of classifiers and a distribution such that the best classifier within the model has generalization error (expected 0/1-prediction loss) almost 0. Nevertheless, no matter how many data are observed, both the classifier inferred by MDL and the classifier based on the Bayesian posterior will behave much worse than this best classifier in the sense that their expected 0/1-prediction loss is substantially larger. Our result can be re-interpreted as showing that under misspecification, Bayes and MDL do not always converge to the distribution in the model that is closest in KL divergence to the data generating distribution. We compare this result with earlier results on Bayesian inconsistency by Diaconis, Freedman and Barron.
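The abstract says that under misspecification, Bayes and MDL need not converge to the model element closest in KL divergence to the data-generating distribution. As a minimal sketch of what that target point is, the toy example below (not the paper's construction; the distributions and the parametric family are invented for illustration) computes the KL projection of a distribution onto a misspecified one-parameter model by grid search:

```python
import math

def kl(p, q):
    """KL divergence D(p || q) for discrete distributions given as prob lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical data-generating distribution over three outcomes.
p_true = [0.5, 0.3, 0.2]

# Misspecified model family: q(theta) = [theta, theta, 1 - 2*theta].
# p_true is not in this family, since its first two coordinates differ.
def q(theta):
    return [theta, theta, 1 - 2 * theta]

# Grid search for the KL projection of p_true onto the model, i.e. the
# model element the paper shows Bayes/MDL may fail to converge to.
thetas = [i / 1000 for i in range(1, 500)]
best = min(thetas, key=lambda t: kl(p_true, q(t)))
print(round(best, 3))
```

For this family the minimizer can also be found analytically: the KL objective is, up to constants, `-0.8*log(theta) - 0.2*log(1 - 2*theta)`, whose derivative vanishes at `theta = 0.4`, matching the grid search.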


Download slides: eurandom_2005.ppt (4.3 MB)

