Well-known shortcomings, advantages and computational challenges in Bayesian modelling: a few case stories

author: Ole Winther, Technical University of Denmark
published: Oct. 9, 2008,   recorded: September 2008,   views: 4500




Bayesian inference can be used to judge the data fit quantitatively through the marginal likelihood. In many practical cases only one model is considered, and parameter averaging is simply used to avoid overfitting. I show such an example for a large data set of genomic sequence tags, where we want to predict how many new unique tags we will find if we perform new sequencing. The two-parameter Pitman-Yor process is used, and the results illustrate a few well-known facts: parameter averaging can be crucial, and large data sets will expose the inadequacy of the model, as seen by unrealistically narrow error bars on (cross-validated) predictions. This indicates that we should develop better models and be able to calculate the marginal likelihood for these models in order to perform model selection. In the second part of the talk I will discuss some of the computational challenges of calculating marginal likelihoods. Gaussian process classification is used as an example to illustrate that this is hard even for a uni-modal posterior.
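As a rough illustration of the kind of prediction described above (not the talk's actual data or code): under the two-parameter Pitman-Yor process with discount d and concentration theta, the expected number of unique tags among n draws has a closed form, and the same quantity can be simulated sequentially via the generalized Chinese restaurant scheme, where a draw is a new tag with probability (theta + d*K)/(theta + i) after i draws with K uniques. The parameter values below are arbitrary choices for the sketch.

```python
import math
import random

def expected_unique(n, d, theta):
    """Closed-form E[K_n] for Pitman-Yor(d, theta), 0 < d < 1:
    E[K_n] = (theta/d) * ((theta+d)_n / (theta)_n - 1),
    with (x)_n the rising factorial, computed via lgamma for stability."""
    log_ratio = (math.lgamma(theta + d + n) - math.lgamma(theta + d)
                 - math.lgamma(theta + n) + math.lgamma(theta))
    return (theta / d) * (math.exp(log_ratio) - 1.0)

def simulate_unique(n, d, theta, rng):
    """Sequential sampling: the (i+1)-th draw is a NEW unique tag
    with probability (theta + d*K) / (theta + i), K = uniques so far."""
    K = 0
    for i in range(n):
        if rng.random() < (theta + d * K) / (theta + i):
            K += 1
    return K

rng = random.Random(0)
n, d, theta = 10000, 0.5, 10.0  # illustrative values, not from the talk
sims = [simulate_unique(n, d, theta, rng) for _ in range(200)]
print("closed form:", expected_unique(n, d, theta))
print("simulation :", sum(sims) / len(sims))
```

Fitting d and theta to observed tag counts and then evaluating the formula at a larger n gives exactly the "how many new unique tags will further sequencing yield" prediction the abstract refers to; averaging over the parameter posterior rather than plugging in point estimates is the parameter averaging the talk highlights.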


Download slides: bark08_winther_wksaacc_01.pdf (1.4 MB)
