Well-known shortcomings, advantages and computational challenges in Bayesian modelling: a few case stories
published: Oct. 9, 2008, recorded: September 2008, views: 358
Bayesian inference lets us judge model fit quantitatively through the marginal likelihood. In many practical cases only one model is considered, and parameter averaging is used simply to avoid overfitting. I show such an example for a large data set of genomic sequence tags, where we want to predict how many new unique tags we will find if we perform additional sequencing. The two-parameter Pitman-Yor process is used, and the results illustrate a few well-known facts: parameter averaging can be crucial, and large data sets will expose the inadequacy of the model, seen here as unrealistically narrow error bars on (cross-validated) predictions. This indicates that we should come up with better models and be able to calculate the marginal likelihood for these models in order to perform model selection.

In the second part of the talk I discuss some of the computational challenges of calculating marginal likelihoods. Gaussian process classification is used as an example to illustrate that this is hard even for a unimodal posterior.
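The kind of prediction described in the first part can be sketched with the sequential (Chinese-restaurant) construction of the two-parameter Pitman-Yor process: customer i+1 opens a new table with probability (theta + K*d)/(i + theta), where K is the number of tables so far, and otherwise joins table k with probability proportional to (n_k - d). Tables play the role of unique sequence tags. This is a minimal simulation sketch, not the talk's actual fitting procedure; the parameter values `d=0.6` and `theta=10.0` are illustrative assumptions.

```python
import random

def pitman_yor_unique(n, d, theta, seed=0):
    """Seat n customers sequentially in a Pitman-Yor CRP(d, theta);
    return the number of unique tables (unique tags) after each step."""
    rng = random.Random(seed)
    counts = []            # occupancy of each table
    uniques = []
    for i in range(n):     # i customers already seated
        u = rng.random() * (i + theta)
        if u < theta + len(counts) * d:
            counts.append(1)               # open a new table
        else:
            u -= theta + len(counts) * d   # pick an existing table
            for j, c in enumerate(counts):
                u -= c - d                 # weight n_j - d
                if u <= 0:
                    counts[j] += 1
                    break
        uniques.append(len(counts))
    return uniques

# Illustrative prediction: how many new unique tags does a second,
# equally large round of sequencing add? (toy numbers, not real data)
ks = pitman_yor_unique(20000, d=0.6, theta=10.0)
seen, total = ks[9999], ks[-1]
print(f"unique after 10k draws: {seen}; new uniques in the next 10k: {total - seen}")
```

For d > 0 the number of unique types grows like n^d, so doubling the sequencing effort still yields a substantial number of new tags; in practice one would average such predictions over the posterior of (d, theta) rather than plug in point estimates.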
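The difficulty mentioned in the second part can be illustrated in one dimension: even for a log-concave (unimodal) posterior, such as a Gaussian prior combined with logistic likelihoods, the marginal likelihood has no closed form, and a Laplace approximation deviates from the exact (numerically integrated) value. This toy example with a single shared latent variable is an assumption of mine for illustration; the talk concerns full Gaussian process classification, where the same problem appears in high dimensions without the option of quadrature.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def log_joint(f, y, sigma2):
    """log p(y|f) + log N(f; 0, sigma2) for labels y_i in {-1,+1}
    sharing one latent f (a deliberately simplified toy model)."""
    ll = sum(math.log(sigmoid(yi * f)) for yi in y)
    return ll - 0.5 * f * f / sigma2 - 0.5 * math.log(2 * math.pi * sigma2)

def log_Z_quadrature(y, sigma2, lo=-10.0, hi=10.0, n=4001):
    """'Exact' log marginal likelihood via trapezoidal quadrature."""
    h = (hi - lo) / (n - 1)
    vals = [math.exp(log_joint(lo + i * h, y, sigma2)) for i in range(n)]
    return math.log(h * (sum(vals) - 0.5 * (vals[0] + vals[-1])))

def log_Z_laplace(y, sigma2):
    """Laplace approximation: Gaussian fit at the posterior mode."""
    f = 0.0
    for _ in range(50):  # Newton iterations to the mode
        g = sum(yi * sigmoid(-yi * f) for yi in y) - f / sigma2
        H = -len(y) * sigmoid(f) * sigmoid(-f) - 1.0 / sigma2
        f -= g / H
    return log_joint(f, y, sigma2) + 0.5 * math.log(2 * math.pi / -H)

y = [1] * 8 + [-1] * 2          # toy binary observations
print(log_Z_quadrature(y, 4.0), log_Z_laplace(y, 4.0))
```

The two numbers agree to a few decimal places but not exactly; the Laplace error stems from the skewness of the true posterior, and in realistic GP classifiers this bias motivates the more expensive expectation-propagation or sampling-based estimates of the marginal likelihood.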