Should all Machine Learning be Bayesian? Should all Bayesian models be non-parametric?

author: Zoubin Ghahramani, Department of Engineering, University of Cambridge
published: Oct. 9, 2008,   recorded: September 2008




I'll present some thoughts and research directions in Bayesian machine learning. I'll contrast black-box approaches to machine learning with model-based Bayesian statistics. Can we meaningfully create Bayesian black boxes? If so, what should the prior be? Is non-parametrics the only way to go? Since we often can't control the effect of using approximate inference, are coherence arguments meaningless? How can we convert the pagan majority of ML researchers to Bayesianism? If the audience gets bored of these philosophical musings, I will switch to talking about our latest technical work on Indian buffet processes.
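The Indian buffet process (IBP) mentioned above is a nonparametric prior over binary feature matrices with an unbounded number of columns. As a rough illustration only (this is not the speaker's implementation; the function name `sample_ibp` and its parameters are my own), the standard culinary metaphor can be sketched as follows: customer i takes each existing dish k with probability m_k / i, then samples Poisson(alpha / i) new dishes.

```python
import numpy as np

def sample_ibp(num_customers, alpha, rng=None):
    """Draw a binary feature matrix Z from an Indian buffet process prior.

    Row i (customer i) takes existing dish k with probability m_k / i,
    where m_k counts previous customers who took dish k, then tries
    Poisson(alpha / i) brand-new dishes.
    """
    rng = np.random.default_rng(rng)
    dish_counts = []   # m_k for each dish sampled so far
    rows = []
    for i in range(1, num_customers + 1):
        # sample existing dishes in proportion to their popularity
        row = [rng.random() < m / i for m in dish_counts]
        for k, taken in enumerate(row):
            if taken:
                dish_counts[k] += 1
        # sample a Poisson number of new dishes
        new_dishes = rng.poisson(alpha / i)
        row.extend([True] * new_dishes)
        dish_counts.extend([1] * new_dishes)
        rows.append(row)
    # pad rows to a rectangular 0/1 matrix
    total_dishes = len(dish_counts)
    Z = np.zeros((num_customers, total_dishes), dtype=int)
    for i, row in enumerate(rows):
        Z[i, :len(row)] = row
    return Z
```

The expected total number of features grows as alpha times the harmonic number of the number of customers, which is what makes the prior "non-parametric": model complexity grows with the data.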

See Also:

Download slides: bark08_ghahramani_samlbb_01.pdf (159.3 KB)



Reviews and comments:

Comment 1: Andrés Suárez, December 25, 2009 at 11:36 p.m.:

This isn't an introductory lecture; it is useful for an intermediate or expert Bayesian.

Comment 2: Murat Uney, January 19, 2012 at 5:39 p.m.:

What is the paper they are referring to at 09:45?

Comment 3: Madison Wilson, July 25, 2021 at 12:13 p.m.:

This is good.
