Should all Machine Learning be Bayesian? Should all Bayesian models be non-parametric?
author: Zoubin Ghahramani
Department of Engineering, University of Cambridge
published: Oct. 9, 2008, recorded: September 2008, views: 27700
Description
I'll present some thoughts and research directions in Bayesian machine learning. I'll contrast black-box approaches to machine learning with model-based Bayesian statistics. Can we meaningfully create Bayesian black-boxes? If so what should the prior be? Is non-parametrics the only way to go? Since we often can't control the effect of using approximate inference, are coherence arguments meaningless? How can we convert the pagan majority of ML researchers to Bayesianism? If the audience gets bored of these philosophical musings, I will switch to talking about our latest technical work on Indian buffet processes.
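The abstract closes by mentioning the Indian buffet process (IBP), a nonparametric prior over binary feature matrices with an unbounded number of columns. As a rough illustration of the standard culinary-metaphor generative process (this sketch is mine, not code from the lecture; the function name and parameters are made up), customer i takes each previously sampled dish k with probability m_k/i, where m_k is the number of earlier customers who took dish k, and then tries Poisson(alpha/i) new dishes:

```python
import numpy as np

def sample_ibp(alpha, num_customers, rng=None):
    """Draw a binary feature matrix Z from the Indian buffet process.

    Customer i takes each existing dish k with probability m_k / i
    (m_k = number of previous customers who took dish k), then
    samples Poisson(alpha / i) brand-new dishes.
    """
    rng = np.random.default_rng(rng)
    dish_counts = []  # dish_counts[k] = customers who have taken dish k
    rows = []         # rows[i] = set of dish indices taken by customer i
    for i in range(1, num_customers + 1):
        taken = set()
        # Revisit dishes sampled by earlier customers
        for k, m_k in enumerate(dish_counts):
            if rng.random() < m_k / i:
                taken.add(k)
        # Try a Poisson(alpha / i) number of new dishes
        for _ in range(rng.poisson(alpha / i)):
            taken.add(len(dish_counts))
            dish_counts.append(0)
        for k in taken:
            dish_counts[k] += 1
        rows.append(taken)
    # Assemble the binary customer-by-dish matrix
    Z = np.zeros((num_customers, len(dish_counts)), dtype=int)
    for i, taken in enumerate(rows):
        for k in taken:
            Z[i, k] = 1
    return Z

Z = sample_ibp(alpha=2.0, num_customers=5, rng=0)
print(Z.shape[0])  # 5 rows; the number of columns (features) is random
```

The number of columns of Z is unbounded a priori, which is what makes the IBP a natural building block for the nonparametric latent-feature models the talk discusses.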
Reviews and comments:
This isn't an introductory lecture; it's useful for an intermediate or expert Bayesian.
What is the paper they are referring to at 09:45?