Bayesian Interpretations of RKHS Embedding Methods
published: Jan. 16, 2013, recorded: December 2012, views: 4462
We give a simple interpretation of mean embeddings as expectations under a Gaussian process prior. Methods such as kernel two-sample tests, the Hilbert-Schmidt Independence Criterion, and kernel herding are all based on distances between mean embeddings, also known as the Maximum Mean Discrepancy (MMD). This Bayesian interpretation yields optimal herding weights and principled methods of kernel learning, and sheds light on the assumptions necessary for MMD-based methods to work in practice. In the other direction, the MMD interpretation gives tight, closed-form bounds on the error of Bayesian estimators.
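To make the central quantity concrete, here is a minimal sketch (not from the talk itself) of the biased empirical estimate of the squared MMD between two samples, using a Gaussian RBF kernel. It uses the standard identity MMD²(P, Q) = E[k(x, x′)] + E[k(y, y′)] − 2·E[k(x, y)]; the kernel bandwidth and sample sizes here are arbitrary illustrative choices.

```python
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    """Gram matrix of the Gaussian RBF kernel k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 sigma^2))."""
    sq_dists = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-sq_dists / (2 * sigma**2))

def mmd2_biased(x, y, sigma=1.0):
    """Biased (V-statistic) estimate of the squared MMD between samples x and y."""
    kxx = rbf_kernel(x, x, sigma)  # within-sample similarities for x
    kyy = rbf_kernel(y, y, sigma)  # within-sample similarities for y
    kxy = rbf_kernel(x, y, sigma)  # cross-sample similarities
    return kxx.mean() + kyy.mean() - 2 * kxy.mean()

rng = np.random.default_rng(0)
# Same distribution: squared MMD should be near zero.
same = mmd2_biased(rng.normal(size=(200, 1)), rng.normal(size=(200, 1)))
# Mean-shifted distribution: squared MMD should be clearly positive.
diff = mmd2_biased(rng.normal(size=(200, 1)), rng.normal(2.0, 1.0, size=(200, 1)))
print(same, diff)
```

In a two-sample test this statistic is compared against a null distribution (e.g. via permutation) to decide whether the two samples come from the same distribution.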