A Gaussian Process View on MKL
published: Jan. 12, 2011, recorded: December 2010, views: 10844
Gaussian processes (GPs) provide an appealing probabilistic framework for multiple kernel learning (MKL). For more than a decade, it has been common practice to learn the well-known sum-of-kernels combination by, for example, maximum-likelihood estimation. In this talk, I'll first introduce the sum-of-kernels Gaussian process formulation. I'll then show how to go beyond convex formulations by learning the GP covariance. In particular, I'll first introduce parametric forms of the covariance, drawing connections to co-training. I'll then show how to learn non-parametric covariances via latent spaces. If time permits, I'll also talk about multi-task learning and multi-output Gaussian processes, and show connections to metric learning. I'll demonstrate the performance of some of these approaches on computer vision tasks such as object recognition.
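As a rough illustration of the sum-of-kernels idea mentioned above, the sketch below learns positive weights on two fixed RBF base kernels by maximizing the GP log marginal likelihood on toy data. This is a minimal assumption-laden example (the base kernels, lengthscales, noise level, and optimizer choice are all my own illustrative choices, not from the talk), using only NumPy and SciPy:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def rbf(X, Y, lengthscale):
    # squared-exponential (RBF) kernel matrix
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

# toy 1-D regression data (illustrative only)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)

# two fixed base kernels; only their (log) weights are learned
base = [rbf(X, X, 0.5), rbf(X, X, 2.0)]

def neg_log_marglik(log_w, noise=0.1):
    # K = sum_i w_i K_i + sigma_n^2 I, with w_i > 0 enforced via exp
    K = sum(np.exp(lw) * Ki for lw, Ki in zip(log_w, base))
    K += noise**2 * np.eye(len(X))
    # negative GP log marginal likelihood (constant term dropped)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum()

res = minimize(neg_log_marglik, x0=np.zeros(2), method="L-BFGS-B")
w = np.exp(res.x)
print("learned kernel weights:", w)
```

Because a non-negative combination of valid kernels is itself a valid kernel, the weighted sum stays a legitimate GP covariance throughout the optimization; the talk's point is that one can go further and learn the covariance itself rather than just the mixture weights.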