Learning with Many Reproducing Kernel Hilbert Spaces
published: May 6, 2009, recorded: April 2009, views: 457
In this talk, we consider the problem of learning a target function that belongs to the linear span of a large number of reproducing kernel Hilbert spaces. Such a problem arises naturally in many practical situations, with ANOVA decompositions, additive models, and multiple kernel learning among the best-known and most important examples. We investigate two approaches, one based on l1-type complexity regularization and the other on the nonnegative garrote. We show that both procedures can be computed efficiently, and that the nonnegative garrote can be preferable in certain settings. We also study their theoretical properties from both the variable-selection and estimation perspectives. We establish several probabilistic inequalities providing bounds on the excess risk and the L2-error that depend on the sparsity of the problem. Part of the talk is based on joint work with Vladimir Koltchinskii.
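To make the second approach concrete, the sketch below implements Breiman's classical nonnegative garrote in the linear-model setting, which the talk generalizes to RKHS components. It shrinks an initial least-squares fit by per-coefficient nonnegative factors under an l1 penalty, solved here by simple cyclic coordinate descent. This is an illustrative sketch, not the estimator analyzed in the talk; the function name, the coordinate-descent solver, and all parameters are choices made for this example.

```python
import numpy as np

def nonnegative_garrote(X, y, lam, n_iter=200):
    """Minimal sketch of Breiman's nonnegative garrote for a linear model.

    Shrinks an initial OLS fit by factors c_j >= 0:
        min_{c >= 0}  ||y - sum_j c_j * beta_ols_j * x_j||^2 + lam * sum_j c_j
    solved by cyclic coordinate descent (hypothetical illustration,
    not the RKHS procedure from the talk).
    """
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    Z = X * beta_ols                  # column j is beta_ols_j * x_j
    c = np.zeros(X.shape[1])
    r = y - Z @ c                     # current residual
    for _ in range(n_iter):
        for j in range(Z.shape[1]):
            zj = Z[:, j]
            # correlation of column j with its partial residual
            rho = zj @ (r + zj * c[j])
            cj_new = max(0.0, (rho - lam / 2) / (zj @ zj))
            r += zj * (c[j] - cj_new)  # keep residual in sync
            c[j] = cj_new
    # return the shrinkage factors and the garrote coefficients
    return c, c * beta_ols
```

Because the factors are constrained to be nonnegative and penalized in l1, coefficients of irrelevant variables are shrunk exactly to zero, which is the variable-selection behavior the talk's theory quantifies.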