Learning Kernels via Margin-and-Radius Ratios
published: Jan. 12, 2011, recorded: December 2010, views: 3420
Description
Most existing multiple kernel learning (MKL) approaches employ the large-margin principle to learn kernels. However, the margin alone cannot adequately describe the goodness of a kernel, because it neglects scaling: rescaling a kernel rescales the margin as well. We instead measure how good a kernel is by the ratio between the margin and the radius of the minimal enclosing ball of the data in the feature space endowed with that kernel, and propose a new scale-invariant formulation for kernel learning. The formulation handles both linear and nonlinear combinations of kernels. In the linear-combination case, it is also invariant both to the type of norm constraint placed on the combination coefficients and to the initial scalings of the basis kernels. By establishing the differentiability of a general class of multilevel optimal-value functions, we derive a simple and efficient gradient-based kernel learning algorithm. Experiments show that our approach significantly outperforms other state-of-the-art kernel learning methods.
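As a concrete illustration of the ratio criterion, the sketch below scores a single kernel matrix by its margin-to-radius ratio. This is not the authors' implementation: the function names (kernel_score, radius_squared, margin_squared) are illustrative, the radius comes from the standard minimal-enclosing-ball dual solved with SciPy's SLSQP, and the hard margin is approximated by a soft-margin SVM with a large C.

import numpy as np
from scipy.optimize import minimize
from sklearn.svm import SVC

def radius_squared(K):
    """Squared radius of the minimal enclosing ball in feature space.

    Solves the standard dual: max_b  b^T diag(K) - b^T K b
    subject to b >= 0 and sum(b) = 1.
    """
    n = K.shape[0]
    d = np.diag(K)
    # Negate the objective because scipy minimizes.
    obj = lambda b: b @ K @ b - b @ d
    b0 = np.full(n, 1.0 / n)
    res = minimize(obj, b0, method="SLSQP",
                   bounds=[(0.0, 1.0)] * n,
                   constraints={"type": "eq", "fun": lambda b: b.sum() - 1.0})
    b = res.x
    return float(b @ d - b @ K @ b)

def margin_squared(K, y, C=1e6):
    """Squared margin from the SVM dual (large C approximates a hard margin).

    With a precomputed kernel, ||w||^2 = a^T K_sv a where a = dual_coef_
    (entries y_i * alpha_i on the support vectors), and the geometric
    margin is 1 / ||w||.
    """
    svc = SVC(kernel="precomputed", C=C).fit(K, y)
    a = svc.dual_coef_.ravel()
    sv = svc.support_
    w_sq = a @ K[np.ix_(sv, sv)] @ a
    return 1.0 / w_sq

def kernel_score(K, y):
    """Scale-invariant goodness measure: margin^2 / radius^2.

    Replacing K by c*K multiplies both margin^2 and radius^2 by c,
    so the ratio is unchanged -- the invariance the abstract relies on.
    """
    return margin_squared(K, y) / radius_squared(K)

In the linear-combination setting described above, one would then optimize the weights mu_k by gradient steps on kernel_score(sum_k mu_k * K_k, y); the cancellation of the scale factor in kernel_score is what makes the criterion insensitive to the norm constraint and to the initial scaling of the basis kernels.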