Kernel Representations and Kernel Density Estimation
published: Dec. 18, 2008, recorded: December 2008, views: 6904
There has been a great deal of attention in recent times, particularly in machine learning, to the representation of multivariate data points x by K(x, ·), where K is positive definite and symmetric and thus induces a reproducing kernel Hilbert space. The idea is then to use the matrix [K(X_i, X_j)] as a substitute for the empirical covariance matrix of a sample X_1, ..., X_n for PCA and other inference (Jordan and Fukumizu (2006), for instance). Nadler et al. (2006) connected this approach to one based on random walks and diffusion limits and indicated a connection to kernel density estimation. By making at least a formal connection to a multiplication operator on a function space, we make a further connection and show how the clustering results of Beylkin, Shih and Yu (2008), which apparently differ from those of Nadler et al., can be explained.
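The substitution described above can be sketched in a few lines of NumPy: build the Gram matrix [K(X_i, X_j)] with a Gaussian kernel, center it in feature space, and take its leading eigenvectors in place of those of the empirical covariance matrix. This is a minimal illustration of the general idea, not code from the lecture; the Gaussian kernel, the bandwidth value, and the function names are assumptions. The same Gram matrix also yields a kernel density estimate at the sample points, which hints at the connection to kernel density estimation mentioned above.

```python
import numpy as np

def gaussian_kernel_matrix(X, bandwidth=1.0):
    # K[i, j] = exp(-||X_i - X_j||^2 / (2 * bandwidth^2)); a common
    # positive-definite, symmetric choice (an assumption, not from the talk).
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def kernel_pca(X, n_components=2, bandwidth=1.0):
    # Use the Gram matrix [K(X_i, X_j)] in place of the empirical
    # covariance matrix, as described in the abstract.
    n = X.shape[0]
    K = gaussian_kernel_matrix(X, bandwidth)
    # Center the kernel matrix in feature space: K_c = H K H with
    # H = I - (1/n) * ones, the usual centering step for kernel PCA.
    H = np.eye(n) - np.ones((n, n)) / n
    Kc = H @ K @ H
    # Eigendecomposition of the centered Gram matrix plays the role of
    # PCA on the implicit feature map K(x, .).
    eigvals, eigvecs = np.linalg.eigh(Kc)
    order = np.argsort(eigvals)[::-1][:n_components]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    # Projections of the sample onto the leading kernel principal components.
    return eigvecs * np.sqrt(np.maximum(eigvals, 0.0))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
scores = kernel_pca(X, n_components=2, bandwidth=1.0)
print(scores.shape)

# The row means of the (unnormalized) Gram matrix are, up to the Gaussian
# normalizing constant, kernel density estimates at the sample points.
d = X.shape[1]
K = gaussian_kernel_matrix(X, bandwidth=1.0)
kde_at_points = K.mean(axis=1) / (2.0 * np.pi) ** (d / 2)
print(kde_at_points.shape)
```

Here the eigenvectors of the centered Gram matrix give the sample coordinates directly, so no explicit feature map is ever formed; this is exactly what makes the Gram matrix a workable substitute for the covariance matrix.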