Neighbourhood Components Analysis

author: Sam Roweis
published: Feb. 25, 2007, recorded: July 2006, views: 19588




Say you want to do K-Nearest Neighbour classification. Besides selecting K, you also have to choose a distance function in order to define "nearest". I'll talk about a novel method for *learning* -- from the data itself -- a distance measure to be used in KNN classification. The learning algorithm, Neighbourhood Components Analysis (NCA), directly maximizes a stochastic variant of the leave-one-out KNN score on the training set. It can also learn a low-dimensional linear embedding of labeled data that can be used for data visualization and very fast classification in high dimensions. Of course, the resulting classification model is non-parametric, making no assumptions about the shape of the class distributions or the boundaries between them. If time permits, I'll also talk about newer work on learning the same kind of distance metric for use inside a Gaussian Kernel SVM classifier.
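To make the abstract concrete, here is a minimal NumPy sketch of the NCA idea: a linear map A is learned by gradient ascent on the expected number of points that pick a same-class "stochastic neighbour" under a softmax over squared distances in the projected space. The function names, learning rate, and toy setup are my own illustration, not code from the talk; scikit-learn ships a polished version as `sklearn.neighbors.NeighborhoodComponentsAnalysis`.

```python
import numpy as np

def nca_objective_grad(A, X, y):
    """Objective f(A) = sum_i p_i and its gradient, where p_ij is a
    softmax over negative squared distances in the projected space and
    p_i is the probability that point i picks a same-class neighbour."""
    Z = X @ A.T                                    # projected points (n, k)
    diff = Z[:, None, :] - Z[None, :, :]           # pairwise differences
    dist2 = (diff ** 2).sum(-1)                    # squared distances (n, n)
    np.fill_diagonal(dist2, np.inf)                # a point never picks itself
    P = np.exp(-dist2)
    P /= P.sum(axis=1, keepdims=True)              # stochastic neighbour probs
    same = (y[:, None] == y[None, :]).astype(float)
    p_i = (P * same).sum(axis=1)                   # prob. of a correct pick

    # Gradient of f: 2A * sum_ij [p_i * P_ij - P_ij * same_ij] x_ij x_ij^T
    Xdiff = X[:, None, :] - X[None, :, :]          # raw-space differences
    W = P * p_i[:, None] - P * same                # per-pair weights
    M = np.einsum('ij,ijd,ije->de', W, Xdiff, Xdiff)
    return p_i.sum(), 2 * A @ M

def nca_fit(X, y, n_components=2, lr=0.01, n_iter=100, seed=0):
    """Plain gradient ascent on f(A); fine for a toy demo."""
    rng = np.random.default_rng(seed)
    A = rng.normal(scale=0.1, size=(n_components, X.shape[1]))
    for _ in range(n_iter):
        _, g = nca_objective_grad(A, X, y)
        A += lr * g                                # ascent: maximize f
    return A
```

Because f(A) is the expected leave-one-out KNN score under the soft neighbour rule, maximizing it tends to stretch directions that separate the classes and shrink purely noisy ones, which is what gives the low-dimensional embedding the abstract mentions.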


Download slides: mlss06tw_roweis_nca.pdf (477.0 KB)


