Learning Distance Function by Coding Similarity

author: Rioe Kliper, The Hebrew University of Jerusalem
published: June 23, 2007,   recorded: June 2007,   views: 503



We consider the problem of learning a similarity function from a set of positive equivalence constraints, i.e. "similar" point pairs. We define the similarity in information-theoretic terms, as the gain in coding length when shifting from independent encoding of the pair to joint encoding. Under simple Gaussian assumptions, this formulation leads to a non-Mahalanobis similarity function which is efficient and simple to learn. This function can be viewed as a likelihood ratio test, and we show that the optimal similarity-preserving projection of the data is a variant of Fisher Linear Discriminant. We also show that under some naturally occurring sampling conditions of equivalence constraints, this function converges to a known Mahalanobis distance (RCA). The suggested similarity function exhibits superior performance over alternative Mahalanobis distances learnt from the same data. Its superiority is demonstrated in the context of image retrieval and graph-based clustering, using a large number of data sets.
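The coding-length gain described in the abstract can be sketched directly: under Gaussian assumptions, the gain from joint over independent encoding of a pair (x1, x2) is the log-likelihood ratio log p(x1, x2) / (p(x1) p(x2)), with both densities estimated from the positive pairs. The sketch below is a minimal plug-in estimator under these assumptions, not the paper's exact algorithm; the function names and the synthetic data are illustrative.

```python
import numpy as np

def gaussian_logpdf(x, mu, cov):
    """Log-density of a multivariate Gaussian at a single point x."""
    d = len(mu)
    diff = x - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d * np.log(2.0 * np.pi) + logdet
                   + diff @ np.linalg.solve(cov, diff))

def fit_coding_similarity(pairs):
    """Fit the coding-similarity score from positive pairs.

    pairs: array of shape (n, 2, d) holding n "similar" point pairs.
    Returns sim(x1, x2) = log p(x1, x2) - log p(x1) - log p(x2),
    i.e. the coding-length gain (in nats) of joint over independent encoding.
    """
    n, _, d = pairs.shape
    # Marginal Gaussian: fit to all individual points.
    X = pairs.reshape(-1, d)
    mu, cov = X.mean(axis=0), np.cov(X, rowvar=False)
    # Joint Gaussian over concatenated pairs [x1; x2]; include swapped
    # copies so the estimate is symmetric in the two pair members.
    Z = np.vstack([pairs.reshape(n, -1), pairs[:, ::-1].reshape(n, -1)])
    mu_z, cov_z = Z.mean(axis=0), np.cov(Z, rowvar=False)

    def sim(x1, x2):
        z = np.concatenate([x1, x2])
        return (gaussian_logpdf(z, mu_z, cov_z)
                - gaussian_logpdf(x1, mu, cov)
                - gaussian_logpdf(x2, mu, cov))
    return sim

# Illustrative synthetic data: pairs are two noisy views of a shared center.
rng = np.random.default_rng(0)
centers = 3.0 * rng.normal(size=(200, 4))
pairs = np.stack([centers + 0.3 * rng.normal(size=centers.shape),
                  centers + 0.3 * rng.normal(size=centers.shape)], axis=1)
sim = fit_coding_similarity(pairs)

# Genuinely similar pairs should score higher than mismatched ones.
within = np.mean([sim(pairs[i, 0], pairs[i, 1]) for i in range(50)])
between = np.mean([sim(pairs[i, 0], pairs[(i + 1) % 200, 1]) for i in range(50)])
```

Note that `sim` is not a Mahalanobis distance: it is quadratic in both points jointly, through the cross-covariance block of the joint Gaussian, which is what distinguishes this score from metric-learning methods fit to the same constraints.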


Reviews and comments:

Comment1 Jason, October 19, 2007 at 1:07 p.m.:

This guy is the coolest

Comment2 Pankaj Garg, November 18, 2009 at 7:16 p.m.:

It has very poor video. And there are no slides :(
Is there a way to get the slides?
