
Information-Theoretic Metric Learning

Published on Feb 25, 2007 · 6426 views

We formulate the metric learning problem as that of minimizing the differential relative entropy between two multivariate Gaussians under constraints on the Mahalanobis distance function. Via a surp…
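As a minimal sketch of the quantity being learned: ITML parameterizes a Mahalanobis distance by a positive semidefinite matrix A, with d_A(x, y) = (x − y)ᵀ A (x − y). The function and variable names below are illustrative, not taken from the talk, and the example uses the identity matrix, under which the distance reduces to squared Euclidean distance.

```python
def mahalanobis_sq(x, y, A):
    """Squared Mahalanobis distance between vectors x and y under matrix A:
    (x - y)^T A (x - y). Plain-Python sketch for illustration only."""
    d = [xi - yi for xi, yi in zip(x, y)]
    n = len(d)
    return sum(d[i] * A[i][j] * d[j] for i in range(n) for j in range(n))

# With A = identity, this is the squared Euclidean distance:
identity = [[1.0, 0.0], [0.0, 1.0]]
print(mahalanobis_sq([1.0, 2.0], [3.0, 1.0], identity))  # (-2)^2 + 1^2 = 5.0
```

Metric learning then amounts to choosing A (subject to similarity/dissimilarity constraints) rather than fixing it to the identity.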


Chapter list

Information-Theoretic Metric Learning (00:01)
Introduction (00:24)
Learning a Mahalanobis Distance (01:43)
Mahalanobis Distance and the Multivariate Gaussian (03:12)
Problem Formulation (05:08)
Overview: Optimizing the Model (05:44)
Overview: Optimizing the Model (06:34)
Low-Rank Kernel Learning (06:53)
Low-Rank Kernel Learning (07:45)
Equivalence to Kernel Learning (08:47)
Equivalence to Kernel Learning (09:27)
Proof Sketch (10:15)
Proof Sketch (11:22)
Optimization via Bregman’s Method (11:45)
Optimization via Bregman’s Method (12:44)
Optimization via Bregman’s Method (13:24)
Extensions (14:13)
Extensions (14:45)
Extensions (15:07)
Experimental Methodology (15:42)
Experimental Methodology (15:53)
Experimental Methodology (16:36)
Experimental Methodology (16:51)
Experimental Results (16:56)
Experimental Results (17:30)
Conclusion (18:01)