Probabilistic Non-Linear Principal Component Analysis with Gaussian Process Latent Variables

author: Neil D. Lawrence, Department of Computer Science, University of Sheffield
published: Feb. 25, 2007,   recorded: September 2004,   views: 10471

Download slides: gplvm_smlw.ppt (2.6 MB)


Principal component analysis (PCA) has a well-known probabilistic interpretation as a latent variable model: PCA is recovered when the latent variables are integrated out and the parameters of the model are optimised by maximum likelihood. It is less well known that the dual approach, integrating out the parameters and optimising with respect to the latent variables, also leads to PCA. The marginalised likelihood in this case takes the form of Gaussian process mappings, with linear covariance functions, from a latent space to an observed space; we refer to this model as a Gaussian Process Latent Variable Model (GPLVM). This dual probabilistic PCA is still a linear latent variable model, but by looking beyond the inner-product kernel as a covariance function we can develop a non-linear probabilistic PCA.
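The construction in the abstract can be sketched in a few lines of numpy/scipy. Here each of the D observed dimensions is treated as an independent Gaussian process over the N latent points X, and the latent points themselves are optimised to maximise the marginal likelihood. With a linear kernel K = X Xᵀ this recovers dual probabilistic PCA; swapping in an RBF covariance (as below) gives the non-linear GPLVM. This is a minimal illustrative sketch, not the lecture's implementation; the toy data, fixed lengthscale, and noise level are assumptions chosen for brevity.

```python
import numpy as np
from scipy.optimize import minimize

def gplvm_nll(X_flat, Y, Q, lengthscale=1.0, noise=0.1):
    """Negative log p(Y|X) for a GPLVM: D independent GPs sharing one
    covariance matrix K built from the N x Q latent points X."""
    N, D = Y.shape
    X = X_flat.reshape(N, Q)
    # RBF covariance over latent points; using K = X @ X.T (a linear
    # kernel) here would instead recover dual probabilistic PCA.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    K = np.exp(-0.5 * d2 / lengthscale**2) + noise**2 * np.eye(N)
    _, logdet = np.linalg.slogdet(K)
    Kinv_Y = np.linalg.solve(K, Y)
    # 0.5 * (D log|K| + tr(Y^T K^{-1} Y) + N D log 2*pi)
    return 0.5 * (D * logdet + np.sum(Y * Kinv_Y) + N * D * np.log(2 * np.pi))

# Toy data: 20 five-dimensional points lying near a one-dimensional curve.
rng = np.random.default_rng(0)
t = np.linspace(-2.0, 2.0, 20)
Y = np.stack([np.sin(k * t) for k in range(1, 6)], axis=1)
Y += 0.05 * rng.standard_normal(Y.shape)
Y -= Y.mean(axis=0)

# Optimise the latent coordinates (Q = 1) by maximum marginal likelihood.
Q = 1
X0 = 0.1 * rng.standard_normal(Y.shape[0] * Q)
res = minimize(gplvm_nll, X0, args=(Y, Q), method="L-BFGS-B")
print(res.fun < gplvm_nll(X0, Y, Q))  # optimisation improved the likelihood
```

In practice the kernel hyperparameters (lengthscale, noise variance) would be optimised jointly with X, and PCA is commonly used to initialise the latent points rather than a random draw.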
