A Factor Model for Learning Higher Order Features in Natural Images
published: Aug. 26, 2009, recorded: June 2009, views: 3917
The visual system is a hierarchy of processing stages. Each stage in this pathway, in addition to encoding increasingly complex features of the input, performs non-linear computations. What is the functional role of these non-linear behaviors, and how do we incorporate them into generative models of natural images?
A number of non-linear properties of visual neurons can be predicted from the statistical dependencies observed in natural images. For example, the magnitudes of linear filter outputs are correlated; normalizing filter responses removes this correlation (making the responses more independent and marginally Gaussian) and reproduces neural gain control. In addition, the pattern in these correlations is itself highly informative, and can be used to infer the context of patches sampled from a large scene. Here I will focus on these statistical patterns and describe a generative model that captures them using a set of factors in the space of log-covariance of a multivariate Gaussian distribution. Trained on natural images, the model learns a compact code for correlations observed in pixel (or linear feature) distributions that represents more abstract properties of the image. I will also connect this work to recent generative models that incorporate multiplicative interactions between observed and latent variables.
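The generative model described above can be sketched in simplified form. This is a minimal illustration, not the talk's actual implementation: it assumes a diagonal covariance (each linear coefficient gets its own variance), and all names (`A`, `B`, `sample_patch`) and dimensions are illustrative. Latent factors `y` combine linearly in log-variance space, so the covariance structure of the Gaussian coefficients varies with context:

```python
import numpy as np

rng = np.random.default_rng(0)

n_pixels, n_filters, n_factors = 64, 64, 10

# Illustrative parameters; in the real model these are learned from natural images.
A = rng.standard_normal((n_pixels, n_filters)) / np.sqrt(n_filters)  # linear features
B = rng.standard_normal((n_filters, n_factors)) * 0.5                # log-variance factors

def sample_patch():
    y = rng.standard_normal(n_factors)       # higher-order latent variables
    log_var = B @ y                          # factors combine linearly in log-variance space
    # Gaussian coefficients whose variances depend on the latent context
    s = rng.standard_normal(n_filters) * np.exp(0.5 * log_var)
    return A @ s                             # rendered image patch

patch = sample_patch()
```

Because `y` modulates variances multiplicatively through the exponential, the marginal coefficient distributions are heavy-tailed and their magnitudes are correlated, which is the statistical pattern the model is designed to capture.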
Download slides: icml09_karklin_fmlh_01.pdf (2.1 MB)
Download slides: icml09_karklin_fmlh_01.pptx (2.6 MB)