A Generative Perspective on MRFs in Low-Level Vision
published: July 19, 2010, recorded: June 2010, views: 873
Markov random fields (MRFs) are popular and generic probabilistic models of prior knowledge in low-level vision. Yet their generative properties are rarely examined, while application-specific models and non-probabilistic learning are gaining increased attention. In this paper we revisit the generative aspects of MRFs and analyze the quality of common image priors in a fully application-neutral setting. Enabled by a general class of MRFs with flexible potentials and an efficient Gibbs sampler, we find that common models do not capture the statistics of natural images well. We show how to remedy this by exploiting the efficient sampler for learning better generative MRFs based on flexible potentials. We perform image restoration with these models by computing the Bayesian minimum mean squared error (MMSE) estimate using sampling. This addresses a number of shortcomings that have limited generative MRFs so far, and leads to substantially improved performance over maximum a posteriori (MAP) estimation. We demonstrate that combining our learned generative models with sampling-based MMSE estimation yields excellent application results that can compete with recent discriminative methods.
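The sampling-based MMSE estimation described in the abstract can be illustrated with a minimal sketch: the MMSE estimate is the posterior mean E[x | y], which can be approximated by averaging Gibbs samples drawn from the posterior. The example below uses a toy 1-D Gaussian chain MRF with a hypothetical smoothness weight `lam`, not the paper's flexible learned potentials; it only demonstrates the general Gibbs/MMSE mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: smooth 1-D ground-truth signal corrupted by Gaussian noise.
n = 200
truth = np.sin(np.linspace(0, 4 * np.pi, n))
sigma_n = 0.5
y = truth + sigma_n * rng.normal(size=n)

# Toy Gaussian pairwise-MRF prior (assumed, not the paper's model):
# E(x) = lam/2 * sum_i (x_i - x_{i+1})^2
lam = 4.0

def gibbs_mmse(y, sigma_n, lam, n_sweeps=300, burn_in=100, rng=rng):
    """Approximate the MMSE (posterior-mean) estimate by averaging
    Gibbs samples from p(x | y) for a Gaussian chain MRF."""
    x = y.copy()
    acc = np.zeros_like(y)
    kept = 0
    for sweep in range(n_sweeps):
        for i in range(len(y)):
            # Sum over the 1 or 2 chain neighbours of site i.
            nb_sum, nb_cnt = 0.0, 0
            if i > 0:
                nb_sum += x[i - 1]; nb_cnt += 1
            if i < len(y) - 1:
                nb_sum += x[i + 1]; nb_cnt += 1
            # For this Gaussian model, p(x_i | x_rest, y) is Gaussian
            # with closed-form precision and mean.
            prec = 1.0 / sigma_n**2 + lam * nb_cnt
            mean = (y[i] / sigma_n**2 + lam * nb_sum) / prec
            x[i] = mean + rng.normal() / np.sqrt(prec)
        if sweep >= burn_in:
            acc += x
            kept += 1
    return acc / kept  # sample average approximates E[x | y], the MMSE estimate

x_mmse = gibbs_mmse(y, sigma_n, lam)
print("noisy MSE:", np.mean((y - truth) ** 2))
print("MMSE  MSE:", np.mean((x_mmse - truth) ** 2))
```

Averaging samples (rather than taking the single most probable configuration, as MAP does) is what distinguishes the MMSE estimator; in this toy run the averaged estimate has visibly lower reconstruction error than the noisy input.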
Download slides: cvpr2010_schmidt_gpmrf_01.v1.pdf (5.1 MB)
Download article: cvpr2010_schmidt_gpmrf_01.pdf (1.7 MB)