Variational Inference and Experimental Design for Sparse Linear Models
Sparsity is a fundamental concept in modern statistics, and often the only general principle available at the moment to address novel learning applications with many more variables than observations. Despite recent advances in the theoretical understanding and algorithmics of sparse point estimation, higher-order problems such as covariance estimation or optimal data acquisition are seldom addressed for sparsity-favouring models, and there are virtually no scalable algorithms.

We provide an approximate Bayesian inference algorithm for sparse linear models that can be used with hundreds of thousands of variables. Our method employs a convex relaxation to variational inference and settles an open question in continuous Bayesian inference: the Gaussian lower bound relaxation is convex for a class of super-Gaussian potentials, including the Laplace and Bernoulli potentials. Our algorithm reduces to the same computational primitives used for sparse estimation methods, but requires Gaussian marginal variance estimation as well. We show how the Lanczos algorithm from numerical mathematics can be employed to compute the latter.

We are interested in Bayesian experimental design, a powerful framework for optimizing measurement architectures. We have applied our framework to problems of magnetic resonance imaging design and reconstruction.