Message-Passing Algorithms for GMRFs and Non-Linear Optimization
published: Feb. 1, 2008, recorded: December 2007, views: 4043
We review a variety of iterative methods for inference and estimation in Gaussian graphical models, also known as Gauss-Markov random fields (GMRFs), and consider how to adapt these methods to non-linear optimization problems involving graph-structured objective functions. Specifically, we review the "walk-sum" interpretation of Gaussian belief propagation (GaBP) and consider a novel stable form of GaBP that always converges to the optimal estimate. In another direction, we discuss an iterative Lagrangian relaxation method applicable to GMRFs that decompose as a sum of convex quadratic potential functions defined on cliques of the graph. We then consider how such methods can be adapted to solve more general classes of MRFs with smooth potential functions. For instance, if the potential functions are convex, it is straightforward to use Gaussian inference to efficiently implement Newton's method, leading to the optimal solution. In non-convex MRFs, it is still possible to obtain approximate solutions using the Levenberg-Marquardt method. As time permits, we may also consider the half-quadratic optimization approach for MRFs with non-smooth potential functions. In all of these approaches, the problem is approximated by a sequence of convex quadratic problems, each of which can be solved in a distributed fashion using Gaussian inference techniques.
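To make the GaBP setting concrete, here is a minimal sketch (not the speaker's code) of scalar Gaussian belief propagation for a GMRF in information form, p(x) ∝ exp(-x'Jx/2 + h'x), with precision matrix `J` and potential vector `h`. The message updates and the small chain example are standard textbook forms, not taken from the talk; on a tree-structured graph such as this chain, the computed marginal means coincide with the exact solution of Jx = h.

```python
import numpy as np

def gabp_means(J, h, iters=50):
    """Synchronous scalar GaBP; returns the marginal means. Exact on trees."""
    n = len(h)
    edges = [(i, j) for i in range(n) for j in range(n)
             if i != j and J[i, j] != 0]
    P = {e: 0.0 for e in edges}   # precision messages P_{i->j}
    m = {e: 0.0 for e in edges}   # potential messages h_{i->j}
    for _ in range(iters):
        newP, newm = {}, {}
        for (i, j) in edges:
            # Aggregate messages into node i from all neighbors except j.
            Pi = J[i, i] + sum(P[(k, t)] for (k, t) in edges
                               if t == i and k != j)
            hi = h[i] + sum(m[(k, t)] for (k, t) in edges
                            if t == i and k != j)
            newP[(i, j)] = -J[i, j] ** 2 / Pi
            newm[(i, j)] = -J[i, j] * hi / Pi
        P, m = newP, newm
    means = np.empty(n)
    for i in range(n):
        Pi = J[i, i] + sum(P[(k, t)] for (k, t) in edges if t == i)
        hi = h[i] + sum(m[(k, t)] for (k, t) in edges if t == i)
        means[i] = hi / Pi
    return means

# Chain-structured (tree) example: tridiagonal, diagonally dominant J.
J = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])
h = np.array([1.0, 0.0, 1.0])
print(np.allclose(gabp_means(J, h), np.linalg.solve(J, h)))
```

On loopy graphs these updates need not converge in general, which is the motivation for the stable variant of GaBP mentioned in the abstract.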
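The Newton's-method connection can also be sketched. For a convex MRF objective with smooth potentials, each Newton step minimizes a convex quadratic approximation, i.e. solves H Δx = -g where the Hessian H inherits the sparsity of the graph; that linear system is exactly what GMRF inference computes. In this assumed illustration (the objective and chain graph are invented for the example), a dense `np.linalg.solve` stands in for the distributed Gaussian inference routine.

```python
import numpy as np

def newton(grad, hess, x0, iters=20):
    """Plain Newton iteration: repeatedly solve the quadratic subproblem."""
    x = x0.astype(float)
    for _ in range(iters):
        x = x - np.linalg.solve(hess(x), grad(x))  # GMRF-style linear solve
    return x

# Example objective on a 3-node chain: convex smooth node potentials
# cosh(x_i - y_i) plus pairwise couplings (x_i - x_j)^2 / 2.
y = np.array([0.5, -1.0, 2.0])
L = np.array([[1.0, -1.0, 0.0],       # graph Laplacian of the chain
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 1.0]])

grad = lambda x: np.sinh(x - y) + L @ x          # gradient of the objective
hess = lambda x: np.diag(np.cosh(x - y)) + L     # sparse, graph-structured

x_star = newton(grad, hess, np.zeros(3))
print(np.linalg.norm(grad(x_star)) < 1e-8)
```

For a non-convex objective, the Levenberg-Marquardt variant mentioned in the abstract would replace the Hessian with a damped positive-definite surrogate (e.g. H + λI), so each subproblem remains a well-posed GMRF inference.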