Learning Multi-Linear Representations of Probability Distributions for Efficient Inference
published: Oct. 20, 2009, recorded: September 2009, views: 2822
We examine the class of multi-linear polynomial representations (MLRs) for expressing probability distributions over discrete variables. Recently, MLRs have been considered as intermediate representations that facilitate inference in distributions represented as graphical models. We show that the MLR is an expressive representation of discrete distributions: it can concisely represent classes of distributions that have exponential size in other commonly used representations, while supporting probabilistic inference in time linear in the size of the representation. Our key contribution is a set of techniques for learning bounded-size distributions represented as MLRs, which support efficient probabilistic inference. We propose algorithms for exact and approximate learning of MLRs and, through a comparison with Bayes net representations, demonstrate experimentally that MLRs provide faster inference without sacrificing accuracy.
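To make the "inference linear in the size of the representation" claim concrete, here is a minimal sketch (not the authors' implementation) of the general idea behind multi-linear polynomial representations: the distribution is a sum of monomials, each a coefficient times a product of per-(variable, value) indicators, and a query is answered by one pass over the monomials. The toy distribution, variable names, and `probability` helper below are all hypothetical, chosen only for illustration.

```python
# Hedged sketch: a multi-linear polynomial over indicator variables.
# Each monomial is (coefficient, assignment), where the assignment maps
# each variable to the value whose indicator appears in the monomial.

# Hypothetical toy distribution over two binary variables A and B:
# P(A=a, B=b) is the coefficient of the monomial lam[A=a] * lam[B=b].
terms = [
    (0.4, {"A": 1, "B": 1}),
    (0.1, {"A": 1, "B": 0}),
    (0.2, {"A": 0, "B": 1}),
    (0.3, {"A": 0, "B": 0}),
]

def probability(evidence):
    """Evaluate the polynomial with indicators consistent with the
    evidence set to 1 and contradicting indicators set to 0. A single
    pass over the monomials, i.e. time linear in the representation size."""
    total = 0.0
    for coeff, assignment in terms:
        # A monomial survives only if none of its indicators is zeroed
        # out by the evidence.
        if all(assignment.get(v) == val for v, val in evidence.items()):
            total += coeff
    return total

print(probability({"A": 1}))          # marginal P(A=1) = 0.4 + 0.1 = 0.5
print(probability({"A": 1, "B": 0}))  # joint P(A=1, B=0) = 0.1
```

Note that the same evaluation answers both marginal and joint queries: setting only some indicators leaves the others at 1, which implicitly sums over the unobserved variables.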