## Learning Multi-Linear Representations of Probability Distributions for Efficient Inference

author: Rajhans Samdani, University of Illinois at Urbana-Champaign
published: Oct. 20, 2009,   recorded: September 2009,   views: 145

# Slides

- Multi-linear Representations of Probability Distributions for Efficient Inference
- Probabilistic Inference and Learning (1)
- Probabilistic Inference and Learning (2)
- Probabilistic Representations
- An Alternative Probabilistic Representation
- Outline of the Talk (Multi-linear Representations)
- Multi-linear Representation: MLR (1)
- Multi-linear Representation: MLR (2)
- Inference in MLR (1)
- Inference in MLR (2)
- Inference in MLR (3)
- MLR vs Bayes nets
- Outline of the Talk (Learning in Multi-linear Representations)
- Learning in MLR
- Learning MLR Monomials
- Learning Coefficients
- Learning Coefficients, Exactly (Exact-MLR)
- Learning Coefficients… Approximately (Approx-MLR)
- Summary of Learning
- Outline of the Talk (Experimental Results)
- Experiments
- Competing Models
- Inference Experiments
- Inference Results (1)
- Inference Results (2)
- Conclusion and Future Work
- Thanks and Questions


# Description

We examine the class of multi-linear polynomial representations (MLRs) for expressing probability distributions over discrete variables. Recently, MLRs have been considered as intermediate representations that facilitate inference in distributions represented as graphical models. We show that MLR is an expressive representation of discrete distributions: it can concisely represent classes of distributions that require exponential size in other commonly used representations, while supporting probabilistic inference in time linear in the size of the representation. Our key contribution is a set of techniques for learning bounded-size MLR distributions that support efficient probabilistic inference. We propose algorithms for exact and approximate learning of MLRs and, through a comparison with Bayes net representations, demonstrate experimentally that MLRs provide faster inference without sacrificing inference accuracy.
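To make the idea concrete, here is a minimal sketch (not the authors' implementation; the distribution, encoding, and function names are illustrative) of representing a distribution over binary variables as a multi-linear polynomial P(x) = Σ_S c_S · Π_{i∈S} x_i, and answering marginal queries in time linear in the number of monomials: summing a monomial over an unobserved variable j contributes a factor of 2 if j is outside the monomial and a factor of (0 + 1) = 1 if it is inside.

```python
# Illustrative MLR: a dict mapping each monomial (a frozenset of variable
# indices) to its coefficient. Not the paper's data structure.

def mlr_eval(mlr, assignment):
    """Evaluate P(x) = sum_S c_S * prod_{i in S} x_i at a full 0/1 assignment."""
    return sum(coeff for monomial, coeff in mlr.items()
               if all(assignment[i] == 1 for i in monomial))

def mlr_marginal(mlr, evidence, n):
    """P(evidence): sum the polynomial over all unobserved binary variables.

    Per monomial c * prod_{i in S} x_i: an observed 0 in S kills the term;
    each free variable outside S contributes a factor 2; each free variable
    inside S contributes (0 + 1) = 1. Linear in the number of monomials.
    """
    total = 0.0
    for monomial, coeff in mlr.items():
        if any(evidence.get(i) == 0 for i in monomial):
            continue
        free_outside = sum(1 for j in range(n)
                           if j not in evidence and j not in monomial)
        total += coeff * (2 ** free_outside)
    return total

# Example: P(x1,x2) with P(0,0)=0.1, P(1,0)=0.3, P(0,1)=0.2, P(1,1)=0.4
# has the multi-linear form f = 0.1 + 0.2*x1 + 0.1*x2 (x1 is index 0).
mlr = {frozenset(): 0.1, frozenset({0}): 0.2, frozenset({1}): 0.1}
print(mlr_eval(mlr, {0: 1, 1: 1}))   # P(1,1), approx. 0.4
print(mlr_marginal(mlr, {0: 1}, 2))  # P(x1=1), approx. 0.7
```

The same evaluation routine answers both joint and marginal queries, which is the sense in which inference cost tracks the size of the representation rather than the number of variables' joint assignments.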