Bayesian Methods

author: Christian Borgelt, European Center for Soft Computing
published: Feb. 25, 2007,   recorded: June 2005,   views: 8144

See Also:

Download slides: acai05_borgelt_bm_01.pdf (308.7 KB)

Download slides: borgelt_christian_01.pdf (546.7 KB)

Download article: borgelt_christian_00.doc (16.0 KB)


In the last decade, probabilistic graphical models - in particular Bayes networks and Markov networks - have become very popular as tools for structuring uncertain knowledge about a domain of interest and for building knowledge-based systems that allow sound and efficient inferences about this domain. The core idea of graphical models is that certain independence relations usually hold between the attributes used to describe a domain of interest. In most uncertainty calculi - and in particular in probability theory - the structure of these independence relations closely resembles properties of node connectivity in a graph. Consequently, one tries to capture the independence relations by a graph in which each node represents an attribute and each edge a direct dependence between attributes.

Provided that the graph captures only valid independences, it prescribes how a probability distribution on the (usually high-dimensional) space spanned by the attributes can be decomposed into a set of smaller (marginal or conditional) distributions. This decomposition can be exploited to derive evidence propagation methods and thus enables sound and efficient reasoning under uncertainty.

The lecture gives a brief introduction to the core ideas underlying graphical models, starting from their relational counterparts and highlighting the relation between independence and decomposition. Furthermore, the basics of model construction and evidence propagation are discussed, with an emphasis on join/junction tree propagation. A substantial part of the lecture is then devoted to learning graphical models from data, covering quantitative learning (parameter estimation) as well as the more complex qualitative or structural learning (model selection). The lecture closes with a brief discussion of example applications.
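To make the decomposition idea concrete, here is a minimal sketch (not taken from the lecture; the network and all probability values are invented for illustration): a two-node Bayes network A -> B, whose joint distribution factorizes as P(A, B) = P(A) * P(B | A). Evidence propagation in a model this small reduces to exact enumeration over the factors, which is the principle that junction tree propagation organizes efficiently for larger networks.

```python
# Illustrative two-node Bayes network A -> B.
# The joint factorizes along the graph: P(A, B) = P(A) * P(B | A).
# All numbers below are made-up values, chosen only for this example.

# Prior P(A) over the boolean attribute A.
p_a = {True: 0.3, False: 0.7}

# Conditional P(B | A): outer key is the value of A, inner key the value of B.
p_b_given_a = {
    True:  {True: 0.9, False: 0.1},
    False: {True: 0.2, False: 0.8},
}

def posterior_a(b_observed):
    """Compute P(A | B = b_observed) by enumerating the factorized joint.

    This is Bayes' rule written against the factors: multiply prior and
    conditional, then normalize by the evidence probability P(B = b_observed).
    """
    unnormalized = {a: p_a[a] * p_b_given_a[a][b_observed]
                    for a in (True, False)}
    evidence = sum(unnormalized.values())  # P(B = b_observed)
    return {a: v / evidence for a, v in unnormalized.items()}

# Observing B = True raises the belief in A = True from the prior 0.3.
post = posterior_a(True)
```

In a larger network the same multiply-and-marginalize steps are not done over the full joint (which is exponential in the number of attributes) but are scheduled as local messages between cliques of a junction tree; the decomposition is what makes that possible.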


Reviews and comments:

Comment1 Anergy, June 16, 2007 at 4:13 p.m.:

Nice content, but it would have been much better if the camera had focused more on the slides than on the lecturer.
