Sequential Monte-Carlo Methods

author: Arnaud Doucet, Department of Statistics, University of Oxford
author: Nando de Freitas, Department of Computer Science, University of Oxford
published: Jan. 19, 2010,   recorded: December 2009



Lecture videos:

Part 1 (57:51)
Part 2 (47:35)


Over the last fifteen years, sequential Monte Carlo (SMC) methods have gained popularity as powerful tools for solving intractable inference problems arising in the modelling of sequential data. Much effort was devoted to the development of SMC methods, known as particle filters (PFs), for estimating the filtering distribution of the latent variables in dynamic models. This line of research produced many algorithms, including auxiliary-variable PFs, marginal PFs, the resample-move algorithm and Rao-Blackwellised PFs. It also led to many applications in tracking, computer vision, robotics and econometrics. The theoretical properties of these methods were studied extensively in the same period.

Although PFs occupied centre stage, significant progress was also made on SMC methods for parameter estimation, online EM, particle smoothing, and SMC techniques for control and planning. Various SMC algorithms were also designed to approximate sequences of unnormalised functions, allowing the computation of eigenpairs of large matrices and kernel operators. More recently, frameworks were proposed for building efficient high-dimensional proposal distributions for MCMC using SMC methods; these allow the design of effective MCMC algorithms in complex scenarios where standard strategies fail. Such methods have been demonstrated in a number of domains, including simulated tempering, Dirichlet process mixtures, nonlinear non-Gaussian state-space models, protein folding and stochastic differential equations.

Finally, SMC methods were also generalised to carry out approximate inference in static models. This is typically done by constructing a sequence of probability distributions that starts with an easy-to-sample-from distribution and converges to the desired target distribution.
These SMC methods have been successfully applied to notoriously hard problems, such as inference in Boltzmann machines, marginal parameter estimation and nonlinear Bayesian experimental design. In this tutorial, we will introduce the classical SMC methods and expose the audience to the new developments in the field.
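As an illustration of the classical filtering setting the tutorial introduces, here is a minimal bootstrap particle filter for a simple AR(1) state-space model. The model, parameter values, and function name are illustrative choices for this sketch, not taken from the lecture:

```python
import numpy as np

def bootstrap_particle_filter(y, n_particles=1000, phi=0.9,
                              sigma_x=1.0, sigma_y=0.5, seed=0):
    """Bootstrap particle filter for the model
        x_t = phi * x_{t-1} + N(0, sigma_x^2),   y_t = x_t + N(0, sigma_y^2).
    Returns the filtering means E[x_t | y_{1:t}] for t = 1..T."""
    rng = np.random.default_rng(seed)
    T = len(y)
    # initialise particles from the stationary distribution of the AR(1) state
    x = rng.normal(0.0, sigma_x / np.sqrt(1.0 - phi**2), n_particles)
    means = np.empty(T)
    for t in range(T):
        # propagate through the state transition (the "bootstrap" proposal)
        x = phi * x + rng.normal(0.0, sigma_x, n_particles)
        # weight each particle by the observation likelihood
        logw = -0.5 * ((y[t] - x) / sigma_y) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means[t] = np.sum(w * x)
        # multinomial resampling to combat weight degeneracy
        x = rng.choice(x, size=n_particles, p=w, replace=True)
    return means
```

Resampling at every step is the simplest scheme; in practice one would resample only when the effective sample size drops, and replace multinomial with systematic or stratified resampling to reduce variance.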
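The static-model construction described above, moving from an easy-to-sample-from distribution towards the target, can be sketched as an SMC sampler with reweight, resample and move steps along a geometric tempering path. All names and tuning choices below are illustrative assumptions, not the lecture's own implementation:

```python
import numpy as np

def smc_annealing(log_target, n_particles=2000, n_steps=20, mh_steps=5,
                  init_scale=3.0, step_size=0.5, seed=0):
    """Sample from a 1-D density proportional to exp(log_target) by annealing
    from N(0, init_scale^2) through pi_b(x) ∝ N(x)^{1-b} * target(x)^b,
    with b running from 0 to 1. Returns particles and normalised weights."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(0.0, 1.0, n_steps + 1)
    x = rng.normal(0.0, init_scale, n_particles)
    log_init = lambda z: -0.5 * (z / init_scale) ** 2
    logw = np.zeros(n_particles)
    for b_prev, b in zip(betas[:-1], betas[1:]):
        # reweight: incremental importance weight between temperatures
        logw += (b - b_prev) * (log_target(x) - log_init(x))
        # resample when the effective sample size drops below N/2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        if 1.0 / np.sum(w ** 2) < n_particles / 2:
            x = rng.choice(x, size=n_particles, p=w, replace=True)
            logw[:] = 0.0
        # move: a few random-walk Metropolis steps targeting pi_b
        log_pi = lambda z: (1.0 - b) * log_init(z) + b * log_target(z)
        for _ in range(mh_steps):
            prop = x + step_size * rng.normal(size=n_particles)
            accept = np.log(rng.uniform(size=n_particles)) < log_pi(prop) - log_pi(x)
            x = np.where(accept, prop, x)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return x, w
```

The same reweight/resample/move skeleton underlies annealed importance sampling and the SMC samplers used for the static-inference problems mentioned above; only the sequence of intermediate distributions and the MCMC move kernel change.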

See Also:

Download slides: nips09_doucet_freitas_smc.pdf (2.0 MB)


Reviews and comments:

Comment 1: Philip Maybank, September 8, 2014 at 12:12 p.m.:

This material looks really interesting. However, there seems to be a problem with the text formatting in the uploaded slides: it looks fine in the video, but is almost incomprehensible in places in the PDF.

Comment 2: Lukasz Wiklendt, September 14, 2014 at 8:56 a.m.:

Try these slides from Nando's website:
