Beam Sampling for the Infinite Hidden Markov Model

author: Jurgen Van Gael, Computer Laboratory, University of Cambridge
published: Aug. 29, 2008,   recorded: July 2008,   views: 1057




The infinite hidden Markov model is a nonparametric extension of the widely used hidden Markov model. Our paper introduces a new inference algorithm for the infinite hidden Markov model called beam sampling. Beam sampling combines slice sampling, which limits the number of states considered at each time step to a finite number, with dynamic programming, which samples whole state trajectories efficiently. Our algorithm typically outperforms the Gibbs sampler and is more robust. We present applications of iHMM inference using the beam sampler on changepoint detection and text prediction problems.
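The combination described in the abstract can be sketched in code: auxiliary slice variables u_t prune all transitions with probability below the slice, and forward filtering / backward sampling then draws a whole state trajectory over the surviving states. The sketch below works on a finite truncation of the iHMM; the names `beam_sample_sweep`, `pi`, `phi` and the fixed start state are illustrative assumptions, and a full iHMM sampler would also resample the transition and emission parameters and grow or shrink the number of instantiated states.

```python
import numpy as np

def beam_sample_sweep(pi, phi, obs, states, rng):
    """One beam-sampling sweep (sketch, on a finite truncation).

    pi     : (K, K) transition matrix over the K instantiated states
    phi    : (K, V) emission matrix
    obs    : length-T observation sequence (integer symbols)
    states : current length-T state sequence
    """
    T, K = len(obs), pi.shape[0]
    # 1. Slice variables: u_t ~ Uniform(0, pi[s_{t-1}, s_t]).
    #    Transitions with pi <= u_t are pruned, which leaves only
    #    finitely many candidate states at each step.
    prev = np.concatenate(([0], states[:-1]))  # assume the chain starts in state 0
    u = rng.uniform(0, pi[prev, states])
    # 2. Forward filtering over the transitions that survive each slice.
    alpha = np.zeros((T, K))
    alpha[0] = (pi[0] > u[0]) * phi[:, obs[0]]
    for t in range(1, T):
        mask = pi > u[t]                       # allowed transitions at time t
        alpha[t] = phi[:, obs[t]] * (mask.T @ alpha[t - 1])
        alpha[t] /= alpha[t].sum()             # normalise for numerical stability
    # 3. Backward sampling of a whole trajectory in one pass.
    new_states = np.empty(T, dtype=int)
    new_states[-1] = rng.choice(K, p=alpha[-1] / alpha[-1].sum())
    for t in range(T - 2, -1, -1):
        w = alpha[t] * (pi[:, new_states[t + 1]] > u[t + 1])
        new_states[t] = rng.choice(K, p=w / w.sum())
    return new_states
```

Because step 3 resamples the entire trajectory jointly rather than one state at a time, successive sweeps mix much faster than a Gibbs sampler when neighbouring states are strongly correlated.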


Download slides: icml08_van_gael_bsihmm_01.pdf (804.9 KB)


Reviews and comments:

Comment1 vatsan, May 3, 2009 at 12:23 p.m.:

Why is he not considering the O(TK^2) time spent computing the probability of a state sequence in the beam sampling algorithm?

Since the Gibbs sampler requires only O(TK) per sweep, it is unclear how the convergence of the two can be compared on the basis of "number of iterations" alone. One could possibly run tens or hundreds of Gibbs iterations in the time it takes to run one iteration of the beam sampler.

Comment2 vatsan, May 3, 2009 at 12:32 p.m.:

also, when he says "there can only be a finite number of states whose transition probabilities could be greater than the slice", I don't see why.

You could still have a probability distribution over infinitely many states whose transitions sum to 1. So where exactly is the "infinite" being discussed/handled in this infinite HMM?
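The finiteness the lecture appeals to follows from the transition probabilities summing to one: for any slice value u > 0, at most floor(1/u) of them can exceed u, so the slice always leaves a finite candidate set even though the model has infinitely many states. A quick numerical check with a stick-breaking draw (the truncation at 1000 sticks and the concentration parameter alpha = 2.0 are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 2.0  # illustrative concentration parameter
# Stick-breaking construction: in principle infinitely many weights,
# truncated here at 1000 sticks purely for computation.
betas = rng.beta(1, alpha, size=1000)
weights = betas * np.concatenate(([1.0], np.cumprod(1 - betas[:-1])))

u = 0.05                              # an example slice value
n_big = int((weights > u).sum())      # weights exceeding the slice
# Since the weights sum to (at most) 1, n_big can never exceed 1/u = 20,
# no matter how many sticks the construction produces.
```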
