About
Bayesian nonparametric methods are an expanding part of the machine learning landscape. Proponents of Bayesian nonparametrics claim that these methods enable one to construct models whose complexity scales with the data, while representing uncertainty in both the parameters and the structure. Detractors point out that the characteristics of these models are often not well understood and that inference can be unwieldy. Relative to the statistics community, machine learning practitioners of Bayesian nonparametrics frequently do not leverage the representation of uncertainty that is inherent in the Bayesian framework, nor do they perform the kind of empirical and theoretical analysis that would set skeptics at ease. In this workshop we hope to bring a wide group together to constructively discuss and address these goals and shortcomings.
Workshop homepage: http://people.seas.harvard.edu/~rpa/nips2011npbayes.html
Videos
Invited Talks

Scaling Latent Variable Models · Jan 24, 2012 · 5928 views

Spatial Bayesian Nonparametrics for Natural Image Segmentation · Jan 24, 2012 · 5576 views

Discussion of Erik Sudderth's talk: NPB Hype or Hope? · Jan 24, 2012 · 11215 views

Discussion of Alex Smola's talk: Remarks on parallelised MCMC · Jan 24, 2012 · 6312 views

What to do about M-open? A decision theoretic (distribution free) solution · Jan 24, 2012 · 3487 views

Two tales about Bayesian nonparametric modeling · Jan 24, 2012 · 6912 views

Discussion of Igor Pruenster's talk · Jan 24, 2012 · 4620 views

Discussion of Christopher Holmes's talk: What to do about M-open? · Jan 31, 2012 · 5031 views

Why Bayesian nonparametrics? · Jan 24, 2012 · 23632 views