Scalable Training of Mixture Models via Coresets
Published on Jan 25, 2012 · 5210 views
How can we train a statistical mixture model on a massive data set? In this paper, we show how to construct coresets for mixtures of Gaussians and natural generalizations. A coreset is a weighted subset of the data.
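To make the idea concrete, below is a minimal sketch (not the authors' exact construction or bounds) of the importance-sampling recipe the talk walks through: a cheap clustering gives each point a rough cost, points are sampled with probability proportional to a sensitivity-style score, and each sampled point carries weight 1/(m * q_i) so that weighted sums over the coreset approximate sums over the full data. The function name coreset_sketch and the specific score used here are illustrative assumptions, not the paper's construction.

    import numpy as np

    def coreset_sketch(X, m, k, seed=None):
        """Simplified importance-sampling coreset sketch (illustrative only).

        X: (n, d) data matrix, m: coreset size, k: number of rough centers.
        Returns (points, weights) where weighted sums over the sample
        approximate sums over the full data set.
        """
        rng = np.random.default_rng(seed)
        n = len(X)

        # Cheap bicriteria clustering: D^2-style seeding of k rough centers.
        centers = [X[rng.integers(n)]]
        d2 = np.sum((X - centers[0]) ** 2, axis=1)
        for _ in range(k - 1):
            centers.append(X[rng.choice(n, p=d2 / d2.sum())])
            d2 = np.minimum(d2, np.sum((X - centers[-1]) ** 2, axis=1))

        # Sensitivity-style score: mix of per-point cost and uniform mass,
        # so cheap (typical) points still get sampled occasionally.
        s = d2 / d2.sum() + 1.0 / n
        q = s / s.sum()

        # Importance sample m points; the weights undo the sampling bias.
        idx = rng.choice(n, size=m, p=q)
        w = 1.0 / (m * q[idx])
        return X[idx], w

The weighted subset returned by such a sketch could then be fed to any fitting procedure that accepts per-point weights, e.g. a weighted EM loop for a Gaussian mixture, in place of the full data set.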
Chapter list
Scalable Training of Mixture Models via Coresets (00:00)
Fitting Mixtures to Massive Data (00:23)
Coresets for Mixture Models (02:07)
Naïve Uniform Sampling - 1 (03:36)
Naïve Uniform Sampling - 2 (03:54)
Sampling Distribution (04:24)
Importance Weights (04:41)
Creating a Sampling Distribution - 1 (05:19)
Creating a Sampling Distribution - 2 (05:30)
Creating a Sampling Distribution - 3 (05:38)
Creating a Sampling Distribution - 4 (05:41)
Creating a Sampling Distribution - 5 (05:43)
Creating a Sampling Distribution - 6 (05:44)
Creating a Sampling Distribution - 7 (05:45)
Creating a Sampling Distribution - 8 (05:46)
Creating a Sampling Distribution - 9 (05:53)
Creating a Sampling Distribution - 10 (06:09)
Creating a Sampling Distribution - 11 (06:29)
Importance Weights (07:08)
Importance Sample (07:10)
Coresets via Adaptive Sampling (07:38)
A General Coreset Framework (08:51)
A Geometric Perspective (10:11)
Geometric Reduction (10:59)
Semi-Spherical Gaussian Mixtures (12:36)
Extensions and Generalizations (13:03)
Composition of Coresets - 1 (13:57)
Composition of Coresets - 2 (14:10)
Composition of Coresets - 3 (14:21)
Coresets on Streams - 1 (14:35)
Coresets on Streams - 2 (14:50)
Coresets on Streams - 3 (14:52)
Coresets on Streams - 4 (15:02)
Coresets in Parallel (15:45)
Handwritten Digits (16:36)
Neural Tetrode Recordings (17:33)
Community Seismic Network (17:57)
Learning User Acceleration (18:21)
Seismic Anomaly Detection (18:56)
Conclusions (19:23)