A Stochastic Gradient Method with an Exponential Convergence Rate for Finite Training Sets
Published on Jan 22, 2013 · 7415 views
We propose a new stochastic gradient method for optimizing the sum of a finite set of smooth functions, where the sum is strongly convex. While standard stochastic gradient methods converge at sublinear rates for this problem, the proposed method incorporates a memory of previous gradient values to achieve a linear convergence rate.
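The core idea behind the stochastic average gradient (SAG) method discussed in the talk is to keep a memory of the most recent gradient of each training example and step along the average of these stored gradients. The sketch below illustrates this on an l2-regularized least-squares objective; the problem data, step-size rule, and iteration count are illustrative assumptions, not details taken from the lecture.

```python
import numpy as np

def sag_least_squares(A, b, lam=1e-2, step=None, n_iters=10000, seed=0):
    # Minimize (1/n) * sum_i f_i(x) with
    # f_i(x) = 0.5 * (a_i^T x - b_i)^2 + 0.5 * lam * ||x||^2.
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    grad_table = np.zeros((n, d))   # memory of the last gradient seen for each f_i
    grad_sum = np.zeros(d)          # running sum of the stored gradients
    if step is None:
        # Illustrative step size based on a bound on the component smoothness constants.
        L = np.max(np.sum(A**2, axis=1)) + lam
        step = 1.0 / (16 * L)
    for _ in range(n_iters):
        i = rng.integers(n)
        # Fresh gradient of the sampled component f_i at the current iterate.
        g_new = (A[i] @ x - b[i]) * A[i] + lam * x
        # Replace the stored gradient and update the running sum in O(d).
        grad_sum += g_new - grad_table[i]
        grad_table[i] = g_new
        # Step along the average of all stored gradients.
        x -= step * grad_sum / n
    return x

# Usage on synthetic data (hypothetical example).
A = np.random.default_rng(1).standard_normal((200, 10))
x_true = np.arange(10, dtype=float)
b = A @ x_true + 0.1 * np.random.default_rng(2).standard_normal(200)
x_hat = sag_least_squares(A, b)
```

Unlike plain stochastic gradient descent, each iteration still touches only one example, but the direction used is an average over all stored gradients, which is what allows the linear (exponential) convergence rate the talk describes; the price is the O(n·d) memory for the gradient table, a point addressed in the "Reducing memory requirements" chapter.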
Chapter list
A Stochastic Gradient Method with an Exponential Convergence Rate for Finite Training Sets (00:00)
Context: Machine Learning for “Big Data” (00:50)
A standard machine learning optimization problem (01:32)
Deterministic methods (02:19)
Stochastic methods (03:20)
Hybrid methods (1) (04:36)
Hybrid methods (2) (04:52)
Related work - Sublinear convergence rate (05:05)
Related work - Linear convergence rate (05:32)
Stochastic Average Gradient Method (1) (05:58)
Stochastic Average Gradient Method (2) (06:07)
Stochastic Average Gradient Method (3) (06:18)
SAG convergence analysis (08:08)
Comparison with full gradient methods (10:10)
Experiments - Training cost (1) (11:23)
Experiments - Training cost (2) (12:09)
Experiments - Testing cost (1) (12:43)
Experiments - Testing cost (2) (12:55)
Reducing memory requirements (13:09)
Conclusion and Open Problems (13:51)