Beyond stochastic gradient descent for large-scale machine learning
Published on Oct 29, 2014 · 7,696 views
Many machine learning and signal processing problems are traditionally cast as convex optimization problems. A common difficulty in solving these problems is the size of the data, where there are ma
Chapter list
Beyond stochastic gradient descent for large-scale machine learning (00:00)
Context (00:10)
Supervised machine learning (01:48)
Smoothness and strong convexity - 1 (04:04)
Smoothness and strong convexity - 2 (04:21)
Smoothness and strong convexity - 3 (04:50)
Smoothness and strong convexity - 4 (05:22)
Iterative methods for minimizing smooth functions - 1 (06:34)
Iterative methods for minimizing smooth functions - 2 (07:59)
Stochastic approximation (09:06)
Convex stochastic approximation - 1 (10:30)
Convex stochastic approximation - 2 (12:55)
Least-mean-square algorithm (15:19)
Markov chain interpretation of constant step sizes - 1 (17:27)
Markov chain interpretation of constant step sizes - 2 (19:19)
Simulations - synthetic examples - 1 (19:31)
Simulations - benchmarks - 1 (20:42)
Beyond least-squares - Markov chain interpretation (22:39)
Simulations - synthetic examples - 2 (23:50)
Restoring convergence through online Newton steps - 1 (25:12)
Restoring convergence through online Newton steps - 2 (25:20)
Choice of support point for online Newton step (26:33)
Simulations - synthetic examples - 3 (27:29)
Simulations - benchmarks - 2 (27:47)
Conclusions - 1 (27:50)
Conclusions - 2 (27:57)
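The chapters on stochastic approximation and the least-mean-square algorithm concern stochastic gradient descent with a constant step size on least-squares objectives, together with iterate averaging. As a rough illustration of that setting (a minimal sketch on hypothetical synthetic data, not code from the talk; the problem sizes, step size, and noise level below are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic least-squares problem: y_i = x_i . theta* + noise,
# minimizing f(theta) = E[(x . theta - y)^2] / 2 from streaming observations.
n, d = 1000, 5
theta_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ theta_star + 0.1 * rng.normal(size=n)

gamma = 0.05             # constant step size (assumed value)
theta = np.zeros(d)      # current SGD iterate
theta_bar = np.zeros(d)  # Polyak-Ruppert running average of the iterates

for t in range(n):
    x_t, y_t = X[t], y[t]
    grad = (x_t @ theta - y_t) * x_t            # stochastic gradient of the squared loss
    theta -= gamma * grad                        # constant-step SGD update
    theta_bar += (theta - theta_bar) / (t + 1)  # incremental running average

err_last = np.linalg.norm(theta - theta_star)
err_avg = np.linalg.norm(theta_bar - theta_star)
```

With a constant step size the last iterate keeps fluctuating in a neighborhood of the optimum, which is why the averaged iterate `theta_bar` is the quantity of interest in this line of work.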