
Making Gradient Descent Optimal for Strongly Convex Stochastic Optimization

Published on Jan 25, 2012 · 4259 Views

Stochastic gradient descent (SGD) is a simple and popular method for solving stochastic optimization problems that arise in machine learning. For strongly convex problems, its convergence rate was known to be O(log(T)/T) when running SGD for T iterations and returning the average iterate, whereas the optimal rate for this problem class is O(1/T).
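The gap described above can be closed by a small modification to SGD: instead of averaging all T iterates, average only the last α-fraction of them (suffix averaging), which recovers the optimal O(1/T) rate. Below is a minimal sketch of this idea on a toy problem of our own (not taken from the talk); the function names, the oracle interface, and the Uniform{-1, 3} example are illustrative assumptions.

```python
import random

def sgd_suffix_average(grad, w0, lam, T, alpha=0.5):
    """SGD with step size 1/(lam*t) for a lam-strongly-convex objective.

    Returns the average of the last alpha-fraction of iterates
    (suffix averaging) rather than the average of all T iterates.
    `grad` is a stochastic gradient oracle: grad(w, t) -> gradient estimate.
    """
    w = w0
    suffix_start = int((1 - alpha) * T)
    acc, n = 0.0, 0
    for t in range(1, T + 1):
        eta = 1.0 / (lam * t)        # standard step size under strong convexity
        w = w - eta * grad(w, t)
        if t > suffix_start:         # accumulate only the suffix of iterates
            acc += w
            n += 1
    return acc / n

# Toy problem (an assumption for illustration): minimize E[(w - X)^2 / 2]
# with X ~ Uniform{-1, 3}; the objective is 1-strongly convex and the
# minimizer is E[X] = 1.
random.seed(0)
samples = [random.choice([-1.0, 3.0]) for _ in range(20000)]
g = lambda w, t: w - samples[t - 1]  # unbiased stochastic gradient at step t
w_hat = sgd_suffix_average(g, w0=0.0, lam=1.0, T=20000)
print(abs(w_hat - 1.0) < 0.1)
```

With η_t = 1/(λt) the iterates here reduce to running sample means, so the suffix average concentrates around the true minimizer w* = 1 as T grows.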

Chapter list

Making Gradient Descent Optimal for Strongly Convex Stochastic Optimization (00:00)
Stochastic Convex Optimization (00:02)
Strongly Convex Stochastic Optimization - 01 (00:46)
Strongly Convex Stochastic Optimization - 02 (01:08)
Better Algorithms (01:57)
Related Work (02:54)
This Work - 01 (03:44)
This Work - 02 (03:58)
This Work - 03 (04:42)
This Work - 04 (05:06)
This Work - 05 (05:23)
This Work - 06 (05:29)
Smooth F - 01 (05:51)
Smooth F - 02 (06:10)
Smooth F - 03 (06:38)
Non-Smooth F (07:51)
Warm-up (08:25)
Second Example (09:52)
Fixing SGD - 01 (10:54)
Fixing SGD - 02 (11:25)
Experiments - 01 (12:20)
Experiments - 02 (14:25)
Conclusions and Open Problems (15:00)