
From Averaging to Acceleration, There is Only a Step-size

Published on Aug 20, 2015 · 1864 views

We show that accelerated gradient descent, averaged gradient descent and the heavy-ball method for non-strongly-convex problems may be reformulated as constant parameter second-order difference equations.
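
As a rough numerical illustration of the two families of methods the abstract relates, here is a minimal sketch (not the speakers' code) comparing plain gradient descent with iterate averaging (Av-GD) against Nesterov's accelerated gradient descent on a non-strongly-convex quadratic; the random quadratic, the step-size gamma = 1/L and the iteration count are illustrative assumptions.

# Minimal sketch (assumed setup, not from the talk): compare gradient descent
# with iterate averaging (Av-GD) to Nesterov's accelerated gradient descent
# on a quadratic f(x) = 0.5 x'Hx - b'x with a possibly ill-conditioned Hessian.
import numpy as np

rng = np.random.default_rng(0)
d = 50
A = rng.standard_normal((d, d))
H = A.T @ A / d                       # positive semi-definite Hessian
b = rng.standard_normal(d)
L = np.linalg.eigvalsh(H).max()       # smoothness constant
gamma = 1.0 / L                       # illustrative step-size choice

f = lambda x: 0.5 * x @ H @ x - b @ x
grad = lambda x: H @ x - b

def averaged_gd(n_iter):
    x, x_bar = np.zeros(d), np.zeros(d)
    for n in range(1, n_iter + 1):
        x = x - gamma * grad(x)       # plain gradient step
        x_bar += (x - x_bar) / n      # running average of the iterates
    return x_bar

def nesterov(n_iter):
    x_prev = x = np.zeros(d)
    for n in range(1, n_iter + 1):
        y = x + (n - 1) / (n + 2) * (x - x_prev)   # momentum extrapolation
        x_prev, x = x, y - gamma * grad(y)
    return x

for name, x_out in [("Av-GD", averaged_gd(1000)), ("Nesterov", nesterov(1000))]:
    print(name, f(x_out))

On quadratic problems both approaches can be written as two-term (second-order) recursions in their iterates; as the title suggests, the talk's point is that the difference between such methods then comes down to the choice of step-size.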


Chapter list

From Averaging to Acceleration, There is Only a Step-size (00:00)
Goal: Optimizing a function f with potentially noisy access to its gradient (00:00)
Averaged gradient descent (Av-GD) (00:53)
Av-GD as a momentum algorithm - 1 (02:00)
Av-GD as a momentum algorithm - 2 (02:58)
Our contributions for quadratic non-strongly-convex problems (03:17)
Reformulation as a constant parameter second-order difference equation - 1 (03:57)
Reformulation as a constant parameter second-order difference equation - 2 (04:27)
Reformulation as a constant parameter second-order difference equation - 3 (04:36)
Convergence (05:15)
Area of convergence (06:52)
Deterministic bound (08:46)
Quadratic Optimization with Additive Noise - 1 (09:37)
Quadratic Optimization with Additive Noise - 2 (10:10)
Convergence result for unstructured noise - 1 (10:35)
Convergence result for unstructured noise - 2 (11:17)
Optimal bias-variance trade-off (11:52)
Structured noise and least-squares regression - 1 (12:15)
Structured noise and least-squares regression - 2 (12:41)
Minimax convergence rates - 1 (13:22)
Minimax convergence rates - 2 (13:54)
Conclusion (14:07)
Thank you! (14:42)