Escaping From Saddle Points --- Online Stochastic Gradient for Tensor Decomposition
Published on Aug 20, 2015 · 3525 views
We analyze stochastic gradient descent for optimizing non-convex functions. For non-convex functions it is often enough to find a reasonable local minimum, and the main concern is that gradient updates can get trapped at saddle points.
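The abstract's central claim, that the noise in stochastic gradient updates is what lets the iterate escape a saddle point, is easy to see on a toy strict saddle function. Below is a minimal sketch, not from the talk: the objective, step size, and noise scale are all illustrative choices, and NumPy is assumed.

import numpy as np

# Toy objective with a strict saddle point at the origin (illustrative,
# not from the talk):  f(x, y) = x^4/4 - x^2/2 + y^2/2.
# grad f = (x^3 - x, y); the Hessian at 0 is diag(-1, 1), so (0, 0) is a
# strict saddle, and the minimizers are (+-1, 0).
def grad(w):
    x, y = w
    return np.array([x**3 - x, y])

rng = np.random.default_rng(0)
eta = 0.05  # step size (illustrative choice)

# Plain gradient descent started exactly at the saddle never moves,
# because the gradient there is zero.
w = np.zeros(2)
for _ in range(1000):
    w -= eta * grad(w)
print("GD from saddle:", w)          # stays at [0. 0.]

# Gradient plus noise (a stand-in for the stochastic gradient) escapes:
# the negative curvature direction amplifies the perturbation.
w = np.zeros(2)
for _ in range(1000):
    noise = 0.01 * rng.standard_normal(2)
    w -= eta * (grad(w) + noise)
print("noisy SGD from saddle:", w)   # ends near [+-1, 0]

This is the behavior the talk's convergence theorem makes quantitative for the general class of strict saddle functions.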
Chapter list
Escaping From Saddle Points - Online Stochastic Gradient for Tensor Decomposition (00:00)
Outline - 1 (00:20)
Stochastic Gradient Descent - 1 (00:52)
Stochastic Gradient Descent - 2 (02:51)
Stochastic Gradient Descent - 3 (03:19)
Stochastic Gradient Descent - 4 (03:20)
Stochastic Gradient Descent - 5 (03:59)
Stochastic Gradient Descent - 6 (04:14)
Outline - 2 (04:22)
Summary of Contribution (04:29)
Outline - 3 (05:21)
Strict Saddle Functions - 1 (05:22)
Strict Saddle Functions - 2 (05:28)
Strict Saddle Functions - 3 (05:53)
Strict Saddle Functions - 4 (06:23)
SGD at Saddle Point (08:36)
Outline - 4 (10:06)
SGD for Strict Saddle Functions - 1 (10:10)
SGD for Strict Saddle Functions - 2 (10:53)
Outline - 5 (11:06)
Main Result: Convergence Theorem - 1 (11:09)
Main Result: Convergence Theorem - 2 (11:47)
Main Result: Convergence Theorem - 3 (12:31)
Orthogonal Tensor Decomposition - 1 (14:13)
Orthogonal Tensor Decomposition - 2 (15:29)
Orthogonal Tensor Decomposition - 3 (15:36)
Orthogonal Tensor Decomposition Challenges (16:32)
Outline - 6 (18:01)
Experiment - 1 (18:02)
Experiment - 2 (18:36)
Outline - 6 (19:03)
Conclusion (19:05)