Who is Afraid of Non-Convex Loss Functions?

Published on Dec 29, 2007 · 46,600 views

The NIPS community has suffered from an acute convexivitis epidemic:
- ML applications seem to have trouble moving beyond logistic regression, SVMs, and exponential-family graphical models;
-

Chapter list

Who is afraid of nonconvex loss functions? (00:00)
Convex Shmonvex - 1 (00:22)
Convex Shmonvex - 2 (03:54)
To solve complicated AI tasks, ML will have to go nonconvex (06:08)
Best results on MNIST (from raw images: no preprocessing) (08:46)
Convexity is overrated (09:11)
Normalized-uniform set: Error rates (09:16)
Normalized-uniform set: Learning times (10:15)
Experiment 2: Jittered-cluttered dataset (11:24)
Jittered-cluttered dataset (12:06)
Optimization algorithms for learning (12:33)
Theoretical guarantees are overrated (12:49)
The visual system is “deep” and learned (14:18)
Do we really need deep architectures? (14:19)
Why are deep architectures more efficient? (14:21)
Strategies (after Hinton 2007) (14:25)
Deep learning is hard? - 1 (17:02)
Deep learning is hard? - 2 (19:20)
Shallow models (23:30)
The problem with non-convex learning (25:38)
Backprop learning is not as bad as it seems (27:30)
Convolutional networks (28:27)
“Only Yann can do it” (NOT!) (29:18)
The basic idea for training deep feature hierarchies (31:54)
The right tools: Automatic differentiation (32:10)
A stochastic diagonal Levenberg-Marquardt method - 1 (36:44)
A stochastic diagonal Levenberg-Marquardt method - 2 (37:49)
On-line computation of Ψ (38:44)
Recipe (38:52)
Estimates of optimal learning rate - 1 (40:10)
Estimates of optimal learning rate - 2 (40:40)
The end (40:52)
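
The chapters from 36:44 onward describe a stochastic diagonal Levenberg-Marquardt recipe for choosing per-parameter learning rates. As a rough illustration only (not the slides' actual code), the sketch below applies that idea to plain linear least squares: each weight w_i takes steps of size eta / (mu + h_i), where h_i is an online running estimate of the diagonal second derivative of the loss. The toy problem, variable names, and constants are my own assumptions.

```python
import numpy as np

# Hedged sketch of stochastic diagonal Levenberg-Marquardt learning rates
# for linear least squares (illustrative only; details are assumptions).
rng = np.random.default_rng(0)
n, d = 1000, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

w = np.zeros(d)
h = np.ones(d)                      # running estimate of the diagonal Hessian
eta, mu, gamma = 0.1, 0.01, 0.05    # global rate, damping term, averaging constant

for epoch in range(10):
    for i in rng.permutation(n):
        x, t = X[i], y[i]
        err = x @ w - t
        grad = err * x                        # dE/dw for E = 0.5*(x.w - t)^2
        h = (1 - gamma) * h + gamma * x**2    # here d^2E/dw_i^2 is exactly x_i^2
        w -= (eta / (mu + h)) * grad          # per-parameter Levenberg-Marquardt step

print(np.allclose(w, w_true, atol=1e-2))      # True: recovers the true weights
```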