Deep Learning: Theoretical Motivations

Published on Sep 13, 2015 · 88,340 views

Chapter list

Deep Learning: Theoretical Motivations 00:00
Breakthrough 00:35
Automating Feature Discovery 01:31
Why is deep learning working so well? 04:13
Machine Learning, AI & No Free Lunch 04:23
Goal Hierarchy 07:31
Why are classical nonparametric methods not cutting it? 11:23
ML 101. What We Are Fighting Against: The Curse of Dimensionality 11:39
Not Dimensionality so much as Number of Variations 14:54
Putting Probability Mass where Structure is Plausible 16:44
Bypassing the curse of dimensionality 21:39
Learning multiple levels of representation 27:17
The Power of Distributed Representations 28:41
Non-distributed representations 28:53
The need for distributed representations 32:35
Classical Symbolic AI vs Representation Learning 52:23
Neural Language Models: fighting one exponential by another one! 55:06
Neural word embeddings: visualization directions = Learned Attributes 56:36
Analogical Representations for Free 57:09
The Next Challenge: Rich Semantic Representations for Word Sequences 59:44
The Power of Deep Representations 59:45
The Depth Prior can be Exponentially Advantageous 01:00:10
“Shallow” computer program 01:03:13
“Deep” computer program 01:03:35
Sharing Components in a Deep Architecture 01:05:43
New theoretical result 01:07:24
The Mirage of Convexity 01:08:32
A Myth is Being Debunked: Local Minima in Neural Nets 01:09:40
Saddle Points 01:10:42
Saddle Points During Training 01:18:47
Low Index Critical Points 01:22:03
Saddle-Free Optimization 01:22:04
Other Priors That Work with Deep Distributed Representations 01:22:05
How do humans generalize from very few examples? 01:22:31
Sharing Statistical Strength by Semi-Supervised Learning 01:25:29
Multi-Task Learning 01:25:58
Google Image Search: Different object types represented in the same space 01:26:59
Maps Between Representations 01:27:29
Multi-Task Learning with Different Inputs for Different Tasks 01:27:50
Why Latent Factors & Unsupervised Representation Learning? Because of Causality 01:28:14