Introduction to CNNs
Published on Jul 27, 2017 · 6786 views
Chapter list
Introduction to Convolutional Networks - 1 00:00
Introduction to Convolutional Networks - 2 00:13
Vector Institute Faculty 01:52
We’re hiring!! 02:22
Outline 03:16
Neural Networks for Object Recognition - 1 03:58
Neural Networks for Object Recognition - 2 04:34
Neural Networks for Object Recognition - 3 05:03
Neural Networks for Object Recognition - 4 05:34
The invariance problem 05:57
Applying neural nets to images 06:17
Local Receptive Fields 07:35
Topographic Maps 08:33
Local Receptive Fields - 1 09:30
Local Receptive Fields - 2 09:57
Shared Weights 11:24
Examples of Feature Detectors 13:23
Local RFs 14:27
Weight sharing: Parameter saving 15:13
Pooling 17:32
Convolutional Layer 22:15
Finishing it off 22:49
Le Net 25:07
Outline 26:33
Modern CNNs: AlexNet (2012) 26:37
VGG (2014) 28:33
Residual Networks (2015) 28:56
Highway Networks (2015) 30:43
How Deep? 31:52
How Deep? Good? 32:20
How Deep? Good? Slow? 34:06
How Deep? Good? Slow? Complex? 35:04
Recent Developments in CNNs 36:19
Normalization 38:57
Divisive Normalization - 1 40:47
Divisive Normalization - 2 42:16
Network Design: Receptive Fields 43:16
Effective Receptive Fields 45:19
Enlarging Effective Receptive Fields 47:00
Outline 48:51
Representations in CNNs 49:03
CNNs & Biology 49:22
Original CNN: Bio-inspired 50:42
CNNs & Biology 51:32
Visualizing the Representations - 1 55:29
Visualizing the Representations - 2 57:20
Visualizing the Representations - 3 59:04
Visualizing the Representations - 4 59:20
Analyzing the Representations 01:00:11
Parts-Based Representations 01:01:25
Analyzing the Representations 01:03:17
Theory of CNNs 01:04:58
Representations in very deep nets 01:08:59
Outline 01:12:21
Applications: Semantic Segmentation 01:12:33
Up-sampling with convolutions 01:14:15
De-convolution 01:15:33
Applying to semantic segmentation 01:16:11
Outline 01:16:43
Captioning via Image/Text Embedding 01:17:30
Ranking experiments: Flickr8K and Flickr30K 01:20:08
Generating via encoder-decoder model 01:22:05
Encoder-decoder model 01:22:43
How to generate descriptions 01:22:55
Some good results - generation 01:23:26
Some failure types 01:23:38
Mad Libs 01:24:31
Generate with style - 1 01:24:32
Generate with style - 2 01:25:27
Generate with style - 3 01:25:48
Visual Question-Answering 01:26:11
Outline 01:26:12
Modern Deep Learning: More Data → Better Results 01:26:22
Conclusion - 1 01:26:50
Conclusion - 2 01:27:36