Dropout: A simple and effective way to improve neural networks
Published on Jan 16, 2013 · 54,746 views
In a large feedforward neural network, overfitting can be greatly reduced by randomly omitting half of the hidden units on each training case. This prevents complex co-adaptations in which a feature detector is only helpful in the context of several other specific feature detectors.
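As a rough illustration of the procedure described in the abstract, here is a minimal NumPy sketch of one hidden layer with dropout. The layer shape, the ReLU nonlinearity, and the function name are illustrative choices, not taken from the lecture; the test-time scaling by the keep probability follows the "halve the outgoing weights" rule discussed later in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden_layer(x, W, b, p_drop=0.5, train=True):
    """One hidden layer with dropout applied to its output units.

    Training: each hidden unit is omitted (set to zero) with probability
    p_drop, sampled independently for every training case.
    Test: all units are kept, but their activations are scaled by
    (1 - p_drop), approximating an average over the exponentially many
    "thinned" networks sampled during training.
    """
    h = np.maximum(0.0, x @ W + b)            # ReLU activations (illustrative choice)
    if train:
        mask = rng.random(h.shape) >= p_drop  # keep each unit with prob 1 - p_drop
        return h * mask
    return h * (1.0 - p_drop)
```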
Chapter list
Dropout: A simple way to improve neural networks (00:00)
What has happened to neural nets since 1985 (00:08)
Is there anything we cannot do with very big, deep neural networks? (00:39)
Averaging many models (01:05)
Two ways to average models (01:43)
Dropout: An efficient way to average many large neural nets (02:17)
Dropout as a form of model averaging (03:45)
But what do we do at test time? (04:18)
What if we have more hidden layers? (05:02)
What about the input layer? (05:28)
A familiar example of dropout (06:07)
How well does dropout work? (07:09)
Experiments on TIMIT (Nitish Srivastava) (08:04)
Finetuning (09:04)
The ILSVRC-2012 competition on ImageNet (10:12)
Error rates on the ILSVRC-2012 competition (10:39)
A better way to think about dropout (11:44)
Comparison with Bayesian approach (13:22)
The end of this part (14:11)
An alternative to dropout (14:21)
The effect of only sending one bit (15:17)
An amusing piece of history (16:25)
Some explanations for why cortical neurons don’t send analog values (19:46)