Regularizing RNNs by Stabilizing Activations

Published on May 27, 2016 · 2827 views

We stabilize the activations of Recurrent Neural Networks (RNNs) by penalizing the squared distance between successive hidden states' norms. This penalty term is an effective regularizer for RNNs, including LSTMs and IRNNs, improving performance on character-level language modelling (Penn Treebank) and phoneme recognition (TIMIT).
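To make the penalty concrete, here is a minimal sketch in PyTorch (not code from the talk): it assumes the hidden states are stacked into a (time, batch, hidden) tensor and that `beta` is an illustrative regularization strength to be tuned per task.

```python
import torch

def norm_stabilizer_penalty(hidden_states: torch.Tensor, beta: float = 1.0) -> torch.Tensor:
    """Penalize the squared difference between norms of successive hidden states.

    hidden_states: assumed shape (T, batch, hidden_dim).
    beta: illustrative penalty weight (a hyperparameter to tune).
    """
    norms = hidden_states.norm(dim=-1)        # per-step hidden-state norms, shape (T, batch)
    diffs = norms[1:] - norms[:-1]            # differences between successive norms
    return beta * diffs.pow(2).mean()         # average squared difference over time and batch

# Usage sketch: add the penalty to the task loss before backpropagation.
# loss = task_loss + norm_stabilizer_penalty(all_hidden_states, beta=beta)
```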

Chapter list

Regularizing RNNs by Stabilizing Activations (00:00)
Stability: a generic prior for temporal models (00:07)
The Norm-stabilizer - 1 (00:33)
The Norm-stabilizer - 2 (00:41)
Outline - 1 (01:12)
Outline - 2 (01:36)
Why is stability important? (01:38)
Stability doesn’t come for free! (03:48)
Why is stability important? (example) (05:25)
Outline - 3 (06:01)
Why does stability help generalization? (06:08)
Outline - 4 (07:11)
Stability in RNNs - 1 (07:29)
Stability in RNNs - 2 (08:05)
Stability in RNNs - 3 (09:03)
IRNN instability (09:29)
Outline - 5 (10:03)
Things we’re not doing - 1 (10:08)
Things we’re not doing - 2 (10:24)
Things we’re not doing - 3 (10:53)
Things we’re not doing - 4 (11:10)
Things we’re not doing - 5 (11:27)
Things we’re not doing - 6 (12:16)
Outline - 6 (12:30)
Tasks (12:32)
IRNN Performance (Penn Treebank) (13:07)
LSTM Performance (Penn Treebank) (14:32)
LSTM Performance (TIMIT) (14:35)
Alternative Cost Functions (15:16)
Untitled (16:08)