Neural Networks with Few Multiplications

Published on May 27, 2016 · 2328 Views

For most deep learning algorithms, training is notoriously time consuming. Since most of the computation in training neural networks is typically spent on floating point multiplications, we investigate an approach to training that eliminates the need for most of them. The method consists of two parts: first, weights are stochastically binarized, so the multiplications involved in computing hidden states become sign changes; second, while back-propagating error derivatives, the representations at each layer are quantized to powers of two, so the remaining multiplications become binary shifts.
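The first half of the talk (Binarize Weight Values, 03:19–04:55) replaces the real-valued weights used in the forward pass with stochastically sampled {-1, +1} values, so a layer's matrix-vector product needs only sign changes and additions. Below is a minimal NumPy sketch of that idea; the hard-sigmoid sampling probability and the layer shapes are illustrative assumptions rather than the exact scheme from the talk (the paper also considers a ternary {-1, 0, +1} variant). A second sketch, of the exponent quantization used in the backward pass, follows the chapter list.

import numpy as np

def hard_sigmoid(x):
    # Clip (x + 1) / 2 into [0, 1]; used as the probability of sampling +1.
    return np.clip((x + 1.0) / 2.0, 0.0, 1.0)

def stochastic_binarize(W, rng):
    # Sample a {-1, +1} matrix with P(+1) = hard_sigmoid(W).
    # The real-valued W stays as the master copy for parameter updates;
    # only the binarized copy enters the forward pass, so the layer's
    # matrix-vector product needs sign changes and additions rather than
    # floating point multiplications.
    return np.where(rng.random(W.shape) < hard_sigmoid(W), 1.0, -1.0)

# Usage: forward pass of one fully connected layer with binarized weights.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(256, 784))    # real-valued master weights (hypothetical sizes)
x = rng.random(784)                           # one input vector
Wb = stochastic_binarize(W, rng)
h = np.maximum(Wb @ x, 0.0)                   # ReLU hidden layer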

Chapter list

00:00  Neural Networks with Few Multiplications
00:36  Why we don’t want massive multiplications?
01:37  Various trials in the past decades …
02:07  Binarization as regularization?
03:00  Our approach - 1
03:14  Our approach - 2
03:19  Binarize Weight Values - 1
04:34  Binarize Weight Values - 2
04:55  Our Approach - 3
05:07  Quantized Backprop - 1
05:48  Exponential Quantization
06:28  Quantized Backprop - 2
06:56  Quantized Backprop - 3
07:25  How many multiplications saved?
08:11  Range of Hidden Representations
08:50  The Effect of Limiting the Range of Exponent
09:13  General Performance
09:41  Related Works & Recent Advances
10:24  Any questions?
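The chapters on Exponential Quantization (05:48), Quantized Backprop (06:28–07:25), and The Effect of Limiting the Range of Exponent (08:50) concern the backward pass: hidden representations are quantized to signed powers of two, so multiplying them with the back-propagated error reduces to binary shifts. The sketch below uses deterministic rounding to the nearest power of two with a clipped exponent range; the exponent bounds and the rounding rule are illustrative assumptions (the paper samples the quantized value stochastically).

import numpy as np

def quantize_pow2(x, min_exp=-8, max_exp=0):
    # Round each entry of x to a signed power of two, sign * 2**k.
    # k is clipped to [min_exp, max_exp], echoing the talk's point about
    # limiting the range of the exponent; multiplying by such a value is a
    # binary shift in fixed-point hardware. (Illustrative deterministic
    # rounding; the bounds -8 and 0 are assumptions.)
    sign = np.sign(x)
    mag = np.abs(x)
    k = np.clip(np.round(np.log2(np.maximum(mag, 2.0 ** min_exp))), min_exp, max_exp)
    return sign * 2.0 ** k

# Usage: quantize a hidden activation vector before it multiplies the
# back-propagated error, so the weight-gradient product becomes shifts.
h = np.array([0.03, 0.2, 0.9, 0.0, 0.51])
print(quantize_pow2(h))   # [0.03125 0.25 1. 0. 0.5]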