
Efficient Mini-batch Training for Stochastic Optimization

Published on Oct 07, 2014 · 2,279 views

Stochastic gradient descent (SGD) is a popular technique for large-scale optimization problems in machine learning. In order to parallelize SGD, minibatch training needs to be employed to reduce the communication cost.
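For reference, the following is a minimal sketch of plain minibatch SGD, not the method presented in the talk: each update averages gradients over a small batch of examples, which is what makes the computation easy to distribute across workers. The least-squares objective, batch size, learning rate, and epoch count are illustrative assumptions.

```python
# Minimal sketch of vanilla minibatch SGD on a synthetic least-squares
# problem (illustrative only; not the conservative-regularization method
# from the lecture).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: find w minimizing the average of (x_i . w - y_i)^2.
n, d = 10_000, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

def minibatch_sgd(X, y, batch_size=64, lr=0.05, epochs=5):
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        perm = rng.permutation(n)
        for start in range(0, n, batch_size):
            idx = perm[start:start + batch_size]
            # Gradient of the squared loss, averaged over the minibatch.
            grad = 2.0 * X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
            w -= lr * grad
    return w

w_hat = minibatch_sgd(X, y)
print("parameter error:", np.linalg.norm(w_hat - w_true))
```

Larger batches reduce communication per example in a parallel setting, but, as the lecture discusses, increasing the minibatch size typically slows convergence per pass over the data.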
