Efficient Mini-batch Training for Stochastic Optimization
Published on Oct 07, 2014
Stochastic gradient descent (SGD) is a popular technique for large-scale optimization problems in machine learning. In order to parallelize SGD, minibatch training needs to be employed to reduce the communication cost.
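To make the trade-off concrete, below is a minimal sketch of plain minibatch SGD on a least-squares objective (not the paper's specific method); the function name `minibatch_sgd` and all parameter values are illustrative assumptions. A larger batch size means fewer parameter updates per epoch, which lowers communication in a distributed setting but typically slows per-update convergence.

```python
import numpy as np

def minibatch_sgd(X, y, batch_size=32, lr=0.01, epochs=10, seed=0):
    """Illustrative minibatch SGD for 0.5 * ||X w - y||^2 (assumed example, not the paper's algorithm)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        idx = rng.permutation(n)                      # reshuffle each epoch
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            # Average gradient of the squared loss over the minibatch
            grad = X[batch].T @ (X[batch] @ w - y[batch]) / len(batch)
            w -= lr * grad
    return w

if __name__ == "__main__":
    # Synthetic usage example with a known ground-truth weight vector
    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 5))
    w_true = rng.normal(size=5)
    y = X @ w_true + 0.01 * rng.normal(size=1000)
    w_hat = minibatch_sgd(X, y, batch_size=64, lr=0.05, epochs=20)
    print(np.allclose(w_hat, w_true, atol=0.05))
```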