![Efficient Mini-batch Training for Stochastic Optimization thumbnail](https://apiminio.videolectures.net/vln/lectures/21967/1/en/thumbnail.jpg)
Efficient Mini-batch Training for Stochastic Optimization
Published on Oct 07, 2014 · 2279 Views
Stochastic gradient descent (SGD) is a popular technique for large-scale optimization problems in machine learning. In order to parallelize SGD, minibatch training needs to be employed to reduce the communication cost.
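
As background for the talk, here is a minimal sketch of plain minibatch SGD (not the specific conservatively regularized method presented in the lecture); the synthetic least-squares problem, batch size, and learning rate below are illustrative assumptions:

```python
# Minimal minibatch SGD sketch: at each step, sample a batch of b examples,
# average their gradients, and take a step. Larger b reduces gradient variance
# and per-update communication, but can slow convergence per example processed.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares problem (illustrative data only): y = X @ w_true + noise.
n, d = 10_000, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def minibatch_sgd(X, y, batch_size=32, lr=0.05, epochs=5):
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        perm = rng.permutation(n)
        for start in range(0, n, batch_size):
            idx = perm[start:start + batch_size]
            Xb, yb = X[idx], y[idx]
            # Averaged minibatch gradient of 0.5 * ||Xb @ w - yb||^2 / b
            grad = Xb.T @ (Xb @ w - yb) / len(idx)
            w -= lr * grad
    return w

w_hat = minibatch_sgd(X, y)
print("parameter error:", np.linalg.norm(w_hat - w_true))
```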