2014 EfficientMiniBatchTrainingforStochasticOptimization


Subject Headings:

Notes

Cited By

Quotes

Author Keywords

Abstract

Stochastic gradient descent (SGD) is a popular technique for large-scale optimization problems in machine learning. In order to parallelize SGD, minibatch training needs to be employed to reduce the communication cost. However, an increase in minibatch size typically decreases the rate of convergence. This paper introduces a technique based on approximate optimization of a conservatively regularized objective function within each minibatch. We prove that the convergence rate does not decrease with increasing minibatch size. Experiments demonstrate that with suitable implementations of approximate optimization, the resulting algorithm can outperform standard SGD in many scenarios.
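A minimal Python sketch of the idea described in the abstract, offered only as an illustration and not as the authors' reference implementation: each minibatch step approximately minimizes a conservatively regularized objective f_B(w) + (gamma/2)||w - w_t||^2 with a few inner gradient steps. The synthetic least-squares problem, batch size, regularization weight gamma, and inner-step settings below are assumptions chosen for clarity.

    import numpy as np

    # Sketch of minibatch training with a conservatively regularized
    # subproblem per minibatch (illustrative settings, not the paper's).
    rng = np.random.default_rng(0)
    n, d = 10_000, 20
    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    y = X @ w_true + 0.1 * rng.normal(size=n)

    def minibatch_grad(w, idx):
        """Gradient of the least-squares loss on the minibatch indexed by idx."""
        Xb, yb = X[idx], y[idx]
        return Xb.T @ (Xb @ w - yb) / len(idx)

    def conservative_minibatch_sgd(batch_size=256, gamma=1.0,
                                   inner_steps=5, inner_lr=0.1, epochs=5):
        w = np.zeros(d)
        for _ in range(epochs):
            for _ in range(n // batch_size):
                idx = rng.choice(n, size=batch_size, replace=False)
                w_anchor = w.copy()
                # Approximately solve the regularized minibatch subproblem
                #   f_B(w) + (gamma / 2) * ||w - w_anchor||^2
                # with a few gradient steps rather than solving it exactly.
                for _ in range(inner_steps):
                    g = minibatch_grad(w, idx) + gamma * (w - w_anchor)
                    w -= inner_lr * g
        return w

    w_hat = conservative_minibatch_sgd()
    print("parameter error:", np.linalg.norm(w_hat - w_true))

Setting inner_steps = 1 and gamma = 0 in this sketch reduces the update to plain minibatch SGD, which makes the role of the conservative regularizer easy to compare.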

References


Author: Alexander J. Smola, Tong Zhang, Mu Li, Yuqiang Chen
Title: Efficient Mini-batch Training for Stochastic Optimization
DOI: 10.1145/2623330.2623612
Year: 2014