2017 SGDR: Stochastic Gradient Descent with Warm Restarts


Subject Headings: Learning Rate Schedule; Cosine Annealing; Gradient Descent Algorithm.

Notes

Cited By

Quotes

Abstract

Restart techniques are common in gradient-free optimization to deal with multimodal functions. Partial warm restarts are also gaining popularity in gradient-based optimization to improve the rate of convergence in accelerated gradient schemes to deal with ill-conditioned functions. In this paper, we propose a simple warm restart technique for stochastic gradient descent to improve its anytime performance when training deep neural networks. We empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively. We also demonstrate its advantages on a dataset of EEG recordings and on a downsampled version of the ImageNet dataset. Our source code is available at https://github.com/loshchil/SGDR
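The learning rate schedule proposed in the paper is cosine annealing with warm restarts: within each restart cycle of length T_i, the learning rate decays from eta_max to eta_min following eta_t = eta_min + (1/2)(eta_max - eta_min)(1 + cos(pi * T_cur / T_i)), and it is reset to eta_max at the start of the next cycle, whose length is multiplied by T_mult. The following is a minimal Python sketch of that schedule; the hyperparameter names eta_min, eta_max, T_0, and T_mult follow the paper's notation, but the function sgdr_learning_rate and its per-epoch stepping are illustrative assumptions, not the authors' released implementation.

import math

def sgdr_learning_rate(epoch, eta_min=0.0, eta_max=0.1, T_0=10, T_mult=2):
    # Illustrative sketch of cosine annealing with warm restarts.
    # eta_min / eta_max bound the learning rate, T_0 is the length of the
    # first cycle, and T_mult scales the length of each subsequent cycle.
    T_i, T_cur = T_0, epoch
    # Locate the current cycle (T_i) and the epochs elapsed within it (T_cur).
    while T_cur >= T_i:
        T_cur -= T_i
        T_i *= T_mult
    # Cosine annealing inside the cycle; the rate snaps back to eta_max
    # at the start of every cycle (the "warm restart").
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(math.pi * T_cur / T_i))

# Example usage (hypothetical training loop): feed the per-epoch rate to any SGD optimizer.
for epoch in range(100):
    lr = sgdr_learning_rate(epoch, eta_min=1e-4, eta_max=0.05, T_0=10, T_mult=2)
    # ... run one epoch of SGD with learning rate lr ...

Because each cycle ends near eta_min, the model snapshot taken just before a restart is a natural candidate for ensembling or for anytime evaluation, which is the "anytime performance" benefit highlighted in the abstract.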

References

BibTeX

@inproceedings{2017_SGDRStochasticGradientDescentwi,
  author    = {Ilya Loshchilov and
               Frank Hutter},
  title     = {SGDR: Stochastic Gradient Descent with Warm Restarts},
  booktitle = {Conference Track Proceedings of 5th International Conference on
               Learning Representations (ICLR 2017)},
  publisher = {OpenReview.net},
  year      = {2017},
  url       = {https://openreview.net/forum?id=Skq89Scxx},
}


Author: Ilya Loshchilov, Frank Hutter
Title: SGDR: Stochastic Gradient Descent with Warm Restarts
Year: 2017