2015 TrainingVeryDeepNetworks


Subject Headings: Highway Neural Network; Recurrent Highway Neural Network; LSTM Recurrent Neural Network.

Notes

Cited By

Quotes

Abstract

Theoretical and empirical evidence indicates that the depth of neural networks is crucial for their success. However, training becomes more difficult as depth increases, and training of very deep networks remains an open problem. Here we introduce a new architecture designed to overcome this. Our so-called highway networks allow unimpeded information flow across many layers on information highways. They are inspired by Long Short-Term Memory recurrent networks and use adaptive gating units to regulate the information flow. Even with hundreds of layers, highway networks can be trained directly through simple gradient descent. This enables the study of extremely deep and efficient architectures.

References

BibTeX

@inproceedings{2015_TrainingVeryDeepNetworks,
  author    = {Rupesh Kumar Srivastava and
               Klaus Greff and
J{\"{u}}rgen Schmidhuber},
  editor    = {Corinna Cortes and
               Neil D. Lawrence and
               Daniel D. Lee and
               Masashi Sugiyama and
               Roman Garnett},
  title     = {Training Very Deep Networks},
  booktitle = {Advances in Neural Information Processing Systems 28: Annual Conference
               on Neural Information Processing Systems 2015},
  pages     = {2377--2385},
  year      = {2015},
  url       = {http://papers.nips.cc/paper/5850-training-very-deep-networks},
}


Author: Jürgen Schmidhuber, Rupesh Kumar Srivastava, Klaus Greff
Title: Training Very Deep Networks
Year: 2015