2019 CharacterLevelLanguageModelingw

From GM-RKB

Subject Headings: Character-Level Language Model.

Notes

Cited By


Quotes

Abstract

LSTMs and other RNN variants have shown strong performance on character-level language modeling. These models are typically trained using truncated backpropagation through time, and it is common to assume that their success stems from their ability to remember long-term contexts. In this paper, we show that a deep (64-layer) transformer model (Vaswani et al. 2017) with fixed context outperforms RNN variants by a large margin, achieving state of the art on two popular benchmarks: 1.13 bits per character on text8 and 1.06 on enwik8. To get good results at this depth, we show that it is important to add auxiliary losses, both at intermediate network layers and intermediate sequence positions.
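The abstract describes the recipe only in prose: a deep, fixed-context, causally masked transformer trained with auxiliary losses at intermediate layers and intermediate sequence positions. Below is a minimal sketch of those two auxiliary-loss ideas, not the authors' implementation: it assumes PyTorch, uses nn.TransformerEncoderLayer instead of the paper's 64-layer custom stack, and the model sizes and the fixed 0.5 auxiliary weight are illustrative assumptions rather than details from the paper.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class CharTransformerLM(nn.Module):
    def __init__(self, vocab_size=256, d_model=128, n_heads=4, n_layers=6, seq_len=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Parameter(torch.zeros(seq_len, d_model))
        self.layers = nn.ModuleList([
            nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=4 * d_model,
                                       batch_first=True)
            for _ in range(n_layers)
        ])
        # One prediction head per layer so intermediate layers can also be supervised.
        self.heads = nn.ModuleList([nn.Linear(d_model, vocab_size) for _ in range(n_layers)])

    def forward(self, x):
        # x: (batch, seq_len) of character ids; the causal mask enforces the fixed,
        # left-to-right context window.
        mask = torch.triu(torch.ones(x.size(1), x.size(1), dtype=torch.bool), diagonal=1)
        h = self.embed(x) + self.pos[: x.size(1)]
        logits_per_layer = []
        for layer, head in zip(self.layers, self.heads):
            h = layer(h, src_mask=mask)
            logits_per_layer.append(head(h))  # (batch, seq_len, vocab_size)
        return logits_per_layer

def loss_with_auxiliaries(logits_per_layer, targets):
    # Every position predicts its next character (the "intermediate positions" loss),
    # and every layer gets its own down-weighted prediction loss (the "intermediate
    # layers" loss). The paper decays the intermediate-layer weights during training;
    # a fixed 0.5 weight is assumed here for brevity.
    n_layers = len(logits_per_layer)
    total = 0.0
    for i, logits in enumerate(logits_per_layer):
        ce = F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
        total = total + (1.0 if i == n_layers - 1 else 0.5) * ce
    return total

# Tiny usage example on random character ids.
model = CharTransformerLM()
x = torch.randint(0, 256, (2, 64))   # input characters
y = torch.randint(0, 256, (2, 64))   # next-character targets (inputs shifted by one)
logits = model(x)
loss = loss_with_auxiliaries(logits, y)
# Bits per character = cross-entropy in nats divided by ln 2.
bpc = F.cross_entropy(logits[-1].reshape(-1, 256), y.reshape(-1)).item() / math.log(2)

The last line only illustrates how a character-level cross-entropy in nats maps to the bits-per-character numbers quoted in the abstract (divide by ln 2); the reported 1.13 bpc on text8 and 1.06 bpc on enwik8 come from the paper's full 64-layer model, not this toy configuration.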

References

BibTeX

@inproceedings{2019_CharacterLevelLanguageModelingw,
  author    = {Rami Al-Rfou and
               Dokook Choe and
               Noah Constant and
               Mandy Guo and
               Llion Jones},
  title     = {Character-Level Language Modeling with Deeper Self-Attention},
  booktitle = {Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI
               2019), The Thirty-First Innovative Applications of Artificial Intelligence
               Conference (IAAI 2019), The Ninth AAAI Symposium on Educational
               Advances in Artificial Intelligence (EAAI 2019)},
  pages     = {3159--3166},
  publisher = {AAAI Press},
  year      = {2019},
  url       = {https://doi.org/10.1609/aaai.v33i01.33013159},
  doi       = {10.1609/aaai.v33i01.33013159},
}


Author: Rami Al-Rfou, Llion Jones, Dokook Choe, Noah Constant, Mandy Guo
Title: Character-Level Language Modeling with Deeper Self-Attention
Year: 2019