2011 StrategiesforTrainingLargeScale


Subject Headings: Neural Network Language Model; Hash-Based Maximum Entropy Model.

Notes

Cited By

Quotes

Abstract

We describe how to effectively train neural network based language models on large data sets. Fast convergence during training and better overall performance are observed when the training data are sorted by their relevance. We introduce a hash-based implementation of a maximum entropy model that can be trained as part of the neural network model. This leads to a significant reduction of computational complexity. We achieved around 10% relative reduction of word error rate on an English Broadcast News speech recognition task, against a large 4-gram model trained on 400M tokens.
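
Below is a minimal sketch of the hash-based maximum entropy idea described in the abstract: n-gram history features conjoined with the candidate word are hashed into a fixed-size weight array, so the parameter count (and memory) stays bounded no matter how many distinct n-grams appear in the training data. The vocabulary, hash scheme, array size, and function names are illustrative assumptions, not the authors' implementation, and the joint training with a neural network is omitted.

import numpy as np

# Toy vocabulary and fixed hashed-parameter budget (both are assumptions for
# illustration only, not values from the paper).
VOCAB = ["<s>", "the", "cat", "sat", "</s>"]
V = len(VOCAB)
HASH_SIZE = 2 ** 20
weights = np.zeros(HASH_SIZE)  # shared hashed weight array for all n-gram features

def feature_index(history, word_id, order):
    # Hash an (n-gram history, candidate word) pair into the weight array.
    # Collisions are simply tolerated; the fixed array size bounds memory.
    key = (tuple(history[-order:]), word_id)
    return hash(key) % HASH_SIZE

def maxent_scores(history, max_order=3):
    # Unnormalized log-scores for every vocabulary word given the history,
    # summing hashed feature weights of orders 1..max_order.
    scores = np.zeros(V)
    for w in range(V):
        for order in range(1, max_order + 1):
            scores[w] += weights[feature_index(history, w, order)]
    return scores

def train_step(history, target, lr=0.1, max_order=3):
    # One SGD step on the cross-entropy loss for a single training example.
    scores = maxent_scores(history, max_order)
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    for w in range(V):
        grad = probs[w] - (1.0 if w == target else 0.0)
        for order in range(1, max_order + 1):
            weights[feature_index(history, w, order)] -= lr * grad

# Toy usage: learn to predict "sat" after "the cat".
history = [VOCAB.index("the"), VOCAB.index("cat")]
train_step(history, VOCAB.index("sat"))
print(VOCAB[int(np.argmax(maxent_scores(history)))])

The point of the hashing trick is that the parameter budget is fixed in advance rather than growing with the number of observed n-grams, which is what keeps the maximum entropy features cheap enough to train alongside the neural network model on large corpora.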

References

BibTeX

@inproceedings{2011_StrategiesforTrainingLargeScale,
  author    = {Tomas Mikolov and
               Anoop Deoras and
               Daniel Povey and
               Lukas Burget and
               Jan Cernocky},
  editor    = {David Nahamoo and
               Michael Picheny},
  title     = {Strategies for training large scale neural network language models},
  booktitle = {Proceedings of the 2011 IEEE Workshop on Automatic Speech Recognition and Understanding
               (ASRU 2011)},
  pages     = {196--201},
  publisher = {IEEE},
  year      = {2011},
  url       = {https://doi.org/10.1109/ASRU.2011.6163930},
  doi       = {10.1109/ASRU.2011.6163930},
}


Author: Anoop Deoras, Lukas Burget, Tomáš Mikolov, Jan Cernocky, Daniel Povey
Title: Strategies for Training Large Scale Neural Network Language Models
Year: 2011