2016 MultiDomainNeuralNetworkLanguag


Subject Headings: Neural Machine Translation Task; Natural Language Generation Task

Notes

Pre-Print(s) and Other Link(s)

Cited By

Quotes

Abstract

Moving from limited-domain natural language generation (NLG) to open domain is difficult because the number of semantic input combinations grows exponentially with the number of domains. Therefore, it is important to leverage existing resources and exploit similarities between domains to facilitate domain adaptation. In this paper, we propose a procedure to train multi-domain, Recurrent Neural Network-based (RNN) language generators via multiple adaptation steps. In this procedure, a model is first trained on counterfeited data synthesised from an out-of-domain dataset, and then fine-tuned on a small set of in-domain utterances with a discriminative objective function. Corpus-based evaluation results show that the proposed procedure can achieve competitive performance in terms of BLEU score and slot error rate while significantly reducing the data needed to train generators in new, unseen domains. In subjective testing, human judges confirm that the procedure greatly improves generator performance when only a small amount of data is available in the domain.
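
The two-step recipe described above can be illustrated with a short sketch: pre-train an RNN generator on data synthesised from an out-of-domain corpus, then fine-tune it on a small in-domain set with a lower learning rate. This is a minimal PyTorch-style illustration, not the authors' code: the paper's data counterfeiting and discriminative objective are simplified here to plain cross-entropy training, and all names and the toy data below are hypothetical placeholders.

import torch
import torch.nn as nn

class RNNGenerator(nn.Module):
    # Word-level LSTM generator; a simplified stand-in for the paper's RNN-based model.
    def __init__(self, vocab_size=1000, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        hidden, _ = self.lstm(self.embed(tokens))
        return self.out(hidden)  # logits over the next token at each position

def run_stage(model, batches, lr, epochs):
    # One adaptation stage: teacher-forced next-token prediction with cross-entropy.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for tokens in batches:  # tokens: (batch, seq_len) tensor of word ids
            logits = model(tokens[:, :-1])
            loss = loss_fn(logits.reshape(-1, logits.size(-1)),
                           tokens[:, 1:].reshape(-1))
            opt.zero_grad()
            loss.backward()
            opt.step()

# Placeholder corpora: in practice the first would be delexicalised utterances
# counterfeited from an out-of-domain dataset, the second a small genuine in-domain set.
counterfeit_out_of_domain = [torch.randint(0, 1000, (32, 20)) for _ in range(50)]
small_in_domain = [torch.randint(0, 1000, (8, 20)) for _ in range(5)]

model = RNNGenerator()
run_stage(model, counterfeit_out_of_domain, lr=1e-3, epochs=3)  # step 1: train on counterfeited data
run_stage(model, small_in_domain, lr=1e-4, epochs=10)           # step 2: fine-tune on in-domain utterances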

References

BibTeX

@inproceedings{2016_MultiDomainNeuralNetworkLanguag,
  author    = {Tsung-Hsien Wen and
               Milica Gasic and
               Nikola Mrksic and
               Lina Maria Rojas-Barahona and
               Pei-Hao Su and
               David Vandyke and
               Steve J. Young},
  editor    = {Kevin Knight and
               Ani Nenkova and
               Owen Rambow},
  title     = {Multi-domain Neural Network Language Generation for Spoken Dialogue
               Systems},
  booktitle = {Proceedings of the 2016 Conference of the North American Chapter
               of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT 2016),
               San Diego, California, USA, June 12-17, 2016},
  pages     = {120--129},
  publisher = {The Association for Computational Linguistics},
  year      = {2016},
  url       = {https://doi.org/10.18653/v1/n16-1015},
  doi       = {10.18653/v1/n16-1015},
}


Author: Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Pei-Hao Su, David Vandyke, Steve J. Young, Lina Maria Rojas-Barahona
Title: Multi-domain Neural Network Language Generation for Spoken Dialogue Systems
Year: 2016