2018 A Graph-to-Sequence Model for AMR-to-Text Generation

Subject Headings: Abstract Meaning Representation, AMR-to-Text Generation Task, Natural Language Generation Task, Graph-to-Sequence Model, Graph-to-Sequence AMR-to-Text Generation System, Copy Mechanism, Character LSTM.

Notes

Cited By

Quotes

Abstract

The problem of AMR-to-text generation is to recover a text representing the same meaning as an input AMR graph. The current state-of-the-art method uses a sequence-to-sequence model, leveraging LSTM for encoding a linearized AMR structure. Although being able to model non-local semantic information, a sequence LSTM can lose information from the AMR graph structure, and thus faces challenges with large graphs, which result in long sequences. We introduce a neural graph-to-sequence model, using a novel LSTM structure for directly encoding graph-level semantics. On a standard benchmark, our model shows superior results to existing methods in the literature.
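A minimal NumPy sketch of the graph-state LSTM idea may make the encoder concrete. This is an illustration under simplifying assumptions, not the authors' model: the paper's recurrence treats incoming and outgoing edges separately, conditions on edge labels, and is paired with an attention-based decoder, a copy mechanism, and a character LSTM, all of which are omitted here. Every name below (graph_lstm_step, the parameter dictionary p, the toy adjacency matrix) is illustrative.

  import numpy as np

  def sigmoid(x):
      return 1.0 / (1.0 + np.exp(-x))

  def graph_lstm_step(h, c, x, adj, p):
      """One recurrent transition: every node updates its LSTM state from
      its own input vector and the summed states of its graph neighbors."""
      m = adj @ h                                  # neighbor messages, (n, d)
      z = np.concatenate([x, m], axis=1)           # per-node gate input
      i = sigmoid(z @ p["Wi"] + p["bi"])           # input gate
      f = sigmoid(z @ p["Wf"] + p["bf"])           # forget gate
      o = sigmoid(z @ p["Wo"] + p["bo"])           # output gate
      u = np.tanh(z @ p["Wu"] + p["bu"])           # candidate cell value
      c_new = f * c + i * u
      return o * np.tanh(c_new), c_new

  # Toy 4-node graph standing in for an AMR; symmetric adjacency for brevity.
  rng = np.random.default_rng(0)
  n, d_x, d = 4, 8, 16
  adj = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 1],
                  [0, 1, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
  x = rng.normal(size=(n, d_x))                    # node (concept) embeddings
  p = {k: rng.normal(scale=0.1, size=(d_x + d, d)) for k in ("Wi", "Wf", "Wo", "Wu")}
  p.update({k: np.zeros(d) for k in ("bi", "bf", "bo", "bu")})
  h, c = np.zeros((n, d)), np.zeros((n, d))
  for _ in range(5):                               # T steps: information travels T hops
      h, c = graph_lstm_step(h, c, x, adj, p)

After T recurrent steps, each node's hidden state reflects neighbors up to T hops away, which is how an encoder of this kind captures graph-level (non-local) semantics without linearizing the AMR into a sequence.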

References

BibTeX

@inproceedings{2018_AGraphtoSequenceModelforAMRtoTe,
  author    = {Linfeng Song and
               Yue Zhang and
               Zhiguo Wang and
               Daniel Gildea},
  editor    = {Iryna Gurevych and
               Yusuke Miyao},
  title     = {A Graph-to-Sequence Model for AMR-to-Text Generation},
  booktitle = {Proceedings of the 56th Annual Meeting of the Association for Computational
               Linguistics (ACL 2018), Volume 1: Long Papers, Melbourne, Australia, July 15-20, 2018},
  pages     = {1616--1626},
  publisher = {Association for Computational Linguistics},
  year      = {2018},
  url       = {https://www.aclweb.org/anthology/P18-1150/},
  doi       = {10.18653/v1/P18-1150},
}
