Automated Text Generation (NLG) Task
An Automated Text Generation (NLG) Task is a text generation task (whose outputs are text items) that is an automated natural language task.
- Context:
- output: NLG Task Output (Machine Written text).
- measure: NLG Performance Measure.
- It can be an automated written language generation task.
- It can be solved by an Automated Text Generation System (that implements a text generation algorithm).
- It can range from being a Heuristic Language Generation Task to being a Data-Driven Language Generation Task.
- It can range from being a Domain-Specific NLG Task to being an Open-Domain NLG Task.
- …
- Example(s):
- Automated Text Error Correction.
- Automated Question Answering.
- Automated Image Description Generation.
- Automated Summarization.
- Automated Definitional Sentence Generation.
- Automated Tweet Writing.
- Automated Essay Writing.
- Machine Language Translation.
- Writing Assistance Task.
- Automated Wikitext Generation, such as Automated Wikipedia Page Creation.
- …
- CJS Neural Narrative Text Generation Task.
- …
- Automated Domain-Specific NLG, such as: Medical NLG, Legal NLG, Software NLG.
- …
- a Constrained NLG Task, such as:
- Generate Text(length={200}, subject='history', vocabulary='advanced', tone='formal', structure='intro, body, conclusion', deadline='2023-12-31', sentiments='neutral', audience='adults') => "Introduction about the subject of history. Detailed body text employing an advanced vocabulary and a formal tone. Conclusive remarks. Completed before the specified deadline, aimed at an adult audience with a neutral sentiment."
- …
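The constrained-generation call above can be sketched in code. This is a minimal, hypothetical illustration: the `generate_text()` function, its parameters, and the template-filling logic are assumptions invented for this sketch, not a real NLG system's API. A production system would pass such constraints to a trained generation model rather than a template.

```python
# Hypothetical sketch of a constrained NLG call (illustrative only):
# each keyword argument encodes one generation constraint, and a simple
# template renderer stands in for a trained text-generation model.
def generate_text(length=200, subject="history", vocabulary="advanced",
                  tone="formal", structure="intro, body, conclusion",
                  audience="adults", sentiment="neutral"):
    """Render a skeletal text that reflects the stated constraints."""
    sections = [s.strip() for s in structure.split(",")]
    parts = []
    for section in sections:
        parts.append(f"[{section.title()}] A {tone}, {sentiment} passage on "
                     f"{subject}, using {vocabulary} vocabulary for {audience}.")
    text = " ".join(parts)
    return text[:length]  # enforce the length constraint by truncation

print(generate_text(length=300, subject="history"))
```

The point of the sketch is that each constraint (length, tone, structure, audience) maps to a distinct control on the output, which is what distinguishes a constrained NLG task from open-ended generation.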
- Counter-Example(s):
- See: Question Answering, Computational Linguistics, Natural Language Processing.
References
2018a
- (Clark et al., 2018) ⇒ Elizabeth Clark, Yangfeng Ji, and Noah A. Smith. (2018). “Neural Text Generation in Stories Using Entity Representations As Context.” In: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), Volume 1 (Long Papers). DOI:10.18653/v1/N18-1204.
2018b
- (Fedus et al., 2018) ⇒ William Fedus, Ian Goodfellow, and Andrew M Dai. (2018). "MaskGAN: Better Text Generation via Filling in the ________". In: Proceedings of the Sixth International Conference on Learning Representations (ICLR-2018).
2018c
- (Guo et al., 2018) ⇒ Jiaxian Guo, Sidi Lu, Han Cai, Weinan Zhang, Yong Yu, and Jun Wang. (2018). “Long Text Generation via Adversarial Training with Leaked Information.” In: Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), the 30th Innovative Applications of Artificial Intelligence Conference (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18).
2018d
- (Kudo & Richardson, 2018) ⇒ Taku Kudo, and John Richardson. (2018). “SentencePiece: A Simple and Language Independent Subword Tokenizer and Detokenizer for Neural Text Processing.” In: arXiv preprint arXiv:1808.06226.
2018e
- (van der Lee et al., 2018) ⇒ Chris van der Lee, Emiel Krahmer, and Sander Wubben. (2018). “Automated Learning of Templates for Data-to-text Generation: Comparing Rule-based, Statistical and Neural Methods.” In: Proceedings of the 11th International Conference on Natural Language Generation (INLG 2018). DOI:10.18653/v1/W18-6504.
2018f
- (Song et al., 2018) ⇒ Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. (2018). “A Graph-to-Sequence Model for AMR-to-Text Generation.” In: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018) Volume 1: Long Papers. DOI:10.18653/v1/P18-1150
2018g
- (Zhu et al., 2018) ⇒ Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. (2018). “Texygen: A Benchmarking Platform for Text Generation Models.” In: Proceedings of The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval (SIGIR 2018). DOI:10.1145/3209978.3210080.
2017a
- (Zhang et al., 2017) ⇒ Yizhe Zhang, Zhe Gan, Kai Fan, Zhi Chen, Ricardo Henao, Dinghan Shen, and Lawrence Carin. (2017). "Adversarial Feature Matching for Text Generation". In: Proceedings of the 34th International Conference on Machine Learning (ICML 2017).
2017b
- (Li et al., 2017) ⇒ Jiwei Li, Will Monroe, Tianlin Shi, Sebastien Jean, Alan Ritter, and Dan Jurafsky. (2017). “Adversarial Learning for Neural Dialogue Generation.” In: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017). DOI:10.18653/v1/D17-1230.
2017c
- (Lin, Li, et al., 2017) ⇒ Kevin Lin, Dianqi Li, Xiaodong He, Ming-ting Sun, and Zhengyou Zhang. (2017). “Adversarial Ranking for Language Generation.” In: Proceedings of Advances in Neural Information Processing Systems 30 (NIPS-2017).
2017d
- (Che et al., 2017) ⇒ Tong Che, Yanran Li, Ruixiang Zhang, R. Devon Hjelm, Wenjie Li, Yangqiu Song, and Yoshua Bengio. (2017). “Maximum-Likelihood Augmented Discrete Generative Adversarial Networks.” In: ArXiv Preprint: 1702.07983.
2017e
- (Semeniuta et al., 2017) ⇒ Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. (2017). “A Hybrid Convolutional Variational Autoencoder for Text Generation.” In: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017). DOI:10.18653/v1/D17-1066.
2017f
- (Yu et al., 2017a) ⇒ Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. (2017). “SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient.” In: Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI 2017).
2017g
- (Bengio, 2017) ⇒ Yoshua Bengio. (2017). "Creating Human-Level AI" (Presentation). In: Asilomar Conference on Beneficial AI.
- QUOTE: What’s Missing (to achieve AGI) … Actually understanding language (also solves generating), requiring enough world knowledge / commonsense
2017h
- https://github.com/pytorch/examples/tree/master/word_language_model
- QUOTE: This example trains a multi-layer RNN (Elman, GRU, or LSTM) on a language modeling task. By default, the training script uses the WikiText-2 dataset, provided. The trained model can then be used by the generate script to generate new text.
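The generate step the quote describes — sampling new text from a trained language model — can be sketched as follows. This is a toy illustration, not the PyTorch example's actual code: a hand-written bigram table (with an invented five-word vocabulary) stands in for the trained RNN, but the autoregressive sampling loop is the same idea.

```python
import random

# Toy sketch of autoregressive text generation: repeatedly sample the next
# token from the model's conditional distribution over the current context.
# The bigram table below is a stand-in for a trained RNN language model;
# its vocabulary and probabilities are illustrative assumptions.
bigram = {
    "<s>":   [("the", 0.6), ("a", 0.4)],
    "the":   [("model", 0.5), ("text", 0.5)],
    "a":     [("model", 0.5), ("text", 0.5)],
    "model": [("</s>", 1.0)],
    "text":  [("</s>", 1.0)],
}

def generate(max_len=10, seed=0):
    random.seed(seed)
    tokens, current = [], "<s>"
    for _ in range(max_len):
        words, probs = zip(*bigram[current])
        current = random.choices(words, weights=probs)[0]
        if current == "</s>":  # end-of-sequence token terminates generation
            break
        tokens.append(current)
    return " ".join(tokens)

print(generate())
```

An RNN-based generator differs only in where the conditional distribution comes from: instead of a lookup table, each step feeds the previous token and hidden state through the network to get next-token probabilities.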
2016
- (Kusner & Hernández-Lobato, 2016) ⇒ Matt J. Kusner, and José Miguel Hernández-Lobato. (2016). "GANs for Sequences of Discrete Elements with the Gumbel-softmax Distribution". In: arXiv:1611.04051.
2015a
- (Bahdanau et al., 2015) ⇒ Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. (2015). “Neural Machine Translation by Jointly Learning to Align and Translate.” In: Proceedings of the Third International Conference on Learning Representations, (ICLR-2015).