Natural Language Inference (NLI) Task
A Natural Language Inference (NLI) Task is an inference task whose input is a natural language item (see the example sketch after the list below).
- Context:
- It can range from being a Simple Linguistic Inference Task to being a Complex Linguistic Inference Task.
- Example(s):
- Counter-Example(s):
- See: Paraphrasing Task.
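To make the task concrete, the following is a minimal Python sketch of NLI task instances under the common three-way label scheme (entailment / contradiction / neutral); the NLIInstance dataclass is an illustrative assumption rather than part of any specific library, and the sentence pairs are adapted from well-known SNLI corpus examples.

    # A minimal sketch of NLI task instances; the dataclass and label
    # names are illustrative assumptions, not from a specific library.
    from dataclasses import dataclass

    @dataclass
    class NLIInstance:
        premise: str     # text assumed to be true
        hypothesis: str  # statement to judge against the premise
        label: str       # "entailment", "contradiction", or "neutral"

    examples = [
        NLIInstance(
            premise="A soccer game with multiple males playing.",
            hypothesis="Some men are playing a sport.",
            label="entailment"),
        NLIInstance(
            premise="A man inspects the uniform of a figure.",
            hypothesis="The man is sleeping.",
            label="contradiction"),
    ]

    for ex in examples:
        print(f"{ex.label}: '{ex.premise}' -> '{ex.hypothesis}'")

A system addressing the task maps each (premise, hypothesis) pair to one of the labels; simpler variants use a two-way entailment judgment, as in the earlier Recognizing Textual Entailment (RTE) challenges.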
References
2016a
- (Graves et al., 2016) ⇒ Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwinska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, Adrià Puigdomènech Badia, Karl Moritz Hermann, Yori Zwols, Georg Ostrovski, Adam Cain, Helen King, Christopher Summerfield, Phil Blunsom, Koray Kavukcuoglu, and Demis Hassabis. (2016). “Hybrid Computing Using a Neural Network with Dynamic External Memory.” In: Nature, 538(7626). doi:10.1038/nature20101
- … When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read-write memory.
2016b
- (Parikh et al., 2016) ⇒ Ankur P. Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. (2016). “A Decomposable Attention Model for Natural Language Inference.” In: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP 2016). arXiv:1606.01933
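For orientation, below is a minimal NumPy sketch of the attend-compare-aggregate structure described in this paper; the trained feed-forward networks F, G, and H are stubbed with single random ReLU projections purely for illustration, and all dimensions and the three-way output are assumptions rather than the paper's exact configuration.

    # Decomposable attention (attend - compare - aggregate), stubbed
    # with random weights; not a trained or tuned implementation.
    import numpy as np

    rng = np.random.default_rng(0)
    d, h, n_labels = 8, 16, 3      # embedding dim, hidden dim, labels (assumed)
    la, lb = 5, 4                  # token counts of premise a, hypothesis b

    def ff(x, W):                  # stand-in for a trained feed-forward net
        return np.maximum(0.0, x @ W)

    def softmax(z, axis):
        z = z - z.max(axis=axis, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=axis, keepdims=True)

    a = rng.normal(size=(la, d))   # premise token embeddings
    b = rng.normal(size=(lb, d))   # hypothesis token embeddings
    W_F = rng.normal(size=(d, h))
    W_G = rng.normal(size=(2 * d, h))
    W_H = rng.normal(size=(2 * h, n_labels))

    # Attend: unnormalized alignment scores e[i, j] = F(a_i) . F(b_j)
    e = ff(a, W_F) @ ff(b, W_F).T                    # (la, lb)
    beta = softmax(e, axis=1) @ b                    # b soft-aligned to each a_i
    alpha = softmax(e, axis=0).T @ a                 # a soft-aligned to each b_j

    # Compare: pair each token with its soft-aligned counterpart
    v1 = ff(np.concatenate([a, beta], axis=1), W_G)  # (la, h)
    v2 = ff(np.concatenate([b, alpha], axis=1), W_G) # (lb, h)

    # Aggregate: sum over tokens, then score the label set
    y = np.concatenate([v1.sum(0), v2.sum(0)]) @ W_H # (n_labels,)
    print("label scores:", y)

The attraction of this structure is that alignment decomposes over token pairs, so the model needs no recurrent encoding of word order in its core comparison step.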
2009
- (MacCartney, 2009) ⇒ Bill MacCartney. (2009). “Natural Language Inference.” PhD Thesis, Stanford University. ISBN:978-1-109-24088-7