# Natural Language Inference (NLI) Task

A Natural Language Inference (NLI) Task is an inference task of determining the inferential relationship between a natural language premise and a natural language hypothesis.

**AKA:** Recognizing Textual Entailment (RTE) Task.

**Context:**

- **Task Input:** [math]\mathcal{D}_{input} = \{(s_1, s_2)_i\}^N_{i=1}[/math], where [math]s_1[/math] and [math]s_2[/math] are Natural Language Items corresponding to an input pair (premise, hypothesis).
- **Task Output:** [math]\mathcal{D}_{output} = \{(\hat{s}_1, \hat{s}_2)_i, y_i\}^N_{i=1}[/math] with [math]y \in \{entailment,\; neutral,\; contradiction\}[/math].
- It can be solved by a Natural Language Inference System that can learn (or solve) a function [math]f_{NLI}(s_1,s_2) \to \{entailment,\; neutral,\; contradiction\}[/math] by implementing a Natural Language Inference Algorithm.

- It can range from being a Simple Linguistic Inference Task to being a Complex Linguistic Inference Task.
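The three-way mapping [math]f_{NLI}(s_1,s_2) \to \{entailment,\; neutral,\; contradiction\}[/math] can be sketched in code. The heuristic below is a toy stand-in, not a real NLI algorithm: actual systems learn this function from labeled (premise, hypothesis) pairs, whereas this sketch uses a crude negation cue and lexical overlap purely to illustrate the function's signature and label set (the word list and threshold are arbitrary assumptions).

```python
# Illustrative sketch only: a toy word-overlap heuristic standing in for a
# learned f_NLI(s1, s2) -> {entailment, neutral, contradiction}.
# Real NLI systems learn this mapping from annotated training pairs.

NEGATIONS = {"not", "no", "never"}  # crude contradiction cue (assumption)

def f_nli(premise: str, hypothesis: str) -> str:
    p = set(premise.lower().split())
    h = set(hypothesis.lower().split())
    # If premise and hypothesis differ in their negation words,
    # guess contradiction (a very rough heuristic).
    if (p & NEGATIONS) != (h & NEGATIONS):
        return "contradiction"
    # High lexical overlap with the hypothesis -> guess entailment;
    # otherwise fall back to neutral.
    overlap = len(p & h) / max(len(h), 1)
    return "entailment" if overlap > 0.7 else "neutral"

print(f_nli("A man is sleeping", "A man is not sleeping"))  # contradiction
print(f_nli("A soccer game with multiple males playing",
            "Some men are playing a sport"))
```

A learned system would replace the heuristic body with a trained classifier, but its interface — two natural language strings in, one of three labels out — stays the same.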

**Example(s):**

**Counter-Example(s):**

**See:** GLUE Benchmark, Paraphrasing Task, Syntactic Parsing Task, Morphological Analysis Task, Word Sense Disambiguation, Lexical Semantic Relatedness, Logical Inference.

## References

### 2019

- (Welleck et al., 2019) ⇒ Sean Welleck, Jason Weston, Arthur Szlam, and Kyunghyun Cho. (2019). “Dialogue Natural Language Inference.” In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019).
- QUOTE:
**Natural Language Inference.** Natural Language Inference (NLI) assumes a dataset [math]\mathcal{D} = \{(s_1, s_2)_i, y_i\}^N_{i=1}[/math] which associates an input pair [math](s_1,s_2)[/math] to one of three classes [math]y \in \{entailment,\; neutral,\; contradiction\}[/math]. Each input item [math]s_j[/math] comes from an input space [math]\mathcal{S}[/math], which in typical NLI tasks is the space of natural language sentences, i.e. [math]s_j[/math] is a sequence of words [math](w_1,\cdots,w_K)[/math] where each word [math]w_k[/math] is from a vocabulary [math]\mathcal{V}[/math]. The input [math](s_1, s_2)[/math] are referred to as the premise and hypothesis, respectively, and each label is interpreted as meaning the premise entails the hypothesis, the premise is neutral with respect to the hypothesis, or the premise contradicts the hypothesis. The problem is to learn a function [math]f_{NLI}(s_1,s_2) \to \{E,N,C\}[/math] which generalizes to new input pairs.
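The dataset formulation in the quote above can be made concrete with a hypothetical miniature [math]\mathcal{D}[/math]; the three pairs below are invented for illustration and are not drawn from any published corpus.

```python
# A hypothetical miniature dataset in the form D = {((s1, s2)_i, y_i)},
# with y in {entailment, neutral, contradiction}.
# Example sentences are illustrative, not from any published NLI corpus.
D = [
    (("A dog runs through the park.", "An animal is outside."), "entailment"),
    (("A dog runs through the park.", "The dog is chasing a ball."), "neutral"),
    (("A dog runs through the park.", "The dog is asleep indoors."), "contradiction"),
]

for (premise, hypothesis), label in D:
    print(f"{label}: premise={premise!r} / hypothesis={hypothesis!r}")
```

Published corpora such as SNLI and MultiNLI follow this same (premise, hypothesis, label) schema at much larger scale.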


### 2018

- (Chen et al., 2018) ⇒ Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, and Si Wei. (2018). “Neural Natural Language Inference Models Enhanced with External Knowledge.” In: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018). doi:10.18653/v1/p18-1224
- QUOTE: Reasoning and inference are central to both human and artificial intelligence. Natural language inference (NLI), also known as recognizing textual entailment (RTE), is an important NLP problem concerned with determining inferential relationship (e.g., entailment, contradiction, or neutral) between a premise p and a hypothesis h. In general, modeling informal inference in language is a very challenging and basic problem towards achieving true natural language understanding.

### 2017

- (Chatzikyriakidis et al., 2017) ⇒ Stergios Chatzikyriakidis, Robin Cooper, Simon Dobnik, and Staffan Larsson. (2017). “An Overview of Natural Language Inference Data Collection: The Way Forward?.” In: Proceedings of the Computing Natural Language Inference Workshop.
- QUOTE: Indeed, NLI is considered by many researchers to be the crux of computational semantics. This paper is about the datasets created for this need. In particular, we discuss the most common NLI resources arguing that all these a) fail to capture the wealth of inferential mechanisms present in NLI and b) seem to be driven by the dominant discourse in the field at the time of their creation. In light of these observations, we want to discuss the requirements that an adequate NLI platform must satisfy both in terms of the range of inference patterns found in reasoning with NL as well as the range of the data collection mechanisms that are needed in order to acquire this range of inferential patterns.

### 2016a

- (Graves et al., 2016) ⇒ Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwinska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, Adrià Puigdomènech Badia, Karl Moritz Hermann, Yori Zwols, Georg Ostrovski, Adam Cain, Helen King, Christopher Summerfield, Phil Blunsom, Koray Kavukcuoglu, and Demis Hassabis. (2016). “Hybrid Computing Using a Neural Network with Dynamic External Memory.” In: Nature, 538(7626). doi:10.1038/nature20101
- QUOTE: … When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read–write memory.

### 2016b

- (Parikh et al., 2016) ⇒ Ankur P. Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. (2016). “A Decomposable Attention Model for Natural Language Inference.” In: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP 2016). arXiv:1606.01933
- QUOTE: Natural language inference (NLI) refers to the problem of determining entailment and contradiction relationships between a premise and a hypothesis. NLI is a central problem in language understanding (...)

### 2009

- (MacCartney, 2009) ⇒ Bill MacCartney. (2009). “Natural Language Inference.” PhD Thesis, Stanford University. ISBN:978-1-109-24088-7
- QUOTE: Natural language inference (NLI) is the problem of determining whether a natural language hypothesis *h* can reasonably be inferred from a natural language premise *p*. Of course, inference has been a central topic in artificial intelligence (AI) from the start, and over the last five decades, researchers have made tremendous progress in developing automatic methods for formal deduction. But the challenges of NLI are quite different from those encountered in formal deduction: the emphasis is on informal reasoning, lexical semantic knowledge, and variability of linguistic expression, rather than on long chains of formal reasoning (...) An intrinsic property of the NLI task definition is that the problem inputs are expressed in natural language. Research on methods for automated deduction, by contrast, typically assumes that the problem inputs are already expressed in some formal meaning representation, such as the language of first-order logic. This fact alone reveals how different the problem of NLI is from earlier work on logical inference, and places NLI squarely within the field of natural language processing (NLP): in developing approaches to NLI, we will be concerned with issues such as syntactic parsing, morphological analysis, word sense disambiguation, lexical semantic relatedness, and even linguistic pragmatics – topics which are the bread and butter of NLP, but are quite foreign to logical AI.

