2009 NaturalLanguageInference


Subject Headings: Natural Language Inference.

Notes

Cited By

Quotes

Abstract

Inference has been a central topic in artificial intelligence from the start, but while automatic methods for formal deduction have advanced tremendously, comparatively little progress has been made on the problem of natural language inference (NLI), that is, determining whether a natural language hypothesis h can justifiably be inferred from a natural language premise p. The challenges of NLI are quite different from those encountered in formal deduction: the emphasis is on informal reasoning, lexical semantic knowledge, and variability of linguistic expression. This dissertation explores a range of approaches to NLI, beginning with methods which are robust but approximate, and proceeding to progressively more precise approaches.

We first develop a baseline system based on overlap between bags of words. Despite its extreme simplicity, this model achieves surprisingly good results on a standard NLI evaluation, the PASCAL RTE Challenge. However, its effectiveness is limited by its failure to represent semantic structure.
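
A minimal sketch of such a bag-of-words baseline appears below; the tokenization, the overlap score, and the decision threshold are illustrative assumptions chosen for exposition, not the exact model developed in the dissertation.

<pre>
# Illustrative bag-of-words overlap baseline for NLI (Python 3).
# The overlap score and the 0.75 threshold are assumptions for
# exposition, not the exact model described in the dissertation.
import re

def bag_of_words_entails(premise, hypothesis, threshold=0.75):
    """Predict entailment if enough hypothesis words also occur in the premise."""
    p_words = set(re.findall(r"\w+", premise.lower()))
    h_words = set(re.findall(r"\w+", hypothesis.lower()))
    if not h_words:
        return True  # an empty hypothesis is trivially entailed
    overlap = len(h_words & p_words) / len(h_words)
    return overlap >= threshold

print(bag_of_words_entails(
    "Several airlines polled saw costs grow more than expected.",
    "Some airlines saw costs grow."))
# True: 4 of the 5 hypothesis words occur in the premise
</pre>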

To remedy this lack, we next introduce the Stanford RTE system, which uses typed dependency trees as a proxy for semantic structure, and seeks a low-cost alignment between trees for p and h, using a cost model which incorporates both lexical and structural matching costs. This system is typical of a category of approaches to NLI based on approximate graph matching. We argue, however, that such methods work best when the entailment decision is based, not merely on the degree of alignment, but also on global features of the aligned <math>\langle p, h \rangle</math> pair motivated by semantic theory.
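
The toy sketch below conveys the flavor of such an alignment cost model; the word-level substitution costs are invented for illustration, whereas the actual system aligns typed dependency trees and draws on far richer lexical resources.

<pre>
# Toy alignment cost in the spirit of graph-matching approaches to NLI.
# The costs below are invented for illustration; the Stanford RTE system
# aligns typed dependency trees and uses both lexical and structural costs.

def lexical_cost(p_word, h_word):
    """Cheap substitutions for identical or similar words, costly otherwise."""
    if p_word == h_word:
        return 0.0
    if p_word[:4] == h_word[:4]:   # crude stand-in for stemming/similarity
        return 0.3
    return 1.0

def alignment_cost(p_words, h_words):
    """Greedy lower bound: align each hypothesis word to its cheapest match."""
    return sum(min(lexical_cost(p, h) for p in p_words) for h in h_words)

cost = alignment_cost(
    ["several", "airlines", "saw", "costs", "grow"],
    ["some", "companies", "reported", "cost", "increases"])
print(cost)  # 4.3: low cost for "cost"~"costs", full cost for the rest
</pre>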

Seeking still greater precision, we devote the largest part of the dissertation to developing an approach to NLI based on a model of natural logic. We greatly extend past work in natural logic, which has focused solely on semantic containment and monotonicity, to incorporate both semantic exclusion and implicativity. Our system decomposes an inference problem into a sequence of atomic edits which transforms p into h; predicts a lexical entailment relation for each edit using a statistical classifier; propagates these relations upward through a syntax tree according to semantic properties of intermediate nodes; and composes the resulting entailment relations across the edit sequence.
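
As a small illustration of the final composition step, the sketch below joins entailment relations across a sequence of edits. The seven relation symbols follow the dissertation's natural logic, but the join table shown is deliberately abbreviated to a few entries, with unlisted pairs defaulting to independence.

<pre>
# Sketch of composing lexical entailment relations across atomic edits.
# The seven relation symbols follow the dissertation's natural logic;
# the join table is abbreviated to a few entries for illustration
# (unlisted pairs default here to independence, #).

EQ, FWD, REV, NEG, ALT, COV, IND = "≡", "⊑", "⊒", "^", "|", "‿", "#"

JOIN = {
    (EQ, FWD): FWD, (FWD, EQ): FWD,   # equivalence acts as an identity
    (FWD, FWD): FWD,                  # entailment chains compose
    (FWD, NEG): ALT,                  # e.g. fish ⊑ animal, animal ^ non-animal
    (NEG, NEG): EQ,                   # double negation cancels
}

def compose(relations):
    result = EQ  # zero edits leave the sentence meaning unchanged
    for r in relations:
        result = JOIN.get((result, r), IND)
    return result

print(compose([FWD, FWD]))  # ⊑ : h follows from p across both edits
</pre>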

Finally, we address the problem of alignment for NLI, by developing a model of phrase-based alignment inspired by analogous work in machine translation, including an alignment scoring function, inference algorithms for finding good alignments, and training algorithms for choosing feature weights.
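
A minimal sketch of the scoring side of such a model follows; the feature representation and the perceptron-style update are illustrative assumptions about one way to realize an alignment scoring function and a training algorithm for choosing feature weights, not the dissertation's exact algorithms.

<pre>
# Sketch of a linear phrase-alignment scorer with a perceptron-style
# weight update. The feature representation and training rule are
# illustrative assumptions, not the dissertation's exact algorithms.

def score_alignment(alignment, weights, featurize):
    """Score an alignment as the dot product of its features and weights."""
    feats = featurize(alignment)           # e.g. {"exact_match": 3.0, ...}
    return sum(weights.get(f, 0.0) * v for f, v in feats.items())

def perceptron_update(weights, gold, predicted, featurize, lr=1.0):
    """Shift weights toward the gold alignment's features and away
    from the wrongly highest-scoring predicted alignment's features."""
    for f, v in featurize(gold).items():
        weights[f] = weights.get(f, 0.0) + lr * v
    for f, v in featurize(predicted).items():
        weights[f] = weights.get(f, 0.0) - lr * v
</pre>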

Acknowledgments

Chapter 1: The Problem of Natural Language Inference

1.1 What is Natural Language Inference?

Natural language inference (NLI) is the problem of determining whether a natural language hypothesis h can reasonably be inferred from a natural language premise p. Of course, inference has been a central topic in artificial intelligence (AI) from the start, and over the last five decades, researchers have made tremendous progress in developing automatic methods for formal deduction. But the challenges of NLI are quite different from those encountered in formal deduction: the emphasis is on informal reasoning, lexical semantic knowledge, and variability of linguistic expression, rather than on long chains of formal reasoning. The following example may help to illustrate the difference:

{| style="border: 0px solid black; border-spacing: 1px; margin: 1em auto; text-align:left; width: 100%"
|-
| (1) || p || Several airlines polled saw costs grow more than expected, even after adjusting for inflation.
|-
| || h || Some of the companies in the poll reported cost increases.
|}

In the NLI problem setting, (1) is considered a valid inference, for the simple reason that an ordinary person, upon hearing p, would likely accept that h follows. Note, however, that h is not a strict logical consequence of p: for one thing, seeing cost increases does not necessarily entail reporting cost increases; it is conceivable that every company in the poll kept mum about increasing costs, perhaps for reasons of business strategy. That the inference is nevertheless considered valid in the NLI setting is a reflection of the informality of the task definition.

Although NLI involves recognizing an asymmetric relation of inferability between p and h, an important special case of NLI is the task of recognizing a symmetric relation of approximate semantic equivalence (that is, paraphrase) between p and h. (It is a special case because, if we have a system capable of determining whether h can be inferred from p, then we can detect semantic equivalence simply by running the system both “forwards” and “backwards”.) Recognizing approximate semantic equivalence between words is comparatively straightforward, using manually constructed thesauri such as WordNet (Fellbaum 1998) or automatically constructed thesauri such as that of Lin (1998). But the ability to recognize when two sentences are saying more or less the same thing is far more challenging, and if possible, could be of enormous benefit to many language processing tasks. We describe a few potential applications in the next section.
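
The “forwards and backwards” reduction is simple enough to state directly. In the sketch below, entails stands in for any directional NLI system, such as the bag-of-words baseline sketched earlier.

<pre>
# Paraphrase detection by running a directional NLI system both ways.
# `entails` is any predicate deciding whether h can be inferred from p.

def is_paraphrase(p, h, entails):
    return entails(p, h) and entails(h, p)
</pre>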

An intrinsic property of the NLI task definition is that the problem inputs are expressed in natural language. Research on methods for automated deduction, by contrast, typically assumes that the problem inputs are already expressed in some formal meaning representation, such as the language of first-order logic. This fact alone reveals how different the problem of NLI is from earlier work on logical inference, and places NLI squarely within the field of natural language processing (NLP): in developing approaches to NLI, we will be concerned with issues such as syntactic parsing, morphological analysis, word sense disambiguation, lexical semantic relatedness, and even linguistic pragmatics: topics which are the bread and butter of NLP, but are quite foreign to logical AI.

Over the last few years, there has been a surge of interest in the problem of NLI, centered around the PASCAL Recognizing Textual Entailment (RTE) Challenge (Dagan et al. 2005) and within the U.S. Government AQUAINT program. Researchers working on NLI can build on the successes achieved during the last decade in areas such as syntactic parsing and computational lexical semantics, and begin to tackle the more challenging problems of sentence-level semantics.



References

MacCartney, Bill. (2009). “Natural Language Inference.” PhD Dissertation, Stanford University.