2019 DialogueNaturalLanguageInference

Subject Headings: Natural Language Inference Task; Dialogue Natural Language Inference Task.

Notes

Cited By

Quotes

Abstract

Consistency is a long standing issue faced by dialogue models. In this paper, we frame the consistency of dialogue agents as natural language inference (NLI) and create a new natural language inference dataset called Dialogue NLI. We propose a method which demonstrates that a model trained on Dialogue NLI can be used to improve the consistency of a dialogue model, and evaluate the method with human evaluation and with automatic metrics on a suite of evaluation sets designed to measure a dialogue model's consistency.

1 Introduction

2 Dialogue Consistency and Natural Language Inference

(...)

Natural Language Inference. Natural Language Inference (NLI) assumes a dataset [math]\displaystyle{ \mathcal{D} = \{(s_1, s_2)_i, y_i\}^N_{i=1} }[/math] which associates an input pair [math]\displaystyle{ (s_1,s_2) }[/math] to one of three classes [math]\displaystyle{ y \in \{entailment,\; neutral,\; contradiction\} }[/math]. Each input item [math]\displaystyle{ s_j }[/math] comes from an input space [math]\displaystyle{ \mathcal{S} }[/math], which in typical NLI tasks is the space of natural language sentences, i.e. [math]\displaystyle{ s_j }[/math] is a sequence of words [math]\displaystyle{ (w_1,\cdots ,w_K) }[/math] where each word [math]\displaystyle{ w_k }[/math] is from a vocabulary [math]\displaystyle{ \mathcal{V} }[/math].
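For concreteness, the following is a minimal Python sketch of how such a labeled dataset of sentence pairs might be represented; the NLIExample container and the premise/hypothesis strings are illustrative assumptions, not items taken from the Dialogue NLI dataset itself.

from dataclasses import dataclass
from typing import List

# The three NLI classes: y ∈ {entailment, neutral, contradiction}.
LABELS = ("entailment", "neutral", "contradiction")

@dataclass
class NLIExample:
    premise: str     # s_1: a sentence, i.e. a sequence of words from vocabulary V
    hypothesis: str  # s_2: a sentence, i.e. a sequence of words from vocabulary V
    label: str       # y: one of the three classes above

# Illustrative persona-style pairs (made up for this sketch).
dataset: List[NLIExample] = [
    NLIExample("i have two dogs .", "i own pets .", "entailment"),
    NLIExample("i have two dogs .", "i like jazz music .", "neutral"),
    NLIExample("i have two dogs .", "i do not have any pets .", "contradiction"),
]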

The inputs [math]\displaystyle{ s_1 }[/math] and [math]\displaystyle{ s_2 }[/math] are referred to as the premise and hypothesis, respectively, and each label is interpreted as meaning the premise entails the hypothesis, the premise is neutral with respect to the hypothesis, or the premise contradicts the hypothesis. The problem is to learn a function [math]\displaystyle{ f_{NLI}(s_1,s_2) \to \{E,N,C\} }[/math] which generalizes to new input pairs.
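As a rough illustration of the [math]\displaystyle{ f_{NLI} }[/math] interface (not the model trained in the paper), the sketch below assumes the Hugging Face transformers library and a publicly available MNLI checkpoint such as roberta-large-mnli; any pair classifier that outputs the three labels could be substituted.

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed off-the-shelf MNLI checkpoint; the paper trains its own model on Dialogue NLI.
MODEL_NAME = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)

def f_nli(premise: str, hypothesis: str) -> str:
    # Encode the (premise, hypothesis) pair and return the most probable class label.
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return model.config.id2label[int(logits.argmax(dim=-1))]

# Example: a contradicting pair should ideally map to the contradiction class.
print(f_nli("i have two dogs .", "i do not have any pets ."))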

(...)

3 Dialogue NLI Dataset

3.1 Triple Generation

3.2 Triple Annotation

3.3 Statistics

4 Consistent Dialogue Agents Via Natural Language Inference

5 Experiments

5.1 Experiment 1: NLI

5.2 Experiment 2: Consistency in Dialogue

5.3 Experiment 3: Human Evaluation

6 Conclusion

References

Sean Welleck, Jason Weston, Arthur Szlam, and Kyunghyun Cho. (2019). "Dialogue Natural Language Inference."