# 2019 DialogueNaturalLanguageInferenc


## Revision as of 04:26, 13 September 2019

- (Welleck et al., 2019) ⇒ Sean Welleck, Jason Weston, Arthur Szlam, and Kyunghyun Cho. (2019). “Dialogue Natural Language Inference.” In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019).

**Subject Headings:** Natural Language Inference Task; Dialogue Natural Language Inference Task.

## Notes

- Pre-print(s): arXiv:1811.00671

## Cited By

- Google Scholar: ~ 5 Citations
- Semantic Scholar: ~ 5 Citations

## Quotes

### Abstract

Consistency is a long standing issue faced by dialogue models. In this paper, we frame the consistency of dialogue agents as natural language inference (NLI) and create a new natural language inference dataset called Dialogue NLI. We propose a method which demonstrates that a model trained on Dialogue NLI can be used to improve the consistency of a dialogue model, and evaluate the method with human evaluation and with automatic metrics on a suite of evaluation sets designed to measure a dialogue model's consistency.

### 1 Introduction

### 2 Dialogue Consistency and Natural Language Inference

(...)
**Natural Language Inference.** Natural Language Inference (NLI) assumes a dataset [math]\mathcal{D} = \{(s_1, s_2)_i,y_i\}^N_{i=1} [/math]which associates an input pair [math](s_1,s_2)[/math] to one of three classes [math]y \in \{entailment,\; neutral,\; contradiction\}[/math]. Each input item [math]s_j[/math] comes from an input space [math]\mathcal{S}[/math], which in typical NLI tasks is the space of natural language sentences, i.e. [math]s_j[/math] is a sequence of words [math](w_1,\cdots ,w_K)[/math] where each word [math]w_k[/math] is from a vocabulary [math]\mathcal{V}[/math].

The inputs [math]s_1[/math] and [math]s_2[/math] are referred to as the premise and hypothesis, respectively, and each label is interpreted as meaning the premise entails the hypothesis, the premise is neutral with respect to the hypothesis, or the premise contradicts the hypothesis. The problem is to learn a function [math]f_{NLI}(s_1,s_2) \to \{E,N,C\}[/math] which generalizes to new input pairs.
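The task definition above can be made concrete with a minimal sketch. The dataset entries, the `f_nli` function, and its word-overlap heuristic are illustrative assumptions only, not the paper's method (which trains a neural classifier on Dialogue NLI); the sketch just shows the interface [math]f_{NLI}(s_1,s_2) \to \{E,N,C\}[/math] over premise/hypothesis pairs.

```python
# Toy dataset D = {((s1, s2)_i, y_i)}: premise/hypothesis pairs with labels.
# Pairs and labels are invented for illustration.
nli_dataset = [
    (("I have two dogs.", "I have dogs."), "entailment"),
    (("I have two dogs.", "I like jazz music."), "neutral"),
    (("I have two dogs.", "I do not have dogs."), "contradiction"),
]


def f_nli(s1: str, s2: str) -> str:
    """Stand-in for a learned classifier f_NLI(s1, s2) -> {E, N, C}.

    A real model would be trained on the dataset D; here a crude
    negation cue and word-overlap heuristic make the interface concrete.
    """
    w1 = set(s1.lower().split())
    w2 = set(s2.lower().split())
    if "not" in w2:  # crude contradiction cue
        return "contradiction"
    overlap = len(w1 & w2) / max(len(w2), 1)
    return "entailment" if overlap > 0.5 else "neutral"


for (premise, hypothesis), gold in nli_dataset:
    print(f"gold={gold:13s} predicted={f_nli(premise, hypothesis)}")
```

On these hand-picked pairs the heuristic happens to agree with the gold labels; it would not generalize, which is exactly why the task is posed as learning [math]f_{NLI}[/math] from data.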

(...)

### 3 Dialogue NLI Dataset

#### 3.1 Triple Generation

#### 3.2 Triple Annotation

#### 3.3 Statistics

### 4 Consistent Dialogue Agents Via Natural Language Inference

### 5 Experiments

#### 5.1 Experiment 1: NLI

#### 5.2 Experiment 2: Consistency in Dialogue

#### 5.3 Experiment 3: Human Evaluation

### 6 Conclusion

## References


Author | title | year
---|---|---
Sean Welleck, Jason Weston, Arthur Szlam, Kyunghyun Cho | Dialogue Natural Language Inference | 2019