2018 TheNarrativeQAReadingComprehensionChallenge

From GM-RKB

Subject Headings: Reading Comprehension Dataset; NarrativeQA Dataset; NarrativeQA Reading Comprehension Challenge.

Notes

Cited By

Quotes

Abstract

Reading comprehension (RC) - in contrast to information retrieval - requires integrating information and reasoning about events, entities, and their relations across a full document. Question answering is conventionally used to assess RC ability, in both artificial agents and children learning to read. However, existing RC datasets and tasks are dominated by questions that can be solved by selecting answers using superficial information (e.g., local context similarity or global term frequency); they thus fail to test for the essential integrative aspect of RC. To encourage progress on deeper comprehension of language, we present a new dataset and set of tasks in which the reader must answer questions about stories by reading entire books or movie scripts. These tasks are designed so that successfully answering their questions requires understanding the underlying narrative rather than relying on shallow pattern matching or salience. We show that although humans solve the tasks easily, standard RC models struggle on the tasks presented here. We provide an analysis of the dataset and the challenges it presents.
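
The question-answer pairs described in the abstract are distributed alongside the stories in the authors' public narrativeqa repository (github.com/deepmind/narrativeqa). As a rough illustration of working with that release, the Python sketch below loads the question-answer file and prints per-split counts; the file name qaps.csv and the column names used here (set, question, answer1, answer2) are assumptions based on that release and may need adjusting.

# Minimal sketch: inspect the NarrativeQA question-answer pairs.
# Assumes qaps.csv from the public release (github.com/deepmind/narrativeqa);
# the column names (set, question, answer1, answer2) are assumptions.
import pandas as pd

qaps = pd.read_csv("qaps.csv")

# Question-answer pairs per split (train / valid / test).
print(qaps["set"].value_counts())

# Each question was written from a summary and has two reference answers.
first = qaps.iloc[0]
print(first["question"])
print(first["answer1"], "|", first["answer2"])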

1. Introduction

2. Review of Reading Comprehension Datasets and Models

Dataset | Documents | Questions | Answers
MCTest (Richardson et al., 2013) | 660 short stories, grade school level | 2,640, human generated, based on the document | multiple choice
CNN/Daily Mail (Hermann et al., 2015) | 93K + 220K news articles | 387K + 997K, Cloze-form, based on highlights | entities
Children’s Book Test (CBT) (Hill et al., 2016) | 687K of 20-sentence passages from 108 children’s books | Cloze-form, from the 21st sentence | multiple choice
BookTest (Bajgar et al., 2016) | 14.2M, similar to CBT | Cloze-form, similar to CBT | multiple choice
SQuAD (Rajpurkar et al., 2016) | 23K paragraphs from 536 Wikipedia articles | 108K, human generated, based on the paragraphs | spans
NewsQA (Trischler et al., 2016) | 13K news articles from the CNN dataset | 120K, human generated, based on headline, highlights | spans
MS MARCO (Nguyen et al., 2016) | 1M passages from 200K+ documents retrieved using the queries | 100K search queries | human generated, based on the passages
SearchQA (Dunn et al., 2017) | 6.9M passages retrieved from a search engine using the queries | 140K human generated Jeopardy! questions | human generated Jeopardy! answers
NarrativeQA (this paper) | 1,572 stories (books, movie scripts) & human generated summaries | 46,765, human generated, based on summaries | human generated, based on summaries

Table 1: Comparison of reading comprehension datasets.
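
Because NarrativeQA answers are free-form text rather than multiple-choice labels or extracted spans (see the last column of Table 1), predictions are scored against the human-written reference answers with n-gram overlap metrics; the paper reports BLEU-1, BLEU-4, METEOR, and ROUGE-L. The sketch below scores a prediction with ROUGE-L using the rouge-score package; it is an illustration only, not the authors' evaluation code, and taking the maximum over the two references is an assumed aggregation.

# Minimal sketch: ROUGE-L between a predicted answer and reference answers.
# Uses the rouge-score package; not the evaluation script from the paper.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def rouge_l(prediction, references):
    # Taking the max over the two references is an illustrative choice.
    return max(scorer.score(ref, prediction)["rougeL"].fmeasure
               for ref in references)

print(rouge_l("He is killed by Mark Antony.",
              ["Mark Antony kills him.", "He is murdered by Antony."]))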

3. NarrativeQA: A New Dataset

4. Baselines and Oracles

5. Experiments

6. Qualitative Analysis and Challenges

7. Related Work

8. Conclusion

References

2017a

2017b

2016a

2016b

2016c

2016d

2015

2013

BibTeX

@article{2018_TheNarrativeQAReadingComprehens,
  author    = {Tomas Kocisky and
               Jonathan Schwarz and
               Phil Blunsom and
               Chris Dyer and
               Karl Moritz Hermann and
               Gabor Melis and
               Edward Grefenstette},
  title     = {The NarrativeQA Reading Comprehension Challenge},
  journal   = {Trans. Assoc. Comput. Linguistics},
  volume    = {6},
  pages     = {317--328},
  year      = {2018},
  url       = {https://transacl.org/ojs/index.php/tacl/article/view/1197},
}

