2016 SQuAD: 100,000+ Questions for Machine Comprehension of Text


Subject Headings: Stanford Question Answering Dataset (SQuAD), Natural Language Understanding.

Notes

Cited By

Quotes

Abstract

We present the Stanford Question Answering Dataset (SQuAD), a new reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage. We analyze the dataset to understand the types of reasoning required to answer the questions, leaning heavily on dependency and constituency trees. We build a strong logistic regression model, which achieves an F1 score of 51.0%, a significant improvement over a simple baseline (20%). However, human performance (86.8%) is much higher, indicating that the dataset presents a good challenge problem for future research. The dataset is freely available at http://stanford-qa.com.
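
The F1 numbers quoted above (51.0% for the logistic regression model, 86.8% for humans) are token-overlap F1 scores between a predicted answer span and the gold answer span. Below is a minimal Python sketch of that style of metric, assuming the normalization steps (lowercasing, removing punctuation and articles, collapsing whitespace) used in SQuAD-style evaluation; the function names are illustrative, not the paper's official evaluation script.

import re
import string
from collections import Counter

def normalize(text):
    # Lowercase, drop punctuation and articles, collapse whitespace
    # (assumed SQuAD-style answer normalization).
    text = "".join(ch for ch in text.lower() if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def f1_score(prediction, ground_truth):
    # Token-level F1 between a predicted span and a gold answer span.
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(ground_truth).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# A prediction that partially overlaps the gold span earns partial credit:
print(f1_score("the Stanford Question Answering Dataset",
               "Stanford Question Answering Dataset (SQuAD)"))

Partial credit is the point of this metric: an answer span that captures most of the gold tokens scores well even if its boundaries differ slightly, which is why F1 is reported alongside exact match in SQuAD-style evaluations.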

References

BibTeX

@inproceedings{2016_SQuAD100000QuestionsforMachineC,
  author    = {Pranav Rajpurkar and
               Jian Zhang and
               Konstantin Lopyrev and
               Percy Liang},
  editor    = {Jian Su and
               Xavier Carreras and
               Kevin Duh},
  title     = {SQuAD: 100,000+ Questions for Machine Comprehension of Text},
  booktitle = {Proceedings of the 2016 Conference on Empirical Methods in Natural
               Language Processing (EMNLP 2016), Austin, Texas, USA, November 1-4,
               2016},
  pages     = {2383--2392},
  publisher = {The Association for Computational Linguistics},
  year      = {2016},
  url       = {https://doi.org/10.18653/v1/d16-1264},
  doi       = {10.18653/v1/d16-1264},
}


Author: Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang
Title: SQuAD: 100,000+ Questions for Machine Comprehension of Text
Year: 2016