2018 GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
- (Wang et al., 2018) ⇒ Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. (2018). “GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding.” In: Proceedings of the Workshop: Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP@EMNLP 2018). DOI:10.18653/v1/W18-5446
Subject Headings: GLUE Benchmark; Natural Language Understanding System; Natural Language Inference System; 2018 EMNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP@EMNLP 2018).
Notes
Computing Resource(s):
- Repository and other information available at: https://gluebenchmark.com
Other Link(s):
- ACL Anthology: https://www.aclweb.org/anthology/W18-5446
- DBLP: https://dblp.org/rec/conf/emnlp/WangSMHLB18
Current Edition(s):
- (Wang et al., 2019) ⇒ Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. (2019). “GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding.” In: Proceedings of the 7th International Conference on Learning Representations (ICLR 2019).
Related Paper(s):
- (Wang et al., 2019) ⇒ Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. (2019). “SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems.” In: Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS 2019).
Cited By
- Google Scholar: ~124 Citations
- Semantic Scholar: ~141 Citations
- MS Academic: ~442 Citations
Quotes
Abstract
Human ability to understand language is general, flexible, and robust. In contrast, most NLU models above the word level are designed for a specific task and struggle with out-of-domain data. If we aspire to develop models with understanding beyond the detection of superficial correspondences between inputs and outputs, then it is critical to develop a unified model that can execute a range of linguistic tasks across different domains. To facilitate research in this direction, we present the General Language Understanding Evaluation (GLUE): a benchmark of nine diverse NLU tasks, an auxiliary dataset for probing models for understanding of specific linguistic phenomena, and an online platform for evaluating and comparing models. For some benchmark tasks, training data is plentiful, but for others it is limited or does not match the genre of the test set. GLUE thus favors models that can represent linguistic knowledge in a way that facilitates sample-efficient learning and effective knowledge-transfer across tasks. While none of the datasets in GLUE were created from scratch for the benchmark, four of them feature privately-held test data, which is used to ensure that the benchmark is used fairly. We evaluate baselines that use ELMo (Peters et al., 2018), a powerful transfer learning technique, as well as state-of-the-art sentence representation models. The best models still achieve fairly low absolute scores. Analysis with our diagnostic dataset yields similarly weak performance over all phenomena tested, with some exceptions.
BibTeX
@inproceedings{2018_GLUEAMultiTaskBenchmarkandAnaly,
  author    = {Alex Wang and Amanpreet Singh and Julian Michael and Felix Hill and Omer Levy and Samuel R. Bowman},
  editor    = {Tal Linzen and Grzegorz Chrupala and Afra Alishahi},
  title     = {GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},
  booktitle = {Proceedings of the Workshop: Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP@EMNLP 2018), Brussels, Belgium, November 1, 2018},
  pages     = {353--355},
  publisher = {Association for Computational Linguistics},
  year      = {2018},
  url       = {https://doi.org/10.18653/v1/w18-5446},
  doi       = {10.18653/v1/w18-5446},
}