General Language Understanding Evaluation (GLUE) Benchmark


A General Language Understanding Evaluation (GLUE) Benchmark is an NLP Benchmark for training, evaluating, and analyzing Natural Language Understanding (NLU) systems.



References

2019

The format of the GLUE benchmark is model-agnostic, so any system capable of processing sentences and sentence pairs and producing corresponding predictions is eligible to participate. The benchmark tasks are selected so as to favor models that share information across tasks using parameter sharing or other transfer learning techniques. The ultimate goal of GLUE is to drive research in the development of general and robust natural language understanding systems.
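
To illustrate this model-agnostic format, the following is a minimal sketch in Python, assuming the Hugging Face datasets and evaluate libraries (not part of the original GM-RKB entry). The toy word-overlap predict function is purely hypothetical; it stands in for any system that maps sentence pairs to predictions, which is all the benchmark format requires.

# Minimal sketch of GLUE participation, assuming the Hugging Face
# `datasets` and `evaluate` libraries (an assumption; not from the
# original entry). Any predictor over sentence pairs could replace
# the toy `predict` below.
from datasets import load_dataset
import evaluate

# MRPC is one of the GLUE sentence-pair tasks (paraphrase detection).
dataset = load_dataset("glue", "mrpc", split="validation")
metric = evaluate.load("glue", "mrpc")

def predict(sentence1: str, sentence2: str) -> int:
    # Hypothetical placeholder: label the pair a paraphrase (1) if the
    # two sentences share enough words; otherwise not a paraphrase (0).
    overlap = len(set(sentence1.lower().split()) & set(sentence2.lower().split()))
    shorter = min(len(sentence1.split()), len(sentence2.split()))
    return int(overlap / max(shorter, 1) > 0.6)

predictions = [predict(ex["sentence1"], ex["sentence2"]) for ex in dataset]
print(metric.compute(predictions=predictions, references=dataset["label"]))
# e.g. {'accuracy': ..., 'f1': ...}

Because the benchmark only consumes the final predictions, the same evaluation loop works whether predict is backed by a heuristic, a fine-tuned transformer, or a multi-task model sharing parameters across GLUE tasks.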
