Few-Shot Natural Language Processing (NLP) Task

From GM-RKB

A Few-Shot Natural Language Processing (NLP) Task is an in-context learning NLP task that is also a few-shot learning task (i.e., it provides only a few NLP training examples).
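In the few-shot in-context learning setting, the task is typically presented to a language model as a prompt containing a handful of labeled examples followed by an unlabeled query. A minimal sketch of constructing such a prompt (the sentiment-classification task, labels, and examples are illustrative assumptions, not taken from the sources below):

```python
# Sketch: building a few-shot in-context learning prompt.
# The sentiment task, label names, and example texts are hypothetical,
# chosen only to illustrate the few-shot prompt format.

def build_few_shot_prompt(examples, query):
    """Format a handful of labeled examples plus one unlabeled query
    as an in-context learning prompt for a language model."""
    blocks = []
    for text, label in examples:
        blocks.append(f"Review: {text}\nSentiment: {label}")
    # The query is appended with an empty label slot for the model to fill.
    blocks.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(blocks)

# Only a few training examples are supplied, per the few-shot setting.
examples = [
    ("A wonderful, heartfelt film.", "positive"),
    ("Dull and far too long.", "negative"),
    ("An instant classic.", "positive"),
]
prompt = build_few_shot_prompt(examples, "I would not watch it again.")
print(prompt)
```

The resulting string would be passed to a pretrained language model, which is expected to continue the pattern by emitting a label for the final query.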



References

2021

  • (Yang, 2021) ⇒ Mengde Yang. (2021). “A Survey on Few-shot Learning in Natural Language Processing.” In: 2021 International Conference on Artificial Intelligence and Electromechanical Automation (AIEA), pp. 294-297. IEEE.
    • ABSTRACT: Annotated datasets are the foundation of supervised Natural Language Processing; however, the cost of obtaining such datasets is high. In recent years, Few-Shot Learning has gradually attracted the attention of researchers. Starting from its definition, in this paper we summarize the differences in Few-Shot Learning between Natural Language Processing and Computer Vision. On that basis, current Few-Shot Learning work in Natural Language Processing is surveyed, including Transfer Learning, Meta Learning, and Knowledge Distillation. Furthermore, we summarize solutions to Few-Shot Learning in Natural Language Processing, such as methods based on Distant Supervision, Meta Learning, and Knowledge Distillation. Finally, we present the challenges facing Few-Shot Learning in Natural Language Processing.

2020

  • (Yin et al., 2020) ⇒ Wenpeng Yin, Nazneen Fatema Rajani, Dragomir Radev, Richard Socher, and Caiming Xiong. (2020). “Universal Natural Language Processing with Limited Annotations: Try Few-shot Textual Entailment As a Start.” arXiv preprint arXiv:2010.02584.
    • ABSTRACT: A standard way to address different NLP problems is by first constructing a problem-specific dataset, then building a model to fit this dataset. To build the ultimate artificial intelligence, we desire a single machine that can handle diverse new problems for which task-specific annotations are limited. We bring up textual entailment as a unified solver for such NLP problems. However, current research on textual entailment has not spilled much ink on the following questions: (i) How well does a pretrained textual entailment system generalize across domains with only a handful of domain-specific examples? and (ii) When is it worth transforming an NLP task into textual entailment? We argue that the transformation is unnecessary if we can obtain rich annotations for the task. Textual entailment really matters particularly when the target NLP task has insufficient annotations.

      Universal NLP can be probably achieved through different routines. In this work, we introduce Universal Few-shot textual Entailment (UFO-Entail). We demonstrate that this framework enables a pretrained entailment model to work well on new entailment domains in a few-shot setting, and show its effectiveness as a unified solver for several downstream NLP tasks such as question answering and coreference resolution when the end-task annotations are limited. Code: this https URL
