2023 LargeLanguageModelIsNotaGoodFew

From GM-RKB

Subject Headings: Few-Shot Information Extraction.

Notes

Cited By

Quotes

Abstract

Large Language Models (LLMs) have made remarkable strides in various tasks. However, whether they are competitive few-shot solvers for information extraction (IE) tasks and surpass fine-tuned small Pre-trained Language Models (SLMs) remains an open problem. This paper aims to provide a thorough answer to this problem, and moreover, to explore an approach towards effective and economical IE systems that combine the strengths of LLMs and SLMs. Through extensive experiments on eight datasets across three IE tasks, we show that LLMs are not effective few-shot information extractors in general, given their unsatisfactory performance in most settings and the high latency and budget requirements. However, we demonstrate that LLMs can well complement SLMs and effectively solve hard samples that SLMs struggle with. Building on these findings, we propose an adaptive filter-then-rerank paradigm, in which SLMs act as filters and LLMs act as rerankers. By utilizing LLMs to rerank a small portion of difficult samples identified by SLMs, our preliminary system consistently achieves promising improvements (2.1% F1-gain on average) on various IE tasks, with acceptable cost of time and money.
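The adaptive filter-then-rerank paradigm described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the confidence threshold, the `toy_llm` reranker, and the label scores are all hypothetical stand-ins for the SLM and LLM components.

```python
def filter_then_rerank(slm_scores, llm_rerank, threshold=0.9, top_k=3):
    """Route a sample through the filter-then-rerank pipeline.

    slm_scores: dict mapping candidate label -> SLM probability.
    llm_rerank: callable taking a list of candidate labels and
                returning the label the LLM judges best.
    """
    ranked = sorted(slm_scores, key=slm_scores.get, reverse=True)
    # Easy sample: the SLM is confident, so keep its prediction
    # (the SLM acts as a filter, avoiding an expensive LLM call).
    if slm_scores[ranked[0]] >= threshold:
        return ranked[0]
    # Hard sample: hand the SLM's top-k candidates to the LLM reranker.
    return llm_rerank(ranked[:top_k])

# Hypothetical LLM reranker that always prefers "ORG" among the candidates.
def toy_llm(candidates):
    return "ORG" if "ORG" in candidates else candidates[0]

easy = {"PER": 0.95, "ORG": 0.03, "LOC": 0.02}
hard = {"PER": 0.40, "ORG": 0.35, "LOC": 0.25}
print(filter_then_rerank(easy, toy_llm))  # SLM confident -> "PER"
print(filter_then_rerank(hard, toy_llm))  # LLM reranks  -> "ORG"
```

Because only the small fraction of low-confidence ("hard") samples reaches the LLM, the extra latency and API cost stay bounded, which is the economy argument the abstract makes.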

References

Yubo Ma, Yixin Cao, YongChing Hong, and Aixin Sun (2023). "Large Language Model Is Not a Good Few-shot Information Extractor, But a Good Reranker for Hard Samples!" doi:10.48550/arXiv.2303.08559