2023 InstructionTuningforLargeLanguageModelsASurvey

Subject Headings: Instruction Tuning.

Notes

Cited By

Quotes

Abstract

This paper surveys research works in the quickly advancing field of instruction tuning (IT), a crucial technique to enhance the capabilities and controllability of large language models (LLMs). Instruction tuning refers to the process of further training LLMs on a dataset consisting of (instruction, output) pairs in a supervised fashion, which bridges the gap between the next-word prediction objective of LLMs and the users' objective of having LLMs adhere to human instructions. In this work, we make a systematic review of the literature, including the general methodology of IT, the construction of IT datasets, the training of IT models, and applications to different modalities, domains, and tasks, along with an analysis of aspects that influence the outcome of IT (e.g., the generation of instruction outputs, the size of the instruction dataset, etc.). We also review the potential pitfalls of IT and criticism against it, point out current deficiencies of existing strategies, and suggest some avenues for fruitful research. Project page: this http URL
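To make the supervised (instruction, output) objective described above concrete, the following is a minimal sketch of instruction tuning using PyTorch and the Hugging Face transformers library. The checkpoint ("gpt2"), the "Instruction:/Response:" prompt template, and the two toy pairs are illustrative assumptions rather than details taken from the survey; real IT runs use much larger datasets and models.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder checkpoint; any causal LM works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Toy (instruction, output) pairs in the supervised format the abstract describes.
pairs = [
    ("Translate to French: Hello", "Bonjour"),
    ("Summarize: The cat sat on the mat.", "A cat sat on a mat."),
]

def encode(instruction, output):
    # Concatenate instruction and output into one token sequence, masking the
    # instruction tokens with -100 so cross-entropy loss is computed only on
    # the output tokens. This is what turns plain next-word prediction into
    # "predict the response that follows this instruction".
    prompt_ids = tokenizer(f"Instruction: {instruction}\nResponse: ").input_ids
    output_ids = tokenizer(output + tokenizer.eos_token).input_ids
    input_ids = torch.tensor([prompt_ids + output_ids])
    labels = torch.tensor([[-100] * len(prompt_ids) + output_ids])
    return input_ids, labels

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
for epoch in range(2):  # tiny loop for illustration only
    for instruction, output in pairs:
        input_ids, labels = encode(instruction, output)
        loss = model(input_ids=input_ids, labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

Masking the instruction tokens, rather than training on the full concatenation, is a common design choice in IT pipelines: it keeps the gradient signal focused on producing the desired response instead of reproducing the prompt.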

References

Fei Wu, Jiwei Li, Shuhe Wang, Xiaofei Sun, Xiaoya Li, Tianwei Zhang, Guoyin Wang, Shengyu Zhang, Linfeng Dong, Sen Zhang, and Runyi Hu (2023). "Instruction Tuning for Large Language Models: A Survey." DOI: 10.48550/arXiv.2308.10792.