2023 SelfInstructAligningLanguageMod


Subject Headings: Large "Instruction-Tuned" Language Model.

Notes

  • It can generate a large and diverse synthetic instruction dataset by prompting a pretrained language model to bootstrap off its own generations (see the pipeline sketch after this list).
  • It can improve a language model's ability to follow instructions by finetuning it on the filtered synthetic data.
  • It can help build better instruction-following models with minimal human annotation effort.
  • It can provide a set of expert-written instructions for novel tasks as a benchmark for evaluating instruction-following models.
  • ...
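
The following is a minimal Python sketch of the bootstrapping loop summarized above: human-written seed instructions are placed in a pool, the language model is prompted with a few in-context examples to propose a new instruction, and candidates that nearly duplicate the pool are discarded. The prompt wording, the word_overlap heuristic, the 0.7 threshold, and the complete callable are illustrative assumptions, not the paper's exact implementation.

    import random
    from typing import Callable

    def word_overlap(a: str, b: str) -> float:
        """Crude unigram-overlap score standing in for the paper's similarity filter."""
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / max(1, min(len(wa), len(wb)))

    def build_prompt(examples: list[str]) -> str:
        """Few-shot prompt asking the model to propose one new task instruction."""
        shots = "".join(f"Task: {e}\n" for e in examples)
        return "Come up with a new task instruction.\n" + shots + "Task:"

    def self_instruct(
        seed_instructions: list[str],
        complete: Callable[[str], str],  # any text-completion function (model or API), supplied by the caller
        target_size: int = 100,
        num_demos: int = 8,
        max_attempts: int = 2000,
    ) -> list[str]:
        """Grow an instruction pool by bootstrapping off the model's own generations."""
        pool = list(seed_instructions)  # start from a small set of human-written seed tasks
        for _ in range(max_attempts):
            if len(pool) >= target_size:
                break
            demos = random.sample(pool, k=min(num_demos, len(pool)))
            candidate = complete(build_prompt(demos)).strip()
            # Drop empty candidates and near-duplicates of instructions already in the pool.
            if candidate and all(word_overlap(candidate, p) < 0.7 for p in pool):
                pool.append(candidate)
        return pool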

Cited By

Quotes

Abstract

Large "instruction-tuned" language models (i.e., finetuned to respond to instructions) have demonstrated a remarkable ability to generalize zero-shot to new tasks. Nevertheless, they depend heavily on human-written instruction data that is often limited in quantity, diversity, and creativity, therefore hindering the generality of the tuned model. We introduce Self-Instruct, a framework for improving the instruction-following capabilities of pretrained language models by bootstrapping off their own generations. Our pipeline generates instructions, input, and output samples from a language model, then filters invalid or similar ones before using them to finetune the original model. Applying our method to the vanilla GPT3, we demonstrate a 33% absolute improvement over the original model on Super-NaturalInstructions, on par with the performance of InstructGPT-001, which was trained with private user data and human annotations. For further evaluation, we curate a set of expert-written instructions for novel tasks, and show through human evaluation that tuning GPT3 with Self-Instruct outperforms using existing public instruction datasets by a large margin, leaving only a 5% absolute gap behind InstructGPT-001. Self-Instruct provides an almost annotation-free method for aligning pre-trained language models with instructions, and we release our large synthetic dataset to facilitate future studies on instruction tuning. Our code and data are available at this https URL.

References

Author: Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, Hannaneh Hajishirzi
Title: Self-Instruct: Aligning Language Models with Self-Generated Instructions
DOI: 10.48550/arXiv.2212.10560
Year: 2023