Instruction-Tuned Large Language Model (LLM)

An Instruction-Tuned Large Language Model (LLM) is a fine-tuned large language model that is refined using an instruction-following dataset (composed of input-output pairs) so that it follows instructions more accurately.

  • Context:
    • It can (typically) be created by an LLM Instruction-Tuning System (that solves an LLM instruction-tuning task to adapt a base LLM); a minimal fine-tuning sketch follows this outline.
    • It can generate more precise and focused responses (than base LLMs).
    • It can understand and respond to complex instructions.
    • It can be compared to a specialized entry-level professional who has received additional targeted training to perform specific tasks efficiently.
    • ...
  • Example(s):
    • a Conversational LLM.
    • a ChatGPT Model, a variant of the GPT (Generative Pre-trained Transformer) model that is fine-tuned to understand and generate human-like conversational responses.
    • Dolly 2.0, an open-source, instruction-following LLM developed by Databricks and fine-tuned on databricks-dolly-15k, a human-generated instruction dataset, for enhanced interactive capabilities.
    • FLAN (Fine-tuned LAnguage Net), an instruction-tuned model developed by Google to improve performance on a wide range of natural language processing tasks.
    • ...
  • Counter-Example(s):
    • a Base LLM, which is not fine-tuned on instruction-following data and so may not follow detailed instructions accurately.
    • ...
  • See: Large Language Model, Reinforcement Learning, Natural Language Processing.
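
The sketch below illustrates the kind of instruction tuning described above, using the Hugging Face transformers Trainer on a causal language model. The base model (gpt2), the "Instruction:"/"Response:" prompt template, and the two toy input-output pairs are illustrative assumptions, not any particular system's recipe.

```python
# A minimal instruction-tuning sketch using Hugging Face transformers.
# The base model name, the prompt template, and the two toy pairs are
# illustrative assumptions, not a particular published recipe.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "gpt2"  # stand-in for any base (non-instruction-tuned) LLM

# An instruction-following dataset: input-output pairs.
pairs = [
    {"instruction": "Summarize: The cat sat on the mat.",
     "response": "A cat sat on a mat."},
    {"instruction": "Translate to French: Good morning.",
     "response": "Bonjour."},
]

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 defines no pad token

def to_features(example):
    # Phrase each pair as one training sequence ending in EOS.
    text = (f"Instruction: {example['instruction']}\n"
            f"Response: {example['response']}{tokenizer.eos_token}")
    return tokenizer(text, truncation=True, max_length=512)

train_ds = Dataset.from_list(pairs).map(
    to_features, remove_columns=["instruction", "response"])

model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="instruction-tuned-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=train_ds,
    # Causal-LM collator: pads batches and derives labels from the
    # input ids, so the model learns to produce the response given
    # the instruction.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Note that this causal-LM collator computes loss over the entire sequence; many instruction-tuning recipes instead mask the instruction tokens so that loss is taken only over the response.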


References

2022

  • (Chung et al., 2022) ⇒ Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, et al. (2022). “Scaling Instruction-finetuned Language Models.” arXiv preprint arXiv:2210.11416.
    • ABSTRACT: Finetuning language models on a collection of datasets phrased as instructions has been shown to improve model performance and generalization to unseen tasks. In this paper we explore instruction finetuning with a particular focus on (1) scaling the number of tasks, (2) scaling the model size, and (3) finetuning on chain-of-thought data. We find that instruction finetuning with the above aspects dramatically improves performance on a variety of model classes (PaLM, T5, U-PaLM), prompting setups (zero-shot, few-shot, CoT), and evaluation benchmarks (MMLU, BBH, TyDiQA, MGSM, open-ended generation). For instance, Flan-PaLM 540B instruction-finetuned on 1.8K tasks outperforms PALM 540B by a large margin (+9.4% on average). Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
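
As an illustration of point (1) in the abstract above (phrasing datasets as instructions), the sketch below shows one way a labeled example can be rendered through several natural-language templates. The templates and the NLI record are hypothetical stand-ins, not the actual FLAN templates.

```python
# A sketch of phrasing a supervised example as an instruction, in the
# spirit of the FLAN recipe. The templates and the example record are
# hypothetical stand-ins, not the actual FLAN templates.
import random

# Several natural-language templates for one task (here: NLI), so the
# model sees varied phrasings of the same underlying task.
NLI_TEMPLATES = [
    "Premise: {premise}\nHypothesis: {hypothesis}\n"
    "Does the premise entail the hypothesis? Answer yes, no, or maybe.",
    '{premise}\nBased on the passage above, is it true that '
    '"{hypothesis}"? Answer yes, no, or maybe.',
]

record = {"premise": "The cat sat on the mat.",
          "hypothesis": "An animal is on the mat.",
          "label": "yes"}

def phrase_as_instruction(record):
    """Turn one labeled example into an (instruction, target) pair."""
    template = random.choice(NLI_TEMPLATES)
    return template.format(**record), record["label"]

instruction, target = phrase_as_instruction(record)
print(instruction)
print("Target:", target)
```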
