2023 Orca2TeachingSmallLanguageModel

Subject Headings: Orca 2 LLM, Open Source LLM, Synthetic LLM Training Data.

Notes

  • It introduces Orca 2, the latest version of Microsoft's small language model aimed at enhancing reasoning abilities. Orca 2 significantly surpasses models of similar size and attains performance comparable to or better than that of models 5-10 times larger on complex reasoning tasks.
  • It comes in two sizes, 7 billion and 13 billion parameters, both created by fine-tuning the LLaMA 2 base models on tailored, high-quality synthetic data that teaches various reasoning techniques.
  • The Orca 2 training data, obtained from a more capable teacher model, was generated so as to equip the model to choose different solution strategies based on the task, such as step-by-step processing, recall-then-generate, and extract-generate (see the data-construction sketch after these notes).
  • Evaluation on 15 diverse benchmarks covering language understanding, common-sense reasoning, and related tasks shows that Orca 2 matches or surpasses the performance of larger models. It retains limitations common to language models but demonstrates the potential for reasoning improvements in smaller models.
  • The key insight is that tailored synthetic training data, teaching smaller models a diverse set of reasoning strategies, allows them to attain capabilities typically seen only in much larger models, underscoring the value of such data for balancing efficiency and capability.
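
A central element of this recipe is what the paper calls prompt erasure: the teacher model answers each task under a detailed, strategy-specific system instruction, but in the student's training example that instruction is replaced with a generic one, so the student must learn to infer the right strategy from the task itself. Below is a minimal sketch of this idea; the strategy texts, the call_teacher helper, and the generic instruction are illustrative assumptions, not the paper's actual prompts.

```python
# Minimal sketch of Orca 2-style synthetic data construction with
# "prompt erasure". Strategy prompts and call_teacher are illustrative
# assumptions, not the paper's actual recipe.

# Hypothetical strategy-specific system instructions for the teacher model.
STRATEGIES = {
    "step_by_step": "Solve the task by reasoning step by step before answering.",
    "recall_then_generate": "First recall the relevant facts, then compose the answer.",
    "direct_answer": "Answer directly and concisely, without explanation.",
}

# Generic instruction that replaces the detailed one in the student's data,
# so the student has to learn which strategy fits which task.
GENERIC_INSTRUCTION = "You are a helpful assistant. Answer the question."


def call_teacher(system_instruction: str, task: str) -> str:
    """Placeholder for a call to the more capable teacher model."""
    raise NotImplementedError("wire up the teacher LLM here")


def build_training_example(task: str, strategy: str) -> dict:
    # 1. The teacher answers under the detailed, strategy-specific instruction.
    teacher_output = call_teacher(STRATEGIES[strategy], task)
    # 2. Prompt erasure: the student sees only the generic instruction,
    #    paired with the strategy-shaped answer it should learn to produce.
    return {
        "system": GENERIC_INSTRUCTION,
        "user": task,
        "assistant": teacher_output,
    }
```

Fine-tuning on many such (generic instruction, task, strategy-shaped answer) triples is what pushes the smaller model to decide for itself which solution strategy a given task calls for.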

Cited By

2023

Quotes

Abstract

Orca 1 learns from rich signals, such as explanation traces, allowing it to outperform conventional instruction-tuned models on benchmarks like BigBench Hard and AGIEval. In Orca 2, we continue exploring how improved training signals can enhance smaller LMs' reasoning abilities. Research on training small LMs has often relied on imitation learning to replicate the output of more capable models. We contend that excessive emphasis on imitation may restrict the potential of smaller models. We seek to teach small LMs to employ different solution strategies for different tasks, potentially different from the one used by the larger model. For example, while larger models might provide a direct answer to a complex task, smaller models may not have the same capacity. In Orca 2, we teach the model various reasoning techniques (step-by-step, recall then generate, recall-reason-generate, direct answer, etc.). More crucially, we aim to help the model learn to determine the most effective solution strategy for each task. We evaluate Orca 2 using a comprehensive set of 15 diverse benchmarks (corresponding to approximately 100 tasks and over 36,000 unique prompts). Orca 2 significantly surpasses models of similar size and attains performance levels similar or better to those of models 5-10x larger, as assessed on complex tasks that test advanced reasoning abilities in zero-shot settings. We make Orca 2 weights publicly available at this http URL to support research on the development, evaluation, and alignment of smaller LMs.
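
Since the released weights are intended to support research on smaller LMs, a natural first step is loading a checkpoint for zero-shot querying. The sketch below assumes the checkpoints are published on Hugging Face under an identifier like microsoft/Orca-2-13b and that the model uses a ChatML-style prompt format; both are assumptions to verify against the official release page.

```python
# Zero-shot inference sketch using Hugging Face transformers.
# The model id and ChatML-style prompt format are assumptions based on the
# public release; verify them against the official release page.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Orca-2-13b"  # assumed identifier
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Generic system message: Orca 2 is trained to choose its own reasoning
# strategy rather than being told which one to apply.
prompt = (
    "<|im_start|>system\nYou are Orca, an AI language model. "
    "Answer carefully.<|im_end|>\n"
    "<|im_start|>user\nIf a train travels 60 miles in 1.5 hours, "
    "what is its average speed?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
))
```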

References

(Mitra et al., 2023) ⇒ Arindam Mitra, Luciano Del Corro, Shweti Mahajan, Andres Codas, Clarisse Simoes, Sahaj Agrawal, Xuxi Chen, Anastasia Razdaibiedina, Erik Jones, Kriti Aggarwal, Hamid Palangi, Guoqing Zheng, Corby Rosset, Hamed Khanpour, and Ahmed Awadallah. (2023). "Orca 2: Teaching Small Language Models How to Reason." doi:10.48550/arXiv.2311.11045