Large Language Model (LLM) Emergent Property


A Large Language Model (LLM) Emergent Property is an emergent property of an LLM that manifests as a new capability or a performance improvement not evident in smaller-scale models, often appearing abruptly as model size increases.

  • Context:
    • It can (typically) involve LLM Tasks that smaller models struggle with.
    • It can (often) be influenced by the choice of LLM Evaluation Metric, with nonlinear or discontinuous metrics potentially exaggerating the appearance of emergent properties (see the illustrative sketch after this definition block).
    • It can (often) be observed across different LLM Architectures.
    • It can provoke discussion about how best to measure and understand model performance, emphasizing the need for metrics that accurately reflect incremental improvement.
    • ...
  • Example(s):
    • The transition from GPT-2 to GPT-3 demonstrated emergent abilities in generating coherent long texts, showcasing qualitative differences in language generation capabilities.
    • In vision tasks, altering the evaluation metrics can induce perceptions of emergent abilities, showing how metric choice can affect interpretations of model capability.
    • ...
  • Counter-Example(s):
    • an LLM Capability whose performance improves smoothly and predictably with model scale, such as a scaling-law trend in language-modeling loss.
    • ...
  • See: Emergent System, Model Scaling, LLM Evaluation Measure.
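
The metric-sensitivity point above can be made concrete with a small, purely illustrative sketch (not part of the original GM-RKB entry; the logistic curve, sequence length, and scale range below are arbitrary assumptions): a hypothetical model whose per-token accuracy rises smoothly with scale looks gradual under a per-token metric, but appears to acquire an ability abruptly under an all-or-nothing exact-match metric.

    # Purely illustrative: per-token accuracy p rises smoothly with (log) scale,
    # but exact match over a multi-token answer looks like a sudden jump.
    import math

    SEQ_LEN = 10  # assumed answer length for the exact-match task

    def per_token_accuracy(log_scale: float) -> float:
        """Smoothly increasing per-token accuracy, standing in for model scale."""
        return 1.0 / (1.0 + math.exp(-(log_scale - 5.0)))  # arbitrary logistic curve

    def exact_match_rate(p: float, seq_len: int = SEQ_LEN) -> float:
        """Probability that every token in the answer is correct."""
        return p ** seq_len

    print(f"{'scale':>5} {'per-token':>10} {'exact-match':>12}")
    for scale in range(1, 11):
        p = per_token_accuracy(scale)
        print(f"{scale:>5} {p:>10.3f} {exact_match_rate(p):>12.3f}")

Scoring the same outputs with a softer measure such as normalized edit distance would likewise smooth out the apparent jump, which is the "mirage" argument summarized in the notes below.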


References

2023

  • https://windowsontheory.org/2023/12/22/emergent-abilities-and-grokking-fundamental-mirage-or-both/
    • NOTES
      • Emergent abilities in large language models: As models get bigger, they can suddenly gain new capabilities not seen in smaller models. For example, GPT-2 could generate coherent long text while the original GPT could not.
      • Scale and unpredictable jumps in performance: As models scale up, their performance often jumps sharply at some point in an unpredictable way. For example, models may go from trivial performance to perfect accuracy on a task as compute is increased.
      • Are emergent abilities a mirage? Recent work has shown these jumps can disappear if we change the evaluation metric to a softer one. For example, instead of binary accuracy, using edit distance as the metric can show more gradual progress.
      • High-jump analogy: An athlete's maximum jump height increases smoothly with training, but the probability of clearing a bar at a fixed height jumps sharply once ability crosses that height.
      • Sharp transitions remain for complex tasks: Even if individual components improve smoothly, the probability of succeeding at several steps in sequence can still transition sharply from low to high (see the sketch after these notes).
      • Main conclusion: Emergent abilities are likely not a complete mirage, since many real-world tasks require multiple steps of reasoning where failing at any one step derails the whole attempt, so sharp, unpredictable transitions in capability are likely to remain.
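
A minimal numeric sketch of the multi-step argument above (a toy model under the assumption of k independent steps, not code from the blog post): if each step succeeds with probability p, the end-to-end success rate p**k stays near zero over most of the range of p and then climbs steeply, even though p itself improves smoothly.

    # Toy model of the multi-step argument: k independent steps, each with
    # per-step success probability p; end-to-end success is p**k.
    K_STEPS = 20  # assumed number of required reasoning steps

    def end_to_end_success(p: float, k: int = K_STEPS) -> float:
        """Probability that all k independent steps succeed."""
        return p ** k

    print(f"{'per-step p':>10} {'end-to-end (k=20)':>18}")
    for i in range(50, 101, 5):
        p = i / 100
        print(f"{p:>10.2f} {end_to_end_success(p):>18.4f}")

With k = 20, a per-step success rate of 0.80 still gives roughly a 1% end-to-end success rate, while 0.95 gives roughly 36%, which is why end-to-end benchmarks can look emergent even when every component improves gradually.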
