LLM Hallucination Error
An LLM Hallucination Error is an LLM Error in which the model generates factually incorrect or entirely fabricated information and presents it as truthful output.
- AKA: LLM Confabulation, Model Hallucination, LLM Fabrication Error, AI Hallucination.
- Context:
- It can typically generate Nonexistent Facts with confident presentation.
- It can typically create Fictional Citations appearing authoritative.
- It can typically produce Imaginary Entities as real references.
- It can often occur in Knowledge Gaps within training data.
- It can often increase with Specificity Requests beyond model knowledge.
- It can often be masked by Fluent Language and plausible structure.
- It can often mislead Domain Non-Experts through surface credibility.
- It can range from being a Minor LLM Hallucination Error to being a Major LLM Hallucination Error, depending on its factual deviation.
- It can range from being an Obvious LLM Hallucination Error to being a Subtle LLM Hallucination Error, depending on its detection difficulty.
- It can range from being a Factual LLM Hallucination Error to being a Conceptual LLM Hallucination Error, depending on its content type.
- It can range from being an Isolated LLM Hallucination Error to being a Systematic LLM Hallucination Error, depending on its occurrence pattern.
- ...
- Example:
- Citation LLM Hallucination Errors (see the verification sketch after this list), such as:
- Inventing Academic Papers with plausible titles.
- Creating Author Names for nonexistent works.
- Fabricating Journal References with realistic formats.
- Entity LLM Hallucination Errors, such as:
- Generating Fictional People as historical figures.
- Creating Nonexistent Organizations as real institutions.
- Fact LLM Hallucination Errors, such as:
- Inventing Statistical Data without empirical basis.
- Creating Historical Events that never occurred.
- ...
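Citation hallucinations of the kind listed above can often be flagged by checking a cited title against an external bibliographic index. The following Python sketch is an illustration only, not part of this definition: it assumes the public Crossref REST API (api.crossref.org) as the lookup source, and the similarity threshold and helper name `verify_citation` are choices made for this example.

```python
import difflib
import requests

def verify_citation(title: str, min_similarity: float = 0.9) -> bool:
    """Check whether a cited title closely matches any work indexed by Crossref.

    Returns True if a sufficiently similar title is found, False otherwise.
    A False result only flags the citation as unverified (it may be fabricated
    or simply absent from the index), so it should trigger human review.
    """
    response = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 5},
        timeout=10,
    )
    response.raise_for_status()
    items = response.json()["message"]["items"]
    for item in items:
        for candidate in item.get("title", []):
            # Fuzzy-match the cited title against each indexed title.
            ratio = difflib.SequenceMatcher(
                None, title.lower(), candidate.lower()
            ).ratio()
            if ratio >= min_similarity:
                return True
    return False

# Example usage: flag LLM-produced citations that cannot be matched.
llm_citations = [
    "Attention Is All You Need",  # real paper, expected to match
    "Quantum Entanglement in Large Language Models: A Survey",  # hypothetical, likely unverified
]
for cite in llm_citations:
    status = "matched" if verify_citation(cite) else "UNVERIFIED (possible hallucination)"
    print(f"{cite!r}: {status}")
```

Such a check only detects Citation LLM Hallucination Errors; Entity and Fact hallucinations generally require other verification sources.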
- Counter-Example:
- LLM Conceptual Conflation Error, which confuses real concepts rather than inventing content.
- LLM Approximation, which estimates uncertain values rather than fabricating facts.
- See: LLM Error, LLM Conceptual Conflation Error, LLM Plausibility Bias, AI System Error, Confabulation, Information Reliability, Fact Checking Task, LLM Factual Accuracy Measure.