LLM Hallucination Pattern
An LLM Hallucination Pattern is an AI error pattern (a language model failure mode) in which a large language model generates plausible-sounding content that is factually incorrect or unsupported by its training data.
- AKA: LLM Confabulation, Model Hallucination, AI Hallucination.
- Context:
- It can typically manifest as Factual Errors in generated text.
- It can typically include Fabricated Citations to non-existent sources.
- It can typically produce Inconsistent Statements across response segments.
- It can typically generate Plausible Falsehoods with confident tones.
- It can typically create Temporal Confusions about event sequences.
- ...
- It can often occur in Knowledge-Intensive Tasks requiring specific facts.
- It can often emerge during Creative Generation with loose constraints.
- It can often appear in Long-Form Responses with complex reasoning.
- It can often arise from Training Data Gaps or distribution mismatches.
- ...
- It can range from being a Minor LLM Hallucination Pattern to being a Major LLM Hallucination Pattern, depending on its LLM hallucination impact severity.
- It can range from being a Subtle LLM Hallucination Pattern to being an Obvious LLM Hallucination Pattern, depending on its LLM hallucination detection difficulty.
- It can range from being a Domain-Specific LLM Hallucination Pattern to being a Cross-Domain LLM Hallucination Pattern, depending on its LLM hallucination scope breadth.
- It can range from being a Rare LLM Hallucination Pattern to being a Frequent LLM Hallucination Pattern, depending on its LLM hallucination occurrence rate.
- It can range from being a Correctable LLM Hallucination Pattern to being a Persistent LLM Hallucination Pattern, depending on its LLM hallucination mitigation resistance.
- ...
- It can be detected through Fact-Checking Systems and verification methods.
- It can be mitigated by Retrieval-Augmented Generation and grounding techniques (see the grounding sketch after this list).
- It can be measured using HaluEval Benchmarks and evaluation metrics.
- It can be addressed through Prompt Engineering and output filtering.
- ...
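The detection, mitigation, and measurement items above can be tied together in one pipeline: retrieve evidence, constrain the prompt to that evidence, generate, then check how well the answer is supported. The following is a minimal sketch of that flow, not a reference implementation: `retrieve_passages` and `llm_generate` are hypothetical stand-ins for a real retriever and model API, the lexical-overlap support score is only a crude proxy for an entailment- or judge-based verifier, and the 0.6 threshold is an arbitrary illustration.

```python
from typing import Callable, List


def build_grounded_prompt(question: str, passages: List[str]) -> str:
    """Prepend retrieved passages so the model answers from evidence, not memory."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer the question using ONLY the context below. "
        "If the context is insufficient, say 'I don't know.'\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )


def support_score(answer: str, passages: List[str]) -> float:
    """Crude proxy for grounding: fraction of answer tokens found in the evidence."""
    answer_tokens = set(answer.lower().split())
    evidence_tokens = set(" ".join(passages).lower().split())
    if not answer_tokens:
        return 0.0
    return len(answer_tokens & evidence_tokens) / len(answer_tokens)


def answer_with_grounding(
    question: str,
    retrieve_passages: Callable[[str], List[str]],  # hypothetical retriever
    llm_generate: Callable[[str], str],             # hypothetical model API
    min_support: float = 0.6,                       # illustrative threshold
) -> dict:
    """Retrieve, ground, generate, and flag weakly supported answers."""
    passages = retrieve_passages(question)
    answer = llm_generate(build_grounded_prompt(question, passages))
    score = support_score(answer, passages)
    return {
        "answer": answer,
        "support_score": score,
        "flagged_as_possible_hallucination": score < min_support,
    }


def hallucination_rate(judgments: List[bool]) -> float:
    """HaluEval-style aggregate: share of responses judged hallucinated."""
    return sum(judgments) / len(judgments) if judgments else 0.0
```

The grounded prompt also illustrates the prompt-engineering item: instructing the model to answer only from supplied context and to admit uncertainty. In practice the overlap check would be replaced by an NLI model or LLM-as-judge verifier, but the control flow (retrieve, ground, generate, verify, flag) stays the same.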
- Example(s):
- Factual Hallucinations, such as:
    - Reference Hallucinations (see the verification sketch after this list), such as:
        - Citation Hallucinations referencing non-existent papers.
        - URL Hallucinations generating invalid web addresses.
        - Quote Hallucinations attributing fabricated statements.
- Logical Hallucinations, such as:
    - Entity Hallucinations inventing non-existent organizations or fictional persons.
- ...
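As one concrete illustration of catching the Reference Hallucinations listed above, the sketch below extracts URLs from a model response and checks whether they resolve. It is a minimal, assumption-laden example: it only catches fabricated web addresses (a resolving page may still fail to support the cited claim), and some servers reject HEAD requests, so results are approximate.

```python
import re
import urllib.error
import urllib.request

# Simple pattern for candidate URLs in generated text.
URL_PATTERN = re.compile(r"https?://[^\s)\]>\"']+")


def extract_urls(text: str) -> list:
    """Pull candidate URLs out of a model-generated response."""
    return URL_PATTERN.findall(text)


def url_resolves(url: str, timeout: float = 5.0) -> bool:
    """Return True if the URL answers with a non-error HTTP status."""
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except (urllib.error.URLError, ValueError):
        return False


def flag_url_hallucinations(response_text: str) -> dict:
    """Map each cited URL to whether it resolved; non-resolving URLs are likely fabricated."""
    return {url: url_resolves(url) for url in extract_urls(response_text)}
```

Citation Hallucinations can be checked analogously by querying a bibliographic index for the claimed title or DOI, though matching fuzzily cited papers is harder than testing whether a URL exists.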
- Counter-Example(s):
- Accurate LLM Generations, which provide factually correct information.
- Uncertainty Expressions, which acknowledge knowledge limitations.
- Creative Fictions, which are intentionally imaginative rather than unintentionally false.
- See: Large Language Model, AI Error Pattern, LLM Power User, Fact-Checking System, Retrieval-Augmented Generation, HaluEval Benchmark, Model Reliability, AI Safety, Prompt Engineering.