Agent Hallucination Pattern
An Agent Hallucination Pattern is an error pattern (a factual accuracy failure) in which AI agents generate false information or fabricated content and present it as factual statements during agent task execution.
- AKA: AI Agent Confabulation, Agent Factual Error, LLM Hallucination, Agent False Generation, Agent Hallucination Phenomenon.
- Context:
- It can typically manifest as Factual Fabrication through non-existent entity generation, false citation creation, and invented statistic production.
- It can typically occur during Knowledge Retrieval Failure when agent knowledge bases lack required information or the required context exceeds context window limits.
- It can typically result from Training Data Artifacts including data bias, incomplete information, and conflicting sources.
- It can typically increase with Task Complexity Growth as reasoning chains lengthen and context management becomes more challenging.
- It can typically propagate through Multi-Step Reasoning when initial errors compound across subsequent inferences.
- ...
- It can often emerge in Domain-Specific Contexts where specialized knowledge exceeds model training scope.
- It can often correlate with Model Confidence Levels that remain high despite factual inaccuracy.
- It can often bypass Standard Validation Checks when plausible-sounding content masks underlying falsity.
- It can often impact Critical Application Domains including legal document generation, medical advice systems, and financial analysis tools.
- ...
- It can range from being a Minor Agent Hallucination Pattern to being a Complete Agent Hallucination Pattern, depending on its error severity.
- It can range from being an Intrinsic Agent Hallucination Pattern to being an Extrinsic Agent Hallucination Pattern, depending on its source contradiction type.
- It can range from being a Detectable Agent Hallucination Pattern to being a Subtle Agent Hallucination Pattern, depending on its verification difficulty.
- It can range from being a Domain-Independent Agent Hallucination Pattern to being a Domain-Specific Agent Hallucination Pattern, depending on its knowledge requirement.
- ...
- It can be mitigated by Retrieval-Augmented Generation (RAG) Techniques for knowledge grounding (see the first sketch after this list).
- It can be detected through Hallucination Detection Algorithms using fact verification.
- It can be reduced via Agent Output Verification Systems with source validation (see the second sketch after this list).
- It can be monitored by Agent Performance Monitoring Systems for accuracy tracking.
- It can be addressed through Human-in-the-Loop Agent designs for critical validation.
- ...
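
The following is a minimal Python sketch of how RAG-style knowledge grounding and fact-verification-based hallucination detection can work together. All names (Claim, retrieve_evidence, is_supported, flag_hallucinations) and the token-overlap scoring are illustrative assumptions, not any specific library's API; a production system would substitute a real retriever and an NLI or fact-verification model.

```python
from dataclasses import dataclass


@dataclass
class Claim:
    """A single factual statement extracted from an agent's output."""
    text: str


def retrieve_evidence(claim: Claim, knowledge_base: dict[str, str], top_k: int = 3) -> list[str]:
    """Rank knowledge-base passages by naive token overlap with the claim.

    Stand-in for a real retriever (BM25, dense embeddings, etc.).
    """
    claim_tokens = set(claim.text.lower().split())
    scored = [
        (len(claim_tokens & set(passage.lower().split())), passage)
        for passage in knowledge_base.values()
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [passage for score, passage in scored[:top_k] if score > 0]


def is_supported(claim: Claim, evidence: list[str], threshold: float = 0.5) -> bool:
    """Treat a claim as grounded if enough of its tokens appear in the evidence.

    A real system would use an NLI or fact-verification model here; token
    overlap is only a placeholder so the sketch stays self-contained.
    """
    if not evidence:
        return False
    claim_tokens = set(claim.text.lower().split())
    evidence_tokens = set(" ".join(evidence).lower().split())
    return len(claim_tokens & evidence_tokens) / max(len(claim_tokens), 1) >= threshold


def flag_hallucinations(claims: list[Claim], knowledge_base: dict[str, str]) -> list[Claim]:
    """Return the claims that could not be grounded in the knowledge base."""
    return [c for c in claims if not is_supported(c, retrieve_evidence(c, knowledge_base))]


if __name__ == "__main__":
    kb = {"doc-1": "Paris is the capital of France and lies on the Seine."}
    claims = [
        Claim("Paris is the capital of France"),
        Claim("Paris was founded in 1923 by Napoleon IV"),  # fabricated claim
    ]
    for claim in flag_hallucinations(claims, kb):
        print("Unsupported claim:", claim.text)
```

Grounding each claim against retrieved passages before it is surfaced is what distinguishes this approach from post-hoc spot checking: unsupported claims are flagged rather than silently passed through.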
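The second sketch illustrates an output-verification gate with a human-in-the-loop escalation path. Every identifier (AgentOutput, ReviewQueue, verify_output, Verdict) is a hypothetical placeholder, and the checks shown (rejecting citations absent from the known corpus, escalating uncited or low-confidence outputs) stand in for whatever source-validation and review workflow a deployment actually uses.

```python
from dataclasses import dataclass, field
from enum import Enum


class Verdict(Enum):
    APPROVED = "approved"
    NEEDS_HUMAN_REVIEW = "needs_human_review"
    REJECTED = "rejected"


@dataclass
class AgentOutput:
    """An agent response together with the source IDs it claims to rely on."""
    text: str
    cited_source_ids: list[str] = field(default_factory=list)
    model_confidence: float = 1.0  # self-reported, so never trusted on its own


@dataclass
class ReviewQueue:
    """Stand-in for a human-review workflow (ticketing system, review UI, etc.)."""
    pending: list[AgentOutput] = field(default_factory=list)

    def escalate(self, output: AgentOutput) -> None:
        self.pending.append(output)


def verify_output(output: AgentOutput,
                  known_source_ids: set[str],
                  queue: ReviewQueue,
                  confidence_floor: float = 0.8) -> Verdict:
    """Gate an agent output before it reaches the user.

    Rejects outputs that cite sources the system has never seen (a common
    hallucination signature), and escalates uncited or low-confidence outputs
    to a human reviewer instead of silently passing them through.
    """
    fabricated = [s for s in output.cited_source_ids if s not in known_source_ids]
    if fabricated:
        return Verdict.REJECTED
    if not output.cited_source_ids or output.model_confidence < confidence_floor:
        queue.escalate(output)
        return Verdict.NEEDS_HUMAN_REVIEW
    return Verdict.APPROVED


if __name__ == "__main__":
    queue = ReviewQueue()
    known = {"case-2021-044", "case-2019-112"}
    draft = AgentOutput("Smith v. Jones (2022) held that ...",
                        cited_source_ids=["case-2022-999"])  # citation not in the corpus
    print(verify_output(draft, known, queue))  # Verdict.REJECTED
```

The design choice here is to treat verification as a hard gate in critical application domains (legal, medical, financial): fabricated citations are rejected outright, while merely unverifiable outputs go to a human rather than to the end user.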
- Example(s):
- Legal Case Hallucinations, generating non-existent legal precedents in legal document AI agents.
- Medical Information Hallucinations, creating false treatment recommendations in healthcare AI systems.
- Historical Fact Hallucinations, such as fabricated dates, events, or attributions produced by research-assistant AI agents.
- Technical Specification Hallucinations, such as invented API parameters or product specifications produced by coding AI assistants.
- ...
- Counter-Example(s):
- Verified Factual Output, which maintains source accuracy.
- Explicit Uncertainty Expression, which acknowledges knowledge limitations.
- Creative Fiction Generation, which intentionally produces non-factual content.
- See: Hallucinated Content, AI Safety, Fact Verification System, Knowledge Grounding Mechanism, Agent Reliability Measure, RAG Technique, LLM Limitation.