A hallucination occurs when an AI system generates plausible-sounding but factually incorrect or fabricated information. Hallucinations often arise when a model lacks sufficient grounding or retrieves incomplete data. Managing hallucination risk is a central concern in trustworthy AI deployment. For architectural approaches to reducing hallucination, see our systems analysis of RAG architecture.
Hallucinations undermine trust in AI systems and can spread misinformation if outputs are not verified. Understanding when and why they occur is essential for safe AI deployment.
AI developers use various techniques to detect and reduce hallucinations, including retrieval augmentation, uncertainty quantification, and improved training methods.
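One of these techniques, retrieval augmentation, can be sketched in a few lines: the idea is to fetch relevant passages first and instruct the model to answer only from them, so outputs can be checked against source text. The toy corpus, overlap-based scoring, and prompt template below are illustrative assumptions, not a production retriever.

```python
# Minimal sketch of retrieval augmentation: ground a prompt in retrieved
# passages so the model's answer can be verified against source text.
# Corpus, scoring, and prompt wording are illustrative only.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda p: len(q_terms & set(p.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_grounded_prompt(query: str, corpus: list[str]) -> str:
    """Prepend retrieved evidence and constrain the model to it."""
    passages = retrieve(query, corpus)
    evidence = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer using ONLY the passages below; say 'not found' otherwise.\n"
        f"{evidence}\n\nQuestion: {query}"
    )

corpus = [
    "The Eiffel Tower is 330 metres tall.",
    "Mount Everest is the highest mountain above sea level.",
    "Paris is the capital of France.",
]
print(build_grounded_prompt("How tall is the Eiffel Tower?", corpus))
```

A real system would use embedding-based retrieval rather than keyword overlap, but the constraint in the prompt (answer only from evidence, or refuse) is the part that reduces hallucination.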
A typical hallucination involves fabricated citations or invented facts that have no support in the model's training data or the provided context.
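Fabricated citations are one of the easier hallucination symptoms to catch mechanically: any cited identifier can be looked up in a trusted index, and misses flagged for review. The index and DOI strings below are made-up placeholders for illustration.

```python
# Illustrative check for fabricated citations: flag any cited source
# that does not appear in a trusted reference index.
# TRUSTED_INDEX and the DOIs are hypothetical examples.

TRUSTED_INDEX = {"doi:10.0000/example-a", "doi:10.0000/example-b"}

def flag_fabricated(citations: list[str]) -> list[str]:
    """Return the citations absent from the trusted index."""
    return [c for c in citations if c not in TRUSTED_INDEX]

suspect = flag_fabricated(["doi:10.0000/example-a", "doi:10.0000/made-up"])
print(suspect)  # a non-empty list signals a possibly fabricated citation
```

In practice the index would be a bibliographic database query rather than an in-memory set, but the verification step is the same.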