A hallucination happens when an LLM produces fluent, confident-sounding output that simply isn't true: invented citations, non-existent API methods, wrong dates, fabricated quotes. The mechanism is straightforward: language models are trained to predict the next plausible token, not the next true one, so absent any grounding signal they happily confabulate. Since GPT-3, hallucination has been the single biggest trust barrier to using LLMs in production, motivating mitigations such as RAG, factuality fine-tuning, and citation support. Some researchers argue that confabulation is a more accurate term, since it better mirrors the human-memory phenomenon of filling in gaps with invented detail.
Glossary · Beginner · 2020
Hallucination
When an LLM produces fluent, confident-sounding output that simply isn't true.
- EN — English term: Hallucination
- TR — Turkish term: Halüsinasyon
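
To make the mechanism above concrete, here is a minimal toy sketch in Python. It is not a real model: the prompt, the candidate tokens, and the probabilities are invented for illustration. It only shows the core point that a sampler ranking continuations purely by plausibility will confidently emit a false statement whenever the truthful continuation happens to be the least likely one.

```python
import random

# Toy illustration (not a real model): a "language model" reduced to a lookup
# table of plausible next tokens and their probabilities. The prompt, tokens,
# and numbers are all made up for this sketch.
NEXT_TOKEN_PROBS = {
    "The first person to walk on Mars was": [
        ("Neil", 0.46),      # plausible-sounding, but no one has walked on Mars
        ("Buzz", 0.31),
        ("Yuri", 0.18),
        ("unknown", 0.05),   # the truthful continuation is the least plausible
    ],
}

def complete(prompt: str) -> str:
    """Sample the next token purely by plausibility, with no grounding check."""
    tokens, weights = zip(*NEXT_TOKEN_PROBS[prompt])
    return random.choices(tokens, weights=weights, k=1)[0]

if __name__ == "__main__":
    prompt = "The first person to walk on Mars was"
    print(prompt, complete(prompt))
    # Most runs confidently print a name: fluent, plausible, and false.
```

Mitigations such as RAG work by changing what is plausible: retrieved evidence is placed in the context so that grounded continuations outrank invented ones.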