
Hallucination

Last reviewed: April 2026

A hallucination in AI is when a model generates information that sounds confident and plausible but is factually incorrect. The AI is not deliberately lying — it is producing text that is statistically likely given its training, but that does not correspond to reality.

Why AI hallucinates

Understanding why hallucinations happen requires understanding how LLMs generate text. An LLM predicts the most probable next token based on patterns in its training data. It is optimising for plausibility, not truth. When the model encounters a question it does not have solid training data for, it fills in the gaps with plausible-sounding information — just as a confident student might guess on an exam question rather than admitting they do not know.
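The mechanism above can be sketched with a toy next-token distribution. The tokens and scores below are invented purely for illustration; a real model works over tens of thousands of tokens, but the principle is the same: a fluent-but-false continuation can carry real probability mass.

```python
import math

# Hypothetical scores a model might assign to candidate next tokens
# for the prompt "Einstein was born in ..." (numbers are made up).
logits = {
    "1879": 2.0,    # the correct continuation
    "1875": 1.2,    # plausible-sounding but wrong
    "banana": -4.0, # implausible, effectively never chosen
}

def softmax(scores):
    """Turn raw scores into a probability distribution over tokens."""
    exp = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exp.values())
    return {tok: v / total for tok, v in exp.items()}

probs = softmax(logits)
# The wrong year still gets meaningful probability, so across many
# generations the model will sometimes state it with full confidence.
```

The model samples from this distribution; nothing in the mechanism checks the chosen token against reality, which is why the output is optimised for plausibility rather than truth.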

Common hallucination scenarios:

  • Fabricated citations: The AI generates fake academic papers, complete with plausible-sounding titles, authors, and publication dates.
  • Invented statistics: "According to a 2024 McKinsey study, 73% of enterprises..." — the study may not exist.
  • False attributions: "As Albert Einstein once said..." followed by a quote Einstein never said.
  • Incorrect technical details: Correct-sounding but wrong API documentation, code syntax, or process descriptions.
  • Merged facts: Combining true facts about different entities into a false statement about one entity.

The severity spectrum

Not all hallucinations are equally dangerous:

  • Low risk: Creative tasks where factual accuracy is less critical (brainstorming, fiction writing, ideation).
  • Medium risk: Business communications where errors are embarrassing but correctable (email drafts, presentation outlines).
  • High risk: Legal, medical, financial, or compliance contexts where incorrect information could cause real harm.

How to reduce hallucinations

While hallucinations cannot be eliminated entirely, several strategies significantly reduce them:

  • Provide source material: Give the AI the documents to work from. It is much less likely to hallucinate when answering based on provided text than when drawing from memory.
  • Use RAG: Connect the AI to your knowledge base so it retrieves real information instead of generating from training data.
  • Ask for sources: Request that the AI cite its sources. If it cannot point to a real source, that is a warning sign.
  • Verify critical claims: For any fact, statistic, or citation that matters, verify it independently.
  • Lower temperature: Reducing the temperature parameter makes the model more conservative and less likely to generate creative (and potentially false) content.
  • Break complex questions down: Simpler, focused questions produce more accurate answers than broad, complex ones.
  • Use chain-of-thought: Asking the AI to show its reasoning step by step helps it stay grounded.
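The first two strategies share a core move: put the facts into the prompt and instruct the model to stay inside them. A minimal, provider-agnostic sketch (the function name and prompt wording are our own, not any vendor's API; the actual model call and the temperature setting depend on your provider):

```python
def build_grounded_prompt(source_text: str, question: str) -> str:
    """Assemble a prompt that grounds the model in provided material.

    Answering from supplied text rather than from memory is one of the
    most reliable ways to cut hallucinations; pair this with a low
    temperature setting on whichever API you call.
    """
    return (
        "Answer using ONLY the source material below. "
        'If the answer is not in the source, say "I don\'t know" '
        "rather than guessing, and quote the sentence you relied on.\n\n"
        f"SOURCE:\n{source_text}\n\n"
        f"QUESTION: {question}"
    )

prompt = build_grounded_prompt(
    "The Q3 report shows revenue of $4.2M, up 8% year on year.",
    "What was Q3 revenue?",
)
```

A RAG system automates the same pattern: a retrieval step fetches the relevant passages from your knowledge base and injects them as the source material before the model ever generates a word.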

Organisational strategies

Smart organisations build hallucination awareness into their AI workflows:

  • Establish clear guidelines about which AI outputs require human verification
  • Create review processes for AI-generated content before publication or distribution
  • Train employees to recognise hallucination patterns
  • Use AI output as a first draft, never as a final product for critical applications
  • Choose models known for lower hallucination rates for high-stakes tasks

Why This Matters

Hallucinations are the single biggest risk in enterprise AI adoption. An employee who trusts an AI-generated report containing fabricated statistics can damage your company's credibility. A legal team that relies on AI-cited case law without verification can face professional consequences. Understanding hallucinations is not about avoiding AI — it is about using AI responsibly, building appropriate verification workflows, and training your team to maintain healthy scepticism of AI output.
