AI Invented a Law That Doesn't Exist: Hallucinations Explained
A lawyer uses ChatGPT to draft a pleading. The AI invents 3 Court of Cassation rulings. The judge checks: they don't exist. Discover how to avoid AI hallucinations.

William Aklamavo
November 23, 2025
"AI Invented a Law That Doesn't Exist" ⚖️😱
A lawyer uses ChatGPT to prepare a pleading.
He asks: "Cite precedents for this case."
ChatGPT gives him 3 Court of Cassation rulings, with dates and numbers.
The lawyer is delighted. He uses them in court.
The judge checks.
The rulings don't exist.
The lawyer is humiliated and risks his career.
The Problem: Hallucinations
AI doesn't "lie". It completes sentences.
It's trained to be plausible, not truthful.
If the most likely continuation of a sentence is a lie, it will lie with aplomb.
The 3 Types of Hallucinations That Kill Business
❌ Pure Invention
In plain terms: Inventing products, prices, or laws.
Impact: Promising a client a 50% discount that doesn't exist.
Concrete example: An e-commerce chatbot that invents a product "iPhone 15 Pro Max Ultra" with a price of 999€ when this model doesn't exist.
❌ Fact Confusion
In plain terms: Mixing two clients or two projects.
Impact: Sending Client A's confidential data to Client B.
Concrete example: AI confuses orders from two clients and sends Client A's order details to Client B.
❌ False Logic
In plain terms: 2 + 2 = 5 (very rare now, but still possible in complex calculations).
Impact: Billing errors.
Concrete example: A discount calculation system that applies 20% + 15% = 35% instead of compounding the discounts correctly (0.80 × 0.85 = 0.68, i.e. a 32% total discount).
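To see why, a quick plain-Python sanity check (illustrative prices, no AI involved): successive discounts multiply, they don't add.
price = 100.00
wrong = price * (1 - (0.20 + 0.15))      # treats 20% + 15% as a 35% discount -> about 65
right = price * (1 - 0.20) * (1 - 0.15)  # compounds them: 0.80 * 0.85 = 0.68 -> 68
print(round(wrong, 2), round(right, 2))  # 65.0 68.0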
The Professional Solution: "Grounding" (Anchoring)
We can't prevent AI from hallucinating.
But we can prevent it from SPEAKING if it doesn't know.
✅ 1. Limit the Context (RAG: Retrieval-Augmented Generation)
Technique: "Use ONLY the info below. If you don't find the answer, say 'I don't know'."
Business: No more inventions.
Implementation:
prompt = f"""
Available context:
{context}
Question: {question}
Instructions:
- Answer ONLY using the context above
- If the answer is not in the context, answer "I don't know"
- Never invent information
"""
✅ 2. Verification (Self-Reflection)
Technique: Ask AI to reread its own response and verify each fact.
Business: An automatic double-check.
Implementation:
# Wrap generation and verification in one helper (llm is the generic client used above)
def answer_with_verification(question, context):
    # First response
    response = llm.generate(question, context)

    # Verification: ask the model to check its own answer against the source context
    verification = llm.generate(f"""
Verify this response:
{response}

Source context:
{context}

Is each fact present in the context? Answer YES or NO.
""")

    # Refuse to answer if any fact could not be traced back to the context
    if "NO" in verification:
        return "I cannot answer with certainty."
    return response
✅ 3. Source Citation
Technique: Force AI to say "I found this on page 12".
Business: Immediate proof.
Implementation:
prompt = f"""
Context:
[Page 12] Delivery time is 3-5 business days.
[Page 45] Returns are free within 30 days.
Question: {question}
Answer by citing the exact source: [Page X]
"""
Real Case
A customer support chatbot for a bank.
At first: It invented attractive interest rates. Disaster.
After implementing Grounding:
→ It responds only with official rates of the day.
→ If it can't find the answer, it escalates to a human (sketched below).
Zero errors in 6 months.
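The escalation itself is a few lines of routing logic. A minimal sketch, reusing the answer_from_knowledge_base() sketch from the RAG section; escalate_to_human() is a placeholder for whatever ticketing or live-chat handoff already exists.
def handle_customer_question(question):
    # Grounded answer built only from the official rate sheet (see the RAG sketch above)
    answer = answer_from_knowledge_base(question)
    # Anything the model cannot back up with the context goes to a person
    if "I don't know" in answer:
        return escalate_to_human(question)  # placeholder handoff
    return answer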
The Truth About AI and Time
Demos show you a perfect AI.
Reality: A "raw" AI hallucinates approximately 15% to 20% of the time on precise facts.
Reducing this rate to 0% for a business doesn't happen in 5 minutes. It takes careful engineering work (prompt engineering + architecture).
BUT...
It's a sine qua non for using AI seriously.
An AI that lies 1 time out of 100 is unusable in business.
A "grounded" AI is a major asset.
Additional Resources:
🛡️ Complete Guide: AI for Everyone. I explain how to configure these safeguards: anti-hallucination prompts, verification architecture, and reliability tests. 👉 Access the Complete Guide
Has Your AI Ever Lied to You? 👇