More Info — Lesson 6: Why Chatbots Make Things Up
This page goes deeper on the ideas from Lesson 6 — what hallucination is, why it happens, what kinds of mistakes are most common, and how the "confident friend" analogy helps you calibrate trust.
Why does hallucination happen?
The word "hallucination" sounds dramatic. What it means is simpler: the chatbot produced something that sounds right but isn't.
This happens because of what chatbots actually do. They're predicting the most plausible-sounding next words, not retrieving facts from a verified database. When a plausible-sounding answer and a correct answer are the same thing — which is most of the time — you get reliable results. When they diverge, you get confident-sounding errors.
The chatbot has no internal fact-checker. It doesn't "know" that it's wrong the way a person might pause and say "wait, I'm not sure about that." It just produces the next most-likely sequence of words, and if those words happen to be wrong, nothing in the system raises a flag.
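The "predicting plausible next words, with no internal fact-checker" idea can be made concrete with a toy sketch. This is purely illustrative — real chatbots use neural networks over enormous vocabularies, not a hand-written lookup table — but the key property is the same: the only criterion for the next word is "does this usually follow?", and nothing checks whether the finished sentence is true.

```python
import random

# Toy next-word table: for each word, some plausible continuations
# with weights. (Illustrative only — not how real models store this.)
next_words = {
    "the": [("study", 0.5), ("book", 0.5)],
    "study": [("found", 0.7), ("showed", 0.3)],
    "found": [("that", 1.0)],
}

def continue_text(words, steps):
    """Extend a sentence by repeatedly picking a plausible next word.

    Note what is MISSING: nothing here checks whether the resulting
    sentence is true. The only question ever asked is "which word
    usually follows the last one?" — so a fluent, confident-sounding
    falsehood raises no flag anywhere in the system.
    """
    for _ in range(steps):
        options = next_words.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options)
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_text(["the", "study"], 2))  # e.g. "the study found that"
```

The sketch always produces something fluent, and it would do so just as happily if the table's entries were factually wrong — which is the failure mode the rest of this page describes.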
Common types of mistakes
Not all errors are the same. Here are the most common categories:
Invented citations and sources
This is one of the trickiest. If you ask a chatbot to recommend books, cite research, or name studies, it may generate titles, authors, and publication details that sound completely real — but don't exist. The pattern of "author name + book title + year" is something it has seen thousands of times, so it can produce that pattern convincingly even when it's making it up.
What to do: If a source matters, look it up independently. Don't paste a chatbot's citation into a report without verifying it exists.
Wrong numbers and dates
Statistics, percentages, historical dates, prices — anything numeric is a common failure point. The chatbot may have seen conflicting figures in its training data, or it may simply produce a plausible-sounding number with no reliable basis.
What to do: For any number that matters, verify it with an authoritative source.
Confident mix-ups
Sometimes the chatbot knows the right pieces but assembles them incorrectly. It might correctly know who a person is and correctly know a date — but connect them to the wrong event. Or describe a real place accurately but assign it to the wrong city.
What to do: For anything you're going to act on or share, read critically. Just because all the words are real doesn't mean they're correctly connected.
Outdated information stated as current
Chatbots have a training cutoff — there's a date after which they have no information. But they may not always flag this. They may describe a law, policy, or situation accurately as of their training date while presenting it as if it's still current.
What to do: For anything time-sensitive (legal, medical, policy, pricing), check a current source.
The confident friend analogy
Here's a useful way to think about this.
Imagine you have a friend who is extremely well-read, well-traveled, and has an opinion on everything. You can ask them almost anything and they'll give you a thoughtful, detailed, confident answer. And most of the time, they're right — or at least in the right neighborhood.
But this friend has one quirk: they never say "I don't know." Even when they're not sure, they'll answer with the same tone as when they're certain. And if they're wrong, they're wrong with complete confidence.
Would you stop asking this friend questions? Probably not — they're genuinely useful. But you'd know: for anything important, you'd want to double-check. You'd take their answers as a great starting point, not a final word.
That's the right relationship to have with a chatbot.
What you can and can't do about it
You can reduce hallucinations but not eliminate them.
Things that help:
- Asking about well-documented, widely covered topics (the chatbot is less likely to be wrong about them)
- Staying in areas where you know enough to catch mistakes
- Asking follow-up questions: "Are you confident in that?" or "How would I verify this?"
- Requesting that it flag uncertainty: "Let me know if you're not sure about any of this." (It will try, but it can still miss things)
Things that don't fully help:
- Asking it to "be accurate" — it's already trying; it just can't always succeed
- Assuming that a longer, more detailed answer is more reliable — it isn't necessarily
- Treating confidence in tone as evidence of correctness — tone and accuracy are unrelated
The bottom line
Hallucination isn't a bug that will be fully fixed — it's a property of how these systems work. The chatbot isn't lying; it genuinely can't distinguish between a confident-sounding correct answer and a confident-sounding wrong one. Your job is to apply your own judgment about what's worth verifying. For low-stakes questions, use the answer freely. For anything important, check.
← Back to Lesson 6: It can be confidently wrong.