More Info — Lesson 7: How to Check What a Chatbot Tells You

This page goes deeper on the ideas from Lesson 7 — practical strategies for verifying chatbot output, how to decide what needs checking, and what to do when you ask the chatbot to cite its sources.


Not everything needs to be verified

Verification doesn't mean double-checking every sentence. It means asking yourself: "If this turns out to be wrong, does it matter?"

A useful way to think about it is risk level:

| Risk level | Examples | Approach |
|---|---|---|
| Low | General explanations, brainstorming, writing drafts, casual curiosity | Use freely — the stakes of a wrong answer are low |
| Medium | Information you'll repeat to someone, decisions you'll make based on it | A quick check is worth the minute it takes |
| High | Medical decisions, legal questions, financial choices, anything you're publishing | Verify with authoritative sources before acting |

This isn't paranoia — it's the same judgment you'd apply to any information you receive from a confident stranger.


Practical verification strategies

1. Search the specific claim

The fastest method. Take the key claim — a statistic, a name, a date, a fact — and search for it directly. You're not looking to disprove it; you're looking to confirm it with an independent source.

Example: the chatbot says "About 1 in 8 women will be diagnosed with breast cancer in their lifetime." Search that statistic on a site like cancer.gov or cancer.org. If you find it confirmed there, you're done.

2. Look for official or primary sources

For anything related to health, law, government benefits, or official policy, go directly to the source:

  • Medical information: .gov and .edu health sites, major hospitals, professional medical associations
  • Legal questions: your state's official government website, law library resources, or an attorney
  • Benefits and programs: the program's own website (Social Security, Medicare, your local utility company)
  • Current events: established news sources with editorial standards

The chatbot may point you toward the right organization — but visit that organization's actual website rather than relying on the chatbot's summary of what's there.

3. Cross-reference two independent sources

If two sources that have no obvious connection to each other say the same thing, that's a reasonable signal of accuracy. If they disagree, dig deeper before acting on either.

4. Apply your own knowledge

You know things. If the chatbot says something about your field, your community, your personal situation, or a topic you know well — and it sounds off — trust your instincts. You're not paranoid for noticing when something doesn't match what you know.


Asking the chatbot to show its work

You can ask the chatbot to explain where its information comes from. This won't always solve the problem, but it can give you something useful to work with.

Try:

  • "Where would I look to verify this?"
  • "What source would you point me to for this?"
  • "Can you explain how you know that?"
  • "How confident are you in that answer?"

What you'll typically get: A reputable-sounding source name or type — "the CDC," "the IRS website," "academic research on this topic." This is useful as a starting point. But it's not the same as the chatbot actually looking something up. It's suggesting where such information typically lives.

Important: Sometimes the chatbot will name a specific study, report, or article that doesn't exist. This is a known failure mode called a fabricated (or "hallucinated") citation. If the chatbot provides a title and author, search for that specific item before citing it.


When the chatbot says "I'm not sure"

Some chatbots will flag their own uncertainty — saying things like "I'm not entirely certain about this" or "you may want to verify this." When you see language like that, take it seriously and check.

But the absence of that language doesn't mean it's certain. Chatbots often produce incorrect information without any hedging at all. The uncertainty flags are helpful when present, but their absence isn't a guarantee.


A quick habit for the things that matter

Before you share, send, or make a decision based on something a chatbot told you, run through this quick check:

  1. What's the key claim I'm relying on? Identify the specific fact or piece of information that's load-bearing.
  2. Can I quickly find this on an authoritative source? Spend two minutes searching.
  3. Does it check out? If yes, proceed. If not, figure out what's right before acting.

For most chatbot use — writing help, explanations, brainstorming — this step isn't necessary. But for the times it is, this three-step habit will save you from the moments when the chatbot's confident wrong answer leads to a real-world problem.


The bottom line

You don't need to fact-check everything — just the things that matter. A quick search on a reliable source handles most verification needs. When something is high-stakes, go to a primary source rather than treating the chatbot's summary as the final word. And remember that asking the chatbot to justify itself is a useful habit, even knowing it can still be wrong.


← Back to Lesson 7: Always check what matters.