More Info — Lesson 8: Your Privacy and Chatbot Data

This page goes deeper on the ideas from Lesson 8 — how chatbot companies handle your conversations, how to check privacy settings on major tools, and what "opt out of training" actually means.


What typically happens when you type something?

When you send a message to a chatbot, here's a simplified version of what happens:

  1. Your message travels over the internet to the company's servers.
  2. The model processes your message and generates a response.
  3. The response travels back to you.

That's the part everyone knows. What varies by company and tool is what happens to your data after that exchange.


What companies may do with your conversations

Different chatbot providers have different policies, but common practices include:

Storing conversations for a period of time. Many services keep your chat history — both to make it available to you (if they offer history as a feature) and to support their own purposes like safety review and abuse detection.

Using conversations to improve future models. This is the part that surprises many people. Some companies use real user conversations as training data for future versions of the model. Your question and the chatbot's answer could, in some form, influence how the next model behaves.

Human review of some conversations. For quality and safety purposes, some companies have human reviewers who may look at a sample of conversations. This is typically done on anonymized or randomly selected chats, not targeted review of individual users — but it's worth knowing it happens.

Sharing data with third parties. Policies vary significantly here; some companies are more restrictive than others about what they share and with whom. If this matters to you, read the privacy policy.


How to check privacy settings on major chatbot tools

Each major chatbot has its own privacy controls. They're not always easy to find, but they exist. Here's where to look for the most commonly used tools:

ChatGPT (OpenAI) Go to your account settings and look for "Data Controls." You can turn off "Improve the model for everyone" to stop your conversations from being used as training data. You can also delete your chat history from there.

Gemini (Google) Go to myactivity.google.com, or find Gemini's activity settings under your Google Account. Look for "Gemini Apps Activity." You can pause activity saving and delete past conversations.

Claude (Anthropic) Anthropic's privacy controls are in your account settings. Look for options related to conversation storage and privacy. Policies and options may vary depending on whether you're using the free or paid version.

This library's AI service (if you're using the Hub) Conversations in this service are ephemeral — they are not stored after you close the session. There is no conversation history. The library's AI service is designed specifically to protect your privacy, consistent with library values around patron confidentiality. No individual conversation logs are accessible to library staff.


What does "opt out of training" mean?

Many services let you opt out of having your data used to train future models. Here's what that typically means, and what it doesn't:

What it usually means:

  • Your conversations will not be fed into the training pipeline for future model updates.
  • Your data is treated as operational only — used to generate responses, not to teach the model.

What it usually doesn't mean:

  • That your conversations are never stored. They may still be kept temporarily for operational reasons (abuse detection, service reliability).
  • That no human will ever see them. Safety reviewers may still look at flagged conversations.
  • That nothing is retained. The company may still keep data logs for legal compliance purposes, depending on jurisdiction.

The opt-out affects model training, not all data handling. If complete privacy is essential, read the company's full privacy policy and terms — or use a service like this library's Hub that is specifically built around patron privacy.


A practical guide to what to type and what not to type

This isn't about being paranoid. It's about treating the chat window the same way you'd treat a conversation in a semi-public space — like talking with someone at a coffee shop.

Fine to type:

  • General questions about any topic
  • Writing you want help drafting (you can describe the situation without including sensitive identifying details)
  • Brainstorming, planning, exploring ideas
  • Publicly available information you want explained or summarized

Be thoughtful about:

  • Specific details about other people without their knowledge
  • Business information your employer considers confidential
  • Details that could identify you in combination (full name + address + employer together, for example)

Keep out:

  • Passwords or security codes
  • Social Security numbers
  • Bank account or credit card numbers
  • Detailed private medical information linked to your name
  • Legal case details where confidentiality matters

The bottom line

What you type goes somewhere, and different tools handle it differently. This library's AI service is specifically designed to protect your privacy — sessions don't persist. For other tools you use outside the library, it's worth taking five minutes to find and review the privacy settings. "Opt out of training" is one useful control, but full privacy requires reading the fine print.


← Back to Lesson 8: Do not type your secrets.