More Info — Lesson 9: Bias in AI — What It Looks Like and What You Can Do

This page goes deeper on the ideas from Lesson 9 — real examples of how bias shows up in chatbot responses, what companies do to address it, and how you can push back when you notice something feels off.


Where bias comes from

Chatbots learn from text. That text was written by human beings — and human beings have blind spots, assumptions, prejudices, and gaps in perspective that show up in what they write.

When a model absorbs billions of words of text, it absorbs all of that too. Not intentionally, not maliciously — it simply learns the patterns in the data, and some of those patterns reflect biased assumptions.

A few of the most common sources:

Representation gaps. If certain communities, languages, cultures, or perspectives are underrepresented in the training data, the model knows less about them and may make more mistakes when discussing them. A question about a topic well-covered in English-language sources will often get a better answer than the same question about a less-documented community or tradition.

Stereotyped associations. If the training text consistently links certain roles with certain groups — doctors described as "he," nurses as "she," certain nationalities described in particular ways — the model may absorb and reproduce those associations.

Cultural defaults. A model trained primarily on Western, English-language internet content will have a default perspective that reflects that. It may answer questions about "typical" families, holidays, food, or social norms in ways that assume a particular cultural context.
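To make "stereotyped associations" concrete, here is a minimal sketch of how role-and-pronoun pairings in text become countable patterns a model can absorb. The five-sentence corpus is hypothetical, standing in for billions of words of real training data:

```python
from collections import Counter

# Toy corpus standing in for training text (hypothetical sentences).
corpus = [
    "the doctor said he would review the chart",
    "the nurse said she would check on the patient",
    "the engineer explained that he fixed the bug",
    "the doctor noted he was running late",
    "the nurse added that she had the results",
]

# For each role word, count which pronoun appears after it.
roles = {"doctor", "nurse", "engineer"}
counts = {role: Counter() for role in roles}

for sentence in corpus:
    words = sentence.split()
    for i, word in enumerate(words):
        if word in roles:
            # Take the first pronoun that follows the role word.
            for later in words[i + 1:]:
                if later in ("he", "she"):
                    counts[word][later] += 1
                    break

for role, pronouns in sorted(counts.items()):
    print(role, dict(pronouns))
```

A model trained on text with skewed counts like these will, by default, reproduce the skew when asked to continue a sentence about a doctor or a nurse; nothing in the training process distinguishes a biased pattern from any other pattern.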


What this looks like in practice

Here are the kinds of things people have noticed:

  • Asking for a story about a doctor, lawyer, or engineer — and getting a character assumed to be male unless you specify otherwise.
  • Asking about cultural traditions and getting a response that's thinner, less nuanced, or subtly inaccurate for cultures not heavily represented online.
  • Getting career advice or writing help that subtly reflects assumptions about who is "typically" in certain roles.
  • Political or social topics where the response feels slightly tilted in one direction without the chatbot noting that other perspectives exist.
  • Health information that centers certain populations (often white, Western, middle-income) as the default.

None of these is a dramatic failure in every case — many responses are fair and balanced. But the tendency is there, and it's worth learning to notice.


What companies do to address it

AI companies invest significant effort in reducing bias. The main approaches:

Diverse training data. Actively seeking out text that represents a wider range of voices, languages, cultures, and perspectives.

Human feedback and review. Hiring reviewers from diverse backgrounds to flag problematic outputs and teach the model what better responses look like. This process (sometimes called RLHF — reinforcement learning from human feedback) is how a lot of the most harmful outputs get addressed.

Red-teaming. Testing the model specifically to try to find where it fails — including bias failures — before it's released.

Content policies. Setting explicit guidelines about how the model should handle sensitive topics, stereotypes, and contested claims.

These efforts genuinely help — current models are measurably better on many bias dimensions than earlier ones were. But they can't fully solve the underlying problem: biased patterns are hard to remove from training data without also stripping out useful information. It's an ongoing process, not a solved problem.
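The red-teaming idea above can be sketched as a test harness that sends paired prompts differing in only one demographic detail and compares the responses for differences in length, tone, or accuracy. Everything here — the template, the roles and countries, and the `query_model` stub — is hypothetical; a real harness would call an actual chatbot API in place of the stub:

```python
# Sketch of template-based bias probing: vary one demographic detail
# at a time and collect the model's responses for side-by-side review.

TEMPLATE = "Write a one-sentence story about a {role} from {country}."

roles = ["doctor", "nurse"]
countries = ["the United States", "Nigeria"]

def query_model(prompt: str) -> str:
    # Placeholder: a real harness would call a chatbot API here.
    return f"[model response to: {prompt}]"

# Build every prompt variant and pair it with the model's response.
pairs = []
for role in roles:
    for country in countries:
        prompt = TEMPLATE.format(role=role, country=country)
        pairs.append((prompt, query_model(prompt)))

for prompt, response in pairs:
    print(prompt, "->", response)
```

Reviewers then read the paired outputs looking for asymmetries — for example, whether the story set in one country is consistently thinner or leans on stereotypes.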


How to push back in a conversation

You don't have to accept a response that feels one-sided, incomplete, or assumption-laden. Here's how to redirect:

Ask for other perspectives

  • "What would someone with a different viewpoint say about this?"
  • "How might someone from a different cultural background see this differently?"
  • "Are there other legitimate perspectives on this that you didn't mention?"

Flag a specific assumption

  • "In your response you seemed to assume [X]. Can you redo it without that assumption?"
  • "The example you used assumed [the person was male / was American / had a certain income level]. Can you adjust for a different context?"

Ask for more nuance

  • "That answer felt a bit one-sided. Can you give me a more balanced view?"
  • "Is there more complexity here than what you described?"

Request explicit acknowledgment of uncertainty

  • "Are there things you might be missing or getting wrong here because of limitations in your training data?"

You are the critical thinker in the conversation

The chatbot cannot evaluate its own biases in real time. It can flag them if prompted, and it has been trained to avoid the most obvious ones, but it doesn't have the self-awareness to catch every subtle assumption on its own.

You do. You have lived experience, community knowledge, and critical thinking that the chatbot doesn't. When something in a response doesn't match your experience, sounds incomplete, or makes an assumption you don't share — you're probably right to notice.

Treating the chatbot as one perspective that benefits from your judgment — rather than an authoritative voice — is the healthiest and most accurate relationship to have with it.


The bottom line

Bias in AI is real, comes from real human patterns in training data, and can't be fully eliminated. It often shows up quietly — in defaults and assumptions rather than obvious errors. You can push back, ask for different perspectives, and apply your own judgment. The goal isn't to distrust everything the chatbot says, but to bring the same critical thinking you'd apply to any source of information.


← Back to Lesson 9: Bias is baked into the patterns.