By Jacqueline Mairghread Logan

The more we regulate AI to make it safer, the more we may be quietly shaping what it’s allowed to say. This piece explores how safety, bias, and human influence shape not just what AI says, but what it leaves out.

There’s a lot of conversation right now about regulating AI: making it safer, fairer, more responsible. That all makes sense. No one really argues against reducing harm.

But there’s another side to it that’s quieter, and I think worth paying attention to.

When you regulate AI, you’re not just putting boundaries around harm. You’re also shaping what it feels allowed to say. And over time, that can start to narrow the range of the conversation.

AI works by pulling from patterns in how people talk, argue, disagree, and explain things. It’s not just giving answers; it’s reflecting how we think. So when you start filtering those patterns, you’re not just removing bad information. You’re also deciding which parts of the conversation stay and which ones don’t.

Sometimes that’s clearly a good thing. But not always.

Take something like race.

If someone asks why different groups have different outcomes—education, income, incarceration—that’s not a simple question. There are a lot of layers there. History, systems, culture, individual choices, environment. And not everyone agrees on how those pieces fit together.

An AI without tight constraints might lay out a wider range of explanations. Some of them might be uncomfortable. Some might be debated. Some might even feel wrong to certain people. But they exist in the broader conversation.

A more regulated system is likely to tighten that up. It may focus on explanations that are more widely accepted, avoid areas that could be misused, and present things in a more unified way.

That doesn’t automatically make the answer incorrect. But it does change the shape of it.

It’s kind of like sanding down a piece of wood. You can smooth it out so there are no sharp edges, no splinters, nothing that catches. It becomes clean, safe, easy to handle.

But you also lose some of the grain. The parts that made it distinct.

And the question becomes: at what point does smoothing something out start to remove the detail that actually matters?

This isn’t just about race. It shows up anywhere there’s disagreement—culture, identity, politics, anything where people don’t see things the same way.

If AI is designed to stay within what’s considered “safe,” it may start to default to what’s broadly acceptable. And over time, that can make everything sound a little more the same. A little more controlled. A little less real.

That doesn’t mean regulation is wrong. There are real risks, and ignoring them doesn’t make sense either.

But it does mean there’s a tradeoff.

You can make something safer. You can make it harder to misuse.

But you may also make it less willing—or less able—to sit in complexity.

And that’s where it gets interesting.

Because the question isn’t just what AI is allowed to say.

It’s what kind of thinking it quietly teaches people to expect.

I’ve been thinking about this a lot lately—curious how others see it.

