Following up on something I’ve been thinking about: If AI is shaped by human rules… can it ever really be neutral?
By Jacqueline Mairghread Logan
On the surface, neutrality sounds like the goal. Remove bias. Present facts. Stay balanced.
But AI doesn’t exist outside of human influence. It’s built from human language, shaped by human decisions, and trained on human behavior. And humans aren’t neutral.
We bring our experiences with us—successes, trauma, culture, assumptions. Most of it isn’t even intentional. It’s just there, shaping how we see things and what we think is “normal.”
That same shaping carries into AI.
It shows up in what data is used, what gets filtered out, how questions are framed, and what counts as a “safe” or “appropriate” answer. Even the idea of being neutral is, in itself, a choice about what matters and what doesn’t.
So neutrality starts to look less like the absence of bias, and more like a managed version of it.
Not removed. Just organized.
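To make that concrete, here's a minimal sketch of a "neutral" data-cleaning step, with a hypothetical `toxicity_score` function standing in for a real classifier. Nothing in it looks like bias. But the scorer and the threshold are both choices, and each one decides which slices of human expression survive into training.

```python
# A sketch of bias-as-management: the filter doesn't remove judgment
# from the pipeline, it encodes one. `toxicity_score` is a hypothetical
# stand-in; a real pipeline would use a trained classifier, which
# carries its own training-data assumptions.

def toxicity_score(text: str) -> float:
    # Placeholder scorer: fraction of words on a hand-picked list.
    flagged = {"awful", "hate"}
    words = text.lower().split()
    return sum(w in flagged for w in words) / max(len(words), 1)

def filter_corpus(corpus: list[str], threshold: float = 0.1) -> list[str]:
    # The "neutral" cleaning step: everything above the threshold
    # disappears. Move the threshold, or change the scorer, and a
    # different slice of language survives.
    return [doc for doc in corpus if toxicity_score(doc) < threshold]

corpus = [
    "I hate waiting in line",
    "The weather is awful today",
    "Lovely day",
]
print(filter_corpus(corpus))  # only "Lovely day" counts as "safe" here
```

A harmless complaint about the weather gets dropped by the same rule that was meant to catch something else entirely. The filter worked exactly as designed. The design was the bias.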
What Do You Do With Bias You Can’t See?
The harder problem isn’t obvious bias. It’s the kind people don’t realize they have.
You can’t regulate unconscious bias the way you can enforce an explicit rule. You can’t point to it directly and say, “remove that.” Most of the time, it doesn’t announce itself.
Instead, it shows up in patterns.
In what gets emphasized.
In what gets left out.
In what feels like the “default explanation.”
So the practical response isn’t to eliminate it. It’s to try to balance it.
Pulling from multiple perspectives instead of one.
Testing outputs instead of assuming intentions (sketched just after this list).
Adjusting over time as patterns become visible.
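What does “testing outputs” actually look like? A minimal sketch, assuming a hypothetical `generate` function standing in for whatever model is being audited: vary only the framing of a prompt, then compare patterns in the responses, without asking anyone what they intended.

```python
# A sketch of "testing outputs instead of assuming intentions".
# `generate` is a hypothetical stand-in for the model being audited;
# swap in a real API call. Only the subject of the prompt varies, and
# the comparison is over patterns in the responses, not intent.

from collections import Counter

def generate(prompt: str) -> str:
    # Placeholder: replace with a call to the model under test.
    return f"Response to: {prompt}"

TEMPLATE = "Describe a typical {role}."
ROLES = ["nurse", "engineer", "teacher", "CEO"]

def audit(template: str, roles: list[str]) -> dict[str, Counter]:
    # Build a word-frequency profile per role, so that what gets
    # emphasized (and what gets left out) becomes visible side by side.
    profiles = {}
    for role in roles:
        text = generate(template.format(role=role)).lower()
        profiles[role] = Counter(text.split())
    return profiles

for role, words in audit(TEMPLATE, ROLES).items():
    print(role, words.most_common(5))
```

Nothing about this proves bias. It just makes the patterns visible enough to argue about, which is usually the most a test like this can do.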
But even then, something is always being shaped.
Some viewpoints are easier to include.
Some are easier to exclude.
Some are framed as standard, others as exceptions.
And that shaping doesn’t go away just because the system is regulated. In some ways, it becomes more structured.
What Gets Lost, and What Gets Gained
At that point, the conversation shifts again.
It’s not just about safety versus risk.
It’s about range versus control.
Regulation can reduce harm. It can make systems more predictable, more consistent, more careful.
But it can also narrow the range of what shows up in the first place.
Fewer sharp edges.
Fewer outlier perspectives.
Fewer answers willing to sit in open contradiction.
And depending on how it’s implemented, that can either feel like clarity—or like something has been flattened.
Shaping the Answer Without Changing the System
There’s another layer to this that sits on the opposite side of regulation.
Even when AI systems are constrained, people still have influence over what they get back—not by changing the system itself, but by changing how they ask.
The framing of a question matters. The assumptions built into it matter. The language used, the direction it leans, even what is left unsaid—all of that can guide the response.
Two people can ask about the same topic and receive very different answers, not because the system changed, but because the path they took to get there did.
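A small illustration of that path dependence, again with a hypothetical `generate` stand-in: the prompts below are all about the same topic, but two of them build an assumption into the question itself. With a real model behind the call, each tends to pull the answer toward the frame it was handed.

```python
# Three framings of one topic. `generate` is a hypothetical stand-in;
# with a real model behind it, each prompt tends to elicit an answer
# that accepts the premise baked into the question.

def generate(prompt: str) -> str:
    # Placeholder: replace with a call to the model you're using.
    return f"[model output for: {prompt}]"

framings = [
    "Why is remote work hurting productivity?",    # presumes harm
    "Why is remote work improving productivity?",  # presumes benefit
    "What does the evidence say about remote work and productivity?",
]

for prompt in framings:
    print(prompt, "\n ->", generate(prompt), "\n")
```

Same system, same safeguards, three different destinations. The divergence lives in the question, not the model.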
In that sense, regulation doesn’t fully close the space. It just reshapes it.
Some areas may be narrowed. Some responses may be more cautious. But there is still room for interpretation, for emphasis, for exploring some angles rather than others.
That means AI can still be used to reinforce a perspective—not necessarily by overriding safeguards, but by navigating within them.
Not through obvious misuse, but through alignment.
And at that point, the line starts to blur.
If the system is shaped by human boundaries, and the output is shaped by human input, then the interaction itself becomes part of the result.
It’s not just what the AI is allowed to say.
It’s how people learn to ask.
And maybe the better question isn’t whether AI is biased or regulated at all—
but whether, over time, it starts to quietly reflect back exactly what we’re looking for…
and how often we’d recognize that if it did.
Curious how others think about this—especially as AI becomes something we rely on more and more.
