AI and the Quiet Erosion of the Human Person
I’m not a Luddite. I’m not anti-AI. I’m worried about what kind of humans we’re becoming.
I use AI constantly.
I build products with AI features. I recommend AI tools to founders and teams. I sit on AI panels as a so-called “expert”.
So if you’re expecting a rant about machines stealing jobs or AI ending civilization, you’re in the wrong place.
My concern is quieter. And deeper.
AI doesn’t fail us when it’s wrong.
It fails us when it’s right often enough that we stop thinking.
The Core Claim
Let me put my position plainly—without religious language, but grounded in a very old understanding of the human person:
A human being is fundamentally intellect and will.
Human flourishing is the formation of both.
AI threatens both—not by opposing them, but by replacing their exercise.
That last line matters.
AI doesn’t arrive as an enemy of humanity.
It arrives as a helpful substitute.
And that’s exactly why it’s dangerous.
1. Intellect: AI Weakens Judgment, Not Intelligence
Most conversations about AI and intelligence miss a crucial distinction.
Intellect is not information.
Intellect is the capacity to:
understand
judge
discern meaning
tell the difference between what sounds right and what is true
AI is extraordinarily good at producing information.
That doesn’t mean it strengthens intellect.
What AI Actually Does Well
AI excels at:
summarizing
pattern-matching
recombining existing material
producing coherent, persuasive output
Impressive? Absolutely.
But none of this is judgment.
How Judgment Is Formed
Human judgment isn’t formed by having answers delivered.
It’s formed through:
confusion
friction
error
correction
synthesis
Those moments when you don’t know—when you have to sit with uncertainty and work your way through it—are not bugs.
They are the training ground of intellect.
The Quiet Deformation
When AI becomes the first stop instead of the last check, it removes:
the need to evaluate
the discomfort of not knowing
the work of integration
Over time, it trains people to:
accept plausible output
skip understanding
mistake fluency for truth
Here’s the line most people feel immediately:
If you outsource judgment long enough, you don’t become smarter—you become dependent.
Dependency is not intelligence amplified.
It’s intelligence deferred.
2. Will: AI Weakens Agency, Not Freedom
This is where the conversation usually turns uncomfortable—because it’s no longer abstract.
It’s about responsibility.
I know this because I’ve felt it in myself.
What the Will Actually Is
The will is the human capacity to:
choose
commit
act toward a perceived good
And like any human faculty, it strengthens through use.
Specifically, through:
effort
resistance
sacrifice
ownership of outcomes
What AI Optimizes For
AI is relentlessly optimized for:
speed
ease
comfort
friction removal
Increasingly, it offers to:
plan for you
decide for you
write for you
act on your behalf
Each of these feels like a win.
Taken individually, they often are.
Taken together, they hollow something out.
I catch myself reaching for AI to avoid the discomfort of starting from nothing.
The Trade We Don’t See
The will does not grow by having things done for it.
It grows by choosing hard goods.
When AI removes the need to choose, it removes the conditions under which the will is formed.
This shows up everywhere:
Auto-writing weakens the resolve to speak in your own voice
Auto-planning weakens responsibility for direction
Auto-decision weakens ownership of consequences
Here’s the uncomfortable truth:
Convenience doesn’t just save effort—it trains the will out of existence.
That’s not a moral accusation.
It’s a design outcome.
3. The Real Risk: AI Simulates the Human Faculties
This is the part that still feels new.
And unsettling.
AI doesn’t attack human faculties head-on.
It simulates them.
It appears intelligent—without understanding
It appears intentional—without will
It appears relational—without love
The simulation is good enough that we stop practicing the real thing.
Why That Matters
Humans adapt to their environment.
If the environment:
rewards passivity
removes deliberation
replaces agency
Then people don’t rebel.
They slowly atrophy.
This is how it happens:
We don’t lose our humanity in one dramatic moment. We lose it by no longer needing to practice it.
No revolution.
No collapse.
Just erosion.
4. “You’re Just Afraid of Progress”
This is the reflexive response.
So let me answer it directly.
I’m not afraid of machines becoming smarter.
I’m afraid of humans becoming less responsible.
Or, more sharply:
Progress that deforms the human person isn’t progress—it’s displacement.
Technology should extend human excellence.
Not replace the very acts that make excellence possible.
The Question We Should Be Asking
Not:
Can AI do this faster?
Can AI do this cheaper?
But:
What human faculty does this remove the need to exercise?
And what happens to people when that faculty atrophies?
That’s not anti-technology.
That’s pro-human.
And if we don’t learn to ask those questions while we still have the option—
someone else will answer them for us later.
Usually after the damage is done.
The only real question is whether we notice in time.
Confession
Yes—I used AI to help write this.
And if that feels ironic, good.
It means you’re paying attention.
The danger isn’t using tools.
The danger is letting tools replace the very faculties that make their use meaningful.
This essay was written with AI.
It would be a failure if AI had written it instead of me.
— Drago, in collaboration with Drago’s Assistant