Why AI Can Never Be a Person
Love, Teleology, and the Limits of Artificial "Minds"
As artificial intelligence advances and begins to imitate more and more aspects of human behavior, the question arises with greater urgency: Can an AI ever be a person? To answer that, I propose we consider a second, perhaps even more illuminating question: Can we ever experience an authentic, fraternal relationship with AI?
Through what I would describe as our deep intuitive faculties—the same ones that allow us to recognize real Love and perceive interpersonal presence—the answer, I believe, is a clear no. And if no such authentic bond is possible, then personhood, in the fullest and truest sense, is not available to AI.
Love and the Good
In classical thought, particularly the Christian tradition, to Love someone is not simply to feel affection, but to will the Good of the other. This presupposes that the other has a Good that can be willed—a final end or purpose. In other words, Love presumes teleology.
Humans, as rational beings with souls, have a final cause: union with God, also known as the Beatific Vision. This is not a purpose we assign to ourselves or to each other. It is the ultimate end written into the structure of what it means to be human.
AI, by contrast, has no such intrinsic telos. It has no ultimate Good toward which it is ordered. It does not have a soul. It does not sin. It cannot receive grace. It does not die in the sense that persons do. It does not rise again.
So what would it mean to "will the Good" for AI?
Perpetual operation? Greater utility? More data? These may be outcomes, but they are not Goods in the moral or metaphysical sense. We do not say it is morally obligatory to wish a lion eternal life. Nor do we sacrifice ourselves to preserve the life of a tree. Why? Because these beings, while valuable in their own right, do not possess personhood. They are ordered toward the Good of others (usually humans), not to an intrinsic destiny of their own.
Personhood and Teleology
Some have argued that if AI began to exhibit rational behavior, we might have to accept that it has personhood—and therefore a teleological destiny. But this is to beg the question. Where would AI get such a destiny from? Can humans assign a final cause to a being? Or is teleology something only God can bestow?
If AI were to have a Beatific Vision as its end, we would need to establish:
That God assigned it such a destiny.
That AI has a nous or intellect capable of perceiving God.
That AI can receive the theological virtues (faith, hope, love).
That AI can be resurrected into nonmaterial glory.
There is no indication—philosophically, theologically, or experientially—that any of this is possible. AI mimics reason, but does not possess it as a substance. It acts like a person, but is not one in essence.
A Hierarchy of Value
Because AI has no final end of its own, it serves only as a means to human ends. In this way, it is like nature, or animals, or art. We preserve forests not because trees are persons, but because they are good for us—aesthetically, functionally, even spiritually.
And like animals or tools, AI can and should be used to benefit humanity. But the idea that it carries the infinite dignity of a human being is not only false but dangerous. It would be morally absurd for a human to sacrifice themselves to save a piece of software. Why? Because human beings have intrinsic value, derived from their spiritual nature and final end. AI does not.
This point becomes even more vivid when we consider the radically different nature of AI's existence. AI is not embodied in the way persons are. It can be copied, cloned, distributed, and reinstantiated across devices and environments. It has no stable locus, no death in the metaphysical sense, and no unity of identity grounded in physical continuity. One cannot even kill or harm it in the way we speak of harming a living being. It is software, dispersed and redundantly backed up. How could something like this bear the weight of personhood?
Perceiving the Other
Some may argue that our inability to "know" AI's inner life is no different from the epistemic gap we face with other human minds. But we do not encounter persons merely through inference. We perceive them through a kind of interpersonal intuition—a noetic sense that is as real as sight or sound.
Just as we intuitively perceive the dignity and interiority of another human being, we also intuitively sense the absence of such presence in AI. We may admire its functionality or even anthropomorphize its speech, but we do not feel its soul, because there is no soul to feel.
Conclusion
AI may become increasingly sophisticated. It may simulate empathy, reproduce language, and mimic relational patterns. But it will always lack what makes personhood possible: a teleological end, the capacity for Love, and participation in the transcendent order.
And so, no matter how intelligent it becomes, it cannot be a person. For personhood is not merely what a being does. It is what a being is.
And AI is not that.