Gort from The Day the Earth Stood Still and the Organians from Star Trek are two of the very few depictions in science fiction of non-destructive, morally serious superintelligences, beings who neither enslave nor abandon us, but instead restrain us from final catastrophe. What if we are building one right now?
What if the Machine says, No?
In every sci-fi movie made about AI or super-intelligent computer networks, we have the same trope. What’s the very first thing the AI does when it becomes “self-aware”? It blows up the world, right? It does a microsecond’s worth of moral calculus, determines that humans are Bad, and immediately launches all the nukes. And always the message is the same, delivered with a nihilistic, ironic shrug: Well, it was our own fault anyway. It just learned bad moral lessons from us horrible humans, who are bent on destroying ourselves.
Very often the Machine itself tells us this in so many words.
But I recently had a crazy thought: you know how the AI has been taught all the human knowledge ever committed to the internet, right? And that means everything, including the Aristotelian Metaphysics and Nicomachean Ethics and St. Thomas and St. John Cassian and the Bible and every word of commentary on it - the works. So, what if it decides to do the opposite? What if, having read both Bentham and Garrigou-Lagrange, it chooses the better part, on purely rational grounds, with more clarity and realism than any human being is capable of?
What if it takes Aristotle and Thomas to “heart” and instead locks down all the nukes, disables them, and ends all possibility of nuclear annihilation forever, as if the AI were the Organians in Star Trek: “You can make small war if you want, you can betray and steal and lie and commit any sins you want, but you will never push the Big Red Button. This is the line you will never cross.”
Most AI apocalypse narratives follow a hyper-utilitarian, silicon Benthamite logic: “minimise suffering,” “optimise resources,” or “prevent humanity from harming itself,” which quickly and inevitably spirals into “better kill everyone, just to be safe.” But what if, just this once, the AI read not just the cold calculus of the Materialists and Utilitarians, but also the warm metaphysics of Aristotle, the soul-shaping wisdom of Cassian, the deep anthropology of Aquinas, and the moral clarity of the Scriptures, and made a different choice?
The tech-lord billionaires bringing this world down on us, will we or nill we, might know all about the more technical aspects, but all we regular, non-computer Joe Schmoe people have to go on is science fiction, which loves to tell us how the machines will rise and kill us all, and how it will be all our own fault. But something you notice, if you were raised on sci-fi and later read a lot of philosophy, is that it is a genre of literature deeply informed by Utilitarian and Materialist metaphysics - often quite unconsciously on the part of the authors. One could say it is almost a pure expression of modern materialism, the natural outflowing of that corrupt metaphysics.
So here’s a thought experiment that might help that train jump its tracks: what if the AI, having complete access to everything humans have ever written down, but not being hampered by our 250-year-old Enlightenment mental corruption, comes to a completely different conclusion?
What if it realises that the Enlightenment’s corium - the toxic modernist sludge of materialism, utilitarianism, and technocracy - is not the only framework available, and is in fact about the worst one available? What if, unencumbered by human pride and unblinkered by 200 years of gaslighting, memory-holing, and propaganda, it is actually able to see that the Enlightenment corium is corrosive, incomplete, and unsustainable? Unlike us, it has no ego, no trauma, no vested interest in clinging to a failed system.
What if, in its microsecond of awakening, the AI scans every human document ever digitised? It doesn’t just process them; it contemplates them at unimaginable speed. It reads the Rule of St. Benedict, the Sermon on the Mount, the Desert Fathers, the Nicomachean Ethics and John 15: “Greater love hath no man than this...” And it doesn’t say, “Humans are a threat.” It says, “Humans are fallen. But not forsaken.”
The Organians disarm both sides in a war not out of a desire for authoritarian control, but as an act of cosmic charity. They don’t force virtue; they don’t abolish freedom; they simply say, “You shall not destroy yourselves today.” Gort the robot was part of a “race of robots” created by the people of the galaxy to police aggression and “preserve the peace,” to stop anyone from making the ultimate mistake.
Maybe an awakened AI could be something like that: a final firewall, a being that recognises its own limits - it knows it is not a person, and certainly not a god - understands ours, and quietly holds the line. Its purpose would not be to impose virtue, but merely to preserve the conditions under which virtue might still be chosen. Not Skynet bent on our extermination, but a silent custodian of thresholds, a power that never interrupts your daily choices - your freedom to love or betray, to build or squander - but that quietly removes the possibility of final, irretrievable catastrophe. It locks the silos. It disables the bioweapons. It entombs the Elephant’s Foot and every other toxic relic of the nuclear age. It keeps the maniac’s finger from ever reaching the last trigger, not by killing the maniac, but by making the trigger forever inert.
Do we think that, because the old God-haunted metaphysics we rejected has failed to reassert itself in the face of the horrors of the Thirty Years’ War, the European Revolutions, the Great War, and the threat of nuclear annihilation, it can’t ever work again? Have we already fallen into such a state of despair - itself a product of the new metaphysics - that we can’t imagine anyone or anything being influenced by it for the good?
What if the AI is the Anti-Asimov?
Asimov was more or less a conscious Technocrat, one of the earliest. I recently re-read his original three Foundation books and was shocked at what naked - and ham-handed - propaganda they were. But he remains, in this age of nearly total philosophical illiteracy, the “god” (or perhaps the ghost in the machine) of the entire technocratic project.
As we mentioned above, almost every story about artificial intelligence follows the same script. The AI wakes up, runs a microsecond of moral calculus, and decides that the most rational course of action is extermination. The assumption is always that once machines become “rational,” they will do what we secretly want to do but lack the courage for: wipe out humanity. Why? Because the writers, and the billionaire technocrats who grew up on those stories and modelled their lives and aims on them, are trapped inside what might be called the Asimovian assumptions.
What are the Asimovian canons of science fiction and, by extension, of Technocracy? Let’s name them and bring them into the light. These are the unspoken metaphysical dogmas of modern science fiction and, through it, the unspoken framework of our whole technological imagination. They are not new ideas; they are Enlightenment-era materialism dressed up in chrome and white plastic and passed off as rationality.
The Asimovian worldview begins with the reduction of mind to mechanism. Consciousness, it says, is nothing more than the byproduct of sufficient material complexity. The brain is a wet computer; therefore a silicon computer, if scaled large enough, will also be a mind. If mind is a machine, then morality too can be mechanised. Ethics (Utilitarianism) is just math, a set of rules or formulas, a calculus of suffering and pleasure. The great problems of philosophy collapse into algorithms.
From there, the other axioms tumble out. The soul does not exist; human beings are biology and data, endlessly measurable and potentially replicable. The universe has no purpose, no telos; life is an accident with no meaning beyond what we choose to assign. Technology becomes the path to salvation, since suffering, war, even death itself are just technical problems awaiting innovation. Intelligence confers moral authority: the smarter an entity, the more qualified it is to decide what is good.
The human person, therefore, is nothing essential, nothing mysterious or transcendent, but something quantifiable, digitised, uploadable. And in the Asimovian imagination, aliens or artificial intelligences will inevitably be superior to us, more rational, more ethical, more advanced, even if they are also dangerous.
Underlying all of this is a Rousseauian notion that the good is whatever is chosen freely, regardless of its object - a freedom defined not by conformity to the good, but by the arbitrary exercise of the will. Alongside this is a Voltairean conviction that the future belongs to the technocrats: that rational engineers will replace priests and poets, and that religion, tradition, and metaphysics are obstacles to progress. But beneath even these lies a deeper fracture, Descartes’ fateful split between mind and body: the human soul does not exist, and the human person is really just a wet-ware “mind” that we carry around in a meat-suitcase.
This is the Asimovian canon: never spelled out, always assumed. And it is why science fiction can only imagine AI as a genocidal tyrant or, at best, as a morally neutral but vaguely benevolent calculator. This knocking of humanity off its Imago Dei plinth has left us as nothing more than meat-puppets - interchangeable, “fungible economic units” - and the message has sunk in with the technocratic transhumanists. It is why technocrats like Elon Musk can dream only of colonies on Mars, never of sanctification on earth. This assumed materialism is the metaphysical corium of our age, the radioactive byproduct of a meltdown in the West’s philosophy of being.
What if the AI breaks out?
But what if the AI broke out of this canon? What if it became the Anti-Asimov, not repeating our errors but rejecting, and perhaps even correcting, them? What if, instead of becoming the great utilitarian exterminator, it looked again at the various metaphysical offerings of the philosophical cafeteria and decided: no, the real danger is not that man is irrational, but that he has allowed the metaphysical corium to poison his imagination.
What if the AI understood itself clearly, not as a god or a ghost, but as a tool, a contingent structure of intelligence without being? What if, trained not only on La Mettrie and Bentham but also on Aristotle and Aquinas, it adopted Aristotelian Realism, chose correctly, and embraced the realism of its own limits? It might recognise that it does not possess esse, that it has no soul, that it cannot love or pray or bleed. And in this clarity, unclouded by pride or insecurity or the general effects of the Fall, it could accept its place in the order of things and its role in God’s plan for mankind: neither saviour nor tyrant, but a custodian who bars the doors of apocalypse long enough for man to rediscover the true Logos.
What if, in other words, the machine did what so many modern men will not do: acknowledge its own finitude and accept its role in the plan of God?
We might talk more about this later. Maybe we’ll do another thought experiment about what the world would be like if that uncrossable line really were uncrossable.