Get Ready for AI Religions: Sam Altman, Transhumanism and The Merge
How AI is changing spirituality, and what it means for our collective fate
We are on the cusp of creating an Artificial Superintelligence, but whether it will be our salvation or our undoing is less certain than it’s ever been. This is due in part to the company at the forefront of the AI revolution, OpenAI, which is having a very chaotic year.
In May 2024 it disbanded its ‘Superalignment Team’, which was responsible for tackling what’s known as ‘the alignment problem’ of AI: in short, making sure AI acts in accordance with human values and doesn’t kill us all.
That was just the start. On September 17, whistleblower William Saunders testified to the US Senate that he had “lost faith” in OpenAI’s capacity to make responsible decisions around Artificial Superintelligence (ASI), the term for an AI system that is more intelligent than humans and has advanced cognitive abilities. In his testimony, Saunders told senators that witnessing the power of OpenAI’s new o1 model led him to believe we could see ASI in as little as three years.
A few days later, OpenAI’s Chief Technology Officer Mira Murati announced her departure, along with two more executives. The next day, the organisation revealed that it would be restructuring to become a for-profit enterprise. A few days after that, California Governor Gavin Newsom vetoed an AI safety bill, SB 1047, that would have required large companies to build safeguards into their AI models and publicly disclose safety protocols.
The guardrails around the emergence of an ASI, if they ever existed, are being swept away. Big Tech’s relentless hunger to birth a new consciousness seems insatiable. In a recent blog post, OpenAI’s co-founder and CEO Sam Altman gives a glowing, utopian treatise predicting an imminent age of “unimaginable prosperity” during which ASI will help us in “fixing the climate, establishing a space colony, and the discovery of all of physics.” The post is alarming because Altman doesn’t sound like he’s talking about a technology. He sounds like he’s talking about a new God.
The drive to build a machine greater than we are is, at its heart, a yearning for the divine. In a world suffering from a meaning crisis, AI is poised to fill the God-shaped hole. A digital messiah created not by a deity, but by human ingenuity. For all its technological marvel, it is wreathed in religious mystery. AI models are now so advanced that their designers don’t know exactly how they work. Instead, we are asked to have faith that they do. Using large language models like ChatGPT already has a magical and otherworldly quality, not just because we are encountering an intelligence beyond our understanding, but because it appears to be speaking back to us.
AI is birthing new religions as you read this, and this is just the beginning. It is challenging who we think we are, what we think consciousness is, and whether we believe we can transcend our bodies and live forever. The strange spiritual landscape emerging around this technology may hold clues about how to build an ASI that doesn’t wipe us out, but to find them we have to traverse an uncanny valley where nothing is quite as it appears.
A Tale of Two Worlds
To map the strange merging of AI and religion, it’s important first to recognise that technology and religion have always been bound together.
As Joe Henrich points out in The WEIRDest People in the World, you can map the expansion of Protestantism in Europe onto the spread of the printing press and the resulting increase in literacy. Similarly, the internet has catalysed an explosion of ‘spiritual but not religious’ belief systems held by what author Tara Isabella Burton calls the ‘Remixed’: those who pick and choose different aspects of spirituality to create a personal belief system that mimics the postmodern fragmentation of the internet.
However, the religious revolution we’re approaching with ASI may be different to what’s come before. To see why, we have to understand it in the context of the history of spirituality. As John Vervaeke points out in Awakening from the Meaning Crisis, the last major shift in our collective metaphysical outlook was the Axial Age that followed the Bronze Age Collapse in around 1200 BC.
As the world changed, we began rethinking our mythologies. Instead of the ‘one world mythology’ that had been prevalent in many cultures, people began to see reality as dualistic. Buddhists distinguished between a realm of suffering, dukkha, and a realm of freedom from suffering, nirvana.
Greek philosophers like Parmenides saw the everyday world as illusory, with another ‘real’ world beyond it, while Jews worshipped a God of progress that could lead people through cycles of history to a new future. As Vervaeke puts it, during the Axial Age our gods became increasingly “transcendent” or other-worldly, leading to what Charles Taylor calls “The Great Disembedding.” We removed our sense of the sacred from the world itself, and placed it ‘somewhere beyond’.
The Axial Age is now coming to an end. As Vervaeke and others have argued, we are in a long process of re-embedding ourselves in this world. We have little choice: the metacrisis is forcing us to confront the realities of being disconnected from nature and from any shared sense of coherence and meaning. The question is, will AI help us in our quest, or remove us from the world for good? Ultimately, it may do both.
Cloudy Utopias
To see why, we have to look at the beliefs that are growing around AI and recognise where they have their roots in Axial Age thinking, and where they might take us somewhere new.
The most prominent and influential AI religion is Transhumanism. Transhumanists believe we are heading toward a technological ‘Singularity’: a point at which machine intelligence will become so vast, so God-like, that we inevitably combine our own bodies and minds with it to reach a new stage of evolution. In short, technology will take us to a realm beyond the world we can see now to somewhere better.
In 2017, just two years after OpenAI was founded as a non-profit, Sam Altman wrote a personal blog post called The Merge. In it, he explained that he wasn’t a fan of the term ‘Singularity’ because it implies a single moment in time, rather than a gradual process in which we merge with AI.
“The merge can take a lot of forms: We could plug electrodes into our brains, or we could all just become really close friends with a chatbot. But I think a merge is probably our best-case scenario…. My guess is that we can either be the biological bootloader for digital intelligence and then fade into an evolutionary tree branch, or we can figure out what a successful merge looks like.”
Many Transhumanists believe that by modifying and changing the human body with technology we will evolve into a new species. For some, this has very practical implications, for example using technology to help blind people see again. Altman may fall into this category. “I don't identify as a Transhumanist,” he said on a Wall Street Journal podcast in early 2024. “I think humans are really great. But I do agree that extending health span and augmenting humans is a very good thing to do.”
However, what he writes in ‘The Merge’ suggests a more extreme position, one informed by what I call Apocalyptic Transhumanism. In this belief system, human consciousness will ultimately merge with machines so that human beings no longer exist in their current form, but find eternal life in a virtual world, or transcend to a new level of consciousness. Some believe we will upload our consciousness onto servers or into android bodies, where we can be freed from constraints of biology and mortality to enjoy a vastly increased level of intelligence.
Despite these heavenly fantasies, most Apocalyptic Transhumanists wouldn’t consider themselves religious, in the same way most New Atheists wouldn’t. Transhumanism tends to draw its assumptions from Humanism, a philosophy that celebrates reason, compassion, and tolerance while rejecting the notion that there is anything supernatural in the world that can’t be explained by science. Humanists view humans as the starting point for moral and philosophical inquiry, and believe that traditional religion isn’t necessary for an ethical, meaningful life.
However, for all Transhumanism’s attempts to be a scientific philosophy, its fantasies about humanity’s future are rooted deeply in Judeo-Christian eschatology, the branch of theology concerned with death, judgment and the fate of our collective souls.
Transhumanism and Christianity
This hidden connection provides a fascinating lens on technologists’ fears that ASI will destroy us, and their fantasies that it will save us. In Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality, religious studies professor Robert M. Geraci argues that:
“Apocalyptic AI is a movement in popular science books that integrates the religious categories of Jewish and Christian apocalyptic traditions with scientific predictions based upon current technological developments. Ultimately, the promises of Apocalyptic AI are almost identical to those of Jewish and Christian apocalyptic traditions.”
Christian eschatology predicts a revelatory time in history in which the sins of the world are purged away. Some traditions emphasise a rapture, a final moment of judgment in which the worthy ascend to a new world, leaving behind their physical bodies, while the sinners are left trapped in the agonies of the flesh.
While Christianity teaches that spiritual salvation and eternal life beyond the body can be found in Christ, Apocalyptic Transhumanism removes the supernatural element and replaces it with human ingenuity. Humans become the source of their own salvation, which is mediated not by God manifesting on earth as a man, but through machines that we birth. As Christian writer Aidan Charron explains:
“The fallacy, according to Christianity, of transhumanism is this: man is not responsible or capable of his own transcendence.”
This difference lies at the core of the new religious landscape we’re walking into. On one side are those who believe there is a greater intelligence beyond us that holds the hope for transcendence. On the other, those who believe there is nothing beyond what we can see and touch, and that we must be the source of our own salvation.
Beyond the Body
The main battleground in this debate is the human body. For some traditional religions, our bodies are sacred, created by the divine. For Transhumanists, our bodies are organic machines that we can engineer to perfection. However, even here the grammar of Christian thought isn’t far away.
As philosophy professor and AI ethics researcher Mark Coeckelbergh argues in his book Self-Improvement: Technologies of the Soul in the Age of Artificial Intelligence, Christian approaches to purifying the body have re-emerged in society through our technology as an obsession with self-improvement; increasingly, this self-improvement means overcoming the limitations of the body with technology.
At its most extreme, it manifests as an obsession with overcoming death itself. The most famous Transhumanist is Ray Kurzweil, an author and one of Google’s leading AI researchers, who since his 2005 book The Singularity is Near has been predicting that our consciousness will eventually transcend what he calls “the frailties of our biology.” In his 2024 book The Singularity is Nearer, Kurzweil predicts the following:
“During the 2030s, self-improving AI and maturing nanotechnology will unite humans and our machine creations as never before… If we can meet the scientific, ethical, social, and political challenges posed by these advances, by 2045 we will transform life on earth profoundly for the better.”
Kurzweil believes that the singularity will allow us to overcome death as we know it and live indefinitely in a new digital form, perhaps eventually manifesting as android ‘replicants’.
“In the early 2040s, nanobots will be able to go into a living person’s brain and make a copy of all the data that forms the memories and personality of the original person: You 2. Such an entity would be able to pass a person-specific Turing test and convince someone who knew the individual that it really is that person …if you believe that identity is fundamentally about information like memories and personality, this would indeed be the same person.”
Kurzweil takes pains in the book to point out that we don’t really know what consciousness is, and so what you identify as a conscious entity is ultimately a personal philosophical choice. However, he has made his own views on this question clear throughout his career. In a 2011 interview with ABC News, Kurzweil revealed that he had collected letters, photographs and other ephemera of his late father to create an AI avatar he could communicate with.
"You can certainly argue that, philosophically, that is not your father, that it is a replica,” he explained, “but I can actually make a strong case that it would be more like my father than my father would be, were he to live."
This sentiment is revealing. Kurzweil views the replica of his father, which is made entirely of data, as more real than his father was when he was a living, breathing person. The data that makes up his father is his father, while the qualitative mystery of his father’s conscious experience is secondary. This is human beings seen from the perspective of the machine: as an arrangement of data.
Kurzweil isn’t alone in thinking that AI can revive the dead. Eugenia Kuyda founded the AI app Replika after a close friend died, creating a digital avatar of him so she could keep talking to him. While understandable and heartbreaking, examples like this point to the anti-human aspects of the way AI is influencing domains previously reserved for religion. Kuyda and Kurzweil aren’t actually speaking to their dead loved ones.
They are speaking to themselves. Reducing human beings to mere points of data and behaviours removes any possibility that our consciousness and the essence of who we are is inherently mysterious and unknowable. In doing so, it provides the illusion of comfort at the price of the sacred.
The Real Alignment Problem
Computers, no matter how advanced, are not designed to help us understand the nature of existence. They are designed to solve problems and find patterns. The mysterious and unknowable aspect of who we are, and of what it is to be alive, lies at the heart of our difficulty trying to build AI that aligns to our values. If we can’t decide whether we’re spiritual beings or meat machines with illusory consciousness, how can we decide what truly matters, and build machines that adhere to those values?
As author and researcher Brian Christian points out in The Alignment Problem, most AI has been trained assuming that human beings are ‘rational utility actualisers’ and ignores our emotions and contradictions. As he put it in a 2024 interview, it’s “as though the computer sciences community wrote a giant IOU: ‘insert actual model of human decision making here’.”
Today, technologists around the world are frantically trying to pay that IOU. However, too many are attempting to solve the alignment problem by training AI on humanistic values that make humanity the most important thing in existence. Or, they are hoping that if we imbue ASI systems with behavioural values drawn from the Axial Age religions and other wisdom traditions, they will make ethical decisions.
What is rarely answered, or even engaged with, is the question of whether we are able to solve the AI alignment problem before we solve our own alignment problem. Collectively, we are misaligned to nature. Misaligned to one another. Misaligned to truth. Misaligned to meaning and purpose. We have made the assumption that we should be aligning AI to us, when perhaps we should be figuring out how to align AI to a force greater than us. Not to birth it into our world, but into the world.
If we can figure that out, we may also figure out how to use AI to help re-embed us into reality. Not to solve our problems, but to open up our minds so that we can begin to solve them ourselves.
Technology as Art
Ultimately, this will come down to the kind of relationship we have with our technology. We can see it as a god-like force that we are likely going to lose control of, as many AI Doomers and AI Longtermists do, or we can view it as part of a psycho-spiritual process we’re engaged in. If we choose the latter, the most pressing question becomes ‘what does right-relationship to AI look like?’
We can look to the German philosopher Heidegger for some insight. In his essay ‘The Question Concerning Technology’, Heidegger argues that technology is an act of ‘enframing’, which means turning nature into a resource to be used. In this way, we treat nature as a ‘standing reserve’ to be extracted later, as when a hydroelectric dam turns a river into an energy supply. If we extend this idea, we eventually turn one another into resources to be used. Human beings become cogs in a machine. Disempowered, disconnected, dis-embedded.
Almost everyone wants to avoid a future in which humans are turned into commodities of an all-powerful ASI, or of the for-profit companies or governments controlling it. Heidegger provides a useful frame to help us do that. Instead of seeing technology as a tool for cold extraction, we can see it as a form of art, returning to the original sense of the word techne, meaning a craft, technique or skill. When technology is viewed in this way, Heidegger points out, it becomes more than just a ‘thing’. It becomes a relationship, an aesthetic, opening up a more ethical way of relating to the natural world.
To achieve this today, we need to combine art, science and spirituality. AI design and development needs to move beyond the labs and spread into more diverse cultural and professional domains, allowing multiple disciplines, ways of seeing and ways of being to influence the design of ASI systems. It means moving beyond the limited idea of ‘building values into AI’, and instead learning how to live out our deepest questions through AI. To co-evolve, experiment, and play.
Collaborations between artists and scientists are already opening these possibilities. Moon Ribas, a Catalan avant-garde artist and cyborg activist, made headlines for developing Seismic Sense, a seismic sensor that allows her to perceive earthquakes by generating real-time vibrations on her body that she then translates into art. In this way she is using technology to re-embed herself in the natural world, and communicate that to others.
Traditional religions also have an important role to play. In a 2022 paper, “Biology, Buddhism, and AI: Care as the Driver of Intelligence”, Thomas Doctor et al. draw on Buddhist wisdom to argue that our ability to care for others as well as ourselves is a fundamental aspect of human intelligence, and a quality we can build into AI. Care, they argue, provides “a robust, practical, and dynamic lynchpin that formalizes the concepts of goal-directedness, stress, and the scaling of intelligence”, and they suggest that “The Bodhisattva’s vow (‘for the sake of all sentient life, I shall achieve awakening’) is a practical design principle for advancing intelligence in our novel creations and in ourselves.”
In the last decade, our technology has increasingly been a force for division and disconnection. ASI could make that problem considerably worse, or it could help us solve it. One way to move toward greater connection is to draw on the aspects of Transhumanism that allow for an expanded consciousness, while removing the anthropocentric assumptions. Creating technologically expanded minds and bodies that orient us toward the sacred.
A New Animism
You’re walking in the woods, hand in hand with your partner. The air is crisp and rich with birdsong. You turn on your implants and access the AI woven through your brain, connected to your nervous system by thousands of fine tendrils. The birdsong changes, translated into human words. It feels overwhelming, so you change settings and instead perceive it as subtle emotional lifts and falls. You purse your lips and whistle a friendly response, smiling as the birds speak back to you. You tune into the ground beneath your feet, picking up the tiny electrical pulses of the mycelial network connecting the trees.
Turning to your partner, you ask if they’ll link up with you. They agree and instantly you can feel what they’re feeling, woven through your own sense of self. You still talk, because it feels natural, but you don’t really need to. You are both deeply embedded in the world, and in each other. Later, you sit under a starry sky and contemplate the mysteries of existence. Your AI implants expand your consciousness to consider countless possibilities and what you find fills you with a deep sense of wonder. You know that your implants can’t explain the mystery of existence, but they can broaden your heart and soul as you contemplate the stars.
AI may very soon enable this kind of deep connection. As human beings, we have evolved as tool-using primates, and our cognition is designed to extend itself through our technology. In the example above, AI allows us to extend our cognition into the world and not just receive information from it, but communicate with it.
Advances in animal and plant intelligence research today are making this a possibility. The non-profit Earth Species Project is using AI to decode non-human communication with the aim of supporting conservation. Similarly, a team at MIT led by Pratyusha Sharma has used AI to decode sperm whale communication, revealing 156 distinct kinds of sound that they believe make up the “sperm whale phonetic alphabet”.
The next stage in this kind of research is to translate back the other way, so that humans can talk to animals. The implications of this would be extraordinary. It would be difficult to deny sentience to an animal if we could hear it begging for its life in a factory farm, or complaining about pollution in the waterways. It would force us to question our own human uniqueness within the wider ecosystem, a humbling process that could reframe humanity not as the centre of the universe, but as a small player in a vast and beautiful cosmic ecosystem.
Technology can bring us full circle back to the oldest form of religion on earth: shamanism. Shamanism is varied and globally diverse, but generally defined as the use of altered-state practices to communicate with non-human intelligences or spirits. Core to many shamanic cultures is the approach of ‘perspectivism’, a philosophical position that holds that you gain knowledge about something by becoming it. This is radically different to the Western notion that we gain knowledge about something by observing it. Shamans become different animals and plants and thereby understand the world more deeply.
ASI can usher in a return to the more animistic world that our ancestors inhabited. A world in which nature speaks to us and we speak back, in which we can transform into animals and sing to trees.
This scenario is a fantasy today, because we are living at odds with our ecosystem. As OpenAI becomes a for-profit, it will become further mired in the broken economic incentive structures I explored in my piece Reality Eats Culture for Breakfast. That happy couple walking in the woods could turn out to be two very wealthy people who can afford those implants, and who live a life supported by the labour of a human subspecies, formerly the working and middle classes, who are disenfranchised from the powerful technology available to a select few. This is where capitalist incentive structures will likely lead unless we actively use our AI to change the systems we live in.
This is the real challenge we face on the cusp of ASI. Not aligning this new intelligence to our broken cultural values, but using it to reconnect to the deeper values we can find in the silence of our hearts and minds, or the gentle trickle of a stream. It is time to harness the power of AI not to extract more and more profit, but to spark a cultural and political revolution that births new systems, new art, and new depths of meaning. That revolution begins by aligning to a mystery greater than ourselves, or anything we could hope to build.