How to Stop an AI Apocalypse: Nate Soares on Superintelligence and Active Hope
'If Anyone Builds It, Everyone Dies'
Is AI going to kill you and everyone you love? Or is it overblown hype?
According to a new book by two of the world’s leading experts on AI risk, Eliezer Yudkowsky and Nate Soares, it isn’t hype at all.
The title does what it says on the tin: If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All (available here).
I’m usually sceptical about proclamations of the AI apocalypse, but find myself on the fence after my conversation with one of the book’s authors, Nate Soares. He’s the director of the Machine Intelligence Research Institute, and a former Google and Microsoft engineer.
Nate Soares:
The people running the AI labs say this has a very good chance of killing us all. We have Elon Musk, who has said there’s a 10 to 20% chance that AI just wipes us out… There are other leaders in the field who go around saying it’s at least 10%, and then in smaller conversations they’ll admit they personally think it’s 50%, and they say the lower numbers because they don’t want to sound too alarmist.
We have Yoshua Bengio, the most cited living scientist, and Geoffrey Hinton, the Nobel Prize-winning godfather of AI, both coming out and saying they think this has a very good chance of just completely destroying civilization.
I’m interested in the ways our relationship to AI is informed by, and impacts, religion and spirituality, which is why I find myself going back and forth on the idea that AI is going to eradicate humanity.
Apocalyptic proclamations are a common form of religious expression during times of social and technological upheaval, and I see a lot of parallels in modern AI anxiety with the annihilation anxiety of other eras. If you want a deep dive, I explored these themes in detail in my piece on AI religions.
Get Ready for AI Religions: Sam Altman, Transhumanism and The Merge
We always think some great force is going to destroy us. In the past it was floods sent by God, but now that Artificial Superintelligence is filling the God-shaped hole in the secular West, it’s perfectly placed to be the target for our projections.
With that in mind, could all this doomerism be the result of Silicon Valley echo chambers, transhumanist yearnings, and old-fashioned social panic? At the end of the day, if AI gets too powerful, can’t we just hit a giant off switch? Or force it to align with human values now so it doesn’t annihilate us?
It’s not that simple.
Nate Soares:
We’ve already seen cases of AIs that aren’t very smart wrapping some humans around their finger. They could, you know, work for money. We’ve already seen AIs in test environments pretend to be human in order to hire real humans via TaskRabbit to do tasks.
If you had an extremely powerful genie that did exactly what you wished for, it would be a hard problem to figure out a wish that would actually have good consequences. You know, it’s difficult to come up with a good wish.
A lot of people think this is what the problem with AI is. I wish we had that problem. That would be a much nicer problem than the one we have. The problem we have is that we’re not making genies that do exactly what we said, even when that has consequences we didn’t like.
This may be the most significant threat posed by Artificial Superintelligence: we don’t actually know how it works. Even the best engineers don’t really understand how ChatGPT works today, or even how it worked three years ago. This is known as ‘opacity’ in AI research, and it’s both insane and terrifying.
After my conversation with Nate, I started to re-evaluate my position. I still think we’re projecting our religious urges onto AI, but we’re also playing with a kind of fire we’ve never seen before.
However, there is hope, because we haven’t built an ASI yet. As Nate shared in our conversation, there are things that we can do as individuals and as societies right now to avoid rushing into oblivion.
You can find the full conversation for free exclusively on the Kainos YouTube channel, where you can subscribe to keep up to date with films we don’t put out here on the Substack.
To support what we’re doing, and access electrifying bonus content and online events on a regular basis, consider becoming a paying subscriber.