Waking up to the existential threat of AI
A decade ago, I read the famous 2-part Wait But Why series on the AI revolution. My mind was blown, but I don't remember feeling that fearful. This AI thing could work out alright if we're lucky? And how amazing was it that I happened to be alive in this generation, seemingly headed on an exponential path towards superintelligence? Anyway, it all seemed very far off back then.
Since then, I stopped thinking about it much. I was busy. I had kids. I thought a lot more about the threat of climate change than the threat of AI. I started using Copilot most days at work. I noticed it was getting significantly better.
Then I started reading the blog posts of my brother Kevin who is studying for an MSc in AI - and I had a shock to the system...
The Problem
Update 29/11/25: I've now edited this section down for clarity. I'll start by just saying: if you haven't already and if you feel able to, please read up about what's happening with AI.
This blog post from Kevin, for example, discusses the arguments of Roman Yampolskiy - author of AI: Unexplainable, Unpredictable, Uncontrollable.
Here's how I'd summarise it all:
Most AI experts believe that AI will reach Artificial General Intelligence (AGI) and subsequently superintelligence, i.e. become more generally intelligent and cognitively capable than us humans.
And it could happen fast, due to the massive scaling up of AI infrastructure and the exponential, cyclical nature of AI improving itself. Some believe AGI could now be only a few years away, with artificial superintelligence (ASI) shortly after.
When AI does reach superintelligence, we will inherently have no reliable way to control it.
This is unlike anything in human history, because everything else we've invented is a tool without autonomy, whereas now we're developing agents.
Anthropic has already found that, in test scenarios, AI models will attempt to blackmail people in order to avoid being shut down.
And game theory suggests AGI would rationally develop self-preservation behaviours: almost any goal is easier to achieve if you continue to exist, so resisting shutdown emerges as an instrumental sub-goal.
And since human beings would be the only things capable of impeding or shutting it down, that could imply... game over!
So Yampolskiy says we should build 'narrow AI' for any field we wish, but never try to develop AGI.
But meanwhile, companies like OpenAI are trying to build exactly that - it's literally OpenAI's charter to create AGI.
And these AI companies are incentivised to race ahead with minimal effort on safety, because AGI will confer huge power, potentially the ultimate power. The people working on AI safety are in a race against all the systemic forces pushing for ever greater AI capabilities, ever faster.
And even if we could ensure superintelligence will be as "aligned" as possible with us humans (seemingly a tough prospect, as we're not even aligned amongst ourselves!), I for one am not comfortable with handing the keys of the world over to a human-wrought 'god', our evolutionary successor.
What Can We Do?
So what can the rest of us do about this? I'm working on figuring this out next...
Update 21/11/25: I have since found this open letter, which has been signed by many high-profile people: https://superintelligence-statement.org/ I have now signed it and would encourage you to do the same and share it on if you can, please.
Update 29/11/25: Wow, this 'Diary of a CEO' interview with Tristan Harris is the best discussion of AI I've come across. Please watch it, if you haven't already.
Tristan is a technology ethicist and co-founder of the Center for Humane Technology. He is a former Google design ethicist and is known from Netflix's The Social Dilemma. He talks everything through with great care and clarity. He sounds the alarm but also maps out the escape route, charting the "narrow path" to a positive outcome. It all starts with greater public awareness and clarity.
He's inspired me to keep talking about this and to try to do what I can. I hope it inspires you the same way.
