For years, the debate around advanced AI has been framed as a hypothetical. A distant risk. Something still safely contained within research labs and future-scenario whitepapers. That framing is gone. Superintelligence is no longer a theoretical conversation. It is a strategic question that will define national security for the next decade.
And the uncomfortable truth is this: the UK and Europe are in danger of approaching it from the wrong angle entirely.
A moment that revealed the gap
In 2023, ahead of the Bletchley Park AI Safety Summit, I briefed the Prime Minister’s team at Number 10. We demonstrated the risks of Collective Intelligence using our early agentic AI system and laid out a simple argument.
There are only two ways to defend ourselves against advanced AI:
1. Return to a pre-digital world, where offensive and defensive capabilities are artificially constrained.
2. Invest in building systems where good AI can defeat bad AI, at speed, in the real world.
The response that followed was revealing: £100 million directed toward exploring bias in AI models. Important, yes. But painfully misaligned with the strategic risk profile.
The point wasn’t just missed. It was inverted.
Europe is repeating the same mistake
Today, we are watching European regulators follow a similar path: treating Superintelligence as something that can be constrained through paperwork, penalties, and procedural controls.
This is wishful thinking.
Regulation cannot stop an adversary. It cannot prevent the misuse of advanced capabilities. It cannot change the speed at which others innovate.
Look at what DeepSeek has achieved in a remarkably short period. Look at the scale and urgency of Chinese investment in national AI acceleration. Read the 33-page US National Security Strategy, which is unambiguous about one thing: America is stepping back from Europe’s approach while it doubles down on its own race toward Superintelligence.
While others build, we regulate. This is not a sustainable posture for a sovereign nation.
We don’t regulate our way out of existential threats
If a new pandemic emerged tomorrow, the UK would not respond by trying to regulate the virus out of existence. We would build vaccine factories. Stockpile antivirals. Accelerate R&D.
Superintelligence is no different.
You cannot write legislation to remove existential risk. You can only develop the technologies, infrastructure, and capabilities that keep the nation secure when others move faster than we do.
The choice is not between “no regulation” and “high regulation”.
The real choice is between:
- Strategic capability, or
- Strategic vulnerability.
And right now, Europe is drifting toward the latter.
Why we are building toward Superintelligence
At Whitespace, we are one of the few companies in Europe actively building toward Superintelligence. Not out of hubris, not out of techno-optimism, but because the risks are sufficiently serious that the only sensible response is to engage directly.
We build sovereign AI because:
- Our adversaries will not slow down.
- Our institutions cannot rely on imported black-box systems.
- Our security depends on having capabilities that match or exceed those who seek to undermine us.
We understand the risks deeply, which is precisely why we are choosing to build. The alternative is paralysis.
A call for national clarity
The UK does not need more caution. It needs conviction. It needs a strategic posture that recognises the gravity of the moment and responds accordingly.
Regulation has a place, but it is not a shield. It will not protect our democracy or our way of life from those who seek to destabilise it. Only capability will do that. Only leadership will do that.
The world is moving into dangerous territory. Those who act now will define the balance of power for decades. The question is whether the UK wants to be among them.
Written by Paul Jenkinson, CEO and Co-Founder at Whitespace
Apply now for a limited 30-day free trial of Collective and experience the benefits today!