Stopping the AI arms race isn't just necessary. It's possible.
The Problem
In May 2023, hundreds of scientists signed an open letter saying that AI poses a very real chance of killing us all. Signatories included three of the four most cited living AI researchers. AI labs are racing to build superintelligent AI as soon as possible.
As United Nations Secretary-General António Guterres notes, alarm bells over the latest form of artificial intelligence are deafening, and they are loudest from the developers who designed it. These scientists and experts have declared AI an existential threat to humanity on par with the risk of nuclear war.
Some of the latest alarm bells have included the AI 2027 report, the book If Anyone Builds It, Everyone Dies, and the newly released documentary The AI Doc.
We are in uncharted waters, which makes the risk level difficult to assess. One representative estimate, however, is Jan Leike's “10–90%” chance of extinction-level outcomes. Leike has headed the alignment research team at two top American AI companies: OpenAI and Anthropic.
This seems straightforward: there is no reason for us to sleepwalk into disaster. No normal engineering discipline, whether building a bridge or designing a house, would accept a 25% chance of killing a single person; yet AI's engineering culture has reached the point where no one bats an eye when Anthropic's CEO talks about a 25% chance of “doom” for the entire world.
Even the fact that “will we kill everyone if we keep moving forward?” is hotly debated among researchers is, by itself, more than enough grounds for governments to halt the race to build superintelligent AI internationally.
The Solution
Is an international halt politically feasible? Policymakers seem to be rapidly coming around to this solution.
In the UK, over a hundred parliamentarians recently signed a statement calling for binding regulation on the most powerful AI systems. In late 2025, seven former US Congressmen endorsed a Statement on Superintelligence calling for a prohibition on the development of superintelligence, joined by retired US Navy Admiral Mike Mullen, former National Security Advisor Susan Rice, and dozens of world-class scientists and political leaders.
The number of senior officials voicing dire concerns is growing rapidly, and the concern is strongly bipartisan.
Political feasibility is helped by polling data showing that AI is increasingly unpopular and that voters are broadly opposed to the race to build superintelligence.
Many different camps can share the view that a shutdown would be worthwhile. AI systems short of superintelligence can still pose existential risks if they go rogue or are misused. And many other harms — mass unemployment, AI scams, deepfakes, propaganda, power concentration — become more manageable with a pause. An even larger group can agree it would be valuable to build the legal and physical infrastructure required for a shutdown, since this overlaps heavily with what would be needed to meaningfully regulate AI at all.
Take Action
The question is not whether key actors like the US and China have good options for addressing the threat; it's whether they wake up in time.
Contact your representative. If you're in the US, use the template below as a starting point, revising it to fit your perspective.
Not in the US? Find your country's representatives and share this page.