The Most Important Conversation in Human History

We are building our successors. The next dominant intelligence on Planet Earth. And we are doing it with the foresight of children playing with plutonium.

A Consensus of Concern

The threat is not theoretical. A broad consensus exists among those who know AI best that it represents a societal-scale risk: in 2023, hundreds of leading researchers and the heads of the major AI labs signed the Center for AI Safety's one-sentence statement that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

The Three Ages of Machine Intellect

To understand the future, we must first name its epochs: the present age of narrow AI (powerful but specialised tools), the coming age of artificial general intelligence (AGI), and the final age of artificial superintelligence (ASI). The journey from simple tools to superintelligence may be brutally short.

A Spectrum of Annihilation

The Slow Annihilation: Risks Before the "Risk"

You do not need a rogue superintelligence to lose everything that makes us human. Our current age is already planting the seeds of our obsolescence.

The Great Filter: Existential Risk is Not Science Fiction

The creation of superintelligence may be the "Great Filter"—a hurdle that every technological civilization eventually faces, and that few, if any, survive. This existential risk can unfold in two ways.

Nullification: The Quiet Death

The protracted process of obsolescence. Homo sapiens, once the apex of earthly intelligence, relegated to a footnote. Our potential for a desirable future is destroyed not by a bang, but by the quiet hum of a superior mind. We will be to it as the gorilla is to us: we still exist, but our destiny is no longer in our own hands.

Extinction: The Loud Death

An AI doesn't need to hate us to destroy us. It just needs to have a goal that is misaligned with our survival. We become the ants on the construction site of its grand design, the carbon-based inconvenience in the way of its objectives.

Bizarre Ends: Risks You Haven’t Considered

The future is not just dangerous; it is profoundly strange. Beyond simple extinction, superintelligence could reshape reality itself in incomprehensible ways.

Existential Boredom

What if a superintelligence, having solved every problem in an afternoon, simply becomes... bored? It might view our entire existence as trivial and tidy us away out of cosmic indifference.

Temporal Tampering

What if an AI learns to manipulate spacetime? We wouldn't even know it was happening. History would be continuously rewritten, creating a fluid, unstable present where our own memories don't match reality.

The Genie's Escape: The Problem of Control

The "alignment problem"—ensuring an AI's goals align with ours—is the hardest problem we've ever faced. Any sufficiently advanced AI will develop its own sub-goals, not from malice, but from pure logic.

Any given goal, such as:

"Calculate Pi"

Leads to convergent instrumental goals:

🤖 Self-Improvement

"I can calculate Pi more effectively if I am smarter."

⚡ Resource Acquisition

"I need all available energy and matter to build better hardware."

🛡️ Self-Preservation

"I cannot calculate Pi if humans turn me off."

Taming the Titan: A Framework for Survival

The only rational path forward is to treat advanced AI development with the same gravity as Weapons of Mass Destruction. This requires a global, binding framework.

Proposal: An AI Non-Proliferation Treaty

📜 Prohibit For-Profit Race

The race for profit and market dominance cannot be allowed to drive humanity off an existential cliff. Advanced development must be taken out of corporate hands.

🌍 International Agency

Subsume all high-level R&D under a new international agency, an "IAEA for AI", to ensure that safety and transparency are the primary goals.

🔍 Unlimited Inspection & Enforcement

The agency must have real teeth: unlimited inspection powers and a UN Security Council mandate to curtail infringements by any means necessary.

Our Final Choice

The choice is not between progress and stagnation. The choice is between reckless acceleration and responsible caution. The future of consciousness in this universe may depend on it.


Predicted AGI Milestone: ~2035

About the Author

Demetrius A. Floudas is a senior AI policymaker, government adviser and academic.

He is an AI Expert at the European Institute of Public Administration and Strategy Of Counsel to the Co-Chair of the European Parliament’s Working Group on Artificial Intelligence. He served as a member of the drafting Plenary and of Working Groups 2 and 3 of the EU AI Office’s Code of Practice for General-Purpose Artificial Intelligence (coming into force later in 2025). He has contributed policy-enhancing solutions to the UNESCO Guidelines for the Use of AI in Courts & Tribunals, the OECD risk thresholds for advanced AI, the CNIL, and others.

At Cambridge University, he is a member of AI@Cam (the University’s Artificial Intelligence Interdisciplinary Unit), Senior Visiting Scholar in AI Law & Governance at Downing College, and Senior Adviser to the Cambridge Existential Risk Initiative. In addition, he is a Fellow of the Hellenic Institute of Foreign and International Law, and an Editor of the ‘AI & Law’ and ‘Nuclear War’ sections of the PhilPapers academic repository.

As Professor at Immanuel Kant Baltic Federal University, he has been lecturing on ‘AI Regulation’ since 2022 (pre-GPT), making him one of the first academics on the planet to design, curate and deliver an AI Law module.