Today’s post is a bit long. If you’d prefer the shorter (and smarter, thanks to Stephen!) version, please see this op-ed I wrote with Stephen Toope in The Hill Times.
The Assumptions
AI technology is advancing on an ultra-fast cycle, outpacing traditional regulation. Nations are locked in AI arms races, prioritizing strategic advantage over cooperation. The usual 20th-century liberal multilateral institutions (UN, G7, OECD, etc.) are either ineffective or collapsing, unable to keep up with the pace of change. The result is a fragmented governance landscape: a patchwork of national strategies and ad-hoc principles that create more spectacle than substance. With no enforceable global rules, actors risk a “race to the bottom” where the drive for AI supremacy overrides ethical safeguards. In short, no single global body today can tackle the multifaceted challenges of AI on its own. We need to consider unconventional strategies that reflect these new power dynamics and move as fast as the technology does.
Beyond Multilateralism: Polycentric Governance Models
Instead of relying on slow, consensus-based institutions, we might adopt polycentric governance: multiple centers of decision-making that coordinate flexibly. What if we collectively facilitated the growth of an organic “regime complex” for AI: a web of overlapping forums, alliances, and norms that together guide AI development? What could that look like in practice?
Minilateral Compacts: Small groups of like-minded influential players (states or even cities) forge quick agreements on AI norms. For example, a coalition of “AI frontrunner” countries could agree on shared safety standards and testing protocols, even if global treaties lag. Such nimble alliances could address issues in real time, rather than waiting for universal buy-in. This mirrors the direction governance is already heading: the challenges of AI are so varied and complex that a decentralized model with common standards looks more viable than any single authority.
Technical Standards Bodies and Consortia: International standards organizations (like ISO/IEC or new AI-specific bodies) can act as governance mechanisms by publishing rapidly updated best practices. These standards, adopted voluntarily by industry and enforced through market pressure, could function as de facto regulation across borders. A real example is how a group of major tech firms formed the Coalition for Secure AI (CoSAI) under the OASIS Open standards consortium to set security standards for AI systems. Such bodies operate faster than treaties and can continuously adapt technical guidelines.
Multi-Stakeholder Frameworks: Include not just states but also companies, academia, and civil society in governing AI. A network of networks might emerge; for instance, the World Economic Forum’s AI Governance Alliance unites industry leaders, governments, universities, and NGOs to collaboratively shape norms. These platforms, though informal, harness diverse expertise and distribute influence beyond traditional state-centric models. Crucially, they align actors around shared principles (like fairness or safety) without needing a formal global regulator.
A polycentric approach leverages multiple venues to keep pace with AI. It accepts that power is unavoidably distributed (tech giants, great powers, city-states, innovators) and creates channels for each to contribute to governance. Over time, these interlocking efforts can harmonize into a de facto global framework: not via one grand treaty, but through converging standards and norms spread across many nodes.
Private-Sector Coalitions and Informal Alliances
In a future where states compete fiercely, private sector and informal alliances can become key levers to steer AI’s trajectory. Presently, the most advanced AI systems are developed by a handful of big tech companies and research labs. By organizing themselves, these actors could fill the governance void (though the incentives to do so are presently weak at best):
Industry Self-Governance Pacts: Major AI developers can form coalitions to set voluntary rules of the road. We already see prototypes of this: the Frontier Model Forum (launched by OpenAI, Google, Microsoft, Anthropic) and initiatives like CoSAI bring competitors together to agree on safety testing, transparency standards, and risk research. Such coalitions, if expanded, could act as an informal regulatory cartel for responsible AI. They can pledge, for example, not to release certain high-risk capabilities without safety vetting, creating a norm that puts collective safety above one-sided advantage. While these commitments are voluntary, they carry weight if they include the dominant players (who might collectively control most cutting-edge AI development).
Backchannel Tech Diplomacy: When formal diplomacy falters, informal alliances and backchannels can preserve stability. A lesson from the Cold War is the Pugwash Conferences, where scientists from rival nations met privately to reduce nuclear risks. Perhaps we need more “AI Pugwash” equivalents. Leading scientists and AI experts worldwide could convene to identify dangerous AI trajectories and agree on baseline safety measures, independent of (but eventually influencing) governments. This informal expert diplomacy can build trust and share knowledge across geopolitical divides. Similarly, backroom agreements among tech CEOs from different countries, even competitors, could establish red lines (for example, a mutual promise not to weaponize AI in certain ways, or to disclose discovery of any truly uncontrollable AI). These wouldn’t be public treaties, but informal agreements that shape corporate behavior behind the scenes.
Alliances of the Willing (and Able): Small clubs of states with high technological capability can act as pace-setters. For instance, the “Five Eyes” intelligence alliance (US, UK, Canada, Australia, NZ) is (was?) an informal club that successfully cooperated on security and technology for decades. They could extend their cooperation to AI by sharing research and even coordinating rules for military AI use. Such informal alliances can move faster than global bodies and exert peer pressure: if a group of tech-leading nations standardizes operating practices and technical specifications for AI weapons or surveillance, it forces others to react or join. Likewise, a coalition of tech-centric cities (an “alliance of tech hubs”) could agree on local AI ordinances (for privacy, for instance) that become a model others copy. The key is leveraging like-minded, high-capability actors who trust each other enough to act jointly, even without formal treaties.
Public-Private Partnerships and Coalitions: Blurring the line between government and industry, we might see task forces that combine tech companies, government agencies, and NGOs tackling specific issues (e.g., an AI safety board that includes top AI firms and independent ethics organizations). This creates a web of accountability: companies bring expertise and resources, while civil society brings transparency and ethical oversight.
By mobilizing these private and informal channels, we align governance with the reality that power in AI is distributed among states and corporations. These alliances can act in their enlightened self-interest: they mitigate risks that could tarnish the entire industry or provoke draconian regulation. However, we must ensure these private alliances don’t simply entrench corporate power. That means reshaping incentives so that companies and governments are rewarded for prioritizing public safety over short-term gains. For instance, investors and consumers could favour companies that are part of safety coalitions, creating market pressure for responsibility. Informal alliances must also be transparent enough to earn public trust, lest governance simply shift from slow democracies to unaccountable boardrooms.
AI-Driven Coordination and Intelligence Sharing
An AI-accelerated world demands AI-accelerated governance tools. We can harness AI itself to coordinate policy and share intelligence at a speed and scale humans alone cannot match. Several cutting-edge mechanisms can shape AI’s impact in real time despite global instability:
Real-Time Threat Intelligence Sharing: Just as cybersecurity communities share virus signatures, AI stakeholders can rapidly share information on AI incidents, vulnerabilities, and misuse. A concrete example is MITRE’s AI Incident Sharing Initiative, a platform where more than 15 companies collaborate to exchange anonymized data on AI system failures, attacks, or near-misses. This kind of rapid, standardized information-sharing allows the entire community to learn of emerging threats and defences almost instantly. In a fragmented world, establishing a global AI threat exchange, possibly federated across alliances, would act as an early warning system to all participants regardless of nationality.
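To make the mechanics concrete, here is a minimal Python sketch of what a shared, anonymized incident record might look like. The schema, field names, and salted-hash anonymization step are my own illustration of the general pattern, not MITRE’s actual format or API.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical incident record; the fields are illustrative,
# not MITRE's actual AI Incident Sharing schema.
@dataclass
class AIIncidentReport:
    incident_type: str   # e.g. "prompt-injection", "model-exfiltration"
    severity: str        # e.g. "low" | "medium" | "high" | "critical"
    model_class: str     # coarse descriptor, e.g. "frontier-LLM"
    description: str     # free-text summary, scrubbed of identifiers
    reporter_token: str  # anonymized organization identifier
    observed_at: str     # ISO-8601 timestamp

def anonymize_reporter(org_name: str, shared_salt: str) -> str:
    """One-way hash so peers can correlate repeat reporters
    without learning which organization filed the report."""
    return hashlib.sha256((shared_salt + org_name).encode()).hexdigest()[:16]

def build_report(org_name: str, salt: str, incident_type: str,
                 severity: str, model_class: str, description: str) -> str:
    report = AIIncidentReport(
        incident_type=incident_type,
        severity=severity,
        model_class=model_class,
        description=description,
        reporter_token=anonymize_reporter(org_name, salt),
        observed_at=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(report))  # ready to submit to a federated exchange

# Example: a lab files a near-miss without revealing its identity.
payload = build_report(
    org_name="ExampleLab", salt="exchange-wide-salt",
    incident_type="jailbreak-near-miss", severity="medium",
    model_class="frontier-LLM",
    description="Novel multi-turn jailbreak bypassed safety filter in staging.",
)
print(payload)
```

The one-way reporter token is the design choice that matters here: participants can see that the same (still anonymous) organization keeps reporting a class of failure, which lowers the reputational cost of sharing in the first place.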
AI-Assisted Policy Coordination: Policymakers can use AI tools to analyze and coordinate policies on the fly. For instance, an AI governance dashboard could aggregate data on AI development worldwide (from research publications, investment trends, etc.) and highlight where risks are rising or where norms are diverging. Such a system, broadly accessible, would help target interventions quickly. AI can also simulate the impact of different regulatory choices at high speed, giving decision-makers evidence to adapt rules dynamically. Imagine a continuous AI policy simulation: it ingests input from various countries’ AI progress and outputs suggestions for joint actions to avoid negative outcomes (like an arms spiral or market crash). This could guide informal alliances in adjusting their strategies faster than any UN deliberation ever could. It has been noted that AI can bring science to the art of policymaking, providing data-driven insights and near-real-time feedback on what’s working.
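As a toy illustration of the dashboard idea, here is a short Python sketch that flags indicators on which jurisdictions are drifting apart. The indicators, scores, and threshold are invented for the example; a real system would ingest them from publications, filings, and investment data.

```python
from statistics import mean, pstdev

# Illustrative policy-posture scores (0-1) per jurisdiction; indicator
# names and values here are invented purely for demonstration.
policy_posture = {
    "Country A": {"safety_testing": 0.9, "transparency": 0.8, "compute_reporting": 0.7},
    "Country B": {"safety_testing": 0.4, "transparency": 0.7, "compute_reporting": 0.2},
    "Country C": {"safety_testing": 0.8, "transparency": 0.7, "compute_reporting": 0.6},
}

DIVERGENCE_THRESHOLD = 0.2  # flag indicators where norms are drifting apart

def divergence_report(postures: dict) -> list[str]:
    """Flag indicators whose spread across jurisdictions exceeds the
    threshold, i.e. where coordination effort should be focused first."""
    flags = []
    indicators = next(iter(postures.values())).keys()
    for ind in indicators:
        scores = [p[ind] for p in postures.values()]
        spread = pstdev(scores)  # population standard deviation
        if spread > DIVERGENCE_THRESHOLD:
            flags.append(f"{ind}: mean={mean(scores):.2f}, spread={spread:.2f} -> diverging")
    return flags

for line in divergence_report(policy_posture):
    print(line)
```

The point is not the arithmetic but the feedback loop: a shared, continuously updated view of where norms are converging or diverging tells minilateral coalitions where to spend their limited diplomatic capital.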
AI-driven coordination mechanisms acknowledge that information is power and sharing information (securely) is often the quickest way to neutralize threats. By improving collective situational awareness, they could help a fragmented world act in concert when it truly matters (for example, containing a self-spreading AI virus or responding to a catastrophic misuse). Importantly, these mechanisms can be stood up by coalitions of the willing (industries or alliances) without universal agreement, yet their benefits spill over globally by reducing shared risks.
Networked, organic, polycentric governance
This strategy deliberately avoids reliance on legacy institutions and instead leverages the actual power centers of the 21st century: technology companies, small agile coalitions of states, and the self-organizing capacity of networks. It speaks the language of a hyper-realist world: incentives, power, and survival. By getting the big players into a cooperative equilibrium (even if motivated by self-preservation), it reduces the likelihood of disastrous competition. At the same time, it inserts new guardians of the public interest (scientists, cities, civil society) into the mix to ensure the alliance’s actions serve humanity broadly, not just those at the top.
Critically, this strategy can operate in a world of fragmented geopolitics. It does not require universal harmony or the resurrection of the UN. It only requires that enough key actors see the value in a coordinated approach over a chaotic free-for-all. Given the stakes, where an unchecked AI race could spin out of control for everyone, this recognition is plausible. Indeed, experts argue that no single nation or body can handle AI alone, reinforcing the need for decentralized yet collaborative governance. By harnessing informal alliances and private governance, we turn the very forces that drive fragmentation (national interest, corporate interest) into forces that can also drive coordination, through carefully aligned incentives and mutual checks.
In essence, I am betting that the system of global disorder, with perhaps a little push in the right direction, can hack itself to produce emergent order: a self-organized, adaptive ecosystem of governance networks for AI; loose structures that can move at the speed of technology, survive geopolitical shocks, and keep AI development pointed toward a future where humanity can thrive.
I can feel the eye rolls at my optimism. Yet, this may be the only kind of strategy that stands a chance in a future defined by rapid AI change and fragmented global power. By thinking outside the 20th-century policy box, we give ourselves a shot at guiding AI wisely through the 21st. The endgame is a world where even without a formal world government, we have a web of commitments and collaborative systems ensuring AI remains beneficial, safe, and aligned with human values; an outcome that this hyper-realist world would otherwise struggle to achieve.