Imagine a world where each of the eight billion people on Earth carries a personal assistant possessing the cognitive prowess of a team of human geniuses—an “IQ 500” system available at negligible cost. Every white-collar job that once demanded years of education becomes automatable, and a rural villager can marshal the same R&D capacity as a Silicon Valley executive. If artificial general intelligence (AGI) truly becomes cheap and ubiquitous, the world’s political, economic, and cultural orders might shift in ways as profound as the Industrial Revolution, the nuclear revolution, and the digital revolution combined.
Inspired again by the ongoing techno-optimism of Logan Kilpatrick, we continue our thought experiment on life in an era of ubiquitous intelligence. We’ve talked about what this might mean “in the small” for individuals, universities, and markets; today we zoom out to the global scale. Will we collectively choose an era of boundless human flourishing, or a scramble to contain and monopolize “nukes of the mind”?
Let’s consider this environment through the three frameworks—Realist, Liberal, and Constructivist—that have dominated modern international relations theory, followed by a few ideas that might help chart a stable course for a civilization on the brink of post-scarcity intelligence.
The realist reckoning: nukes of the mind
Realism, the perennial lens through which great powers confront one another, probably categorizes near-free AGI as a security nightmare. If “IQ 500” assistance is cheap and globally distributed, the capacity to design advanced weaponry, conduct cyberattacks, and orchestrate sophisticated espionage no longer belongs solely to superpowers. A technologically savvy terrorist cell, or even a single rogue actor, might be able to unleash havoc once reserved for state arsenals1.
Arms Racing to Nowhere
Whereas nuclear proliferation raised fears of a handful of states or rogue entities acquiring bombs, near-free AGI would be far more diffuse. The security dilemma—the Realist concern that states, in trying to protect themselves, end up provoking arms races—spirals to new heights. Every government, sensing that offensive AGI capabilities can nullify standard defences, will scramble for preemptive advantage.
Regulated Scarcity Redux
Yet Realists also understand that states, faced with existential threats, try to enforce new forms of scarcity. They may attempt to lock down advanced computing hardware, surveil the infrastructure that powers large-scale AI models, and clamp down on data access. The greater the perceived risk, the stronger the impetus toward re-imposing artificial limitations. Whether such measures can truly contain a technology that is largely information-based remains questionable at best.
Liberal optimism: flourishing in an era of shared gains
Liberal international relations theory sees, as ever, a more optimistic path. When humanity collectively holds “IQ 500” in the palm of its hand, opportunities for cooperation are equally colossal. The same technology that can design bioweapons can also design universal vaccines; the same tool that can destroy supply chains could revolutionize them for speed, efficiency, and sustainability.
All Boats Lifted
A core Liberal-aligned argument is that cheap superintelligence could make innovation itself a public good. If every individual can access the frontier of R&D, small states can leapfrog industrial giants, and entrepreneurs in lower-resource nations might pioneer technologies that rival those of Silicon Valley. Economic interdependence—long credited with promoting peace—intensifies as national prosperity depends on shared compute infrastructure, open data pipelines, and collaborative R&D networks.
Mega-Institutions for Planetary Problems
Liberal theorists might predict that states, multinational corporations, universities, and civil-society groups will be compelled to form new institutions capable of coordinating massive, transnational AI projects. Climate change, global health, and macro-engineering challenges become solvable—if harnessed collectively. Institutions like the UN or the World Bank may evolve into orchestrators of planetary-scale initiatives, deploying near-free AGI to eradicate diseases or decarbonize the atmosphere.
Constructivist visions: identity and norms in a post-anthropocentric intelligence paradigm
A constructivist lens, focusing on the power of ideas, norms, and social meaning, might suggest that cheap AGI will overturn the very identity of the “expert” and transform societies’ sense of purpose. When a rural villager can access the same cognitive capacity as a Nobel laureate, the social meaning of “cognitive elites” dissolves.
New Foundations of Identity
If routine white-collar tasks become automated, people may seek fulfillment elsewhere—through relationships, the arts, or moral and spiritual pursuits. Norm entrepreneurs will rush to define what “responsible AI usage” looks like, seeking the moral high ground in a world where genuine material advantage is fleeting. Nations and corporations would likely compete not just for computational supremacy but for moral authority—who is seen as the “guardian” of ethical AI, or as the leader in applying AGI to humanity’s greatest challenges.
Constructing Shared Narratives
Cheap AGI amplifies a long-standing constructivist insight: it is not just what a state does with technology that matters, but how it is perceived. Nations, corporations, and advocacy groups will vie to shape the narrative around AI, branding their policies as “ethical,” “humanitarian,” or “pro-innovation.” These discursive battles—and the norms that emerge from them—could tip the balance between conflict and cooperation.
Five ideas
How might nations and multilateral organizations respond? At the risk of stating the obvious, I shall state the obvious.
1. Reimagine Global Economic & Social Safety Nets
If automated assistants outcompete humans in myriad fields, labor markets will shift unpredictably. Policymakers should anticipate a post-work reality by, for example, considering universal basic income programs and funding large-scale retraining efforts that emphasize necessarily human2 aptitudes such as moral judgment, empathetic communication, and cultural creativity.
2. Launch Ambitious, Positive-Sum Global Projects
From eradicating malaria to reversing desertification, near-free AGI would empower humanity to pursue mega-projects that were once the stuff of science fiction. Coalitions of governments, corporations, and universities could coordinate grand missions—simultaneously generating technological breakthroughs and reinforcing norms of cooperation.
3. Promote Rigorous AI Ethics Education Worldwide
At the heart of responsible technology use lies public understanding. National curriculums, adult retraining programs, and global awareness campaigns are needed to cultivate a responsible citizenry adept at harnessing “IQ 500” capacity without inadvertently causing harm. Such an educational revolution, possibly led by bodies like UNESCO or the OECD, would nurture a global baseline of AI literacy and responsible use.
4. Adopt New Norms Around Compute
Computing power is the engine of AI capability. Policymakers could push for frameworks that promote the transparent, equitable use of large-scale computing facilities. Just as the Nuclear Non-Proliferation Treaty helped prevent worst-case scenarios during the atomic age, new agreements could seek to monitor the development and deployment of advanced AI systems. This might involve establishing some level of control over the “fissile material” of the AI age—advanced computing hardware—but more likely it would focus on transparent reporting and monitoring (multilateral organizations have proven far more effective at the latter than the former).
5. Cultivate Shared Narratives of Responsibility and Flourishing
Policymakers, cultural leaders, and civil society alike must craft new stories that inspire global solidarity rather than defaulting to zero-sum narratives of competition. Summits, youth exchanges, and artistic endeavours can emphasize the potential of “AGI for good,” building a sense that humanity’s destiny rests on combining near-infinite intelligence with empathy and moral responsibility.
Notably missing…
What is missing above is, of course, a breathless and impassioned demand for a Global AI Council that would bring together not just nations but also leading technology companies, research institutions, and civil society organizations. A body with real global authority3 to strictly control high-risk deployments and coordinate international responses to AI-related crises.
Alas, such a proposal would be even more lacking in novelty than my previous suggestions, as there are now dozens of NGOs competing to be the source of sage advice on this matter.
From one point of view, this is a shining proof point of the liberal spirit (everybody is committed to multilateral co-operation, hurray!); from another, some of the infighting and territory-grabbing one observes amongst NGOs resembles the behaviours predicted by a realist4 framework more than a joint chorus of “We Are the World”5.
The stakes of brainpower-on-tap
In a future where “IQ 500” intelligence flows as freely as tap water, the lines between utopia and dystopia may be razor-thin. Realist fears of an uncontainable arms race, Liberal hopes for unprecedented cooperation, and Constructivist belief in the power of shared meaning each illuminate aspects of the path forward.
If major actors embrace draconian clampdowns and arms-race logic, the result could be a fractured, heavily surveilled order rife with fear. If institutions fail to adapt, near-free AGI could deliver chaos rather than prosperity. Yet there is also the tantalizing possibility of a civilization-scale leap in problem-solving capacity—from climate interventions to cosmic exploration—if we can coordinate effectively on security, governance, and a moral framework for wielding such extraordinary power.
Cheap, abundant superintelligence would be not just another technological advance; it would be a potential inflection point in the human story. Whether that story trends toward zero-sum conflict or shared triumph would hinge on the decisions policymakers make today—decisions informed by the Realist’s vigilance, the Liberal’s faith in institutions, and the Constructivist’s attention to the intangible power of norms, ideas, and narratives. If this new era dawns, the world would stand at a historic juncture, with the fate of our species tethered to how we harness an intelligence that might grow greater than our own.
Of course, the above is predicated on the “superintelligent pocket calculator” hypothesis: a world in which AI remains strictly a tool, strictly under human control. The political calculus would look very different if we developed AI that made a claim to self-sovereignty.
Indeed, we’ve already had our first introduction to this asymmetry through the rise of old-fashioned cyberwarfare.
This is not to say that machines can’t reason morally, display empathy, or be creative; rather, these are areas where there may be a human preference for such things to be done by humans.
This is also a victory for constructivists who would be unsurprised to find a battle royale for the moral high ground.
I do not write this to impugn the noble, good-faith efforts of so many individuals and organizations. This work is exceptionally important, and I am grateful to those taking it on. But good faith and hard work alone are not enough to counter natural human incentives to compete for status and power. Competition can be a great motivator, but effort spent on competition is also effort diverted from achieving objectives.