The ‘Skynet’ AI doom scenario has, unsurprisingly, been popular with the press. Killer robots get clicks. But it would be unjust (and foolish) to dismiss existential risk concerns just because they also happen to make a compelling narrative. Very serious, very knowledgeable AI researchers like Geoff Hinton and Yoshua Bengio have articulated strong opinions on the nature of this risk (and equally serious and knowledgeable researchers like Yann LeCun have rejected it).
If one were to believe that the (robot) horse is far enough out of the barn at this point that the slow/stop vs. accelerate debate is largely academic, one might want to consider why some actors may continue to experience strong incentives to propagate catastrophic risk scenarios.
Defining a Public Good
In economics, a public good is defined as a good that is both non-rivalrous and non-excludable. Non-rivalrous means that one person's consumption of the good does not reduce its availability to others. Non-excludable means that it is difficult or impossible to exclude individuals from consuming the good.
Classic examples of public goods include national defense, public parks, and lighthouses. AI, at first glance, does not neatly fit this definition. AI systems can be proprietary and access to them can be restricted.
The Rapid Explosion Scenario
However, the public goods framing becomes more relevant when considering the possibility of a rapid explosion in AI capabilities following the achievement of AGI[1]. If you are genuinely worried about existential risk from AI, it follows that you are likely to believe that such a rapid explosion is possible. It would be irrational to hold a ‘doomer’ belief if, for example, you believed that AI would never get any better than GPT-4 Turbo.
It is this potential ‘rapid explosion’ element that perhaps makes AI feel different from other technologies — a scenario in which an AGI is able to begin tirelessly and effectively making itself more intelligent and capable without human intervention. If one believes in this kind of exponential bootstrapping, then AGI is a potentially temporally excludable good. That is, once one group reaches the AGI threshold, it may be impossible for any other group to 'catch up'; the progenitor-AGI will always be exponentially more advanced. In this scenario, whoever controls the progenitor AGI wields enormous power. For those organizations at the forefront of AI research, this presents an overwhelmingly strong incentive to race for AGI — assuming that one is successful in capturing favourable regulatory regimes[2], this could be a winner-takes-all race.
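To make the 'temporally excludable' intuition more concrete, here is a minimal toy simulation. It is my own illustrative sketch with made-up numbers, not a model drawn from any particular source: it assumes two groups whose capability compounds at the same fixed rate each step, with one group simply starting later.

```python
# Toy model only: the growth rate, head start, and starting capability are
# illustrative assumptions, not claims about real AI systems.

def capability_gap(rate=0.1, head_start=10, steps=60):
    """Simulate two groups whose capability compounds by `rate` per step.

    The leader starts at step 0; the laggard enters at step `head_start`
    with the same starting capability and the same growth rate.
    Returns the leader-minus-laggard gap at each step.
    """
    leader, laggard = 1.0, 0.0
    gaps = []
    for t in range(steps):
        leader *= 1.0 + rate          # the leader keeps compounding every step
        if t == head_start:
            laggard = 1.0             # the laggard finally enters the race
        elif t > head_start:
            laggard *= 1.0 + rate     # and then compounds at the same rate
        gaps.append(leader - laggard)
    return gaps

gaps = capability_gap()
print(f"gap after 20 steps: {gaps[19]:.1f}")   # ~4.4
print(f"gap after 60 steps: {gaps[-1]:.1f}")   # ~197.8
# The ratio between the two stays fixed, but the absolute gap grows
# exponentially; under these assumptions the laggard never closes it.
```

The sketch is deliberately conservative: both groups improve at the same constant rate, so the only asymmetry is the head start, and even then the gap never closes. If the leader's rate itself increased with capability, as in the bootstrapping scenario above, the divergence would be sharper still.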
(Note the hypotheticals and conditionals above: none of this is destiny. There is a portfolio of possible AI futures and I am engaging here with only one scenario of many!)
Concentrating the Probability
Calls to regulate the development of AI or "pause" research are, viewed through the lens of this particular scenario, calls to concentrate the probability of AGI being discovered and contained by a select group. But is this desirable?
The often-made, but imperfect, nuclear weapons analogy can illustrate the stakes. Imagine if nuclear weapons were perfectly excludable — if only one country had ever obtained them and could ensure that no one else ever would. Would we want to live in that world? The risks of corruption and misuse, even by an initially relatively benevolent state, would be immense. My inner game theorist dislikes the Nash equilibrium of this hypothetical world.
The AI situation is not directly analogous — AGI is not inherently destructive the way nuclear weapons are. But the concentration of power is still concerning. We should be wary of any plan that, even inadvertently, would make a technology as powerful as AGI excludable, with access granted only to an elite, privileged group.
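The 'inner game theorist' remark above can be made concrete with a toy payoff matrix. The numbers below are my own illustrative assumptions, chosen only to encode a winner-takes-all race, not taken from any formal analysis: when racing strictly dominates pausing for each player, the only Nash equilibrium is mutual racing, even though mutual restraint would leave both players better off.

```python
# Illustrative payoffs only: chosen to encode a winner-takes-all race,
# not derived from any real-world analysis.
# payoffs[(row_choice, col_choice)] = (row player's payoff, column player's payoff)
payoffs = {
    ("pause", "pause"): (3, 3),   # mutual restraint: a safer, shared outcome
    ("race",  "pause"): (5, 0),   # the sole racer captures the winner-takes-all prize
    ("pause", "race"):  (0, 5),
    ("race",  "race"):  (1, 1),   # a costly, risky race for everyone
}
strategies = ("pause", "race")

def is_nash(row, col):
    """True if neither player can improve their payoff by unilaterally deviating."""
    row_best = all(payoffs[(row, col)][0] >= payoffs[(r, col)][0] for r in strategies)
    col_best = all(payoffs[(row, col)][1] >= payoffs[(row, c)][1] for c in strategies)
    return row_best and col_best

equilibria = [(r, c) for r in strategies for c in strategies if is_nash(r, c)]
print(equilibria)  # [('race', 'race')], despite ('pause', 'pause') paying more to both
```

None of this says the payoffs are right; it only illustrates why, if the winner-takes-all framing is believed, unilateral calls to pause are individually unattractive.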
Paths Forward
What's the alternative? There are no easy answers, but some key considerations emerge:
1. Open-source AI: efforts to ensure broad access to and participation in the development of AGI. The more distributed the capability, the lower the risk of a single group obtaining a permanent, decisive strategic advantage.
2. International cooperation and treaties to mitigate race dynamics and ensure the benefits of AGI are shared globally.
3. Research into AI safety techniques to increase the probability that AGI systems are beneficial to humanity as a whole.
4. Ongoing public dialogue and democratic governance of AI development to ensure social values are reflected. Decisions concerning this important technology should not be made by a handful of policymakers and technology leaders behind closed doors; a broad diversity of voices should inform our paths forward.
Ultimately, AI, and particularly AGI, should be developed with a public goods mindset — with the goal of broadly shared benefits and the mitigation of the risks that come with concentrated power.
We occupy a world in which more than 54% of humans, globally, own a smartphone. The technological infrastructure to make frontier-level AI broadly accessible to the majority of humanity is already in place. The barriers to choosing ‘AI as a public good’ are lower than for many previous technologies. Indeed, one can imagine a world in which governments invest in sovereign AI initiatives, training and serving frontier-level AI systems as a public service to empower their citizens. If one were feeling even more optimistic, one might imagine the impact on human development of multilateral initiatives to do the same globally.
These choices are ours to make.
[1] I dislike the term AGI (Artificial General Intelligence) as it is a poorly defined concept that has been frequently abused and misused — as I am doing now. For the purposes of this note, let’s define AGI as an AI that can outperform the most competent humans at a broad spectrum of cognitive tasks.
[2] While the leaders of many frontier AI companies openly reject the notion that they seek regulatory capture, human behaviour is often better modelled by understanding incentive structures than by listening to the words they speak.