Strategic Dynamics in the Race to AGI: A Time to Race Versus a Time to Restrain

A global race for transformative AI is underway, defined by intensifying competition between the United States and China. How should we think about these dynamics? In this essay, the authors use game theory to clarify the nature of the race and to draw practical policy implications. Competition is not inevitable, they argue, but a shift to cooperation is neither automatic nor easy.

The Race to AGI

Game theory has long influenced how policymakers approach high-stakes technological competition. In the 1950s, RAND mathematicians Merrill Flood and Melvin Dresher developed what would later be known as the Prisoner’s Dilemma—a model showing how rational actors, pursuing their own self-interests, can end up with collectively worse outcomes.1 The story is familiar: Two suspects are interrogated separately and must choose whether to cooperate (stay silent) or defect (betray the other). While mutual cooperation would yield the best collective outcome, individual incentives drive each toward defection.
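The logic can be made concrete with a small numerical sketch. The sentence lengths below are illustrative placeholders, not values from Flood and Dresher's experiments:

```python
# Illustrative Prisoner's Dilemma payoffs (sentence lengths negated, so
# higher is better). The numbers are made up for demonstration only.
# Strategies: "C" = cooperate (stay silent), "D" = defect (betray).
PAYOFFS = {
    ("C", "C"): (-1, -1),  # both stay silent: light sentences
    ("C", "D"): (-5,  0),  # the silent suspect takes the full sentence
    ("D", "C"): ( 0, -5),
    ("D", "D"): (-3, -3),  # mutual betrayal: worse than mutual silence
}

def best_response(opponent_move, player_index):
    """Return the move that maximizes this player's payoff,
    given the opponent's move."""
    def payoff(m):
        profile = (m, opponent_move) if player_index == 0 else (opponent_move, m)
        return PAYOFFS[profile][player_index]
    return max(["C", "D"], key=payoff)

# Defection is a best response to either opponent move...
assert best_response("C", 0) == "D" and best_response("D", 0) == "D"
# ...so (D, D) is the unique equilibrium, even though (C, C)
# gives both players a strictly better outcome.
```

Whatever the other suspect does, betrayal pays weakly more, so both rational players defect and land on the collectively worse outcome.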

In the following decades, RAND researchers’ application of the Prisoner’s Dilemma to the nuclear arms race between the United States and the Soviet Union provided insights into the logic of deterrence, arms control, and crisis management.2 The model’s enduring relevance lies in its ability to distill complex strategic interactions into tractable, policy-relevant lessons.

Today, game theory provides a useful framework for understanding the global race to develop transformative artificial intelligence, including artificial general intelligence (AGI). Modeling the AGI race as a strategic game can inform policy decisions with far-reaching consequences. At the core are two questions central to all such decisions: What game is actually being played, and how should our understanding of that game shape high-stakes policy choices? Answering these questions is critical for understanding the parameters that govern cooperation versus competition in AGI development.


The stakes could not be higher. AGI’s transformative potential promises decisive first-mover advantages,3 but it also poses grave risks: misalignment with human interests, misuse for weaponization, and international instability.4 As in the original dilemma, the allure of “winning” can drive countries to accelerate AGI development even when mutual restraint would be preferred.

This challenge is not theoretical; it is unfolding in real time. In the United States, major tech companies such as OpenAI, Anthropic, Meta, and Google have ramped up investment and research to develop more powerful models and applications. The White House’s recent AI Action Plan calls for national leadership and investment.5

At the same time, China’s government has pursued an AI strategy backed by substantial funding, policy support, and a coordinated approach that integrates research and deployment. Chinese firms are investing heavily in foundational models, while state institutions facilitate the integration of AI across sectors. The launch of DeepSeek, an advanced large language model, disrupted Western assumptions about China’s AI capabilities and underscored the fluidity of technological leadership.

The strategic choices that governments have in terms of AI research and development, diffusion, and international collaboration have real-world implications. AI investment decisions not only affect the pace of innovation, but also the regulatory environment, the allocation of resources, and the broader trajectory of technological progress. The race is not simply about who develops transformative AI first; it is about shaping the rules and norms that will govern technological leadership for decades to come.

 

Using Game Theory to Model the Race

Modeling the transformative AI race is difficult precisely because of its many complexities. Previous attempts at modeling these dynamics using game theory have tended to focus on discrete aspects of the race. Some have emphasized the role of technological capability, others the impact of regulatory frameworks, and yet others the influence of strategic rivalry.6 These models have reached divergent conclusions. For example, Dung and Hellrigel-Holderbaum’s analysis suggests that countries would not achieve a decisive strategic advantage with AGI, making slower, risk-averse development more appealing.7 This supports Katzke and Futerman’s argument that it is strategically better for nations to coordinate internationally on AGI development rather than risk destabilizing the international order, loss of control, or greater domestic instability.8 On the other hand, Kreps encourages accelerated development as a means of resolving uncertainty about AGI’s military capabilities and economic impacts.9 These divergent policy recommendations reflect the difficulty of abstractly representing the race.

One source of complexity is the diversity of actors involved. Private firms have become the driving force in AI development, eclipsing academia and government in both expertise and resources.10 Companies such as OpenAI, Anthropic, DeepMind, and their Chinese counterparts possess the technical capacity, data, and financial resources needed to push the boundaries of AI. This shift has societal implications given that it is far more difficult for private firms, whose primary incentives are commercial, to internalize the social costs of AI. The concentration of expertise within private firms complicates regulatory efforts, as governments may lack both the technical capacity and the access needed to design effective oversight.


Even abstracting to the country level poses challenges, such as identifying the key competitors. While the United States and China are often treated as the primary competitors, Europe’s role is ambiguous but potentially significant. The European Union’s AI Act,11 the world’s first comprehensive legal framework for AI, prioritizes risk mitigation but has been criticized for sidelining Europe from the technological frontier. Garicano and Saa-Requejo argue that Europe was never primed to be in the race, given fundamental disadvantages relative to its American and Chinese counterparts, but that it could still be an important player in diffusing commercial AI models and providing competitive markets for them.12 Beyond Europe, other countries could become important parts of strategic alliances going forward, such as Saudi Arabia, the UAE, and key emerging markets.

These complexities compound when one considers that the development of AGI is just one part of the “first mover” equation. While the United States has historically been more heavily focused on developing frontier models and maintaining technological leadership, the Chinese government’s strategy appears more aimed toward widespread application and integration of AI into society. This raises questions about the very nature of the “race.” Is it a contest to develop the most advanced model, or, rather, to achieve the broadest and most effective commercial deployment?13

Layered on top of all these complexities are incentives that go well beyond the narrow logic of technological competition. In US-China relations, the lack of cooperation on AGI risks is additionally influenced by broader strategic competition, security tensions, and historical mistrust. The potential for “linkage”—bundling diverse issues to facilitate cooperation between countries—is a fundamental part of the US-China dynamic.14

Finally, the fundamental characteristics of the technology and the uncertainty of the development timeline have led to ambiguity about the transition path to AGI. Even what constitutes AGI remains contested, given that definitions of “human-level” intelligence vary. This uncertainty complicates efforts to forecast, regulate, and coordinate around AGI milestones.

 

Insights from the Model

History shows that strategic modeling, even in simplified form, can be valuable for clarifying the incentives and risks that drive national choices. During the Cold War, the Prisoner’s Dilemma distilled the logic of the nuclear arms race into a tractable framework. Today, a similar approach can illuminate the AGI race.

A recent RAND report offers an initial framework for representing these dynamics. Abraham, Kavner, and Moon develop a stylized, mathematically neutral model to analyze the race to AGI.15 This approach is valuable for two reasons. First, it enables policymakers to abstract away from the complexities and focus instead on the core strategic question: What choices should countries, such as the United States and China, make to shape the regulatory environment to their advantage? Second, it clarifies which challenges constitute a social dilemma among different actors and which are intrinsic to the transformative nature of the technology.


Abraham, Kavner, and Moon’s model makes several deliberate simplifications. It focuses on two principal actors that are assumed to have symmetric information and capabilities. Each country selects one of two strategies in a one-shot game: a Baseline approach that prioritizes risk mitigation or an Accelerated approach that prioritizes speed and competitive advantage. These choices are made simultaneously, with each country initially unaware of the other’s move. Crucially, it is assumed that if either country accelerates, both share the increased risk of adverse outcomes, reflecting the global, systemic nature of AGI risk. The payoff structure bundles together the long-term, discounted rewards of both developing and deploying AGI, rather than distinguishing between them. The winner of the AGI race is determined probabilistically, with the likelihood of success shaped by the chosen strategies.

Under these assumptions, the central insight of the model is that the strategic behavior of countries in the AGI race hinges on the balance between the rewards of being first and the risks associated with rapid development. This trade-off reduces to a threshold condition16 comparing the additional expected benefit of accelerating with the expected cost from increased risk. If the benefits outpace the risks, both countries are incentivized to accelerate, even at the expense of a greater likelihood of adverse outcomes. This dynamic mirrors the classic Prisoner’s Dilemma: Each country fears losing out if it chooses caution while the other accelerates, leading both to defect and intensify collective risk. Conversely, if the risks outpace the benefits, acceleration no longer dominates. In this case, there are two equilibria: one in which both countries accelerate development of AGI, and another in which both prioritize risk mitigation over acceleration.

Why do two equilibria emerge in the latter case? Intuitively, when the costs of acceleration are high relative to the rewards, neither country wants to unilaterally shift its strategy without assurance that the other will do the same. If both countries choose accelerated development, neither has an incentive to switch to baseline (non-accelerated) development alone, since unilateral restraint would result in lower rewards without guaranteeing safety. By contrast, if both countries choose baseline development, neither is tempted to accelerate, as the additional cost is not justified by the potential reward. This mutual dependence creates a situation where both cooperative (Baseline, Baseline) and competitive (Accelerated, Accelerated) outcomes are stable, and the challenge becomes one of coordination: aligning on the cooperative equilibrium and ensuring both sides have confidence in each other’s commitments. Thus, this condition produces the well-known Coordination game.17
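A minimal sketch of this one-shot game shows how the threshold flips it from a Prisoner's Dilemma into a Coordination game. The payoff numbers below are illustrative placeholders, not Abraham, Kavner, and Moon's calibration; the structure follows the assumptions described above (a shared risk cost borne by both countries if either accelerates, and a probability p < 1/2 that the cautious country still wins the race):

```python
def agi_game(W, L, C, p):
    """Expected payoffs in a stylized one-shot AGI race (illustrative).
    W: first-mover reward, L: second-mover reward,
    C: shared cost of increased risk (borne by both if either accelerates),
    p: probability the Baseline country still achieves AGI first
       when the other country accelerates (assumed p < 1/2).
    Strategies: "B" = Baseline, "A" = Accelerated."""
    ev_tie = 0.5 * (W + L)  # symmetric 50/50 race when strategies match
    return {
        ("B", "B"): (ev_tie, ev_tie),
        ("A", "A"): (ev_tie - C, ev_tie - C),
        ("A", "B"): ((1 - p) * W + p * L - C, p * W + (1 - p) * L - C),
        ("B", "A"): (p * W + (1 - p) * L - C, (1 - p) * W + p * L - C),
    }

def nash_equilibria(payoffs):
    """All pure-strategy profiles where neither player gains by deviating."""
    moves = ["B", "A"]
    eq = []
    for a in moves:
        for b in moves:
            u1, u2 = payoffs[(a, b)]
            if all(payoffs[(d, b)][0] <= u1 for d in moves) and \
               all(payoffs[(a, d)][1] <= u2 for d in moves):
                eq.append((a, b))
    return eq

# The gain from accelerating against a cautious rival is (1/2 - p)(W - L).
# When that gain exceeds the risk cost C, the game is a Prisoner's Dilemma:
assert nash_equilibria(agi_game(W=100, L=20, C=10, p=0.2)) == [("A", "A")]
# When C exceeds the gain, two equilibria coexist -- a Coordination game:
assert nash_equilibria(agi_game(W=100, L=20, C=40, p=0.2)) == [("B", "B"), ("A", "A")]
```

The same payoff structure thus produces either game; only the relative size of the acceleration benefit and the shared risk cost changes.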

Of course, this stylized model does not fully capture that the real-world AGI race is an ongoing process. Countries and firms make incremental investments, update their strategies, and respond to new information over time. This temporal dimension introduces dynamic incentives for both cooperation and defection.

When the model is extended to a repeated setting, the probability of AGI being developed in any given round is uncertain, and interim rewards from ongoing AI progress can be substantial. Folk theorems established in game theory18 illuminate conditions under which mutual prioritization of risk mitigation over acceleration can emerge. In particular, long-term cooperation becomes stable and attractive as long as the probability of AGI emergence in any round is sufficiently low and the interim rewards for cooperation are relatively high. Conversely, if the timeline shortens, the probability of being a first mover under acceleration is high, or interim rewards from cooperation are relatively low, then competitive pressures intensify. In this case, long-term cooperation destabilizes as countries prefer to accelerate their AGI development.19
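The repeated-game intuition can likewise be sketched numerically. The model below is a simplified grim-trigger variant under assumed parameters, not the authors' specification: each round of mutual restraint yields an interim reward r, AGI emerges with some per-round probability, and a unilateral accelerator triggers mutual acceleration thereafter:

```python
def value_cooperate(r, W, L, q, delta):
    """Value of mutual Baseline play: interim reward r each round; with
    per-round probability q, AGI emerges and the game ends in a symmetric
    50/50 race with no added risk cost.  Solves the stationary equation
    V = r + q*0.5*(W+L) + (1-q)*delta*V."""
    return (r + q * 0.5 * (W + L)) / (1 - (1 - q) * delta)

def value_defect(r, W, L, C, q_a, p, delta):
    """Value of unilaterally accelerating, after which (grim trigger) both
    sides accelerate forever: emergence probability rises to q_a, the
    deviator wins with probability 1-p in the first round and 1/2 in the
    punishment phase, and both bear the risk cost C."""
    v_pun = (r + q_a * (0.5 * (W + L) - C)) / (1 - (1 - q_a) * delta)
    return r + q_a * ((1 - p) * W + p * L - C) + (1 - q_a) * delta * v_pun

def cooperation_stable(r, W, L, C, q, q_a, p, delta):
    """Mutual restraint is sustainable if deviating does not pay."""
    return value_cooperate(r, W, L, q, delta) >= value_defect(r, W, L, C, q_a, p, delta)

# Long timelines (low q) with steady interim rewards sustain cooperation...
assert cooperation_stable(r=5, W=1000, L=200, C=500, q=0.02, q_a=0.1, p=0.2, delta=0.95)
# ...a short timeline under acceleration and a strong first-mover edge tip
# both sides into racing...
assert not cooperation_stable(r=5, W=1000, L=200, C=500, q=0.02, q_a=0.6, p=0.05, delta=0.95)
# ...and richer interim rewards from cooperation can restore stability.
assert cooperation_stable(r=20, W=1000, L=200, C=500, q=0.02, q_a=0.6, p=0.05, delta=0.95)
```

The assertions trace the comparative statics described above: cooperation's stability rises with the interim reward stream and falls as the per-round probability of AGI emergence under acceleration grows.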

 

Strategic Implications for the Real World

For policymakers, a useful insight from Abraham, Kavner, and Moon’s analysis is that a race that results in unaligned AGI is not a foregone conclusion. Instead, the incentives driving countries to accelerate AI development are contingent on perceptions of the magnitude of the first-mover advantage and of the likelihood and severity of adverse outcomes. When the expected risks posed by unaligned or uncontrolled AGI are judged to be sufficiently grave relative to the expected first-mover rewards, the incentives for national leaders shift toward prioritizing risk mitigation.

In such a scenario, the rational choice for each country can shift away from competitive acceleration and toward cooperation. However, this shift from rivalry to restraint is neither automatic nor easy. The strategic landscape transforms from a dilemma of defection—where each side is incentivized to “go it alone” for fear of being outpaced—to a complex challenge of coordination, where the benefits of cooperation exist but must be actively realized.


For cooperation to be robust and sustainable, several preconditions would likely need to be met. First, there must be alignment in how countries perceive both the risks and rewards of AGI development. This may require dialogue, technical exchange, and a shared understanding of what constitutes trustworthy AGI. Second, credible mechanisms for information-sharing and transparency would be needed to monitor compliance with agreed-upon development norms. Third, understanding how monitoring translates into trust (e.g., through potentially feasible verification mechanisms) is important for cooperation.20

Abraham, Kavner, and Moon’s model also highlights that the tipping point—the moment when cooperation becomes viable—is itself dynamic and context dependent. Factors such as advances in technical safety, new information about AGI timelines, shifts in national priorities, or the introduction of new global governance mechanisms can all influence this balance. For example, if incremental progress in AI brings significant interim benefits to economic growth, health care, or national security, then pursuing measured, coordinated development that prioritizes risk mitigation becomes more attractive. Mechanisms that share the costs of this risk can further lower barriers to cooperation. On the other hand, if the likelihood of AGI emergence increases rapidly, the perceived advantage of being first grows, or companies insulate themselves from liability, then the incentive to accelerate development may dominate.

Ultimately, the trajectory of the AGI race is shaped not just by national ambition or technological capability, but by the evolving structure of incentives, the quality of information and mechanisms to communicate that information, and the perceived magnitude of shared risks. Given this, it will be important to continually reassess the balance between competition and cooperation as new information emerges and the strategic environment shifts.

 

Charting a Path Forward

The accelerating competition between the United States and China has become the central axis of the global AGI race. Both countries are investing heavily to secure technological leadership, driven by concerns over economic advantage and influence on international norms. As each country seeks to outpace the other, the stakes continue to rise.

The US-China relationship has far-reaching consequences, shaping not only the pace of innovation but also the development of global standards and the potential for international collaboration. Their actions set the tone for other countries and organizations involved in transformative AI development, influencing strategic decisions worldwide. The choices made by these leading actors will continue to reverberate throughout the international system.

That is why understanding the incentives and risks inherent in this race is essential for effective policy. Policymakers must be able to clearly define the strategic dynamics at play to identify opportunities for cooperation, manage escalation, and establish risk mitigation when necessary. Only with this clarity can leaders balance national interests with the broader goal of global stability.

While game theory offers a valuable starting point for analyzing these dynamics, it remains an evolving tool. Continued research that reflects the ever-changing complexity of AI competition will be crucial for guiding sound policy and fostering international understanding. Developing a deeper understanding of the strategic game and applying these insights to confront the related real-world challenges would go a long way to ensuring that opportunities for both cooperation and competition are not squandered on the path to transformative AI.


1. Merrill M. Flood, Some Experimental Games (U.S. Air Force Project, RAND Research Memorandum 789-1, revised June 20, 1952), https://www.rand.org/content/dam/rand/pubs/research_memoranda/2008/RM789-1.pdf.

2. Steven Kuhn, “Prisoner’s Dilemma,” in Stanford Encyclopedia of Philosophy, published September 4, 1997, substantive revision April 2, 2019, https://plato.stanford.edu/archives/fall2025/entries/prisoner-dilemma/.

3. Jim Mitre and Joel B. Predd, Artificial General Intelligence’s Five Hard National Security Problems (RAND Corporation, Expert Insights, 2025), https://www.rand.org/pubs/perspectives/PEA3691-4.html; Colin H. Kahl and Jim Mitre, The Real AI Race: America Needs More Than Innovation to Compete With China (RAND Corporation, Foreign Affairs, 2025), https://www.foreignaffairs.com/united-states/china-real-artificial-intelligence-race-innovation.

4. Dan Hendrycks, Mantas Mazeika, and Thomas Woodside, “An Overview of Catastrophic AI Risks,” preprint, arXiv.org, October 2023, https://doi.org/10.48550/arXiv.2306.12001; Jiaming Ji et al., “AI Alignment: A Comprehensive Survey,” preprint, arXiv.org, April 4, 2025, https://doi.org/10.48550/arXiv.2310.19852; Mitre and Predd, Artificial General Intelligence’s Five Hard National Security Problems; Miles Brundage et al., “Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims,” preprint, arXiv.org, April 20, 2020, https://doi.org/10.48550/arXiv.2004.07213.

5. The White House, “Winning the Race: America’s AI Action Plan,” July 2025, https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf.

6. See, for example, Stuart Armstrong, Nick Bostrom, and Carl Shulman, “Racing to the Precipice: A Model of Artificial Intelligence Development,” AI & Society 31, no. 2 (2016): 201–206, https://doi.org/10.1007/s00146-015-0590-y; The Anh Han, Luis M. Pereira, Tom Lenaerts, and Francisco C. Santos, “Mediating Artificial Intelligence Developments Through Negative and Positive Incentives,” PLOS ONE 16, no. 1 (January 26, 2021): e0244592, https://doi.org/10.1371/journal.pone.0244592; McKay Jensen, Nicholas Emery-Xu, and Robert Trager, “Industrial Policy for Advanced AI: Compute Pricing and the Safety Tax,” preprint, arXiv.org, February 23, 2023, https://doi.org/10.48550/arXiv.2302.11436; Robin Young, “Who’s Driving? Game Theoretic Path Risk of AGI Development,” preprint, arXiv.org, January 25, 2025, https://doi.org/10.48550/arXiv.2501.15280.

7. Leonard Dung and Max Hellrigel-Holderbaum, “Against Racing to AGI: Cooperation, Deterrence, and Catastrophic Risks,” preprint, arXiv.org, July 29, 2025, https://doi.org/10.48550/arXiv.2507.21839.

8. Corin Katzke and Gideon Futerman, “The Manhattan Trap: Why a Race to Artificial Superintelligence Is Self-Defeating,” preprint, arXiv.org, December 22, 2024, https://doi.org/10.48550/arXiv.2501.14749.

9. Sarah Kreps, “Racing to Clarity: How Accelerating AGI Development Could Enhance Strategic Stability,” in The Artificial General Intelligence Race and International Security (RAND, 2025): 5–8.

10. Nur Ahmed, Muntasir Wahed, and Neil C. Thompson, “The Growing Influence of Industry in AI Research,” Science 379, no. 6635 (March 2023): 884–886, https://doi.org/10.1126/science.ade2420; Nur Ahmed and Neil C. Thompson, “What Should Be Done About the Growing Influence of Industry in AI Research?,” Brookings, December 5, 2023, https://www.brookings.edu/articles/what-should-be-done-about-the-growing-influence-of-industry-in-ai-research/.

11. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 on Artificial Intelligence, OJ L, 2024/1689, December 7, 2024, http://data.europa.eu/eli/reg/2024/1689/oj.

12. Luis Garicano and Jesús Saa Requejo, “The Smart Second Mover: A European Strategy for AI,” Silicon Continent, July 9, 2025, https://www.siliconcontinent.com/p/the-smart-second-mover.

13. Julian Gerwitz, “Global AI Rivalry Is a Dangerous Game,” Financial Times, July 28, 2025, https://www.ft.com/content/9f5cbee8-b09c-4274-bda9-7245ca97352e.

14. George W. Downs, David M. Rocke, and Randolph M. Siverson, “Arms Races and Cooperation,” World Politics 38, no. 1 (1985): 118–146, https://www.acsu.buffalo.edu/~fczagare/PSC%20504/DownsRockeSiverson.pdf.

15. Lisa Abraham, Joshua Kavner, and Alvin Moon, “A Prisoner’s Dilemma in the Race to Artificial General Intelligence,” RAND, forthcoming in 2025.

16. This threshold condition compares the additional expected benefit of accelerating, (1/2 − p)(W − L), with the expected cost C from increased risk, where W is the payoff to the first mover (economic, strategic, and geopolitical rewards), L is the payoff to the second mover, C is the shared cost from increased risk, and p is the probability that a country that chooses the Baseline strategy is the first to achieve AGI if the other country accelerates. It is assumed that p < 1/2, reflecting that the country that prioritizes risk mitigation over acceleration is less likely to be the first mover. See Abraham, Kavner, and Moon, “A Prisoner’s Dilemma in the Race to Artificial General Intelligence” for more detail.

17. Duncan Snidal, “Coordination versus Prisoners’ Dilemma: Implications for International Cooperation and Regimes,” American Political Science Review 79, no. 4 (1985): 923–942, https://doi.org/10.2307/1956241.

18. Drew Fudenberg and Eric Maskin, “The Folk Theorem in Repeated Games with Discounting or with Incomplete Information,” Econometrica 54 (May 1986): 533–554, https://doi.org/10.2307/1911307.

19. See Abraham, Kavner, and Moon, “A Prisoner’s Dilemma in the Race to Artificial General Intelligence” for more details on the model.

20. Mauricio Baker et al., “Verifying International Agreements on AI: Six Layers of Verification for Rules on Large-Scale AI Development and Deployment,” preprint, arXiv.org, July 2025, https://doi.org/10.48550/arXiv.2507.15916v1.
