Playing for the Future: A Game-Theoretic View of the AI Race and National Security
- Michelle A. Santiago Negrón

Michelle A. Santiago Negrón
Student, Department of Political Science,
Universidad de Puerto Rico, Recinto de Río Piedras
Abstract: This article explores, through the lens of the security dilemma, the race to dominate advances in artificial intelligence while maintaining the national security of modern states. It draws on the prisoner's dilemma, a game-theoretic model in which decisions are ultimately made on the probability of benefit, conditioned by mistrust among the participants. It also considers the Bayesian game model, in which players operate without complete information about the other parties. In both models, decisions are made rationally, weighing the potential development of artificial intelligence for non-peaceful purposes and the security dilemma in which states are involved. These decisions are driven by mistrust in the face of increasingly complex or catastrophic scenarios for the parties involved. The analysis focuses on explaining, through these models, how various states assume their roles under incomplete information and in geopolitically decisive settings for technological advancement.
Keywords: national security, artificial intelligence, security dilemma, geopolitics
Citation:
Note: Michelle A. Santiago Negrón, “Playing for the Future: A Game-Theoretic View of the AI Race and National Security,” in Análisis Emergentes: Compilación de Ensayos Académicos, ed. Instituto Caribeño para el Estudio de la Política Internacional (ICEPI), vol. 1, no. 1, special edition (San Juan, PR: ICEPI, 2025), 12–18.
Bibliography: Santiago Negrón, Michelle A. “Playing for the Future: A Game-Theoretic View of the AI Race and National Security.” In Análisis Emergentes: Compilación de Ensayos Académicos, edited by Instituto Caribeño para el Estudio de la Política Internacional (ICEPI), vol. 1, no. 1, special edition, August–December 2025, 12–18. San Juan, PR: ICEPI, 2025.
Introduction
Artificial Intelligence (AI) refers to a family of advanced digital technologies that enable machines to simulate human learning and effectively perform complex tasks. (Stryker & Kavlakoglu, 2025) It is one of the principal tools for competing in the data-driven age and is considered a strategic high point in the progress of science. AI is forecast to play a pivotal role in improving national competitiveness, and as such, countries around the globe have raced to accelerate the pace of its development. (Yu & Carroll, 2022) Worldwide, AI investments have surged at least 62% in the past year, with companies such as Databricks and OpenAI accruing around $10 billion and $6.6 billion, respectively. (Lunden, 2025) Needless to say, AI is here and is already more prevalent than is immediately apparent. AI powers smartphones, optimizes the delivery of packages, and answers phone calls; its applications are vast and seemingly endless. Within the realm of national security, AI research is underway in intelligence collection, analysis, logistics, cyber operations, and information operations. It is a truly transformative technology. However, while AI holds significant promise for addressing many of the challenges that plague the modern world, its quick progress has opened a plethora of security issues and ethical challenges, particularly where national security is concerned.
AI is both a boon and a double-edged sword: it may promote social progress and liberate humans from repetitive labor, while its unregulated and careless use may lead to catastrophic consequences. (Yu & Carroll, 2022) As a result, what many have characterized as an AI arms race has erupted, with nations competing either to develop the most advanced technology or to be the first to regulate it, setting the rules of the game. More specifically, this discourse centers on AI components of autonomous weapons and other similar AI applications. Nevertheless, the metaphor is applied to AI technologies more generally. (Schmid, 2025) Setting aside how these heavy-handed metaphors may blur the lines between economy and security in contemporary power politics, the widespread adoption of military AI and the race to accelerate its development could push nations to cut corners on testing, leading to the deployment of unsafe AI systems. (Scharre, 2025) These AI systems, in turn, drum up fears around the globe, causing other nations to escalate. Therein lies the crux of the matter, and the heart of the game. To properly comprehend the interactions between nations in the AI innovation race, game theory is crucial. As a framework, this paper draws on a post-positivist methodology and a hybrid theoretical approach combining elements of realism and constructivism.
Game theory is one of the most conventional theoretical frameworks for modeling decision-making processes in many aspects of our lives. The basic concept of game theory is simply that: a game, meaning any situation whose outcome depends on the choices of two or more decision makers. Those who make the choices are called players, individuals or groups who generally operate as a coherent unit. (Ho et al., 2022) Most studies make use of the concepts found within noncooperative game theory. In essence, it covers any game in which “the players are unable to irrevocably commit themselves to a particular course of action, for whatever reason.” (Zagare, 2019, p. 8) As the international system lacks an overarching authority that can enforce commitments or agreements, noncooperative game theory holds a particular allure for many theorists.

Game theorists have devised numerous ways to represent a game’s structure, the first and one of the most basic being the strategic form. In it, players choose strategies, a complete contingency plan specifying a player’s choice in every situation that might arise, before the actual play of the game. A canonical example is the Arms Race Game, better known as the Prisoner’s Dilemma. In this case, each state has two strategies: to cooperate by not arming, or to defect from cooperation by arming. If neither chooses to arm, the outcome is a compromise. If both decide to arm, both lose while the race takes place. In either case a military balance is maintained, but at significantly different costs. Finally, if one state arms while the other does not, the first gains a strategic advantage while the other is put at a military disadvantage. (Zagare, 2019) The two players are assumed to be rational actors, which means they intend to maximize their utility, pursuing the outcome that best serves their interests; it does not mean they are necessarily intelligent.
As both are assumed to be rational actors, each has a dominant strategy within the game: a choice that serves it best regardless of what the other does. This is where the Nash equilibrium comes into play: both players choose to arm, since unilaterally not arming would leave that player with a significantly worse individual outcome and exposed to risk, even though mutual disarmament would serve both better. (Ho et al., 2022)
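The logic above can be made concrete with a small sketch. The payoff numbers below are hypothetical ordinal values (higher is better for that player), not figures from the literature; they merely preserve the ranking the text describes: strategic advantage > compromise > costly arms race > military disadvantage.

```python
# The Arms Race Game (Prisoner's Dilemma) in strategic form.
# "C" = cooperate (do not arm), "D" = defect (arm).
# Payoff values are hypothetical; only their ordering matters.
from itertools import product

# payoffs[(row_choice, col_choice)] = (row_payoff, col_payoff)
payoffs = {
    ("C", "C"): (3, 3),  # compromise: military balance at low cost
    ("C", "D"): (1, 4),  # unilateral restraint: military disadvantage
    ("D", "C"): (4, 1),  # strategic advantage over the unarmed state
    ("D", "D"): (2, 2),  # arms race: balance maintained at high cost
}

def nash_equilibria(payoffs):
    """Return strategy pairs where neither player gains by deviating alone."""
    eqs = []
    for r, c in product("CD", repeat=2):
        row_ok = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0] for alt in "CD")
        col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1] for alt in "CD")
        if row_ok and col_ok:
            eqs.append((r, c))
    return eqs

print(nash_equilibria(payoffs))  # [('D', 'D')]: both states arm
```

Mutual armament is the unique equilibrium even though mutual disarmament, (3, 3), would leave both players better off, which is precisely the dilemma.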
The strategic form was one of the first ways scholars modeled these kinds of dilemmas. Lewis Fry Richardson developed a model of arms races, published in 1960, which described the dynamic interaction between two players in a conflict situation. Richardson’s model examined how the arms levels of two countries change over time, influenced by each other’s actions and internal constraints. His starting point consisted of three hypotheses about why nations increase or decrease their armaments:
(1) Out of fear of military insecurity, country A will make increases in its “armaments” proportional to the level of country B's armaments. B will respond in a similar way to A's armaments. (2) The burden of armaments upon the economy of the country imposes a restraint upon further expenditure. This restraint is proportional to the size of the existing force. (3) There are hostilities, ambitions, and grievances that drive nations to arm at a constant rate in the absence of a military threat from another nation. (Caspary, 1967, p. 64)
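Richardson’s three hypotheses map naturally onto a pair of coupled linear differential equations, one term per hypothesis. The sketch below integrates them numerically; all coefficients and initial levels are hypothetical, chosen only to show the feedback loop settling into a stable balance.

```python
# A minimal numerical sketch of Richardson's arms race equations:
#   dx/dt = k*y - alpha*x + g
#   dy/dt = l*x - beta*y + h
# k, l        : reaction to the rival's armaments (hypothesis 1)
# alpha, beta : economic restraint on one's own level (hypothesis 2)
# g, h        : standing grievances and ambitions (hypothesis 3)
def richardson(x0, y0, k, l, alpha, beta, g, h, dt=0.01, steps=5000):
    """Euler-integrate the two coupled equations and return final levels."""
    x, y = x0, y0
    for _ in range(steps):
        x, y = (x + dt * (k * y - alpha * x + g),
                y + dt * (l * x - beta * y + h))
    return x, y

# When restraint outweighs reaction (alpha * beta > k * l), arms levels
# converge to a stable balance instead of escalating without bound.
x, y = richardson(x0=1.0, y0=1.0, k=0.5, l=0.5,
                  alpha=1.0, beta=1.0, g=0.2, h=0.2)
```

With these symmetric illustrative parameters both levels settle near 0.4; raising the reaction coefficients past the restraint terms makes the same loop diverge, which is the runaway race the model was built to capture.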
Richardson’s model, like the Arms Race Game, provided a clear framework for understanding security dilemmas and arms races, this time using feedback loops based on observed behaviors. However, William R. Caspary (1967) argued that Richardson’s model assumes rational escalation but lacks strategic nuance. Nations will always strive to keep some minimum safe ratio between their forces and their opponents’, even if the other side totally disarms, especially if they hold aggressive motives. Arms races are molded by many more factors than Richardson initially stated. Smaller nations would naturally behave differently from larger ones, but Caspary observed that, generally, states would devote a wide scope of resources purely to weaponry; social, political, and ideological motives often override economic logic. (Caspary, 1967) While Caspary’s model offers a significant and constructive critique of Richardson’s simplification, defenses of Richardson’s formulation have arisen, suggesting that its mathematical underpinnings may still hold explanatory power under specific conditions. (Banks, 1975) Regardless, both Caspary’s and Richardson’s work offers a comprehensive framework of analysis for the AI competition currently unfolding around the globe.

However, what occurs when players have incomplete information regarding the motives of the other? According to Shmuel Zamir (2008), a Bayesian game is “[a]n interactive decision situation involving several decision makers (players) in which each player has beliefs about (i.e. assigns probability distribution to) the payoff relevant parameters and the beliefs of the other players.” (Zamir, 2008, p. 1) In other words, Bayesian games assume that decision makers have only partial information about the data of the game and about the other players, a more accurate reflection of real-life situations. Players hold beliefs about the beliefs of other players, who in turn hold beliefs about those beliefs. In short, this creates what is termed an infinite hierarchy of beliefs. (Zamir, 2008) Turtles all the way down, so to speak. Bayesian games are relevant to the matter at hand because they emphasize that state actors operate under uncertainty and update their beliefs based on their perceptions of the actions of others.
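The updating step at the heart of a Bayesian game is just Bayes’ rule. The toy sketch below is entirely hypothetical: a state holds a prior belief about whether a rival is “aggressive” or “benign”, observes the rival make a large AI investment, and revises its belief. The types, prior, and likelihoods are invented for illustration.

```python
# Bayesian belief updating under incomplete information (illustrative only).
def bayes_update(prior, likelihoods, observation):
    """Return posterior P(type | observation) given P(type) and P(obs | type)."""
    joint = {t: prior[t] * likelihoods[t][observation] for t in prior}
    total = sum(joint.values())
    return {t: p / total for t, p in joint.items()}

prior = {"aggressive": 0.3, "benign": 0.7}
# Assumed chance that each type is seen making a large AI investment:
likelihoods = {
    "aggressive": {"big_investment": 0.9, "no_investment": 0.1},
    "benign":     {"big_investment": 0.4, "no_investment": 0.6},
}

posterior = bayes_update(prior, likelihoods, "big_investment")
# An ambiguous signal still shifts belief toward the aggressive type:
# P(aggressive) rises from 0.30 to roughly 0.49.
```

The point is not the numbers but the mechanism: even an investment made for benign reasons moves the observer’s belief toward hostility, and successive observers reasoning about each other this way generate exactly the hierarchy of beliefs described above.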
As mentioned at the start, it is crucial to avoid overusing arms race metaphors, which risk oversimplification or, worse, fallacious Cold War analogies that do not serve to properly comprehend the contemporary situation. Still, game theory provides a framework for understanding state behavior in AI development. Richardson’s model grants a foundational view of competitive escalation, whereas Caspary’s critique warns against classifying such situations as purely reactive. The United States (US) increased its annual and supplemental AI funding by over $2.8 billion between fiscal years 2021 and 2025, a growth of roughly 6% per year. (Holohan, 2025) Following it is the People’s Republic of China, which spent around $9.6 billion in 2024, with its tech giants seeking to boost capital expenditure in 2025. The US and China are seemingly embroiled in this innovation competition, each responding to the actions of the other. In 2017, China launched the Next Generation AI Development Plan, which seeks to establish the nation as a global hub for AI innovation by 2030. (World Economic Forum, 2025) The US responded by issuing export controls on AI semiconductors and boosting AI investment. (Kennedy, 2025) However, as Caspary noted, these races are not purely reactive, though they might initially appear so. Political discourse within the United States has grown increasingly isolationist, pivoting hard away from liberal institutionalism and towards protectionism and border controls. Powerful trends such as changing demographics and burgeoning automation have fueled this, reinforcing the drift towards American unilateralism. (Beckley, 2025) All of these factors equally fuel the evolving competitive dynamics observed in the modern era, especially concerning American interactions with technological giants like China.
Even if a nation is not overtly hostile in its AI investments, other states update their beliefs accordingly and act in response. However, a state can only respond to the information at its disposal, and if that information is limited, it can lead to irrational or distorted assumptions regarding the motives of another. Take, once more, the case of China and the United States. While the United States is significantly more transparent regarding its investments and information, China is less so. Ergo, the United States can only act in response to China’s public announcements or the occasional leak, which may not give a complete picture of the nation’s public policy. Between obfuscation and ignorance, fear rises. Although noncooperative models are more common in international relations, as Caspary’s critique of Richardson’s model observed, smaller nations may approach their foreign policy differently. To illustrate, consider the case of Rwanda and Singapore. In September 2024, at the United Nations summit, Rwanda and Singapore collaborated to launch the world’s first AI playbook, highlighting how small but ambitious nations can make a significant global impact on AI development and regulation. (Ng, 2024) Singapore is lauded for its leadership in the digital economy, while Rwanda is a nascent tech hub in Africa; the collaboration is a powerful one, and they are trailblazers for other nations. The AI Playbook aims to provide guidelines for countries, particularly developing nations, on how to responsibly integrate AI into their economies by addressing key aspects of AI deployment, such as data privacy, ethical standards, and the need for transparency in their models. (First AI Playbook for Small States to Shape Inclusive Global AI Discourse, 2024) Both Rwanda and Singapore understand that they lack the capacity to engage in an AI race at the length and scale of larger nations such as China and the United States. With incomplete information, they choose to cooperate, pooling their resources and achieving a better outcome than they would have individually.
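Why pooling can dominate going it alone for small states can be shown with a deliberately simple sketch. All numbers are hypothetical: each state’s payoff is modeled as the capability it can field after paying the fixed cost of a serious AI program, and cooperation lets the two states pay that cost once and split the surplus.

```python
# Illustrative resource-pooling payoffs for two small states (hypothetical).
def solo_payoff(resources, fixed_cost=6.0):
    """Going alone: the state pays the full fixed cost of an AI program."""
    return max(resources - fixed_cost, 0.0)

def pooled_payoff(resources_a, resources_b, fixed_cost=6.0):
    """Cooperating: the fixed cost is paid once, and the surplus is split."""
    surplus = max(resources_a + resources_b - fixed_cost, 0.0)
    return surplus / 2

# Two small states with 5 units of resources each:
alone = solo_payoff(5.0)            # 0.0: neither clears the fixed cost alone
together = pooled_payoff(5.0, 5.0)  # 2.0 each: cooperation clears it jointly
```

Under these assumed numbers, cooperation is not merely preferable but the only move that yields any payoff at all, which mirrors the Rwanda–Singapore logic: states too small to race can still shape outcomes by coordinating.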
National security in the context of technological advancement is best understood through a hybrid framework that integrates non-cooperative, and in some cases cooperative, game theory, critiques of classical escalation models, and Bayesian uncertainty. Analyzing the strategic cooperation of smaller states such as Singapore and Rwanda against the more aggressive behavior of larger powers shows how states navigate resource constraints and uncertainty differently. As nascent technologies continue to mold and shape our world, further study is needed to assess how states may appropriately respond to the security dilemmas that arise in a way that does not lead to an accelerated staccato tempo of warfare.
Biography in attached document.
