In January 2024, Eoin McNamara of the Finnish Institute of International Affairs (FIIA) published a briefing paper examining what lessons nuclear arms control can offer for approaching the regulation of AI advancement.
“There is policy value for AI safety to be gained from incorporating some parallels from nuclear arms control,” explains Eoin. “Great power leaders frequently equate AI advancement with arms racing, reasoning that powers lagging behind will soon see great power status weakened. This intensifies competition, risking a spiral into more unsafe AI practices. With the current absence of formal international treaties, enhanced AI safety remains possible through informal management where great powers reciprocate risk-reduction measures in AI’s more dangerous developmental areas.”
Abstract
Multiple nuclear arms control treaties have collapsed in recent years, but analogies associated with them have returned as possible inspiration to manage risks stemming from artificial intelligence (AI) advancement.
Some welcome nuclear arms control analogies as an important aid to understanding strategic competition in AI, while others see them as an irrelevant distraction, weakening the focus on new frameworks to manage AI’s unique and unprecedented aspects.
The focus of this debate is sometimes too narrow or overly selective; a wider examination of arms control geopolitics can identify both irrelevant and valuable parallels to assist global security governance for AI.
Great power leaders frequently equate AI advancement with arms racing, reasoning that powers lagging behind will soon see their great power status weakened. This logic serves to intensify competition, risking a spiral into more unsafe AI practices.
The global norm institutionalisation that established nuclear taboos can also stigmatise unethical AI practices. Emphasising reciprocal risk reduction offers pragmatic starting points for great power management of AI safety.
Full citation
McNamara, E.M., “Nuclear Arms Control Policies and Safety in Artificial Intelligence”, Finnish Institute of International Affairs (FIIA) Briefing Paper 381, January 2024, https://www.fiia.fi/en/publication/nuclear-arms-control-policies-and-safety-in-artificial-intelligence.