Year Ahead – The Hurdles to International Regulation of AI Tools

by Ashley Deeks | Jan 5, 2023

Autonomous weapons

In 2023, non-governmental organizations such as Human Rights Watch and Stop Killer Robots will continue their calls for a new international legal framework to regulate autonomous weapons systems. Some States and scholars are optimistic about the possibility. These optimists often analogize to nuclear weapons regulation to illustrate that States sometimes have been willing to limit their own flexibility even in strategic and sensitive areas, such as the competition posed by the AI “arms race.”

However, this analogy is flawed. There are good reasons to be skeptical about the prospect that States will achieve a new, robust multilateral agreement that implicates the development of lethal autonomous systems or other “national security AI.” The threat posed by these systems seems less tangible and, for now, less existential than that posed by nuclear weapons. The leading AI States perceive themselves as differently situated from one another (unlike the United States and the Soviet Union in the nuclear setting). Further, unlike with nuclear weapons, the development and use of AI tools are closely guarded secrets.

There is a better analogy to be made, one that I think more accurately predicts what will happen in international discussions about autonomous weapons: efforts to regulate hostile cyber operations (HCO). Because HCO and autonomous weapons share important characteristics, State efforts to identify certain HCO as internationally impermissible offer lessons about how efforts to regulate autonomous weapons internationally are likely to unfold.

Specifically, those efforts suggest that: (1) a binding global agreement containing new rules about autonomous weapons will be beyond reach unless and until there is a major crisis; (2) basic agreement about how existing international law applies to autonomous weapons may be possible, though subject to contestation; (3) close allies can do useful work to set guardrails around certain uses of autonomous weapons, at least among themselves; (4) there may be very narrow areas (such as nuclear command and control) in which the United States, Russia, and China might agree not to deploy autonomous tools; and (5) most work to sketch out expectations about how States should conduct themselves will be done unilaterally, in the form of government statements and indictments.

It will be very difficult, absent an international crisis, to achieve a global agreement about what types and uses of autonomous weapons are acceptable. I anticipate that, as with HCO, the bulk of the work in establishing norms for the use of autonomous weapons will, at least in the near term, take place domestically and unilaterally. In fact, for both HCO and autonomous weapons, it is more likely that small groups of like-minded States will simply focus on developing their tools in a way that comports with their own values, while using levers such as espionage, covert action, sanctions, and criminal prosecution to slow and contest their adversaries’ perceived misuse of those tools.

***

Ashley Deeks is a Professor of Law at the University of Virginia Law School, where she teaches international law and national security law.

Photo credit: Pexels
