Why Binding Limitations on Autonomous Weapons Will Remain Elusive

by Brian L. Cox | Jan 27, 2022


Recent calls to abandon existing and ongoing legal processes for developing regulations for autonomous weapons should be evaluated cautiously, even suspiciously. This post proposes a more productive path toward effective regulation and meaningful international consensus, grounded in a practical, categorization-based definition of autonomous weapons.

Efforts to Regulate Autonomous Weapons: Clarion Call or Siren Song?

The warnings from civil society are increasingly dire. As a headline for a briefing paper published last month by Human Rights Watch (HRW) contends, it is now “Crunch Time on Killer Robots.” According to the joint HRW and Harvard Law School International Human Rights Clinic paper, the “emergence of autonomous weapons systems and the prospect of losing meaningful human control over the use of force are grave threats that demand urgent action.”

If the States Party to the 1980 Convention on Certain Conventional Weapons (CCW) are “unable to reach consensus on a mandate to negotiate a new protocol on autonomous weapons systems when they convene in December,” the paper suggests that “an independent process” outside of the CCW framework “may prove to be the most inclusive and efficient alternative.”

The December meeting of CCW States Party indeed did not produce a new protocol on autonomous weapons. Although the latest round of CCW meetings has been characterized by the International Committee of the Red Cross (ICRC) as “a real missed opportunity and not in our view what is needed to respond to the risks posed by autonomous weapons,” lessons from past efforts to regulate or ban weapons outside the CCW framework suggest the plea to do so now is a siren song rather than a clarion call.

It is in the strategic interest of the United States and its allies to remain committed to the CCW process rather than to entertain proposals for another ad hoc treaty. I made this case several months ago in a guest post for Maj. Gen. (ret.) Charlie Dunlap’s Lawfire blog site. The main point of that argument—that “truly outlawing autonomous weapons is not a likely outcome” of a standalone treaty—remains just as pertinent today as it was then.

While the focus of advocacy organizations such as the ICRC and HRW has been to limit the development and implementation of emerging technologies on humanitarian grounds, this is not the exclusive priority of States that are willing and able to invest in autonomous weapons capabilities. If future efforts at building consensus and shared understanding among States are to be successful, several limitations hampering current discourse should be addressed.

This is the goal of a long-term research project I have begun, and the output of that project will be a full-length law journal article manuscript. For now, the “missed opportunity” of the December CCW meeting presents a productive occasion to highlight some central limitations in public discourse. Describing and addressing these limitations may well encourage more constructive future discussions.

What Are We Trying to Regulate, and Why?

Perhaps the most significant and enduring challenge in current discourse involving autonomous weapons is a lack of consensus regarding what exactly “autonomous” weapons are and precisely what it is about them that demands regulation. Colonel Alexander Bolt, the current Deputy Judge Advocate General, Operational and International Law for the Canadian Armed Forces, noted in 2013, “There is no obvious definition of ‘autonomous weapons’, but the definition is key to a meaningful discussion of legal advice in autonomous weapons use.” Former fighter pilot and current robotics professor Missy Cummings more recently observed that present debates are “filled with a lack of technical literacy and emotional rhetoric, often made worse by media and activist organizations that use fear to drive exposure and funding.”

This prevailing lack of technical literacy and consensus related to defining autonomy is a persistent impediment to productive engagement. For their part, the official U.S. Department of Defense (DoD) definitions for various degrees of autonomy in weapons systems are not adequately precise because many existing weapons qualify for multiple definitions depending on the mode in which they operate.

For example, a Counter-Rocket, Artillery, and Mortar (C-RAM) system could qualify as a “semi-autonomous,” “human-supervised,” or “autonomous” weapons system, depending on how it is used in a given combat application. This progressive description of autonomy in weapons systems, from lowest degree of autonomy to highest, is similar to the “human-in-the-loop” to “human-on-the-loop” to “human-out-of-the-loop” categorization scheme published by Human Rights Watch in 2013 and still widely used today.

The DoD and the HRW methods of defining autonomous weapons are insufficiently precise because both are centered on only one technical dimension: the degree of human involvement in the general functioning of the weapons system. This focus limits current discourse because multi-modal systems may qualify for multiple categories and because reduced or nonexistent human involvement leads to divergent concerns depending on the application.
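To see the ambiguity concretely, consider the following minimal Python sketch. It is purely illustrative: the mode names and the mapping from mode to category are my own hypothetical simplification, not drawn from DoD doctrine or the HRW paper. The point is simply that a single multi-modal system earns every label the one-dimensional scheme has to offer.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Set


class LoopCategory(Enum):
    """Single-dimension scheme: degree of human involvement (HRW-style labels)."""
    HUMAN_IN_THE_LOOP = "human selects and approves each engagement"
    HUMAN_ON_THE_LOOP = "system engages on its own; human supervises and can intervene"
    HUMAN_OUT_OF_THE_LOOP = "system engages without human supervision"


@dataclass
class OperatingMode:
    name: str
    category: LoopCategory


@dataclass
class WeaponSystem:
    name: str
    modes: List[OperatingMode]

    def categories(self) -> Set[LoopCategory]:
        # A multi-modal system collapses into several labels at once.
        return {mode.category for mode in self.modes}


# Hypothetical illustration: one counter-projectile system, three possible labels.
c_ram = WeaponSystem(
    name="C-RAM (illustrative)",
    modes=[
        OperatingMode("operator-commanded engagement", LoopCategory.HUMAN_IN_THE_LOOP),
        OperatingMode("supervised automatic engagement", LoopCategory.HUMAN_ON_THE_LOOP),
        OperatingMode("unattended automatic engagement", LoopCategory.HUMAN_OUT_OF_THE_LOOP),
    ],
)

print(c_ram.categories())  # all three labels apply, so the single-dimension label is ambiguous
```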

From “Meaningful Human Control” to a Meaningful Categorization Method

Likewise, the “meaningful human control” formulation that has emerged among humanitarian advocates as the standard from which autonomous weapons must not deviate is not adequately precise. While it is common to express concern that algorithms may soon allow machines to “make life-and-death decisions,” it is not exactly clear how this condition would amount to a lack of meaningful human control.

To return to the C-RAM system, the weapon processes data received by sensors and applies pre-programmed software code to determine whether to engage an incoming target. In doing so, it autonomously applies the judgment of human software engineers who have analyzed data to develop computer programs that can determine whether the incoming object constitutes a threat. So, it is humans, not algorithms, who truly make the targeting decision—even if it is a machine that later autonomously carries out the function it has been programmed to perform. This system is capable of performing these functions with a human “in” or “on” or “out of” the loop, depending on the preference of the operator.
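A toy sketch can make the point about pre-programmed human judgment concrete. The criteria and threshold below are invented solely for illustration and bear no relation to any actual C-RAM implementation; what matters is that the “decision” the running system executes was made in advance by the engineers who wrote the rules.

```python
from dataclasses import dataclass


@dataclass
class TrackedObject:
    """Simplified sensor picture of one incoming object (illustrative fields only)."""
    speed_mps: float                 # measured speed, meters per second
    descending: bool                 # trajectory is inbound toward the defended area
    impact_in_protected_zone: bool   # projected impact point falls inside the defended area


# Threshold chosen by human engineers long before the system is fielded (invented value).
MIN_THREAT_SPEED_MPS = 300.0


def is_threat(obj: TrackedObject) -> bool:
    """Apply criteria selected in advance by human designers.

    The machine executes this judgment at runtime; it does not originate it.
    """
    return (
        obj.speed_mps >= MIN_THREAT_SPEED_MPS
        and obj.descending
        and obj.impact_in_protected_zone
    )
```

Whether a human confirms each engagement, merely supervises, or is absent altogether, the same pre-programmed judgment governs what the system treats as a threat.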

As such, existing methods of categorizing autonomous weapons systems based on a single dimension—the degree of human involvement in the general functionality of the weapon—are too imprecise to be useful in practice. To alleviate this limitation, a standardized method of defining autonomous weapons based on their technical and functional characteristics must be developed and implemented.

This is one of the primary objectives of my ongoing work involving autonomous weapons. Drawing on discussions with technical experts, scholarly research, and my own experience in the fields of combat arms and military law, I am working to develop a method of categorizing autonomous weapons based on specific, identifiable technical and functional characteristics of the systems.

To date, the main dimensions in this emerging categorization scheme include: (1) the principal purpose(s) of the weapon, (2) the degree of autonomous functionality the system can achieve (in various modes, if applicable), (3) the contribution of autonomous capabilities to performing the functions of the system, and (4) the method of computer programming used to achieve specific autonomous functions of the system. By crafting standardized techniques to quantify and describe these features, it is possible to develop a consolidated framework that categorizes autonomous weapons based on their technical and functional characteristics.
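One way to picture how such a framework might be recorded in practice is as a structured profile completed for each system (or for each operating mode). The sketch below is only a hypothetical illustration of the four dimensions; the field names and example values are my own shorthand, not a proposed standard.

```python
from dataclasses import dataclass
from enum import Enum


class ProgrammingMethod(Enum):
    """Dimension (4): how the autonomous functions are achieved (illustrative values)."""
    RULE_BASED = "hand-coded rules and thresholds"
    SUPERVISED_LEARNING = "model trained on labeled data"
    REINFORCEMENT_LEARNING = "policy learned through trial and error"


@dataclass
class AutonomyProfile:
    """One record per system or operating mode, capturing the four dimensions."""
    principal_purpose: str                  # (1) e.g., point defense, loitering attack
    degree_of_autonomy: str                 # (2) e.g., semi-autonomous, supervised, fully autonomous
    autonomous_contribution: str            # (3) which functions autonomy actually performs
    programming_method: ProgrammingMethod   # (4)


# Hypothetical example entry for a counter-projectile system in automatic mode.
example = AutonomyProfile(
    principal_purpose="point defense against incoming projectiles",
    degree_of_autonomy="supervised autonomous (automatic mode)",
    autonomous_contribution="detection, tracking, and engagement of incoming projectiles",
    programming_method=ProgrammingMethod.RULE_BASED,
)
```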

While this more detailed method of categorizing weapons may initially seem too complex to successfully apply in practice, doing so is an essential first step to achieving more productive discourse in the future. Absent enhanced clarity provided by a more detailed method of categorization, consensus related to proposed limitations will remain elusive.

How Does Existing International Law Apply to Autonomous Weapons?

An enhanced understanding of how existing legal frameworks will apply to emerging weapons can also improve current discourse and regulatory efforts. While it is a common refrain in official U.S. publications and CCW reports that existing law of armed conflict (LOAC) rules are adequate to regulate the development and use of autonomous weapons, this sentiment is not entirely accurate.

The tendency to mischaracterize how LOAC rules apply in the targeting context is one primary factor that renders inadequate current attempts to apply existing rules to the development of autonomous weapons. As but one example of this phenomenon, the final report of the National Security Commission on Artificial Intelligence (NSCAI), published in March 2021, is rife with inaccurate expressions of basic LOAC targeting rules.

For example, in articulating the “principle of proportionality,” the report states that the rule “prohibits attacks which would cause incidental loss of civilian life excessive to the anticipated military advantage.” Although the articulation cites to the glossary of the ICRC IHL Casebook for support, this formulation constitutes an inaccurate expression of the actual LOAC proportionality rule.

Longstanding DoD doctrine, which parallels the rule expressed in Additional Protocol I, articulates that personnel engaged in hostilities “must refrain from attacks in which the expected loss of civilian life, injury to civilians, and damage to civilian objects incidental to the attack would be excessive in relation to the concrete and direct military advantage expected to be gained.” While the difference between these two expressions may seem pedantic, in reality they are entirely incompatible.

The correct, doctrinal articulation of the proportionality rule is centered on the ex ante expectation of personnel involved in an attack. Accurately assessing compliance after the attack requires an evaluation of the incidental harm and the military advantage expected by the personnel based on the “information reasonably available to them” at the time. The incorrect, nondoctrinal expression of the proportionality rule presented in the NSCAI final report would instead assess compliance based on the outcome of the attack.

In the context of autonomous weapons, the prevailing concern is not whether emerging systems can comply with the existing, doctrinal LOAC proportionality rule. Rather, the primary concerns are whether autonomous weapons can reliably parse military objectives from civilian objects and whether machines are capable of making intrinsically human value judgments regarding whether any expected incidental damage is “excessive” in relation to the military advantage anticipated.

This discrepancy in perspective has a profound impact on evaluating whether autonomous weapons are capable of complying with existing LOAC rules. For example, the NSCAI final report contends that if such systems are “properly designed, tested, and used, they could improve compliance with International Humanitarian Law (IHL) by reducing the risk of accidental engagements, decreasing civilian casualties, [and] minimizing collateral infrastructure damage.” While these assertions present purported advantages to fielding autonomous weapons, they do not actually address improved “compliance” with LOAC.

This failure to present a doctrinal expression of basic LOAC rules impugns the conclusion that “accountability for actions and compliance with IHL” is “no different” for autonomous weapons “than for any other weapons system.” As long as an autonomous weapon is not programmed or used to deliberately attack civilians, and is not employed with knowledge that the expected incidental damage would be excessive in relation to the military advantage anticipated, an attack would comply with the fundamental LOAC rules of distinction and proportionality.

In other words, whether attacks in fact cause “incidental loss of civilian life excessive to the anticipated military advantage” is not the standard by which human operators are held “accountable” pursuant to basic LOAC rules. If causing excessive incidental loss is a principal concern with autonomous weapons, current concepts of ensuring “accountability for actions and compliance with IHL” will need to be reformulated to adequately address this concern.

The Productive Path Ahead and Countering the Siren Song

Absent improved clarity regarding what constitutes an autonomous weapons system and why such systems should be regulated, how existing international law may need to be adapted, and, relatedly, how concepts of accountability must be reformulated in the context of autonomous weapons, consensus regarding potential constraints will inevitably remain elusive.

In the meantime, it is in the strategic interest of the United States and other States that invest heavily in military capabilities—ally and competitor alike—to actively resist appeals to initiate processes apart from or even parallel to the CCW framework. While calls from humanitarian advocacy organizations to pursue “an independent process [that] may prove to be the most inclusive and efficient alternative” will grow increasingly zealous, “efficiency” from this perspective is measured by progress toward an instrument that would “prohibit autonomous weapons systems that target humans” and thereby “reduce the dehumanization of warfare, promote respect for human dignity, and avoid algorithmic bias.” This appeal aligns well with the “central purpose” of LOAC articulated by the ICRC, which is to “limit and prevent human suffering in times of armed conflict.”

By contrast, the DoD Law of War Manual observes that “military necessity underlies” central aspects of the law of war, and that military necessity, in turn, may be defined as “the principle that justifies the use of all measures needed to defeat the enemy as quickly and efficiently as possible that are not prohibited by the law of war.” From this perspective, it is reasonable to pursue autonomous weapons capable of targeting adversarial military objectives, whether human or object, while permitting enhanced survivability of a State’s own armed forces.

Just weeks before the December meeting of CCW States Party, the government of New Zealand announced that it “will remain open to other opportunities to make progress, including by building and working with a coalition of states, experts and others” if existing processes fail to generate “new international law to ban and regulate autonomous weapons systems.” In doing so, New Zealand joins a list of “at least 20 individual states, from Africa, Asia-Pacific, Europe, Latin America, and the Middle East” currently making “the case for a legally binding instrument on autonomous weapons systems.”

It is in the interest of humanitarian organizations seeking to “stop killer robots” to collaborate with States that do not invest heavily in defense capabilities and to appeal for an “independent process” if the CCW framework fails to generate new international law to ban and regulate autonomous weapons. States such as the United States that invest heavily in defense must recognize this apparent clarion call for what it is: a siren song luring States away from the CCW and toward an “independent process” that may result in a multilateral treaty but not international consensus.

The existing CCW framework is not defective simply because it has not produced a legally binding instrument on autonomous weapons. It is still a useful forum for States to collaborate and continue to explore potential areas for consensus. Nonetheless, delegates to the CCW process and related fora can and should endeavor to achieve ever more productive discussions and seek consensus where it is possible. Reflections and suggestions presented in this blog post and in future similar work on the topic may well encourage attainment of these rather more modest, though realistic, objectives.

***

Brian L. Cox is a J.S.D. candidate and lecturer at Cornell Law School, a visiting scholar at Queen’s Law in Ontario, and a retired U.S. Army judge advocate.