LAWS Debate at the United Nations: Moving Beyond Deadlock

The United Nations is once again hosting a Group of Governmental Experts tasked with reporting on emerging technologies in the area of lethal autonomous weapons systems (LAWS). Further debate on this subject at the conceptual level, such as over definitions, accountability, and the notion of human control, is unlikely to break the stalemate reached in the inter-governmental process. Instead, diplomatic efforts should shift their focus to technological parameters for the lawful operation of autonomous systems.
This post identifies three technological parameters the Group may wish to translate into legal requirements to advance debate beyond currently stalled processes. These parameters concern: accuracy and predictability in targeting; the protection of the victims of war; and safeguarding to mitigate adverse humanitarian impacts.
The Current State of Debate
In 2019, the Group affirmed 11 Guiding Principles as the basis for its continued work on this subject. This year, the Group is working toward consensus recommendations on the normative and operational framework, in preparation for a report to the Sixth Review Conference of the Convention on Certain Conventional Weapons.
Although these efforts are laudable, technological advances are already outpacing diplomacy, with a variety of lethal autonomous weapons systems developed and deployed in combat operations. The Turkish company STM, for example, has developed a quadcopter drone, Kargu-2, with the autonomous capability to select and engage human targets. It was reportedly deployed during the 2019 conflict in Libya. The same company has also successfully tested a fixed-wing autonomous tactical attack drone, Alpagu, which has demonstrated autonomous manoeuvres capable of neutralizing targets with pinpoint accuracy.
Nations remain divided as to how the development and use of autonomous systems should be regulated under international law. Canada, for example, believes that fully autonomous weapons systems are incapable of complying with the law of armed conflict, whereas the United States emphasizes various ways in which autonomous functions may be lawfully used. Other States, including the Non-Aligned Movement, urge the adoption of a legally binding instrument, while Russia, Israel, and the United Kingdom consider that existing legal regulation is sufficient.
Due to these divergent approaches, States are likely to reach consensus only in broad terms. Further debate on LAWS at the conceptual level, such as over their definition, accountability, and the notion of human control, may go some way toward assuring the public (especially activist NGOs) that their perceived concerns are being addressed. However, such debate will miss the real opportunity to develop a shared understanding of more fundamental issues, including how legal requirements are to be translated into technical parameters as these systems assume tasks that human soldiers traditionally performed on the battlefield.
With the rapid development of artificial intelligence and robotic technologies, it is imperative to revisit the law governing the conduct of hostilities to determine how each rule could be translated to set technical parameters for the lawful operation of autonomous systems. This task is particularly pressing for the following three technological issues: (1) accuracy and predictability in targeting; (2) the protection of the victims of war; and (3) safeguarding to mitigate adverse humanitarian impacts.
Accuracy and Predictability
One of the primary concerns with lethal autonomous weapons systems is their ability to distinguish lawful targets from civilians and civilian objects that are accorded legal protection. Existing law already prohibits weapons that are inherently indiscriminate. The principle of distinction, as a norm of customary international law and also as provided in Article 51(4) of Additional Protocol I, requires an attack to be directed at specific military objectives. There is no question that each State has an obligation to ensure that any autonomous functions introduced for targeting purposes meet these basic requirements.
It is feared that adding autonomous functions to a weapon system for the selection and engagement of human targets will introduce an element of uncertainty and unpredictability inconsistent with the law of armed conflict. These concerns arise from the technological challenges of replicating cognitive functions and contextual decision-making, and from algorithmic bias embedded in the statistical operations of the underlying computer program.
However, these technological challenges do not necessarily impair the ability of autonomous systems to operate lawfully. As discussed elsewhere, they can be circumvented to ensure lawful operations, for example by limiting a weapon’s operating parameters to a particular battlefield environment or by restricting the range of targeting options to high-priority targets.
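By way of illustration only, the following sketch (in Python, with entirely hypothetical class names, target categories, coordinates, and time limits) shows one way such operating restrictions, a bounded area of operation and a narrow list of permissible target classes, might be expressed as machine-checkable parameters before any engagement is authorized.
```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OperatingEnvelope:
    """Hypothetical operating restrictions set by human command at activation."""
    area: tuple[float, float, float, float]  # (lat_min, lat_max, lon_min, lon_max)
    permitted_classes: frozenset[str]        # e.g., limited to high-priority materiel targets
    max_mission_minutes: int                 # hard time limit on autonomous operation

def within_envelope(envelope: OperatingEnvelope,
                    lat: float, lon: float,
                    target_class: str,
                    minutes_elapsed: int) -> bool:
    """Return True only if a candidate engagement stays inside every restriction."""
    lat_min, lat_max, lon_min, lon_max = envelope.area
    in_area = lat_min <= lat <= lat_max and lon_min <= lon <= lon_max
    in_scope = target_class in envelope.permitted_classes
    in_time = minutes_elapsed <= envelope.max_mission_minutes
    return in_area and in_scope and in_time

# Example: restrict operation to a small sector and to materiel targets only.
envelope = OperatingEnvelope(
    area=(34.10, 34.25, 43.50, 43.70),
    permitted_classes=frozenset({"tank", "artillery_piece"}),
    max_mission_minutes=45,
)
print(within_envelope(envelope, 34.15, 43.60, "tank", 30))    # True
print(within_envelope(envelope, 34.15, 43.60, "person", 30))  # False: outside target scope
```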
It is a mistake to consider that these cognitive and epistemological limitations are unique to autonomous systems. In the battlefield environment, human soldiers and their commanders must make decisions based upon limited information and imperfect situational awareness. Further, it is not uncommon for modern military forces to launch attacks without line of sight or direct observation of targets.
According to the cognitive framework set out by Professor Michael Schmitt and Major Michael Schauss, the issue of uncertainty permeates the application of existing standards under the law of targeting. Recall, for example, practical challenges to the determination of whether particular individuals are lawful military objectives on account of their direct participation in hostilities. Views are divided as to the requisite level of certainty necessary to render an attack lawful when identifying and engaging persons on the basis of participation in hostilities.
This is not to suggest that the ability of autonomous systems to comply with legal requirements should be assessed by analogy to how human operators might act on a reasonable belief under the attendant circumstances. Artificial intelligence, the primary technological innovation driving the expansion of autonomous systems, is not, at the current stage of its development, a technology that makes a machine think like a human. Rather, it relies on data analysis and statistical learning, processing large volumes of data through algorithm-based statistical operations to produce probabilistic outputs (pp. 6-7). As such, the way in which autonomous systems identify a target is qualitatively different from human cognitive processes. In fact, algorithm-based probabilistic reasoning enables certain complex tasks to be performed with greater accuracy and efficiency than humans could achieve.
Because of this fundamental difference, the ability of autonomous systems to comply with legal requirements cannot be measured against the traditional legal standard for target identification, which has been based on the subjective belief of individuals. Instead, the focus of our inquiry should shift to how the target identification capacity of a weapon system can be expressed in objective and measurable terms (p. 495). But the critical question is: what degree of reliability or predictability is considered sufficient to render a lethal attack by autonomous systems lawful?
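As a purely illustrative sketch rather than a description of any fielded system, the snippet below (the numbers and names are hypothetical) indicates what expressing target-identification capacity in objective, measurable terms might look like: the recognition component reports a calibrated confidence score, and engagement proceeds only if that score meets a threshold fixed in advance for the operational context.
```python
def authorize_engagement(confidence: float, required_confidence: float) -> bool:
    """Gate an engagement on a measurable recognition score.

    `confidence` is the system's calibrated probability that the observed object
    is the designated military objective; `required_confidence` is the threshold
    fixed in advance for the operational context (a denser civilian environment
    would warrant a higher threshold). Both values are hypothetical placeholders.
    """
    return confidence >= required_confidence

# A permissive threshold for a remote, clearly marked battle area ...
print(authorize_engagement(confidence=0.93, required_confidence=0.90))  # True
# ... versus a stricter threshold set for an urban environment.
print(authorize_engagement(confidence=0.93, required_confidence=0.99))  # False
```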
A strict standard could be advocated. Canada, for example, appears to argue for a “high degree” of reliability and predictability out of ethical concerns associated with the lethal use of autonomous systems. However, imposing a strict standard on autonomous systems while maintaining the existing standards for traditional types of weapons would set a double standard in weapons regulation. As Finland points out, such a double standard could encourage the use of older systems that are less precise and less capable of distinguishing targets, rather than investment in the development of more precise and discriminating capabilities enabled by autonomous functions.
In this respect, it is important to bear in mind that the degree of reliability and predictability is context-dependent and cannot be measured in isolation when determining the ability of autonomous systems to operate in compliance with the law of armed conflict. As Schmitt and Schauss point out, there are various ways to deal with uncertainty, involving multifaceted situational assessments at different stages of military operations, from planning to target acquisition and execution. The range of lethal impact that an autonomous system is designed to achieve and its likely interaction with its environment are also critical variables to be taken into account in this contextual assessment.
Identification of the Intention to Surrender
Under the existing law of armed conflict, autonomous systems must also comply with the obligation to recognize victims of war (individuals who are wounded, sick, or who otherwise express an intention to surrender) and to disengage from attacking them. Deploying lethal autonomous weapons systems on the battlefield without appropriate capabilities to identify and spare victims of war would run contrary to the prohibition of no quarter orders, that is, orders to show no mercy or clemency and to spare no life in return for surrender.
Technological means may become available to enable autonomous systems to identify such victims of war. For example, advanced biometric sensors and algorithms could be developed to perform automated medical screening. Technological solutions for determining the intention to surrender, on the other hand, require more careful consideration.
Consider, for example, autonomous systems equipped with weapons-detection capabilities and programmed not to attack individuals who have abandoned their firearms. Those fighting against such systems could cause an attack to be suspended simply by abandoning their firearms, even without a genuine intention to surrender. The act of perfidy (feigning an intent to surrender by taking advantage of the adversary’s confidence in the protections of the law of armed conflict, with the intention to betray that confidence) is generally not prohibited when committed against autonomous systems, because prohibited perfidy requires the death or injury (or, for parties to Additional Protocol I, the capture) of a person resulting from the betrayal of that confidence.
This problem arises when there are no accompanying means to constrain the target’s behavior. In the absence of any constraint, enemy fighters will be free to resume combat operations once the danger posed by autonomous systems dissipates. Given that a growing variety of counter-drone capabilities are becoming readily available, such a loophole has the potential to significantly reduce the military value of autonomous systems.
There is a missing link in the current debate on human-machine interaction, which focuses on command oversight of autonomous systems. The machine’s interaction with the human target is equally important. Effective regulation requires defining the circumstances and ways in which surrendering individuals are effectively identified and taken out of action. Various options could usefully be explored in diplomatic forums, combining technological solutions with effective mechanisms for the protection of victims of war; one example would be the development of a universally recognized surrender protocol (not quite as simple as waving a white flag or making a gesture) that machines are capable of recognizing and executing.
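Purely for illustration, and assuming a surrender protocol that does not yet exist, the sketch below imagines how such a protocol might be handled in software: upon detecting an agreed signal, the system holds fire and then hands the individual off to human forces to be taken out of action. Every signal name and step here is an assumption, not a feature of any existing system.
```python
from enum import Enum, auto

class EngagementState(Enum):
    TRACKING = auto()
    HOLD_FIRE = auto()
    HANDED_OFF = auto()

# Hypothetical signals a future, universally agreed surrender protocol might define.
RECOGNIZED_SURRENDER_SIGNALS = {"protocol_beacon", "coded_gesture_sequence"}

def update_state(state: EngagementState, observed_signal: str | None) -> EngagementState:
    """Advance the engagement state when an agreed surrender signal is observed."""
    if state is EngagementState.TRACKING and observed_signal in RECOGNIZED_SURRENDER_SIGNALS:
        # Disengage immediately; under the protocol the person is treated as hors de combat.
        return EngagementState.HOLD_FIRE
    if state is EngagementState.HOLD_FIRE:
        # Report the position to human forces so the individual can be taken out of
        # action (captured or monitored) rather than left free to resume fighting.
        return EngagementState.HANDED_OFF
    return state

state = EngagementState.TRACKING
state = update_state(state, "protocol_beacon")  # -> HOLD_FIRE
state = update_state(state, None)               # -> HANDED_OFF
print(state)
```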
Safeguard
The existing law of armed conflict requires combatants to take feasible precautions in planning and conducting attacks to reduce the risk of harm to civilians and civilian objects. The affirmative obligation to exercise precautions is derived from Article 57 of Additional Protocol I and is also widely recognized as a rule of customary international law.
In light of this general obligation, one may consider that risk assessment and mitigation measures should be part of the design, development, testing, and deployment of autonomous systems. A number of States are indeed urging that safeguards be put in place to mitigate the risk of harm that autonomous systems may pose to civilians and to civilian objects that are not legitimate military objectives.
The idea of safeguarding is not alien to the law of weaponry. Under Amended Protocol II to the Convention on Certain Conventional Weapons, for example, anti-personnel landmines must be equipped with self-destruct and self-deactivation mechanisms, except in areas where their presence is clearly marked, protected, and monitored to ensure the effective exclusion of civilians (see also a previous post on U.S. anti-personnel landmine policy). Likewise, automatic submarine contact mines are required to become harmless as soon as control over them is lost.
It is possible to see diplomatic negotiations reaching an agreement to impose a similar safeguard obligation upon the use of autonomous systems designed for lethal attacks. Such an obligation would be consistent with the requirement of cancellation or suspension under Article 57(2)(b) of Additional Protocol I.
Feasible safeguard mechanisms might involve self-destruction, the deactivation of munition trigger devices, or re-direction away from the initially intended point of impact (a practice known as “shift cold”). These mechanisms can be programmed to activate, for example, when it becomes apparent that the selected target is not a military objective or is otherwise legally protected, or when anticipated collateral damage exceeds the pre-designated threshold due to the unexpected presence of civilians or civilian objects within the range of anticipated impact.
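A minimal sketch of such trigger logic might look as follows (Python, with hypothetical names and thresholds throughout): the safeguard activates when the selected target is reclassified as protected or when the collateral-damage estimate exceeds the pre-designated threshold, and the response is drawn from the mechanisms described above.
```python
from enum import Enum, auto

class Safeguard(Enum):
    CONTINUE = auto()       # no trigger condition met
    SHIFT_COLD = auto()     # redirect away from the intended point of impact
    DEACTIVATE = auto()     # disable the munition trigger device
    SELF_DESTRUCT = auto()  # destroy the munition before impact

def select_safeguard(target_is_military: bool,
                     estimated_collateral: int,
                     collateral_threshold: int,
                     safe_redirect_available: bool,
                     munition_released: bool) -> Safeguard:
    """Choose a safeguard response from the trigger conditions described above."""
    if target_is_military and estimated_collateral <= collateral_threshold:
        return Safeguard.CONTINUE
    # A trigger condition is met: the target is protected, or expected incidental
    # harm now exceeds the pre-designated threshold.
    if safe_redirect_available:
        return Safeguard.SHIFT_COLD
    if munition_released:
        return Safeguard.SELF_DESTRUCT
    return Safeguard.DEACTIVATE  # fall back to rendering the munition harmless

print(select_safeguard(True, 0, 2, True, False))    # CONTINUE
print(select_safeguard(True, 5, 2, True, False))    # SHIFT_COLD (collateral estimate too high)
print(select_safeguard(False, 0, 2, False, True))   # SELF_DESTRUCT (target no longer military)
```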
It is debatable whether these safeguard mechanisms should be triggered when the autonomous system misses the originally identified target. On one hand, autonomous systems may continue to operate until their pre-designated mission is accomplished, as long as their target acquisition capabilities remain within the operational parameters originally set.
On the other hand, those who are concerned about the risk of such systems becoming a hazard to civilians and civilian objects may demand that autonomous systems become harmless, much like torpedoes, once they have run their course. However, there will be practical difficulties, due to self-guiding navigation, in determining the point at which autonomous systems are considered to have missed their target or completed their run (pp. 87-88).
The Way Forward
There is no question that it is ultimately human command that activates autonomous systems and sets their operational parameters. The ability of autonomous systems to operate within the bounds of the law is also dependent upon the type of munition on board, which determines the nature and scope of lethal effects.
With so many context-dependent variables at play, further debate at the abstract level is unlikely to yield any meaningful outcome. Instead, diplomatic efforts at the United Nations could make a useful contribution by building a shared understanding of technological solutions for ensuring that autonomous systems are capable of complying with legal requirements.
To that end, there is a need for foundational research to define the degree of accuracy and predictability required for target recognition, the circumstances and ways in which victims of war can be identified and taken out of action, and the point at which safeguard mechanisms must be programmed to activate. In the absence of such foundational research, it would be premature (and unnecessary) to consider and discuss the need for a new legally binding instrument to regulate the development and use of autonomous systems.
***
Hitoshi Nasu is a Professor of Law at the United States Military Academy.