Reentering the Loop

by Eric Talbot Jensen, Carolyn Sharp | Feb 9, 2022


The warfighting advantages of using lethal autonomous systems, and the potential costs of not using them, seem to guarantee their role in future armed conflict. This post argues that optimizing their effectiveness involves not only improving their independent functioning but also providing meaningful opportunities for humans to reenter the decision-making process.

The State of the Art: The Law and the Debate

In the not-too-distant future, advanced lethal autonomous weapons systems may be given decision-making deference over their human counterparts due to a heightened ability to observe and analyze adversarial behaviors in combat, adapt to changing circumstances, and engage in lethal (or non-lethal) conduct that strategically weakens targets. This deference will also rest in part on these systems’ ability to perform cost-benefit analyses that humans cannot perform in the same timeframe (if at all). The extent to which a transition from human to non-human decision making will occur, and whether it can be done in compliance with international law, is the source of ongoing and often contentious debate between States, as exemplified in the deliberations of the States Parties to the Convention on Certain Conventional Weapons.

Currently, the international community has not agreed on a standard for human/machine decision-making interaction in combat. The ongoing debate centers primarily on the level of human involvement, if any, in a weapon’s decision-making processes. If the decision-making process to use lethal (or non-lethal) force is conceptualized as the “loop,” discussions have centered on whether a human should be “in the loop” (meaning the machine cannot make the decision without human input), “on the loop” (meaning the machine cannot make the decision without a human observing the machine’s decision and having the ability to intercede), or “out of the loop” (meaning the human is not directly part of the decision-making process and the machine can act without contemporaneous human input).
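For readers who think in terms of control flow, the three postures can be expressed as a minimal, purely illustrative sketch. All names in the sketch (the Mode values, decide, human_approves, human_vetoes) are hypothetical constructs invented for illustration; they do not describe any fielded system or proposed standard.

```python
# Illustrative sketch only: where a human sits relative to an autonomous
# system's engagement decision. All identifiers are hypothetical.
from enum import Enum, auto
from typing import Callable, Any

class Mode(Enum):
    IN_THE_LOOP = auto()      # machine cannot act without affirmative human input
    ON_THE_LOOP = auto()      # machine acts unless a supervising human intercedes
    OUT_OF_THE_LOOP = auto()  # machine acts without contemporaneous human input

def decide(mode: Mode,
           proposed_engagement: Any,
           human_approves: Callable[[Any], bool],
           human_vetoes: Callable[[Any], bool]) -> bool:
    """Return True if the system may execute the proposed engagement."""
    if mode is Mode.IN_THE_LOOP:
        # Human input is a precondition to any action.
        return human_approves(proposed_engagement)
    if mode is Mode.ON_THE_LOOP:
        # Human observes and may intercede; absent a veto, the machine proceeds.
        return not human_vetoes(proposed_engagement)
    # OUT_OF_THE_LOOP: no contemporaneous human input in the decision.
    return True
```

In this framing, the “reentry” command pathways discussed later in this post would amount to retaining the ability to switch the mode, or to inject a human veto, even after an out-of-the-loop posture has been adopted.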

Some States, and organizations such as the International Committee of the Red Cross, contend that human input regarding lethal decisions is required by the law of armed conflict (LOAC). Under this interpretation, the legal standard requires, at a minimum, a human on the loop for weapon systems that use artificial intelligence (AI) or machine learning (ML) and have the capacity to operate autonomously. Counterarguments hold that while the law requires competent and reasonable analyses, LOAC does not mandate human analyses.[1] In other words, the law does not prohibit non-human analyses and decision making.

Out of the Loop Operations

Regardless of how this debate resolves, there are strong indications that States will increasingly depend on AI/ML autonomous capabilities to establish or maintain warfighting dominance. Simply put, if the survival of States depends on employing autonomous AI/ML technology, States will surely do so, even to the extent of taking humans out of the loop.

While future reliance on AI/ML autonomous technology in combat seems likely (and lawful), it also raises legitimate outcome-based concerns that can and should be addressed now. Advocates urge a human-out-of-the-loop approach because autonomous systems couple increased survivability with a heightened ability to process more information, more accurately, free from the limitations of human emotion, endurance, and external persuasion. Despite these benefits, there may be unforeseeable situations, beyond the comprehension of data-driven analytics, in which human intervention is warranted.

For example, post-Cold War revelations indicate that a supposed “responsive” launch of nuclear weapons was only narrowly averted when a Soviet officer declined to report an incoming missile strike, later confirmed to be non-existent, even though computer readouts clearly registered the incoming missiles and indicated that the data was of the highest reliability. Notably, the officer would have been within the bounds of the law to act on the computer data and send the report to the chain of command to launch a retaliatory strike. However, the officer’s decision to consider, and then act on, information outside the scope of the computer’s comprehension ultimately averted nuclear warfare between the United States and the Soviet Union.

Human Reentry

In the rare situation where data analytics fail to account for atypical variables and a human intervenes or reenters the loop of an autonomous AI/ML weapon system, an appropriate structure of authority and accountability must be in place to guide that intervention. Given the assumption that fully autonomous AI/ML weapons will eventually be fielded, States must enable human reinsertion into the loop and establish mechanisms to respond appropriately in such circumstances.

Our assumption is that when fully autonomous and lethal AI/ML weapon systems are employed, States will have assessed and confirmed that these systems can comply with LOAC. Furthermore, to ensure compliance, they will build “reentry” command pathways into these systems to control weapon functions. Based on these assumptions, there are three primary issues to consider. The first is placing a check on unquestioned deference to machine outputs. The second is preserving operational success within the commander/subordinate dynamic. The third is establishing accountability when a human is allowed to intervene and the override turns out to be erroneous. In other words, what circumstances warrant holding the human accountable when a decision to intervene is unfounded?

Human deference to AI/ML autonomous and other computer systems is well documented. The very reasons States will employ AI/ML autonomous weapon systems, including the ability to process information faster and more accurately and to base lethal actions on that information, are the same reasons humans will mistrust their own decisions when those decisions diverge from the systems’ outputs. Because many decisions in applying lethal force rely to some extent on fact-based judgment, humans may hesitate to override the automated system’s decisions for fear that their own judgment is deficient or itself erroneous. This hesitance will intensify as AI/ML autonomy proves increasingly effective and accurate. Nevertheless, States must provide for human reentry and reinforce the desirability of such action when circumstances dictate. Without persistent emphasis, humans are unlikely to set aside this bias and reenter the decision loop.

This first issue highlights the need for sensitivity to the second: the command/subordinate relationships that guarantee operational success. During armed conflict, military commanders routinely send subordinates into situations that could lead to death. Subordinates must follow lawful orders, and commanders are liable, even individually criminally liable, for violations of LOAC in directing subordinates during military operations.

This doctrine of command responsibility must persist in operations where lethal AI/ML autonomous systems are employed. No responsible commander wants a weapon system that he or she cannot control, or at least “turn off.” If commanders lack that capability, they cannot be held responsible for an errant system. Just as unforeseeable weather conditions can alter the effects of weapon systems, some aspects of the design of lethal AI/ML autonomous systems lie outside command control and should not give rise to command responsibility. However, command reentry pathways into lethal AI/ML autonomous systems maintain the necessary superior/subordinate relationship on the battlefield and justify the responsibility that comes with the exercise of command.

Finally, assuming a human has the ability to reenter the loop and chooses to intervene, it is certainly possible that the decision to prevent a lethal AI/ML autonomous system from acting (or to compel it to act) will lead to more deleterious, including unlawful, results. This reality requires States to ensure that a mechanism is in place to penalize wrongful actions upon reentry. Of course, to the extent that reentry was justifiable based on the circumstances, regardless of the results, no responsibility should be allocated, as is true with other decisions on the battlefield. But where a commander or military member reenters the loop and prevents the AI/ML autonomous system from properly performing, or causes the system to perform in an unlawful or more deleterious manner, that individual should bear the responsibility for that action.

Conclusion

In sum, if States eventually employ lethal autonomous weapon systems, a command path to reenter the AI/ML autonomous system to override incorrect or unlawful actions will be critical. That reentry must be facilitated in a way that overcomes undue human deference to machines and preserves the commander/subordinate relationship. Assuming further that there will be circumstances in which humans will be justified in overriding autonomous technology decisions, there may also be circumstances where such intervention leads to incorrect or unlawful results. In such cases, that intervention should also lead to individual responsibility, depending on the circumstances of the intervention.

***

Eric Talbot Jensen is a Professor of Law at Brigham Young University.

Carolyn Sharp is a law student at Brigham Young University. Carolyn focuses her research on the impacts of advanced technology on international law and the law of armed conflict.

***

Footnotes

[1] See Eric Talbot Jensen, Autonomy and Precautions in the Law of Armed Conflict, 96 Int’l L. Stud. 578, 580 (2020); Carolyn Sharp, Status of the Operator: Biologically Inspired Computing as Both a Weapon and an Effector of Laws of War Compliance, 28 Rich. J.L. & Tech., no. 1 (2021).

 
