Will Autonomy in U.S. Military Operations Centralize Legal Decision-making?
The growth of machine learning tools in military operations raises new questions about where the most critical decision points are located. Are the most important political, operational, and legal decisions made out in the field, where the tools are used, or in headquarters, at the time the tools are developed? This post argues that—perhaps ironically—the growing use of autonomy may end up centralizing key military and legal decisions.
How much the military should centralize or decentralize its policy-setting and decision-making is a long-standing question. The U.S. military's Joint Publication 1 highlights the importance of relying on decentralization to execute the mission. It states, "Unity of effort over complex operations is made possible through decentralized execution of centralized, overarching plans or via mission command." Joint Publication 3-0 notes that "[c]ommanders delegate decisions to subordinates wherever possible, which minimizes detailed control and empowers subordinates' initiative to make decisions based on the commander's guidance rather than constant communications."
This means that every level of command must understand the commander’s intent in carrying out its assigned tasks, but it also gives the subordinates freedom of action and room to be creative in executing the mission. So what happens when the military begins to undertake more and more of those tasks using autonomous systems? How do we ensure that those systems act in a manner consistent with the commander’s intent—and, more broadly, the National Military Strategy and U.S. legal obligations?
Increased autonomy in warfighting tasks may therefore centralize decision-making, as the process of building machine learning algorithms for warfighting systems seeks to incorporate the commander's intent and remain sensitive to legal constraints. These centripetal forces may even mean that other national security agencies begin to play a role in developing those algorithms.
All trends point toward the continued growth of autonomy and the use of machine learning in U.S. military decision-making. When the military uses machine learning tools to improve logistics, for example, there is little reason to think that the contents of the machine learning algorithms will hold interest for national security actors centralized in Washington. But when machine learning algorithms produce predictions or recommendations that implicate the laws of armed conflict (LOAC), or when their use may raise questions from allies, the Defense Department may not be the only agency that is interested in the way the algorithms operate. The National Security Council and the State Department, for instance, may also seek a role in informing the algorithms’ contents and structure. This post argues that there are both benefits and costs to such a development.
The Role of Lawyers in Developing Machine Learning Algorithms
In a forthcoming book chapter on armed conflict in the coming decades, I explore whether it will be possible and desirable “for the military to develop predictive algorithms that are sensitive to the human user’s legal framework ex ante, and that assist the decision-maker in her legally-infused decision ex post.” I envision a three-phase process by which this might occur: “(1) identifying the applicable law; (2) crafting and training the algorithm to produce a recommendation relevant to that legal framework; and (3) interpreting the algorithmic predictions through the lens of that law.” Lawyers may have a role in all three stages, working with computer scientists and military operators.
For example, assume that the United States decides to develop a machine learning algorithm that predicts the level of danger each civilian internee presents. The goal would be to help militaries comply with their LOAC obligation in international armed conflicts to intern only those civilians whose detention is justified by imperative reasons of security. In the three-phase process that I envision, the lawyers and other officials first would conclude that the relevant legal provisions are Articles 42 and 78 of the Fourth Geneva Convention. Having identified the applicable rules, the officials would then need to determine which characteristics or facts are most salient to the legal decision they must make. Those characteristics could include the person's hostile actions, age, tribal relationships, past detentions, associations with organized armed groups or other hostile actors, past employment, and communications networks.
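To make that concrete, one can imagine data scientists representing the characteristics the lawyers identify as a structured record for each internee. The sketch below is purely illustrative: the field names, types, and training label are hypothetical assumptions of mine, not a description of any actual or planned Defense Department system.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class InterneeRecord:
    """Hypothetical feature record for a single civilian internee.

    The fields mirror the legally salient characteristics described
    above; in practice the schema would be worked out jointly by
    lawyers, operators, and data scientists, not fixed in advance.
    """
    age: int
    hostile_acts: int                        # documented hostile actions
    prior_detentions: int                    # number of past detentions
    tribal_affiliations: List[str] = field(default_factory=list)
    armed_group_associations: List[str] = field(default_factory=list)
    past_employment: List[str] = field(default_factory=list)
    communications_contacts: int = 0         # size of known communications network
    # Training label only: did past reviewers find continued internment
    # justified for imperative reasons of security?
    internment_justified: bool = False
```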
In the second phase, data scientists, working in conjunction with lawyers and military operators, would encode those features and train the algorithm on past cases that exhibit them. To ensure that the system's recommendations remain sensitive to the legal rules, the scientists could require the system to surface only those recommendations in which it has a high level of confidence.
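A minimal sketch of that second phase, under several assumptions of my own (an already-encoded feature matrix of past cases, an off-the-shelf scikit-learn classifier, and an arbitrary 0.9 confidence threshold standing in for whatever level lawyers and operators actually choose), might look like this:

```python
# Purely illustrative sketch: train a model on labeled past cases and
# surface only high-confidence recommendations. The library, model choice,
# and 0.9 threshold are assumptions, not a description of any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def train_model(X: np.ndarray, y: np.ndarray) -> LogisticRegression:
    """Fit a simple classifier on encoded features from past cases."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0
    )
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)
    print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
    return model

def recommend(model: LogisticRegression, x: np.ndarray, threshold: float = 0.9) -> dict:
    """Return a recommendation only when the model is highly confident;
    otherwise defer entirely to human (JAG and commander) judgment."""
    prob = model.predict_proba(x.reshape(1, -1))[0, 1]
    if prob >= threshold:
        return {"recommendation": "review for continued internment", "confidence": prob}
    if prob <= 1 - threshold:
        return {"recommendation": "review for release", "confidence": 1 - prob}
    return {"recommendation": "no recommendation - defer to human review", "confidence": prob}
```

Even in this toy version, the choice of threshold is itself a legally inflected judgment about how much uncertainty the system may pass along, which is part of why the second phase invites lawyers into the room.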
In the third phase, military lawyers (JAGs) would assess, based on the system’s predictions, whether a given person meets the legal test set out in LOAC. They would then advise the commander about the legality of that detention. Producing law-sensitive, data-driven algorithmic recommendations upon which lawyers and operators can act could promote the military’s compliance with LOAC.
Which Lawyers Are in the Room?
In the United States, at least three agencies have a role in interpreting LOAC: the Departments of Defense, State, and Justice. National Security Council officials also often are involved in discussions about LOAC and its interpretation. If legally-inflected machine learning algorithms are within reach, which lawyers should be involved in producing them?
The most obvious option is for the process of developing LOAC-sensitive algorithms to involve only military lawyers. After all, the U.S. military has a well-established weapons review process, and non-military lawyers are not involved in that process. Likewise, JAGs constantly provide legal advice during armed conflict without consulting the Defense Department’s Office of the General Counsel, let alone the National Security Council or other executive agencies.
And yet there may be pressure to adjust the traditional process, which involves only JAGs, when building machine learning systems that produce recommendations for soldiers during conflict. If the use of the system will have significant foreign relations implications and if the system’s recommendations operate in areas of LOAC that already have been the subject of significant interagency interest, it seems likely that other lawyers will seek out a role for themselves.
In a range of cases, interagency lawyers have helped craft rules for military policy and operations. Under the Obama Administration, U.S. agencies engaged in cyber operations had to gain approval from an "array of stakeholders" in the federal government before acting. That policy presumably was developed through an interagency process, even though it was the Defense Department (and perhaps other agencies) that executed the operations. The effort was centralized because of what a former National Security Council official termed the "very real and hard legal questions associated with cyber." Likewise, interagency lawyers (including those from Justice, State, and Defense) played a significant role in decisions about which detainees to continue to hold at Guantanamo once the Supreme Court concluded that those detainees were entitled to habeas corpus. In short, when the underlying military tools or processes are new, controversial, or legally complex, the associated legal and policy decisions often become centralized in Washington.
Will the Process of Building Machine Learning Algorithms Be Centripetal?
As noted above, the process I envision for creating legally-sensitive algorithms anticipates agreement on what law applies in a given context; what facts and factors are relevant for training an algorithm that implicates targeting or detention decisions; and whether specific algorithmic predictions meet LOAC standards.
Today, in settings that do not involve machine learning algorithms, the interagency reaches consensus on the first question: what body of law applies in a given context. It is less common for interagency lawyers to address the factors relevant to a specific targeting or detention analysis. Rather, that analysis usually occurs in a decentralized setting in the field. The military is present; State and Justice Department officials are not. It is even less common for interagency lawyers to evaluate whether a particular military conclusion about detention or targeting meets a LOAC standard. (The Guantanamo habeas cases were an important exception.)
The “algorithmization” of detention and targeting decisions might alter these traditional processes, however. Because the creation of the algorithms implicates the interpretation of international law and because the use of machine learning algorithms in detention and targeting decisions may have tangible implications for foreign policy and military alliances, the Justice and State Departments might want to be part of their development. Likewise, interagency actors might want to craft guidance in advance about what types of quantitative predictions would or would not meet the underlying LOAC standard. Because the coding process will involve decisions about the nuances of LOAC, and will happen before the system is deployed, there may be greater opportunities for a broader set of U.S. government actors to claim a stake in those decisions. Even if the Defense Department successfully preserves these decisions for itself, the “codification” of the process of legal analysis might mean that a greater number of actors within the Defense Department itself are part of that process.
The military may perceive this potential centralization of decision-making as unattractive and may resist sharing the authority to make algorithmic choices about LOAC. Interagency lawyers might also struggle to reach consensus about which features to incorporate into an algorithm that informs military decision-making. Further, the military might not want to quantify in code certain decisions that it currently makes using a more flexible, language-based standard. On the other hand, obtaining interagency buy-in on machine learning algorithms should bolster the military's confidence in their use and would also allow State Department diplomats and lawyers to engage more deeply with allies on what may prove to be controversial uses of machine learning.
In light of the potential for the United States to adopt robust military algorithms, and the high levels of interest in military autonomy among near-peer states such as Russia and China, now is the time for militaries in particular, and executive agencies in general, to consider the roles that their lawyers should play in building LOAC-focused algorithms.
***
Ashley Deeks is the E. James Kelly, Jr. – Class of 1965 Research Professor of Law at the University of Virginia School of Law.