Coding the Law of Armed Conflict: First Steps


Jul 14, 2022


[Editor’s note: The following post highlights a subject addressed in the Lieber Studies volume The Future Law of Armed Conflict, which was published 27 May 2022. For a general introduction to this volume, see Professor Matt Waxman’s introductory post.]


Killer robots have captured the collective imagination, but many other predictive algorithms will arrive on the battlefield sooner to provide decision support to militaries. Militaries may seek algorithms that help them predict the legal status of a person, whether someone is holding a weapon in a hostile pose, or whether a particular attack would be proportionate. Even if coders cannot embed the law of armed conflict (LOAC) directly into those algorithms, militaries will benefit from algorithms that are informed by LOAC. In chapter 3 of The Future Law of Armed Conflict, I explore what an effort to create and deploy LOAC-sensitive algorithms might entail.

There is widespread skepticism about programming LOAC into code; many doubt that autonomous systems will be able to implement complex legal concepts such as distinction and proportionality. Indeed, work to date suggests that it is very difficult to directly translate abstract and context-dependent legal concepts into code. There are a few areas of law where computer scientists and lawyers have encoded legal rules; TurboTax, for instance, produces reliable legal conclusions about a user’s tax liability. Additionally, judges use algorithms in the criminal justice setting to predict how dangerous a person is, which informs their decisions about bail, parole, and sentencing. These examples suggest that it is possible to create predictive algorithms that consider the law ex ante (where programmers understand the legal contexts for which they are producing the algorithms), but still require human decision-makers (informed by those predictions) to apply the law ex post.

Militaries seeking to use predictive algorithms to make sense of vast quantities of information and identify patterns and anomalies should proceed in a three-phase process. First, coders and lawyers should identify the rules of LOAC that will be relevant to the type of operation for which the algorithm’s predictions will be used and assess what characteristics or facts will be most salient to the predictions the algorithms will make. For example, an algorithm used to predict whether a person poses an imperative threat to security might include features such as known suspicious or hostile actions, age, past detentions, associations with organized armed groups or other hostile actors, past employment, tribal relationships, and communications networks.
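To make the first phase concrete, the kind of feature set described above might be sketched as a simple data structure. Every field name here is hypothetical and purely illustrative; in practice, the features would be chosen jointly by lawyers, operators, and coders for the specific operation at hand.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ThreatFeatures:
    """Hypothetical feature record for predicting whether a person
    poses an imperative threat to security. All field names are
    illustrative assumptions, not drawn from any fielded system."""
    hostile_actions_observed: int = 0       # count of known suspicious or hostile acts
    age: Optional[int] = None
    prior_detentions: int = 0
    armed_group_association: bool = False   # ties to organized armed groups or hostile actors
    past_employment: Optional[str] = None
    tribal_affiliation: Optional[str] = None
    comms_network_links: int = 0            # links in known communications networks

# Example record a lawyer and coder might review together.
record = ThreatFeatures(hostile_actions_observed=2, armed_group_association=True)
```

Writing the features down explicitly, as a schema like this, is itself part of the legal work: it forces agreement on which facts are salient before any model is trained.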

Second, programmers, working with lawyers and military operators, should code those features into the algorithm and train the algorithm on past cases that include examples of those features. One way to ensure that the system's recommendations are attuned to the legal rules would be to program it narrowly and set it to display only recommendations in which it has high confidence. Third, the algorithm would produce a prediction about the identity or nature of a person or object, identifying the level of confidence about the prediction. Based on the prediction, military lawyers would assess whether the person or object meets the legal test set out in LOAC and advise the commander about the action's legality.
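The second and third phases, gating recommendations behind a confidence threshold and then handing the prediction to a human lawyer for the ex post legal judgment, can be sketched in a few lines. The threshold value and the label text below are assumptions for illustration only.

```python
from typing import NamedTuple, Optional

class Prediction(NamedTuple):
    label: str         # e.g. "member of organized armed group" (illustrative)
    confidence: float  # model's estimated probability, 0.0 to 1.0

# Hypothetical cutoff: the system surfaces a recommendation only when it is
# highly confident; 0.9 is an illustrative value, not a legal standard.
CONFIDENCE_THRESHOLD = 0.9

def recommend(pred: Prediction) -> Optional[Prediction]:
    """Return the prediction only if the system is highly confident;
    otherwise return None and show nothing to the decision-maker.
    The legal test itself is still applied by a human lawyer."""
    return pred if pred.confidence >= CONFIDENCE_THRESHOLD else None

shown = recommend(Prediction("member of organized armed group", 0.94))
hidden = recommend(Prediction("member of organized armed group", 0.55))
```

The point of the gate is that the algorithm's output is an input to, not a substitute for, the lawyer's assessment of whether the LOAC test is met.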

There would be real benefits to pursuing these algorithms. Their use could improve the speed and accuracy of targeting and detention decisions, and it may carry subsidiary benefits as well. First, the process of developing these algorithms may force government officials to come to greater agreement about what bodies of law apply to particular situations and what factors are relevant for training an algorithm that implicates detention or targeting decisions. Because the coding process will involve decisions about the nuances of LOAC and may happen ex ante, before the system is deployed in the field, there may be greater opportunities for a set of U.S. government actors beyond Defense Department officials to participate in those decisions. Second, the use of these algorithms may make it simpler for militaries to recreate and audit their own detention and targeting decisions. Third, automation-driven efforts to quantify the features and confidence levels surrounding LOAC decisions may force military actors to question and more clearly articulate what drives their non-computerized military decisions.

In creating these decision-support algorithms, military operators, programmers, and lawyers will undoubtedly face challenges. Training these types of algorithms requires a lot of high-quality data, and militaries must guard against hacking and retest machine learning systems frequently. Determining the specific features relevant to the application of a LOAC rule will involve trial and error, as well as steep learning curves for everyone involved. Lawyers will need to understand the capabilities, requirements, and limits of algorithms, while programmers will need to learn the basics of LOAC and how militaries make LOAC-infused decisions under pressure. Leaving lawyers in the loop in the law-algorithm-law process is the most prudent way forward, at least until war becomes "hyperwar."


Ashley Deeks is a Professor of Law at the University of Virginia Law School, where she teaches international law and national security law.



Photo credit: Piqsels