2023 DoD Manual Revision – To Shoot, or Not to Shoot . . . Automation and the Presumption of Civilian Status

(Editor’s note: This post is part of a series analyzing the 2023 revisions to the U.S. Department of Defense’s Law of War Manual.)
One of the most important and inherently complex areas in the conduct of hostilities relates to plans and decisions to attack. International Humanitarian Law (IHL), particularly in the form of Additional Protocol I, Article 57(2), provides guidance on precautions, requiring those who plan or decide upon an attack to “do everything feasible to verify that the objectives to be attacked are neither civilians nor civilian objects and are not subject to special protection but are military objectives … .” The understanding of “feasible” has naturally evolved over time, keeping pace with new methods of gathering information and synthesising data. Updated understandings of key concepts embedded in well-established IHL rules are essential if the protections afforded under this area of international law are to remain relevant and effective.
On 31 July 2023, the Department of Defense (DoD) released an update to its Law of War Manual, including a revision to Section 5.4.3 (“Assessing Information in Conducting Attacks”) and a new subsection 5.5.3 titled “Feasible Precautions to Verify Whether Objects of Attack are Military Objectives.” Only the third update to the Manual since DoD first issued it in June 2015, the revision raises some interesting questions about “information” in an increasingly information-reliant battlespace. The updated Manual confirms the existence of, and the need to comply with, the law of war. Given the ever-increasing pace of warfare, it also stresses that IHL does not prevent commanders and others from making timely decisions or acting with the necessary speed. The updated Manual outlines the legal duty to presume that persons or objects are protected from being targeted for attack unless the available information indicates that they are military objectives.
In this short post we explore and raise several questions relating to the lesser-discussed element of this change: not the persons or objects, but the “available information.” This is of critical relevance in a context where information is increasingly owned and managed by commercial interests.
Available Information
The International Committee of the Red Cross (ICRC) Interpretive Guidance on the Notion of Direct Participation in Hostilities notes,
Obviously, the standard of doubt applicable to targeting decisions . . . must reflect the level of certainty that can reasonably be achieved in the circumstances. In practice, this determination will have to take into account, inter alia, the intelligence available to the decision maker, the urgency of the situation, and the harm likely to result to the operating forces or to persons and objects protected against direct attack from an erroneous decision (p. 76, emphasis added; see also ICRC, Customary International Humanitarian Law Study, p. 24).
Where does this “available information” come from? In the invasion of Ukraine, Elon Musk’s network of satellites has significantly enabled Ukrainian communications and operations. Satellite data require calibration and validation, even when owned by nation States. Who holds responsibility for the accuracy of the imaging and data that enable battlefield decisions? Is it Elon Musk’s company, or those using his geospatial data? Can commanders be expected to know how to interrogate data, and to what degree? What if these data are tampered with? What is considered reasonable knowledge in the circumstances? What about data provided by other commercial providers, such as Microsoft? What does the role of these data providers and tech companies become? Are their employees direct participants in hostilities?
How can such public/private partnerships, in which data are mission-critical, ensure compliance with the DoD Manual and the law of armed conflict? What about companies, such as Palantir, that increasingly offer services amounting to making judgment calls? What happens when they get it wrong? The use of data and automation (even without AI) expedites this decision-making. Who can meaningfully question or interrogate these systems anymore?
What happens when the security of these data is not assured, and they are tampered with and rendered unreliable, resulting in war crimes? Who is responsible? What is the burden of proof to show that the data were correct, reliable and secure?
As Professor Michael Schmitt has highlighted, armed conflict is brimming with uncertainty. It is almost inevitable that additional data will be relied upon to evidence compliance with the required standard. Where will these data come from? How will they be secured? Are the hardware and software providing them legitimate targets?
The push for ever more granular data has profound implications on the battlefield, but also beyond it. Who owns and uses these data? What implications will facial recognition trained on civilians have for their lives if or when this conflict ends? Who owns the data? Who owns the algorithms? The drive for more accurate battlefield data, melding tech and battle, may have perverse long-term effects, even without automation or AI, on populations that are particularly vulnerable after conflict; there are already reports that mothers of deceased soldiers are being contacted, courtesy of Clearview AI. What happens when these data are used post-conflict to market, to target further, to hack, or to further undermine civil liberties?
Concluding Thoughts
A deeper dive into the wider issues relating to data, commercial ambitions, and populations that are unlikely even to be aware that their details are being gathered for a range of purposes is needed, and it has commenced. Some of these initiatives involve privacy, a term commonly misused and requiring greater contemplation in the face of joined data sets. But the issues are far broader and are being considered by data science communities through principles of practice, and by commercial communities through forthcoming AI management standards. Defence can draw on and learn from many existing laws and initiatives, including where gaps exist and require enhanced governance, rather than recreating frameworks for data and AI from scratch.
Meanwhile, the legal presumption of civilian status in the DoD Manual seems simple on its face. However, the data detritus it may require could have far longer-term and more harmful impacts on populations than is currently being contemplated, and these impacts require urgent consideration under international law – now.
***
Dr Kobi Leins is a global expert in AI, international law and governance.
Dr Helen Durham is a global expert in international humanitarian law, humanitarian action and diplomacy.
Photo credit: Ben Letham