Through the Drone Looking Glass: Visualization Technologies and Military Decision-Making
On 29 August 2021, the U.S. military launched its last drone strike in Afghanistan before American troops withdrew from the country. The strike targeted a white Toyota Corolla near Kabul’s international airport, driven by Zemari Ahmadi and believed to be carrying an ISIS bomb. The strike destroyed the targeted vehicle and killed ten people. The U.S. military called it a “righteous strike,” explaining that it was necessary to prevent an imminent threat to American troops at Kabul’s airport. However, following the findings of a New York Times investigation, a high-level U.S. Air Force investigation found that the targeted vehicle did not pose any danger and that all ten casualties were civilians, seven of them children. Despite these outcomes, the investigation concluded that the strike did not violate any law, because it was a “tragic mistake” resulting from an “inaccurate” interpretation of the available intelligence. The investigation suggested that the wrong—and lethal—interpretation of the intelligence—which included eight hours of drone visuals—resulted from “execution errors” combined with “confirmation bias.”
Using cognitive insights, such as confirmation bias, to explain—and excuse—military errors resulting in civilian casualties is a step forward, but not necessarily in the right direction. It is a step forward in the sense that it recognizes significant cognitive dynamics that limit crucial military risk assessment and fact-finding processes. But this step will not lead to better outcomes without a deeper understanding of how existing data practices—including real-time drone visuals—are susceptible to, and affected by, cognitive biases. Stronger, more effective protections for civilians in armed conflicts require acknowledging the core role drone visuals play in generating knowledge that is often perceived as objective—despite being distorted by technical, socio-technical, and cognitive dynamics.
In this essay I aim to add these technological and behavioral elements in military knowledge production to the important discussion on compliance with International Humanitarian Law (IHL), which was the focus of a joint symposium hosted by the EJIL: Talk and Articles of War blogs in November 2021. The compliance symposium reflected a series of discussions by members of the Oxford Forum for International Humanitarian Law Compliance. Its main premise was that IHL has an enduring compliance problem, attributed to three main factors: (i) IHL norms—and their applicability to concrete situations—are inherently contested; (ii) IHL’s obligations are delegated to individuals; and (iii) IHL’s compliance mechanisms are ineffective and subject to a transparency gap concerning states’ actual behavior. The path forward, as reflected in these discussions, is through “minilateral” initiatives creating a compliance dialogue, as well as through strengthening armed forces’ IHL training.
I agree with these main premises of the symposium. My response aims to contribute to this discussion by adding a missing element, which is central to the compliance problem: the cognitive dynamics affecting IHL’s visualization-centered knowledge production practices. As the symposium focused on compliance in its broader, protective sense, and identified ‘diplomatic doublespeak’ as a part of the problem, I will also engage here with “compliance” in its humanistic (rather than legalistic) form. This is especially important because when the concrete scope and interpretation of legal rules are deeply contested, compliance in its narrow legalistic sense may become meaningless. I will also use, interchangeably, the terms International Humanitarian Law (IHL) and Law of Armed Conflict (LOAC), signaling that this discussion transcends the existing interpretive “camps,” and that the law’s protective goals ultimately contribute to human security everywhere.
Visualization Technologies and Compliance with IHL/LOAC
My research into the effects of visualization technologies on military decision-making, which recently won ASIL’s 2021 David D. Caron Prize, identifies several compliance-related challenges that stem from reliance on visualization technologies and can be explained—and improved—using behavioral insights. Visual technologies may influence the relevant legal standards, shaping the meaning of “reasonable commander” and constructing the scope of the legal burdens of care.
An awareness of the effects of cognitive biases on the interpretation of drone visuals may influence the scope of the duties to ‘do everything feasible’ to verify target identification and to avoid or minimize collateral damage. For example, meaningful precaution may require mitigating systemic errors deriving from biased interpretation of drone visuals through various debiasing techniques. Additionally, the visible outputs of visualization technologies and the invisible biases involved in their interpretation may amplify pre-existing vulnerabilities in the legal standards, in particular their murky standards of proof.
In his contribution to the compliance symposium, Oakley referred to the debate surrounding the required level of certainty in targeting decisions, arguing that there is a knowledge gap concerning the legal requirement. Indeed, in arguing against the “reasonable certainty” standard and supporting the “near certainty” one, Adams and Goodman demonstrate that the level of certainty required by LOAC is anything but certain. But even if the standard itself were clear, behavioral insights teach us that decision-makers’ level of certainty may be unconsciously affected by a number of cognitive processes that lead to misinterpretation of the available evidence and to experts’ overconfidence in their biased analysis.
Limitations of Visualization Technologies
In the remainder of this post, I will focus on this last point, addressing the challenges relating to the effects of visualization technologies on military fact-finding processes, and exposing their technical, socio-technical, and cognitive constraints. I will do so using examples from military investigations in the United States and Israel, drawing attention to the invisible burdens these technologies place on decision-makers. As findings from these investigations show, visualization outputs create an imperfect, yet highly persuasive, virtual representation of the actual conditions on the ground; a representation that is difficult, if not impossible, to refute.
To clarify, my claim is not that military decision-making processes are better or more accurate without the aid of visualization technologies. These technologies indeed provide a large amount of essential information about the battlefield, target identification, and the presence of civilians in the vicinity of a planned attack. I also do not engage here with arguments, such as those made by Samuel Moyn, that precision weapons and visualization technologies humanize armed conflicts and thus contribute to the legitimation of lethal attacks. The argument, instead, is that the undeniable benefits of visualization technologies for military decision-making processes mask their blind spots: visualization technologies are imperfect and limited in several ways, which are not always visible to decision-makers.
First, visualization technologies have technical and human-technical limitations, including insufficient or corrupt data inputs, blind spots, and time and space constraints. The missing details or corrupt information remain invisible, while the visible (yet limited or partial) outputs capture decision-makers’ attention. Indeed, emerging empirical evidence suggests that real-time imaging outputs may reduce the situational awareness of decision-makers, who tend to place an inappropriately high level of trust in visual data. Additionally, technology systems may fail or malfunction.
When military practices rely profoundly on technology systems, decision-makers’ own judgment, and their ability to evaluate evolving situations without the technology, erode. The misidentification of the Doctors Without Borders hospital in Kunduz, Afghanistan, in October 2015 as a legitimate target—a decision that led to the killing of 42 patients and hospital staff members—was partly attributed to the AC-130 aircrew’s reliance on infrared visualization technology. Because infrared imaging cannot display color, it could not depict the red of the hospital’s red cross symbol, which might have alerted the aircrew that the intended target was a medical facility. Deeks points out that both a positive target identification and an implicit approval (failing to alert that the target is a protected one) may involve automation bias, whereby individuals accept the machine’s explicit or implicit recommendation.
Second, these technical (and human-technical) limitations create gaps in the available data. The need to fill these gaps makes military decision-making “rife with subjectivity and speculation,” as Broude puts it. Van Aaken emphasizes the relevance of bounded rationality theories, including concrete biases such as availability, anchoring, and confirmation, to the application and interpretation of international law generally, and to armed conflicts in particular.
Availability bias occurs when people overstate the likelihood that a certain event will occur because it is easily recalled, making decision-makers less sensitive to information that runs contrary to their expectations. This means that, under some circumstances—for example, in areas where insurgents have previously been identified—individuals depicted in drone visuals may be more likely to be interpreted as insurgents than as civilians. Anchoring bias occurs when the estimation of a condition is based on an initial value (an anchor) that might result from intuition, a guess, or other easily recalled information. The problem is that decision-makers do not adjust sufficiently from this initial anchoring point. Confirmation bias refers to people’s tendency to seek out and act upon information that confirms their existing beliefs, or to interpret information in a way that validates their prior knowledge. As a result, the interpretation of drone visuals may be skewed toward decision-makers’ existing expectations, and this confirmation may then serve as an (inaccurate) anchor for casualty estimates or target identification.
To demonstrate the potential effects of these cognitive biases on military decision-makers, let’s return to the 29 August attack on the white Toyota Corolla that killed Zemari Ahmadi, three of his children, and six other family members and neighbors. The investigation concluded that U.S. forces had received information about a planned terror attack involving a white Toyota Corolla at a specified location near Kabul’s international airport. Once that information was received, visuals of Mr. Ahmadi, who was driving a white Toyota Corolla, were interpreted consistently with this intelligence, and all of Mr. Ahmadi’s subsequent movements and actions were interpreted as affirming this suspicion.
Similarly, erroneous subjective judgments—likely affected by availability bias—were found to be the cause of a mistaken Israel Defense Forces (IDF) attack on civilians during Operation Cast Lead in January 2009. On 5 January 2009, Israeli forces fired several projectiles at the Al-Samouni family house south of Gaza City, killing 21 civilians. The house was targeted following a drone visual that was misinterpreted as depicting five men holding RPGs at that location. An Israeli military investigation later found that the attack resulted from an erroneous reading of the drone visual, which in fact depicted the five men holding firewood. The technical limitations of the image left room for human judgment, which inserted subjectivity—and cognitive biases—into a seemingly objective visual. In my research into the effects of visualization technologies on military decision-making I provide qualitative evidence from several additional investigations.
Strengthening Compliance
Based on this analysis, strengthening compliance with IHL/LOAC’s protective goal (as opposed to its contested standards), whether by minilateral initiatives or by training, must include a new program focused on the behavioral elements in its technology-based knowledge production practices. In particular, it is essential to identify how drone visuals affect human risk assessments, adding tailored protections against these unconscious challenges. These may include reconceptualization of the “duty of care” (as suggested by Hirsch in another context); heightened visibility of internal disagreements about the interpretation of drone visuals; a rigorous inter-agency review process, with the goal of offering alternative interpretations (similar to the idea of “red teams” in investigative journalism); training sessions that identify the concrete limits and blind spots of the technology (including relevant biases, such as automation bias); and a shift from individual to organizational accountability for technology-related failures.
This last point can lead to better compliance as it encourages individuals to identify their own errors without fear of retaliation. Of course, ex post investigations are themselves influenced by a number of cognitive biases, including outcome bias, as Broude and Levy demonstrate. In the chapter I contributed to Bianchi and Hirsch’s International Law’s Invisible Frames book, I propose legal, epistemological, and behavioral ways to strengthen ex post military investigations, with a particular emphasis on ex post fact-finding processes.
While drone visuals hold much promise for evidence-driven risk assessments, visualization technologies may also jeopardize safety and security by masking data gaps and triggering unconscious cognitive biases. As governments around the world intensify their investments in sophisticated combat drones, it is essential to develop effective ways to better integrate these technologies into human decision-making processes, acknowledging the limitations of human cognition.
***
Shiri Krebs is an Associate Professor at Deakin University’s Law School, and Co-lead, Law and Policy Theme, at the Australian Government Cyber Security Cooperative Research Centre (CSCRC).