Israel – Hamas 2024 Symposium – Algorithms of War: Military AI and the War in Gaza

by Omar Yousef Shehabi and Asaf Lubin | Jan 24, 2024


January brought with it a new phase in the war in Gaza. Earlier this month, Israeli Defense Minister Yoav Gallant declared that the “intense combat” stage of the war had ended, at least in the northern region of the Gaza Strip, with thousands of reservists returning home to Israel. Meanwhile, the world awaits a decision from the International Court of Justice on South Africa’s provisional measures request, in which South Africa urged the Court to order Israel to “immediately suspend its military operations in and against Gaza.”

This latest round of violence between Israel and Hamas has brought so much suffering to a region that had already experienced unimaginable amounts of pain, bloodshed, and heartache. With the grotesque and barbaric assault by Hamas on October 7, the beheadings and merciless acts of sexual violence, the burning of kibbutzim and towns, and the taking of hostages. With tens of thousands now dead in Gaza and millions more displaced. With an unprecedented humanitarian and health catastrophe and daily warnings from the World Health Organization about the risk of famine and deadly disease outbreaks in the Gaza Strip.

It is trite but true that military forces fighting in densely populated urban areas must make countless life-and-death decisions. These decisions are particularly consequential where the enemy disregards basic rules of international humanitarian law (IHL) concerning distinction, precautions in attack, and precautions against the effects of attacks. Many of these decisions, particularly those relating to targeting, require real-time legal and moral balancing, sometimes with limited relevant and comparable historical precedent to serve as a compass. Civilians always bear the costs of these balancing acts.

Sometimes these decisions will involve mistakes and miscalculations, often driven by the uncertainty of the factual world as presented through the limiting lens of intelligence assessments. These decisions may also reflect systemic biases, triggered by the wrong set of values or presuppositions. Certain biases might stem from the top, promoted by political or military officials. Some of these decisions might best be understood not as mistakes at all, but rather as grossly reckless or intentional conduct, carrying even greater degrees of culpability. As one Israeli military intelligence source told the Israeli news site +972Mag,

When a 3-year-old girl is killed in a home in Gaza, it’s because someone in the army decided it wasn’t a big deal for her to be killed—that it was a price worth paying in order to hit [another] target. We are not Hamas. These are not random rockets. Everything is intentional. We know exactly how much collateral damage there is in every home.

Whether intentional or otherwise, we must scrutinize these wartime practices to avoid legitimizing or tacitly condoning them.

On Jus ad Bellum and Jus in Bello

In this spirit, the use of emerging technologies in the Gaza war deserves careful attention. The territories occupied by Israel since 1967 have long been a “proving ground” for new surveillance technologies with the potential to supercharge weapons of war. Independent studies of these technologies are vital to ensure that technologies that intrinsically violate IHL, or whose uses are inherently harmful, are not perpetuated. The concern is one of “sticky precedent,” sustained through a process of gradual toleration and collective silence. The worry is that these technologies and their associated wartime practices will be exported through the arms trade and military aid, thereby transiting from one battle zone to others.

Within the limits of this post and its follow-up, we hope to give some attention to one particularly disruptive technology: the wartime use of artificial intelligence (AI) as a decision support tool in targeting. This post introduces the rise of AI in military targeting and examines its application in the war in Gaza. Its goal is to provide a broad view of the implications of such predictive algorithms for IHL. A later post, part of a symposium celebrating the new Oxford University Press anthology co-edited by Professor Laura Dickinson and LTC (ret.) Ed Berg, Big Data and Armed Conflict, will explore one particular legal obligation—the duty of constant care—in the context of the use of military AI and its effects on digital rights protection.

Reasonable minds can differ on certain ad bellum aspects of the war in Gaza (aspects that we both thought about and wrestled with in writing this post, and which have been exhaustively debated elsewhere, see for example here and here). Within the limits of this post we do not wish to address, let alone resolve, these thorny questions. Rather, we are interested in focusing on in bello concerns, particularly those raised by the use of AI. That said, we do recognize that the use of AI in military targeting could have ad bellum implications. Existing AI decision support tools, oriented as they are towards advising particular in bello targeting decisions, may encode certain blind spots relating to ad bellum analysis that could produce undesirable and understudied externalities. Ultimately, though, we leave these questions for another day.

The Rise of AI in War and Legal Responses

The rise of AI in military applications, particularly in battlefield scenarios, marks a significant shift in modern warfare tactics and strategy. AI’s integration into military technology has been driven by its ability to process and analyze large volumes of data rapidly, make predictions, and execute complex tasks with a speed and precision that far surpass human capabilities. For instance, in reconnaissance and surveillance, AI-powered drones and satellites are employed to gather real-time intelligence. These systems, equipped with advanced sensors and imaging technologies, can identify and track targets, assess terrain, and monitor enemy movements with astonishing accuracy. The U.S. military’s Project Maven, initiated to automate the analysis of vast amounts of video footage, exemplifies this application. A recent U.S. Department of Defense (DoD) Adoption Strategy seeks to accelerate even further “the adoption of advanced data, analytics, and artificial intelligence technologies” in order to enable “rapid, well-informed decision making” by DoD leaders and warfighters alike.

Yet, as one of us has written, the majority of international legal attention in recent years has focused on only one narrow category of AI military applications, lethal autonomous weapon systems (LAWS). Debates around accountability and liability for killer robots, and calls and counter-calls for and against a moratorium on their development and deployment, have sucked much of the collective air out of the conversation around military AI regulation. That is problematic, because AI is already being embedded in military activity in far more prosaic and pervasive ways. AI decision support systems now inform countless decisions “about who or what to attack and when.” As the International Committee of the Red Cross notes, militaries that have chosen to employ these oft-opaque and potentially biased machine learning systems are at risk of “over-reliance on AI-generated outputs,” which poses a risk to “civilian protection and compliance with international humanitarian law.”

It is against this backdrop that the United States launched a global effort at norm development as part of the Summit on Responsible AI in the Military Domain (REAIM), convened in February 2023 by the Netherlands and South Korea. As Tobias Vestner and Juliette François-Blouin noted, “the Summit was the first major event that opened the debate from a focus on LAWS to a broader range of military applications of AI.” At the Summit, the United States debuted a “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy.” At the time of writing, 51 States have endorsed the Declaration. As the DoD summarized:

The declaration consists of a series of non-legally binding guidelines describing best practices for responsible military use of AI. These guidelines include ensuring that military AI systems are auditable, have explicit and well-defined uses, are subject to rigorous testing and evaluation across their lifecycle, have the ability to detect and avoid unintended behaviors, and that high-consequence applications undergo senior-level review.

Among other things, the declaration calls on States to “consider how to use military AI capabilities to enhance their implementation of international humanitarian law and to improve the protection of civilians and civilian objects in armed conflict,” including by way of taking “proactive steps to minimize unintended bias in military AI capabilities.”

Predictive Algorithms in the War in Gaza

Israel attended the REAIM Summit but has not endorsed the Political Declaration. Nor did it sign the Summit’s more modest communiqué. Israel’s resistance to global calls to limit or restrain aspects of the use of AI in the military domain is concerning given its extensive use of such technologies in the Gaza war.

First, a caveat. Very little is publicly known about Israel’s AI programs. We must rely on a few public relations pieces and some investigative reporting to assess the scope and nature of Israel’s military AI machinery. What we do know is that programs such as “The Alchemist,” “The Gospel,” and “Depth of Wisdom”—each carrying an ominous-sounding codename—have redefined the military campaign in Gaza to such an extent that Israeli military AI must be regarded as a force-multiplier capability. Israel relied on these programs to attack thousands of Hamas and Islamic Jihad assets, missile-launching sites, rocket manufacturing, production and storage sites, underground tunnels, and the private homes of members of those organizations.

Having studied some of the available reporting, we can shed some limited light on what each of these programs entails. “The Alchemist” provides unit commanders with real-time alerts of possible threats on the ground, sent directly to their handheld tablets. “The Gospel” supports target analysis and identification at the command level (e.g. brigade and division). Commanders are given targeting recommendations, which they are able to verify or discredit with the help of human analysts. Finally, “Depth of Wisdom” was deployed to map Gaza’s tunnel network. This big-data analytical tool provided a “full picture of the network both above and below ground with details, such as the depth of the tunnels, their thickness and the nature of the routes.”

These technologies all rely on geospatial and open-source intelligence that is then matched against an endless stream of historical datasets to tell a coherent story about the present and near-future. The idea is that AI can truly break the time/space continuum. It can see backwards-and-forwards, up-and-down, and inside-and-out, in ways that no one person can. It thus offers combatants unparalleled capacities that, in theory at least, may clear some of the inherent fog of war.

Implications for the Law of Targeting

The staggering number of civilian casualties and the level of civilian destruction in the war in Gaza, notwithstanding Hamas’s historic failure to comply with its obligation to take precautions to protect the civilian population from attacks (which others term human shielding), generate a strong prima facie claim that Israel’s AI tools are not currently calibrated with the aim of minimizing harm to civilians and civilian objects. Putting aside lay claims that these AI tools have given rise to a “mass assassination factory,” their use as manifested in Gaza puts into question whether military AI could ever be deployed in a manner protective of IHL rules. As some scholars have argued, “AI-enabled targeting systems, fixed as they are to the twin goals of speed and scale, will forever make difficult the exercise of morally and legally restrained violence.”

The first step in grappling with these concerns is a demand for greater transparency. As one of us has argued, the reasonableness of intelligence processes and of the tools developed to facilitate them can only be independently assessed on the basis of a public record. The fact that so little is known about Israel’s AI intelligence support systems seriously impairs the legitimacy of those systems. As Ashley Deeks writes,

Being transparent about why the military chooses to use predictive algorithms, what advantages these algorithms (and particularly machine learning) offer, how the military plans to ensure that their use is consistent with the law, what the costs of using them will be, and how the military intends to mitigate those costs will go a long way toward attracting public and allied support for this latest turn in warfare.

The same is also true in the context of intelligence collaboration. The fact that the Biden administration has been intimately involved in providing Israel with intelligence to support these targeting efforts, while staying silent as to the scope and nature of its engagement, is equally troublesome. It calls into question the U.S. commitment to the military AI norms it has championed through its Political Declaration. It also raises questions about the sincerity of the administration’s repeated urging of the Israeli government to meaningfully protect Palestinian civilian lives.

Beyond transparency, the employment of AI introduces a number of significant challenges to targeting rules in IHL, namely evolving standards of necessity, proportionality, distinction, and precautions. As Mark Klamberg writes, “appropriate limitations on the means and methods of AI warfare have to be implemented into training, military units, computer code and [rules of engagement] in a seamless and integrated way.” But no country, Israel included, can claim to be in a position to have done this already. The reality is that militaries are developing and deploying these technologies in real-time and in real battle scenarios, learning as they go. Turning Palestinians into guinea pigs in what is otherwise a global arms race for wartime AI supremacy is not only dangerous, but morally inexcusable.

Another serious concern has to do with the erosion of IHL principles through the deployment of AI decision support systems. These systems are only semi-automated, which means that there is always a human-in-the-loop. But what value does this human supervision bring? As Neil Renic and Elke Schwarz have explained, AI systems erode moral agency. Part of the reason for that, as they explain, is rooted in “routinization,” which at once both reduces “the necessity of decision making” and masks the life-and-death meaning of the decision.

But an equally problematic aspect of the deployment of AI in these scenarios is the probabilistic nature of AI assessments. Systems like “The Alchemist” and “The Gospel” generate claims based on statistical correlations, the nature of which is at best partially understood by the decision-maker. Military commanders, who bear the onus of responsibility under IHL for faulty targeting, already suffer from a lack of nuanced understanding of traditional intelligence production cycles, leaving them ill-equipped to supervise those processes. Their capacity to do so in an AI environment, where they have even less ability to challenge the algorithmic black box advising them, is diminished further still, entailing an ever greater erosion of their moral agency.

Conclusion: A Plea for Passionate Reasoning

We recently marked a decade since the end of the last failed round of direct bilateral permanent-status negotiations between Israelis and Palestinians. In 2013, we both worked on opposite sides of those negotiations. Asaf was an articled clerk at the Israeli Ministry of Foreign Affairs Office of the Legal Advisor. Omar was a legal advisor to the Palestinian negotiating team. In 2013 we didn’t know each other; we were strangers. It would take several more years and one Yale Law School to bring us together. We are now friends and colleagues.

Legal analysis about war may sometimes come across as clinical. In Naz Modirzadeh’s terms, we hope our writing is not read as “distanced, remote, and abstract” (p. 63). We do not wish to distance ourselves; we cannot distance ourselves from a war that has impacted so many of our loved ones. At the same time, passionate reasoning might come across as patronizing or traumatizing. While the ravages of war are felt in every corner of Israel and Palestine, we hope this text is not read in this way by our brothers and sisters back home.

We stress these points about our backgrounds to highlight the fact that collaborative scholarship is still possible, dare we say needed, at a time of growing divisiveness.

***

Omar Yousef Shehabi is an Acting Assistant Professor at NYU School of Law and a JSD candidate at Yale Law School.

Asaf Lubin is an Associate Professor of Law at Indiana University Maurer School of Law. He is additionally a Faculty Associate at the Berkman Klein Center for Internet and Society at Harvard University, an Affiliated Fellow at Yale Law School’s Information Society Project, and a Visiting Scholar at the Hebrew University of Jerusalem Federmann Cyber Security Research Center.

Photo credit: IDF Spokesperson’s Unit
