Military Use of Biometrics Series – Israel’s Use of AI-DSS and Facial Recognition Technology: The Erosion of Civilian Protection in Gaza

by Emelie Andersin | Oct 24, 2025


Editors’ note: This post is part of a series relating to the law applicable to the military use of biometrics. It is drawn from the author’s article-length work, “The Use of the ‘Lavender’ in Gaza and the Law of Targeting: AI-Decision Support Systems and Facial Recognition Technology” appearing in the Journal of International Humanitarian Legal Studies. The introductory post is available here.

Since October 7, 2023, according to the UN Office for the Coordination of Humanitarian Affairs (OCHA), at least 64,656 Palestinians have been killed, including more than 18,000 children. A former Israeli colonel recently confirmed that more than 200,000 Palestinians have been killed or injured. The UN Special Rapporteur on the Occupied Palestinian Territories, Francesca Albanese, has concluded that Israel is committing genocide against Palestinians in Gaza, a conclusion recently reaffirmed by a UN Commission of Inquiry. In Gaza, the scale of destruction and loss of life has been described as "unprecedented."

Israel has used the battlefield in Gaza to experiment with and deploy artificial intelligence (AI) decision support systems (AI-DSS) to compile lists of potential targets, also described as "kill lists." The Israel Defence Forces (IDF) have established a large-scale facial recognition program at checkpoints across Gaza, reportedly used to conduct mass surveillance and to collect biometric data from Palestinians without their knowledge or consent.

In light of reports of the staggering number of civilian casualties, including children, and the systematic destruction of Gaza, it is imperative to critically examine these patterns of harm and how the IDF has used AI-driven systems to accelerate targeting. According to +972 Magazine and Local Call, within the first six weeks after October 7, 2023, one of these AI-DSS, Lavender, generated at least 37,000 target recommendations. Its error rate was reportedly around ten percent, meaning that several thousand of those flagged (ten percent of 37,000 is 3,700) may have been misidentified as members of Hamas.

In this post, first, I outline the reported uses of facial recognition technology (FRT) and AI-DSS. Second, I highlight the technical risks inherent in these technologies, such as false positives, which can exacerbate harm to civilians. Third, I address the legal constraints on the collection, analysis, and use of biometric data. Finally, I discuss the patterns of harm associated with the use of AI-DSS, with particular attention to the principle of proportionality.

AI-DSS in Targeting: Lavender and Where’s Daddy

According to media reports, the IDF has used multiple AI-DSS in Gaza, including Lavender and another called Where's Daddy. Jessica Dorsey and Marta Bo provide comprehensive explanations of other reported AI-DSS and their implications. Lavender is reportedly used to identify individuals suspected of affiliation with Hamas or Palestinian Islamic Jihad (PIJ). The system was built on supervised learning, meaning it was trained on datasets labelled with different attributes in order to identify Hamas members. It then searches for similar patterns in the wider population, assigning each Palestinian a score from one to one hundred indicating the probability of affiliation. Lavender also provides users with the phone numbers and home addresses of suspected members.
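To make the class of technique concrete, the sketch below shows a generic supervised classifier that assigns a one-to-one-hundred score from labelled examples. It is emphatically not Lavender, whose architecture, features, and data are not public; every feature, label, and figure here is synthetic and hypothetical.

```python
# Minimal sketch of a supervised scoring model of the kind described in
# the reporting. This is NOT the actual system: the features, labels,
# and data below are entirely synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical binary features per person, e.g. "communication pattern
# resembles a labelled example", "device previously associated with a
# flagged user", "name/nickname similarity above some cut-off".
n_people, n_features = 10_000, 3
X_train = rng.integers(0, 2, size=(n_people, n_features))
# Noisy, partly arbitrary labels: this is precisely where unrepresentative
# datasets and over-broad label definitions enter the pipeline.
y_train = (X_train.sum(axis=1) + rng.normal(0, 1, n_people) > 2).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# Score people in the wider population from 1 to 100, as the reporting describes.
X_population = rng.integers(0, 2, size=(5, n_features))
scores = np.rint(model.predict_proba(X_population)[:, 1] * 99 + 1).astype(int)
print(scores)  # a correlation-based score, not proof of anyone's status
```

The point of the sketch is that the score expresses nothing more than correlation with the training labels; it carries no information about a person's actual status or the protections they are owed in armed conflict.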

Where's Daddy reportedly tracks individuals flagged by Lavender, identifying when they return home to their families. Once flagged, an individual is placed under surveillance and marked for bombing upon returning home. In some cases, reporting indicates that the IDF authorised strikes that resulted in casualties among family members present in the home, raising serious concerns under international humanitarian law (IHL). Configuring a system to facilitate the bombing of civilian homes, with family members present who are not directly participating in hostilities, may be inconsistent with the principles of distinction, proportionality, and precautions in attack.

Facial Recognition Checkpoints

Forensic Architecture has documented that Israel has expanded its use of facial recognition surveillance in Gaza. On November 23, 2023, UN OCHA reported that internally displaced Palestinians were "ordered to show their IDs and undergo what appears to be a facial recognition scan." Checkpoints in Gaza have become sites of involuntary biometric collection through which Palestinians must pass. Whistleblower testimonies claimed that the biometric systems wrongly flagged civilians as wanted Hamas militants, leading to wrongful arrests and interrogations of Palestinians.

The New York Times later documented the deployment of a mass facial recognition program operated by the IDF's Unit 8200, which uses tools such as Google Photos to identify Palestinians at military checkpoints and to "pick out faces out of crowds and grainy drone footage." Investigations by Forensic Architecture identified both permanent and temporary checkpoints in Gaza equipped with remotely controlled biometric cameras. Reportedly, the collected biometric data is "connected to a large database of photographic and biometric data."

Facial Recognition Technology and False Positives

FRT is used to identify or verify an individual's identity. It detects a person's face, analyses the facial features, and compares the resulting template against stored records to establish or confirm who that person is. If Lavender receives input from FRT, it can search for, detect, and identify pre-enrolled individuals in a biometric database. Yet these systems cannot assess an individual's status under IHL, for example, whether they are hors de combat or civilians protected from direct attack. All the system can establish is identity.
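The two modes mentioned above, identification (one-to-many search) and verification (one-to-one confirmation), can be sketched generically as follows. This is an illustration of how embedding-based face matching works in principle, not a description of any system used in Gaza; face detection and feature extraction are abstracted away, and the "embeddings" are random placeholders.

```python
# Generic sketch of the matching stage of a face recognition pipeline:
# detection and feature extraction are abstracted into fixed-length
# embeddings. The embeddings here are random placeholders, not real templates.
import numpy as np

rng = np.random.default_rng(1)
EMBED_DIM = 128

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical enrolled database: identity -> stored embedding.
database = {f"person_{i}": rng.normal(size=EMBED_DIM) for i in range(1000)}

def identify(probe: np.ndarray, threshold: float = 0.6):
    """One-to-many search: return the best-scoring enrolled identity, if any."""
    best_id, best_score = max(
        ((pid, cosine(probe, emb)) for pid, emb in database.items()),
        key=lambda pair: pair[1],
    )
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

def verify(probe: np.ndarray, claimed_id: str, threshold: float = 0.6) -> bool:
    """One-to-one check: does the probe match the claimed identity?"""
    return cosine(probe, database[claimed_id]) >= threshold

probe = rng.normal(size=EMBED_DIM)  # stand-in for a face captured at a checkpoint
print(identify(probe))              # returns an enrolled identity or None
```

Both functions answer only the question of which enrolled record a face most resembles; nothing in the computation reflects whether the person is a civilian, a combatant, or hors de combat.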

At the heart of IHL lies the principle of distinction, codified in Article 48 of Additional Protocol I (AP I) and recognised as customary international law. It requires parties to distinguish at all times between the civilian population and combatants, and between civilian objects and military objectives, and to direct attacks only against lawful military targets. The key question, then, is how the use of FRT and AI-DSS affects compliance with this fundamental principle.

To answer this question, it is crucial to consider the associated technical risks. One of these risks is that the system generates a false positive match, which is made more likely when images are captured in uncontrolled environments with poor lighting, motion blur, and oblique angles, as is often the case in Gaza. FRT may therefore match a civilian to a known militant, a mistake with serious legal and humanitarian consequences if such data feeds into targeting systems.
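The scale of the problem is easy to underestimate. The short calculation below uses purely hypothetical figures (a one percent false match rate, a screened population of one million, and a rough multiplier for degraded imaging) to show how a seemingly low error rate still produces thousands of misidentified people.

```python
# Purely hypothetical numbers illustrating the base-rate problem with
# one-to-many face matching; none of these figures describe any real system.
screened_population = 1_000_000   # people passing through checkpoints
false_match_rate = 0.01           # 1% of innocent people wrongly matched
degraded_multiplier = 3           # poor lighting, movement, and angles raise the rate

false_matches = screened_population * false_match_rate
false_matches_degraded = screened_population * false_match_rate * degraded_multiplier

print(f"False matches at 1% FMR:            {false_matches:,.0f}")
print(f"False matches under degraded input: {false_matches_degraded:,.0f}")
# 10,000 and 30,000 wrongly matched people respectively, each a potential
# input to downstream surveillance or targeting decisions.
```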

Some scholars, such as Alison Mitchell, have argued that IHL does not appear to directly prohibit the use of FRT. Marten Zwanenburg and Emily Crawford have argued that, for certain categories of protected persons, specific IHL rules may restrict or even prohibit the collection of their biometric data. Omar Yousef Shehabi has written about the collection of biometric data during occupation, specifically in the occupied Palestinian territory. I encourage scholars to continue examining these matters in order to strengthen our understanding of how such systems implicate the protections IHL affords to the civilian population.

Algorithmic Bias and Shifting Definitions 

Reports suggest Lavender relies on parameters such as communication patterns, names and nicknames, and the use of devices previously associated with militants. Such indicators are far from definitive and risk capturing civilians whose behaviour merely overlaps with these attributes by coincidence. In broad terms, AI-driven systems trained through supervised learning are only as reliable as the datasets and labels on which they are trained. When datasets are unrepresentative, or categories are overly broad or ambiguous, outputs will inevitably be systematically flawed.

Compounding this problem, the definition of “Hamas operative” was reportedly fluid and subject to expansion. According to inside sources, the “bar of what a Hamas operative is” was lowered, thereby vastly widening the pool of individuals classified as “legitimate” targets. In practice, this shift meant that civil defence and police workers were identified as Hamas members. Such category expansion undermines the principle of distinction and increases the risk that civilians are wrongly identified as Hamas operatives, exposing them to serious harm.
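The targeting effect of a lowered "bar" can be illustrated with a small, entirely invented example: the score distribution and cut-offs below do not come from any reporting, but they show how modest changes to a threshold dramatically widen the pool of people flagged.

```python
# Hypothetical illustration: lowering the score threshold that defines an
# "operative" sharply widens the pool of people flagged as targets.
import numpy as np

rng = np.random.default_rng(2)
# Invented 1-100 score distribution over a population of 100,000 people.
scores = np.clip(rng.normal(loc=30, scale=15, size=100_000), 1, 100)

for threshold in (90, 70, 50):
    flagged = int((scores >= threshold).sum())
    print(f"threshold {threshold}: {flagged:,} people flagged")
# Each drop in the threshold adds people whose scores reflect weaker,
# more ambiguous correlations, increasing the share of misidentified civilians.
```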

Furthermore, in some cases, the review process prior to authorising strikes against so-called "junior militants" was reportedly reduced to checking whether the generated target was male rather than female. One source admitted they would only "invest 20 seconds for each target at this stage, and do dozens of them every day." Even if applied only occasionally, such a superficial check cannot, under any circumstances, constitute compliance with the principle of distinction. If it is correct that Lavender generated targeting recommendations based on an overbroad definition of "Hamas operatives," and that the IDF acted upon these recommendations with minimal scrutiny, this poses a profound threat to civilian protection.

Forcible Biometric Collection

As described above, the forcible collection of Palestinians’ biometric data raises legal concerns under IHL. I argue that one of the central IHL duties relevant to the forcible collection of biometrics from Palestinians is the prohibition of coercion stipulated in Article 31 of the Fourth Geneva Convention (GC IV). This states, “No physical or moral coercion shall be exercised against protected persons, in particular to obtain information from them or from third parties.”

As documented by Forensic Architecture, Palestinians were "ordered to show their IDs" during the forcible mass transfer of civilians from northern to southern Gaza following the October 13, 2023, evacuation order. Forcible biometric collection, such as via facial recognition, could arguably amount to unlawful "physical or moral coercion." If biometric data is considered "information," then forcibly compelling Palestinians to undergo facial recognition scans may amount to prohibited coercion. One recognised exception is "to carry out the necessary evacuation measures" under Article 49 of GC IV. However, according to several authorities, including the International Committee of the Red Cross (ICRC), Human Rights Watch, and UN experts, the October 13, 2023, evacuation order was not lawful under IHL.

The ICRC Commentary to Article 31 of GC IV explains that the provision "covers all cases, whether the pressure is direct or indirect, obvious or hidden." The IDF's practices in relation to the evacuations can be understood as a form of "moral coercion": the reported use of intimidation and gunfire at checkpoints effectively forced civilians not only to move but also to provide their biometric data.

Article 27 of GC IV offers additional protection, providing that "[p]rotected persons are entitled, in all circumstances, to respect for their persons, their honour." Similarly, Article 75 of AP I states that "[e]ach party shall respect the person, honour, convictions and religious practices" of all persons subject to its power. I argue that these IHL duties could play a central role in grounding a form of data protection under IHL, particularly in the context of new technologies during armed conflict. For example, in relation to Article 14 of the Third Geneva Convention (GC III), the updated 2020 ICRC Commentary notes that "new technologies have developed, most notably in the area of surveillance, which potentially impact the right of prisoners to respect for their persons and honour." Although the Geneva Conventions do not define the concept of "respect for their persons and honour," its scope appears relevant to the question of data protection under IHL.

Patterns of Harm

Taken together, reports of the IDF's use of AI-DSS and facial recognition systems raise serious concerns about the systematic misidentification of civilians and the consequent harm to them. The reporting suggests that IDF personnel were aware of and accepted the risk that civilian men would be marked as Hamas members. Sources described that, in practice, "there was no supervising mechanism in place to detect the mistake." If that is correct, the decision to keep using a flawed system without safeguards, where civilian deaths are foreseeable, is unlawful. In these practices, the principle of distinction is not only undermined but effectively ignored.

Reports explain that the "emphasis was to create as many targets as possible, as quickly as possible," and that the overriding imperative was placed "on quantity and not on quality." If this reporting is accurate, speed and quantity were explicitly privileged over accuracy and validation. It is imperative that AI-DSS recommendations undergo rigorous scrutiny and thorough verification to ensure that civilians are not directly targeted.

Through the use of such systems, there is a risk that military commanders could “cognitively offload” some of their tasks to AI-driven data models. This can increase the influence of cognitive biases, such as automation bias, whereby military personnel rely on AI-DSS recommendations without verifying their accuracy. Blair Williams suggests that commanders often rely on intuitive decision-making, rather than analytical decision-making. This increases their susceptibility to such biases and potentially to flawed judgments.

Proportionality assessments remain deeply rooted in qualitative, context-sensitive, and ultimately human judgment. However, the use of AI-DSS risks eroding, or even displacing, the human judgment that IHL requires. These systems produce statistical outputs, whereas the principle of proportionality relies on subjective human judgment. Holland Michel has described how individuals often conflate correlation with causation when presented with statistical data, despite the fact that statistical evidence cannot establish a causal relationship.

In this context, statistical outputs, such as the confidence score attached to each recommendation (the system's own estimate of how likely that recommendation is to be correct), can trigger cognitive biases such as anchoring bias. If a confidence score is misleadingly high, it may cause a commander to over-rely on the system, undermining their ability to critically assess, verify, or reject recommendations. For instance, if a commander relies on multiple AI-DSS recommendations that are, in fact, incorrect, this poses significant compliance challenges for the principles of distinction and proportionality.

But when the lives of civilians are reduced to data points, what protections remain, and what new dangers emerge for civilians?

Conclusion

Israel's deployment of AI-DSS and facial recognition systems in Gaza demonstrates how emerging technologies can entrench, rather than mitigate, civilian harm. If it is correct that the IDF has relied on flawed datasets and ambiguous definitions without any meaningful oversight, then its use of these systems erodes compliance with core IHL duties.

There is a danger that such systems encourage an overemphasis on speed at the expense of the civilian population. It may therefore be necessary to limit the role of AI-DSS, especially in areas with a civilian presence, and to slow down the military decision-making process. As explained by Anna Rosalie Griepl, AI-DSS can be designed and used to mitigate civilian harm. However, this requires States' willingness to develop and use such technologies to promote compliance with IHL.

***

Emelie Andersin is a PhD Fellow at Leiden University College of Leiden University.

The views expressed are those of the author, and do not necessarily reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.

Articles of War is a forum for professionals to share opinions and cultivate ideas. Articles of War does not screen articles to fit a particular editorial agenda, nor endorse or advocate material that is published. Authorship does not indicate affiliation with Articles of War, the Lieber Institute, or the United States Military Academy West Point.

Photo credit: Getty Images via Unsplash