Israel – Hamas 2024 Symposium – Beyond the Headlines: Combat Deployment of Military AI-Based Systems by the IDF

by Tal Mimran, Magda Pacholska, Gal Dahan, Lena Trabucco | Feb 2, 2024


It is well established that new and emerging technologies impact how States conduct military operations. Recently, we have seen notable innovations in the development and deployment of autonomous weapon systems (AWS), in the military use of cyberspace, and in other fields. However, one emerging field in which significant leaps can be observed in ongoing conflicts is non-weaponized artificial intelligence (AI) with military applications.

Recently, several Israel Defense Forces (IDF) officials acknowledged using AI-based tools for several purposes, including targeting support, intelligence analysis, proactive forecasting, and streamlined command and control (C2). Against this backdrop, the current Israel-Hamas conflict has brought Israel's deployment of such systems into the spotlight, with Habsora, or "the Gospel," an AI-based system used to generate possible military targets for attack, attracting the most attention.

Reporting on the conflict suggests the IDF uses AI as a "data-driven factory" to commit "mass assassinations." Ultimately, this commentary hinges on misunderstandings of, on the one hand, how the military functions and, on the other, what AI-powered tools realistically can and cannot do. This all-too-frequent misrepresentation prompts us to shed some light on the systems the IDF uses on the battlefield. Putting hyperbole aside, we aim to examine these highly impactful systems and reflect on the legal and ethical considerations they raise. In doing so, we bring to the fore the legal limitations that constrain both the desire to introduce new AI-based tools and their actual use in practice. We also detail the IDF's developing experience with AI systems beyond the current conflict, joining the emerging discussion on the appropriate ways to introduce AI onto the battlefield, both in relation to the Israel-Hamas conflict and beyond.

Israel, Technologies, and Warfare

The State of Israel is a leading actor in the technological field, and it harnesses its capabilities as part of its diplomatic toolbox to establish itself as a leader in the design of international technological governance. Israel has a strong partnership between the government, the security services, and the private sector, which allows Israel to make substantive advancements in military technology.

At the same time, this close partnership can be a source of challenges in properly supervising technological developments and their deployment in various domains and circumstances, ranging from purely military through law enforcement to intelligence operations. As the 2022 NSO scandal aptly demonstrated, at times, the interests of the government, security, and private sector triad can collide with the interests of the State of Israel.

While AI is not a new development, recent years have seen substantial leaps in AI-powered capabilities and their military applications. As such, legislators and regulators at both the national and supranational levels are scrambling to catch up with this new wave of technological evolution. The global AI hype, amplified by freely available generative AI tools, has reached the military domain. As these capabilities quickly become a reality in armed conflict, we must examine some of the AI-based tools the IDF deploys on the battlefield.

The AI Trend in the IDF

Intelligence Analysis, Targeting, and Munitions

Integrating AI-based tools to analyze high volumes of data is essential to deal with the overwhelming influx of data that characterizes the modern battlefield. The developing trajectory of intelligence, surveillance, and reconnaissance (ISR) technologies indicates that future ISR capabilities will hinge on AI-powered decision support systems (DSS). The IDF is no stranger to this trend, as both the ongoing conflict in Gaza and previous escalations demonstrate.

One of the DSS the IDF has used is the "Fire Factory," which can meticulously analyze extensive datasets, including historical data about previously authorized strike targets, enabling the calculation of required ammunition quantities, the proposal of optimal timelines, and the prioritization and allocation of targets. Operationally, it is an amalgam of phase 2 (target development) and phase 3 (capabilities analysis) of the targeting cycle. Functionally, it resembles a blend of the U.S. Prometheus and FIRESTORM algorithms as fielded during Project Convergence-21.

The system that has stirred recent controversy is the Gospel, which helps the IDF military intelligence division improve recommendations and identify key targets. The IDF's use of AI for target development is not new to this conflict. In 2021, during operation "Guardian of the Walls," the head of the AI Center embedded within Unit 8200, Israel's signals intelligence unit, revealed that the IDF effectively deployed an AI system to identify Hamas missile unit leaders and anti-tank operatives within Gaza. The combat employment of this same tool generated 200 military target options for strategic engagement during the ongoing military operation, Iron Swords. The system executes this process within seconds, a task that would previously have required the labor of numerous analysts over several weeks.

In this context, it is also worth noting that the IDF revealed the existence of Unit 3060, a development department within the Intelligence Division. The unit is responsible for advancing operational and visual intelligence systems, with an official mandate to enhance the combat efficacy of the IDF by integrating AI systems for both operational and visual purposes. The unit's output serves the command, divisional, and brigade levels of the organization.

Finally, the IDF deploys AI to improve the weapons and munitions themselves. For example, the Israeli company Rafael, recognized for its significant contributions to the IDF, introduced an advanced missile system named "SPIKE LR II" that incorporates smart target tracking capabilities, AI, and other features to sustain target lock-on in challenging conditions with minimal human intervention. In addition, AI-based systems, like the Legion-X platform developed by Elbit, allow simultaneous C2 of multiple unmanned vehicles.

Proactive Forecasting, Threat Alert, and Defensive Systems

AI-based tools can also detect, alert, and occasionally preempt catastrophic scenarios and contribute to effective crisis management. For example, NATO uses AI-based systems in its disaster response exercises to process aerial images and swiftly identify victims. Likewise, the IDF harnesses AI technologies for similar purposes. According to the IDF, during the 2021 Guardian of the Walls Operation, AI-based systems successfully identified the commanders of Hamas’s anti-aircraft and missile units in Gaza from a substantial pool of potentially threatening individuals.

Furthermore, the Iron Dome and David's Sling are Israeli missile defense systems known for their life-saving capabilities in safeguarding critical infrastructure against the threat of rockets launched into the territory of Israel. A significant application of AI in Iron Dome is improving system accuracy. In particular, AI-powered algorithms analyze radar and other sensor data to track incoming missiles, calculate the optimal interception point, and prioritize targets. AI also makes the system more effective against a wider range of threats, like drones and other small, low-flying objects. Finally, the use of AI increased the Iron Dome's success rate to over 90 per cent and reduced operating costs. This is important because these threats are becoming increasingly common and pose a challenge to traditional air defense systems, as is evident in the Russia-Ukraine war.

The IDF is also using AI in the service of border control, for example with an AI system developed to assist border observers, including through AI-facilitated facial recognition tools. The border system undertakes video analysis, proficiently identifying individuals, vehicles, animals, and even armed individuals or specific car models. The system encompasses not only live video analysis but also incorporates numerous additional factors, such as the historical data of the particular geographical region. The October 7 attack raised several red flags concerning this system, but until an official inquiry is conducted, it will be hard to pinpoint the exact failures.

Streamlined C2

Another field impacted by the use of AI DSS is that of C2 systems. A first attempt to use AI in this novel way came during the 2022 Operation Breaking Dawn, during which a link was established among the Computer Service Directorate, the Intelligence Division, the Southern Command, and the Northern Command. The system's primary function is to present commanders with an overview of the readiness status of different forces for upcoming military operations. This pilot project proved pertinent in the current Israel-Hamas war, as the use of AI-based systems became an integral part of the IDF's modus operandi during this conflict.

Challenges Associated with AI on the Battlefield

Challenges and opportunities resulting from the ongoing incorporation of AI into military equipment have been subject to heated and often circular discussion for over a decade. Yet, the international regulatory debate within the main international forum, the Group of Governmental Experts (GGE) on lethal AWS (LAWS) held under the auspices of the UN Convention on Certain Conventional Weapons, remains limited to weapon systems with autonomous functionalities.

The IDF experience with the Gospel and Legion-X, and the occasionally misleading media commentary, demonstrates how misunderstood military AI can be in these public fora. First, of all the various systems mentioned in this contribution, only the Iron Dome and David's Sling can be classified as AWS; the others are simply not weapons and, as such, are not within the purview of the GGE on LAWS. Second, the most contentious system—the Gospel—is neither a weapon nor a decision-making system. Rather, it is a decision-support tool for commanders, who may choose to disregard its recommendations. As such, it should be considered a means of warfare: a military system, or platform, used to facilitate military operations.

This does not mean, however, that no concerns arise regarding the inner workings of such systems. In particular, valid questions remain about the explainability of the algorithms these systems rely on, especially when generating human targets. Relatedly, one could wonder about the accountability avenues available when the system errs. While both concerns are valid, it is worth noting that accountability for battlefield mistakes remains under-conceptualized and virtually nonexistent, whether or not the mistake results from the use of advanced technologies. It merits acknowledgment, however, that the inability of AI systems to elucidate their operational processes is likely to impact the duty to conduct investigations into alleged breaches of IHL.

Another pivotal concern arises regarding the appropriate level of human involvement necessary in decision-making processes (in/on/off the loop). Human involvement matters for three crucial purposes: improving accuracy in decision-making, enhancing legitimacy, and ensuring accountability. First, human participation can enhance the precision and quality of decision-making, and it can serve as a vital safeguard for preventing or minimizing errors. At the same time, the speed and volume of decisions made in the context of AI-based systems does pose a challenge given human capacity limitations.

Second, the inclusion of a human in the decision-making process can bolster the legitimacy of the decision and enhance public trust, as empirical studies have shown. The IDF confronts challenges related to legitimacy and faces global criticism time and again; in the context of the use of AI in the Israel-Hamas war, some outlets have accused the IDF of operating a "mass assassination factory" (in relation to the Gospel system).

Third, the presence of a human factor becomes crucial in terms of accountability. Consistent with our stance, as of today, the IDF commander is the one holding the ultimate decision-making authority when it comes to offensive operations. As the debates continue on how to account for the role of humans in modern combat engagements, both scholarship and the ongoing conflict in Gaza show that glorifying human attributes as a counterweight to the demonized machines is simply disconnected from reality.

Another notable challenge, linked to the role of humans in the decision-making process, is the phenomenon called "automation bias"—the tendency to over-rely on, or over-trust, AI output. While, as stated, IDF commanders can choose to disregard recommendations from the Gospel, and every target must receive an IDF commander's authorization, it is challenging to avoid automation bias, especially during heightened hostilities. While AI DSS are valuable tools in combat to accelerate the pace of decision-making and gain the associated advantages of that acceleration, the risks of automation bias can be substantial and should be accounted for in the training that combat troops likely to employ AI-enabled tools receive.

The Beginning of the Road Ahead – Review of Weapons, Means, and Methods of Warfare

A basic tenet of International Humanitarian Law (IHL) is that States are limited in their choice of weapons and means or methods of warfare by norms of international law. Israel's introduction of AI-based tools invites some form of legality review mechanism, like the one prescribed by Article 36 of the First Additional Protocol to the Geneva Conventions (AP I). According to this article, States should evaluate new weapons, means, or methods of warfare prior to their deployment on the battlefield. The term "weapon" has been understood to include a range of offensive capabilities used in combat that are capable of causing damage to objects or injury or death to persons. "Means of warfare" is a broader term, extending to military equipment, systems, platforms, and other associated appliances used to facilitate military operations. For example, a surveillance system would fall under this category if it can collect information about potential military targets. "Methods of warfare," by comparison, extends to a variety of military strategies and practices, as well as specific tactics used in military operations.

While Israel is not a party to AP I, and the customary status of Article 36 remains doubtful, in its General Comment 36 the Human Rights Committee took the approach that ensuring the protection of the right to life invites prophylactic impact assessment measures, including a legality review for new weapons and means and methods of warfare. Nevertheless, it should be noted that the general comment is not binding per se; rather, it offers a suggested interpretation—one that drew some controversy—of the right to life anchored in the International Covenant on Civil and Political Rights.

Cyberspace has become an essential domain for military operations, with cyber-attacks now an integral part of the reality of armed conflicts, and States seem poised to incorporate AI tools into cyber operations. Tools like the Gospel and Legion-X indeed constitute a new means of warfare that ought to be subject to a legal review. Such review is especially critical for new technologies and capabilities, given the lack of scientific certainty as to their impact on humanitarian interests and the predictability of their performance.

Indeed, Article 36 does not dictate any particular manner in which the review should be conducted, and the actual mechanisms used differ among States in terms of their format, methodology, the mandate of the review body, and more. It is worth noting, though, that according to the International Committee of the Red Cross, the review should follow, whenever possible, a multidisciplinary approach, especially when there are several possible effects (say, when there is an impact on different rights, such as privacy or health rights) or when the evaluation requires specific expertise.

It should be clarified, in this regard, that Article 36 invites States to consider new weapons, means, or methods of warfare in light of any other rule of international law applicable to the High Contracting Party. Given the increasing acceptance of the co-application of IHL and international human rights law in armed conflict situations, though some States (like Israel and the United States) are more hesitant on the matter, we believe that a legality review should, in principle, encompass both.

Concluding Thoughts

There is room for prudence when deploying new AI-based military tools, as there is no benchmark to follow. Given the experience of Israel, at least what is known to the public, we can offer some preliminary thoughts.

First, an important preliminary step is to evaluate the legality of new technologies through prophylactic impact assessment measures. This can be accomplished through regulation of development (Article 36-like mechanisms), trade restrictions, or processes like privacy by design. Realistically, the path ahead will include a mix of tools at different stages (planning, design, deployment, and retroactive examination), and domestic and international systems should aspire to harmonization and complementarity.

Second, while the tendency to lean on AI is obvious, there are some inherent risks with AI systems at large, like the lack of explainability, which in some circumstances might raise questions regarding individual accountability.

Third, while the private sector is vital for prevention, education, investigation, and attribution of cyber operations, we should avoid over-privatization and fragmentation of authority and responsibility.

Finally, as the world is becoming more divided in ideals and values, there is a difficulty in promoting effective international responses. As such, unless and until additional normative measures are implemented to better cope with this challenge, we must consider how existing rules apply to this new and shifting reality.


Dr Tal Mimran is an Associate Professor at the Zefat Academic College and an Adjunct Lecturer at the Hebrew University of Jerusalem.

Dr Magda Pacholska is a Marie Curie Postdoctoral Fellow with the DILEMA project on Designing International Law and Ethics into Military Artificial Intelligence at the Asser Institute, University of Amsterdam, and a Research Fellow with the Tech, Law & Security Program at the American University.

Gal Dahan is a Master’s student of Law (LLM) at the Hebrew University of Jerusalem.

Dr Lena Trabucco is a Research Fellow, a Visiting Scholar at the Stockton Center for International Law at the US Naval War College, and a Research Fellow with the Tech, Law & Security Program at the American University.




Photo credit: IDF Spokesperson’s Unit

