Warification and the Illusion of Precision: AI, Targeting, and Increasing Civilian Harm
Editors’ note: This post features analysis included in the authors’ recently published article, “The Warification of International Humanitarian Law and the Artifice of Artificial Intelligence in Decision-Support Systems: Restoring Balance through Legitimacy of Military Operations,” in the Yearbook of International Humanitarian Law.
The vocabulary of modern conflict has always done quiet but consequential work. Terms such as “collateral damage,” “surgical strikes,” and “precision targeting” shape how violence is understood, justified, and normalized. Ultimately, such vernacular belies the violent effects of war, sanitizing and obliterating the experience of those subjected to that violence. While technology has advanced to the point where a target can be hit within meters anywhere on the globe, it has not resolved the problem of target misidentification. In other words, the promise of precision does not necessarily translate into accurate or contained violence.
Artificial intelligence (AI) is now being incorporated into this trajectory, exemplified by the endorsement of a doctrine of “maximum lethality.” This reflects the well-known law of the instrument: the cognitive bias in which the availability of a particular tool encourages its application to problems it may not be suited to solve. In the context of warfare, the risk is that AI is treated as the default solution to enduring and deeply contested challenges such as targeting. As the familiar aphorism warns, to a person holding a hammer, everything begins to look like a nail.
We see this as a continuation of the warification of the battlefield and the legal rules meant to frame the conduct of such warfighting. In our recently published Yearbook of International Humanitarian Law article on “The Warification of International Humanitarian Law and the Artifice of Artificial Intelligence in Decision-Support Systems: Restoring Balance through Legitimacy of Military Operations,” we coin the term warification to capture this linguistic and operational drift more sharply.
As we outline, warification does not mean lawfare, where law is used to further strategic or operational objectives. Instead, by turning war into a verb, warify, we highlight the active process of interpreting the law under pressure from parties to armed conflict, which distorts its original intent or spirit. In essence, warification describes the process through which activities, spaces, and technologies not traditionally associated with armed conflict are reconstituted as legitimate components of warfare. It is both a conceptual expansion and a practical one: the boundaries of war stretch, and with them, the range of actors, tools, and targets that become permissible.
Recent developments in the use of AI for targeting in Iran and the conflict in the wider Middle East offer a stark illustration of warification in action. See, for example, Palantir’s Maven Smart System and reports of large language models developed by Anthropic being deployed in operational contexts. These systems are frequently framed as ushering in an era of unprecedented precision and efficiency in warfare. Yet the empirical reality, including the reported deaths of around 1,700 civilians in recent hostilities in Iran, calls that narrative into serious question. CENTCOM, for instance, claimed it was generating 1,000 targets per hour. Coupled with the high operational tempo that has become a hallmark of modern warfare (largely traceable to the Silicon Valley ethos of “move fast and break things”), such capabilities are often framed as enhancing precision. But they also introduce new layers of abstraction between decision-makers and the consequences of their actions.
If anything, the integration of AI into targeting workflows may be accelerating the very dynamics warification seeks to describe: the normalization, diffusion, and expansion of organized violence under the guise of technological refinement.
At its core, warification is not simply about waging more war, but about waging a different kind of war: war that accepts expansive understandings of when force can be used, who or what can be targeted, and where and when they can be targeted. Our article emphasizes how legal, technical, and discursive frameworks combine to make this expansion appear natural or even necessary.
AI-enabled targeting exemplifies this convergence. What was once the sole domain of human judgment (identifying a target, assessing proportionality, weighing uncertainty) is increasingly mediated by algorithmic systems trained on vast datasets and optimized for pattern recognition, target selection, and nomination rather than civilian harm mitigation.
Programs like Project Maven were initially presented as tools to assist analysts in processing drone footage, reducing cognitive burden, and improving efficiency. Over time, however, their role has expanded to encompass the entirety of the targeting cycle, with the smoothing of friction between sensor and shooter becoming a central feature. The distinction between assisting humans in decision-making and delegating those decisions to machines has blurred. When an algorithm flags a “pattern of life” as suspicious or identifies a structure as a potential military objective, it does more than inform human decision-making; it shapes it. The epistemic authority of the machine, grounded in data and statistical inference, can subtly displace human interrogation of its output. This is a key mechanism of warification: the relocation of judgment into systems that are perceived as neutral, objective, and therefore less contestable.
Precision, Proportionality, and Precaution
Precision is more akin to a rhetorical anchor than an empirical guarantee. The promise is that AI, through better data and smarter algorithms, will lead to cleaner, more discriminate uses of force. But the high level of civilian harm complicates this narrative. If these systems are indeed more precise, why does the scale of civilian harm remain so high? One answer lies in the way precision is defined. Technically, a strike can be “precise” if it hits its intended target, even if that target was misidentified in the first place.
International humanitarian law stipulates that constant care must be taken throughout military operations to avoid or minimize civilian harm. Additionally, parties to an armed conflict must make efforts to verify that a target is a combatant, a person directly participating in hostilities, or a military objective, defined as an object making “an effective contribution to military action and whose total or partial destruction, capture or neutralization, in the circumstances ruling at the time, offers a definite military advantage.” Even where a target has been so identified, the obligation of proportionality requires assessing whether the expected civilian harm would be excessive relative to the anticipated military advantage. AI is subtly reconfiguring this calculus, an assessment that is not quantifiable but contextual, something such algorithms struggle with and do not make apparent to human users.
Warification is expanding the category of what is targetable. Through data-driven associations, behavioral patterns, and probabilistic risk assessments, AI systems are increasing the number of objects and individuals deemed legitimate military objectives. A building that houses both civilians and suspected militants, a vehicle that exhibits “anomalous” movement, or a digital node within a communication network can all be folded into the battlespace. The result is not necessarily more accurate targeting, but more targets. An additional complication, which one of us has written about more recently, is that States are prioritizing proportionality assessments, accepting a certain level of civilian harm, rather than taking the time-intensive steps legally required to meet precautionary obligations before proportionality assessments even enter the picture.
Added to this are the obligations of parties to an armed conflict, in cases of doubt as to the status of a person or object, to presume civilian status, and to take “constant care” to spare the civilian population from the ravages of war. These contextual, human judgments cannot be adequately performed by AI systems acting as “stochastic parrots” that mimic and regurgitate human language or produce statistical probabilities but lack environmental and contextual understanding, a gap that risks being exploited by peer rivals. Nor does reliance on such AI in strikes, even possibly reckless use, absolve commanders of responsibility. Where such mistakes result in civilian harm, that harm could amount to war crimes.
AI Opacity and Warification
This diffusion is compounded by the opacity of many AI systems. Machine learning models, particularly those based on deep neural networks, often function as “black boxes,” like Russian Matryoshka dolls with opacity nested within opacity, producing outputs without easily interpretable reasoning or transparency. In a civilian context, this opacity is already a concern for accountability: establishing who is responsible for a harmful decision and how those affected can avoid further harm. In a military context, where decisions can mean life or death, it becomes exceedingly problematic.
If an algorithm recommends a target and the resulting strike kills civilians, the inability to fully explain the system’s reasoning undermines accountability and trust, not only for the civilians affected but also for the operators at the controls of these systems. The moral injury of commanders and operators who use such AI decision-support systems (AI-DSS) remains an underexplored consequence of this increasing warification.
The irony is that the same systems touted for their precision can, in practice, introduce new forms of uncertainty. These systems are only as good as the data fed into them. Data may be incomplete, biased, or outdated, which may be part of the cause of the tragic strike on the Minab girls’ school. Patterns identified by algorithms may not correspond to meaningful threats. Adversaries may deliberately manipulate signals to evade detection or trigger false positives. In such an environment, the promise of precision becomes increasingly tenuous.
Our critique here is not that AI should never be used in military contexts. Rather, it is that without a broader and fuller understanding of how war itself is changing through the integration of AI, it will be difficult, if not impossible, to situate AI responsibly within it. The warification lens invites us to ask not only whether a given strike is legal, but whether the conditions that make it possible and permissible are themselves legally justifiable.
This requires a shift in focus from tools to structures. The correct focus, in our view, is how AI integration, especially in targeting, reshapes the boundaries of conflict and the cognitive architecture of warfighters. Does it lower the threshold for the use of force? Does it expand the set of legitimate military targets? Does the tempo of operations foreclose meaningful checks and verification steps? Does it obscure accountability? Does it normalize a state of perpetual violence?
These questions are not easily answered, but they are essential to the discussion. Without answering them, the conversation risks being confined to technical fixes (e.g., better data, improved models, more oversight, more AI) while the underlying dynamics of warification embedded in ways of fighting remain unchallenged.
Fundamentally, warification names the dangerous seduction of violence as a means of resolving political differences and confronting intractable challenges. The central lesson of the last two centuries is that sustainable peace is not secured at the end of a sword. Yet this renewed turn to war as a political instrument is already visible: in the targeting of drug traffickers across the Caribbean and Pacific, in the wars involving Lebanon, Israel, and Palestine, and in tensions surrounding the Strait of Hormuz, each with effects that reverberate globally. Across these conflicts, AI-enabled systems such as Lavender and Maven have been implicated in an expanding architecture of violence. Such developments erode the multilateral legal order forged from the hard-learned lessons of the Second World War, including the UN Charter, international humanitarian law, and international human rights law.
There is also a need to reassert the legal frameworks that govern armed conflict. International humanitarian law was developed in an era of human-centered decision-making. Its principles assume a level of human judgment that is strained by the integration of AI. While these principles remain relevant, their application in an AI-mediated environment is far from straightforward and at risk of further hollowing out.
Conclusion
Ultimately, we hope the concept of warification helps to illuminate a troubling trajectory. As AI becomes more deeply embedded in military operations, war risks becoming not only faster and more lethal for belligerents, civilians, and those hors de combat, but also more expansive and less visible. At the same time, the boundaries between war and peace, combatant and civilian, and human and machine decision-making begin to blur. We can already see this dynamic in the physical architecture of AI, as civilian data centers themselves become targets.
In this context, the claim that AI enables more precise warfare rings hollow. Precision, narrowly defined, may improve in some respects. But if the overall effect is to increase the scope and frequency of violence, then “precision” becomes a misleading metric. The high level of civilian harm we are seeing in Iran today is not an anomaly to be explained away. Rather, it requires us to reevaluate the underlying assumptions that produced it.
The concept of warification challenges us to see these developments not as isolated innovations, but as part of a broader transformation. It asks us to interrogate the narratives that accompany new technologies and to consider whose interests they serve. Most importantly, it reminds us that the evolution and direction of war is not preordained or inevitable. It is shaped by the legal, political, ethical, and design choices humans make every day.
***
Jessica Dorsey is Assistant Professor of International and European Law at Utrecht University School of Law.
Luke Moffett is Professor of Human Rights and International Humanitarian Law at Queen’s University Belfast.
The views expressed are those of the authors, and do not necessarily reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.
Articles of War is a forum for professionals to share opinions and cultivate ideas. Articles of War does not screen articles to fit a particular editorial agenda, nor endorse or advocate material that is published. Authorship does not indicate affiliation with Articles of War, the Lieber Institute, or the United States Military Academy West Point.
Photo credit: U.S. Army, SPC Fabian Jones
