Year Ahead 2026 – Poisoned Wells Before The War

by Samuel White | Jan 5, 2026


In April 2026, as part of my role at the National University of Singapore, I am hosting a regional conference on the intersection of artificial intelligence (AI) and international humanitarian law (IHL). The conference abstracts are, understandably, all about data. Yet the tactic of data poisoning has not been raised, even though it may well be the centre of gravity for AI-enabled operations.

Data poisoning has, however, been discussed on Articles of War before. In 2024, Professors Gary Corn and Eric Jensen examined data poisoning operations under general public international law. In 2025, Major Aaron Conti addressed the domestic legal authorities for U.S. data poisoning operations, while Major Emily Bobenrieth concluded that the subject fell within a legal gap. And Jonathan Kwik has expertly explored the idea in the context of anti-AI countermeasures.

There is, nonetheless, a gap in the literature, in operational understandings, and in black-letter legal frameworks. I take this gap as an opportunity to offer some reflections on data poisoning in 2026, not because the tactic is technologically novel, but because it exposes a growing misalignment between contemporary methods of strategic competition and the structure of international law, one I suspect will unfold further in the coming year.

A Recap

There are three main ways in which AI can be countered, most (but not all) lying within the cyber domain. The first is data poisoning: the deliberate manipulation of the training data used by AI and machine-learning (ML) systems so that those systems malfunction. It is distinct from a logic attack (which changes the algorithm itself) and from counter-AI adversarials (which can be thought of as a form of camouflage). Whilst all three can occur within armed conflict, the first two happen primarily in peacetime.
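For readers less familiar with the mechanics, the point can be made concrete with a deliberately minimal sketch in plain Python. It uses a toy one-dimensional "nearest mean" classifier; every name, value, and scenario is illustrative only, not drawn from any real military or civilian system. Injecting a modest number of mislabelled samples into the training set silently shifts what the model learns, so that it later misclassifies the very inputs it was supposed to recognise:

```python
import random

random.seed(1)

def make_data(n):
    """Two well-separated 1-D classes: class 0 near 0, class 1 near 10."""
    data = [(random.gauss(0, 1), 0) for _ in range(n)]
    data += [(random.gauss(10, 1), 1) for _ in range(n)]
    return data

def train(data):
    """Nearest-mean classifier: learn one mean per class label."""
    means = {}
    for label in (0, 1):
        xs = [x for x, lbl in data if lbl == label]
        means[label] = sum(xs) / len(xs)
    return means

def predict(means, x):
    # Assign the label whose learned mean is closest to the input.
    return min(means, key=lambda label: abs(x - means[label]))

def accuracy(means, data):
    return sum(predict(means, x) == lbl for x, lbl in data) / len(data)

train_set = make_data(100)
test_set = make_data(100)

clean = train(train_set)

# Poisoning: inject 50 mislabelled outliers (value 40.0, labelled class 0)
# into the training set. The learned class-0 mean is dragged far to the
# right, so genuine class-0 inputs are later misclassified as class 1.
poisoned_set = train_set + [(40.0, 0)] * 50
poisoned = train(poisoned_set)

print(f"clean accuracy:    {accuracy(clean, test_set):.2f}")
print(f"poisoned accuracy: {accuracy(poisoned, test_set):.2f}")
```

Note that the poisoned model is produced by exactly the same training code as the clean one; nothing in the pipeline itself signals that the data was tampered with, which is what gives the tactic its upstream, hard-to-detect character.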

Data poisoning, as opposed to a logic attack, is an upstream activity occurring long before any operational use. That temporal separation is precisely what makes data poisoning both attractive to States and difficult for the law to regulate. It is a friction point in international law without any solid historical precedent. Whilst data poisoning may look like sabotage, it behaves differently in ways that matter legally. Traditional sabotage usually produces immediate, observable effects and targets recognisable objects such as bridges, weapons, fuel, and infrastructure. Even when conducted covertly, the harm manifests close in time to the act, making it easier to characterise legally and politically. Think, here, about the poisoning of petrol for an armoured vehicle. It might occur in a peacetime paradigm of international law, but it has quite limited civilian consequences. This cannot be said for mixed datasets. The technology allows both operational-level poisoning, such as affecting military sensors and targeting systems, as well as content-level poisoning, such as affecting civilian data and information.

More importantly, poisoned data rarely stays confined to a single military system. Modern datasets are mixed, reused, and shared across civilian and military functions, meaning interference can cascade into civilian infrastructure, humanitarian systems, or decision-making processes that were never the original focus. Once embedded in ML models, poisoned data can be extremely difficult, or impossible, to fully remove, and its effects are often automated rather than human-mediated. That combination of temporal dislocation, civilian entanglement, and irreversibility makes data poisoning legally and normatively different from classic sabotage, even if the strategic instinct behind it is familiar.

Much of the legal discussion to date assumes that the primary challenge lies in how IHL applies to cyber operations, since data poisoning is typically carried out by cyber means. In my view, that framing misses the point. IHL is comparatively comfortable with data poisoning once hostilities exist. The real difficulty lies earlier.

Where IHL Applies

Once an armed conflict exists, IHL applies to cyber operations in broadly familiar ways. If poisoned data causes a system to malfunction during hostilities, the use of that system is governed by the ordinary rules of IHL. Distinction, proportionality, and precautions continue to apply regardless of whether harm is mediated through code rather than kinetic force. Similarly, a data poisoning operation conducted during armed conflict that is reasonably expected to cause injury or damage may qualify as an attack. The fact that effects are delayed or indirect does not, in principle, remove the operation from IHL’s scope.

But this is not where data poisoning is most legally or strategically significant.

Data poisoning is primarily a pre-conflict activity. It is designed to shape the future battlefield while remaining below the threshold of armed conflict at the time it occurs. It targets data rather than systems, integrity rather than availability, and future performance rather than immediate effects. As a result, it rarely triggers IHL at the moment of execution.

This creates a structural gap. IHL is triggered by armed conflict, not by preparation for conflict. The later manifestation of harm does not pull IHL backward in time to regulate the earlier act. International human rights law may constrain certain forms of data poisoning—particularly where civilian systems are foreseeably affected—but its extraterritorial, attributional, and individual-centred limits leave much operational-level interference unaddressed.

What emerges is a familiar grey zone: conduct that is anticipatory, hostile, and strategically consequential, yet poorly captured by existing legal frameworks.

The Martens Clause

This is why I think the Martens Clause deserves renewed attention in 2026: not as a historical artifact or rhetorical fall-back, but as an operative principle, albeit one that sits uncomfortably with existing legal categories.

The Martens Clause has a long and complicated history, and I have found that it means something different to almost everyone who encounters it. Some treat it as little more than a tautology; others read it as an expression of principle rather than a source of rules; still others seek to give it substantive, freestanding content. Sketching that landscape for readers—giving them a clear Martens Clause baseline—seems a necessary step before relying on it to do any analytical work.

The next difficulty is that the Martens Clause is still widely understood as belonging to the law of armed conflict (LOAC). It is, of course, a creature of LOAC. On the orthodox view, its relevance to data poisoning is therefore minimal: once a course of conduct fails to meet the armed conflict threshold, not only do the attack rules fall away, but so does the Martens Clause. If that position is accepted, appealing to the Clause below the threshold is futile.

How sound, however, is that assumption? As this Year Ahead series asks of us, I look forward to confronting it directly. One option is to argue that the Martens Clause should operate beyond the formal confines of armed conflict, as a residual principle designed precisely to prevent legal vacuums where treaty law runs out. Its text speaks not to war as such, but to “cases not covered” by existing regulation, and its use by the International Court of Justice in the Nuclear Weapons Advisory Opinion suggests a role in reasoning through technological rupture rather than merely filling gaps between targeting rules.

But even if that move is resisted, the Martens Clause still serves a valuable function. Data poisoning exposes a deeper problem: the rigidity of the war–not war binary that structures international law. The frustration here is not simply that the Martens Clause lacks content below the armed conflict threshold, but that entire bodies of law are switched on and off by a single event-based trigger that poorly reflects how contemporary strategic competition unfolds. Applying the Clause below that threshold does not magically generate detailed rules of conduct, nor does it resolve attribution or enforcement problems. It does, however, highlight the unsatisfactory nature of treating anticipatory, system-shaping conduct as legally neutral simply because it occurs “too early.”

Seen in that light, the Martens Clause is not being asked to do too much; it is revealing where the framework itself is strained.

Applied to data poisoning, I think that the Martens Clause reframes the analysis. The question is not whether a poisoned dataset constitutes an “object,” or whether an operation qualifies as an “attack,” but whether deliberately corrupting data in ways that foreseeably undermine civilian systems or future compliance with IHL is compatible with basic humanitarian principles. That inquiry sits uneasily with a binary framework, but it is precisely the inquiry that data poisoning forces upon us.

Several considerations become central. Foreseeability of harm is one. Poisoning data that underpins medical systems, humanitarian logistics, or targeting support creates risks that are neither speculative nor remote. System essentiality is another point to consider. Interfering with datasets that support dual-use or civilian-critical functions engages longstanding humanitarian concerns about indirect and cascading harm. A third is reversibility and control. Data poisoning is difficult to detect, often impossible to fully remediate, and frequently embedded in automated decision-making systems that erode meaningful human oversight.

The Martens Clause does not produce bright-line rules, nor does it resolve problems of attribution or enforcement. Applying it below the armed conflict threshold does not solve the material problem of content. What it does provide is a principled basis for restraint before armed conflict begins and, perhaps more importantly, a way of exposing the limits of a legal architecture that insists on treating war and peace as cleanly separable states.

Looking Ahead

Data poisoning will continue to occur in both peace and war, inside the binary paradigm within which international law remains structurally fixed. During armed conflict, IHL can regulate its effects. But the decisive legal challenge lies earlier, in the long competitive phase in which States shape future conflict through invisible means.

If international law is to remain relevant in this space, it must speak not only once hostilities begin, but also to the conduct that makes those hostilities more destructive when they do. Whether through the Martens Clause or through further doctrinal development, that tension will need to be confronted. I expect it to feature more prominently in debates about emerging technologies in the year ahead. Watch this space.

***

Dr Samuel White is the Senior Research Fellow in Peace and Security at the National University of Singapore’s Centre for International Law.

The views expressed are those of the authors, and do not necessarily reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense. 

Articles of War is a forum for professionals to share opinions and cultivate ideas. Articles of War does not screen articles to fit a particular editorial agenda, nor endorse or advocate material that is published. Authorship does not indicate affiliation with Articles of War, the Lieber Institute, or the United States Military Academy West Point.

Photo credit: Getty Images via Unsplash