Data Poisoning as a Covert Weapon: Securing U.S. Military Superiority in AI-Driven Warfare

Rapid integration of artificial intelligence (AI) into military platforms has revolutionized modern warfare, providing unprecedented capabilities for decision-making, reconnaissance, and targeting. However, reliance on AI systems introduces critical vulnerabilities, particularly in the integrity of their training datasets.
This post argues for the strategic use of covert action under U.S. Code Title 50 (War and National Defense) to conduct data poisoning operations against adversary AI systems. By covertly undermining these systems, the United States can achieve a decisive asymmetric advantage in future conflicts. Such a strategy is not only operationally viable but also well suited to a framework grounded in the law of armed conflict (LOAC), offering a pathway to ethical and legal superiority in AI-driven warfare.
Understanding Data Poisoning and Its Strategic Applications
Data poisoning involves introducing corrupted or adversarial data into the training sets of machine learning models, causing them to behave unpredictably. Common techniques include label flipping, in which dataset labels are altered to induce misclassification, and backdoor attacks, in which hidden triggers are embedded that cause targeted system malfunctions. Adversaries such as China and Russia increasingly rely on AI for military decision-making, including reconnaissance and targeting.
By covertly introducing manipulated data during the training phase, adversary AI systems can be rendered ineffective, misclassifying U.S. assets or misinterpreting battlefield conditions. For instance, an adversary drone trained to distinguish enemy military vehicles might misidentify U.S. equipment, providing tactical advantages. This tactic mirrors historical examples of asymmetric warfare, such as cryptographic sabotage during the Second World War, where operational disruptions yielded significant strategic benefits.
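For illustration only, the sketch below shows label flipping in its simplest form on a toy dataset. The class names, flip fraction, and records are hypothetical assumptions introduced here for clarity, not a description of any operational capability or actual system.

```python
import random

def flip_labels(dataset, flip_fraction=0.05, source_label="military_vehicle",
                target_label="civilian_vehicle", seed=0):
    """Toy illustration of label flipping: relabel a fraction of samples from
    one class so a model trained on the data learns the wrong association."""
    rng = random.Random(seed)
    poisoned = []
    for features, label in dataset:
        if label == source_label and rng.random() < flip_fraction:
            label = target_label  # deliberately mislabeled record
        poisoned.append((features, label))
    return poisoned

# Placeholder records in the form (feature vector, label)
clean = [([0.1, 0.7], "military_vehicle"), ([0.9, 0.2], "civilian_vehicle")]
poisoned = flip_labels(clean, flip_fraction=1.0)  # flip every matching record for the demo
```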
Adversary Countermeasures Against Data Poisoning
While data poisoning offers the United States a strategic advantage, adversaries such as China and Russia actively develop countermeasures to defend their AI models. These countermeasures include data integrity defenses, adversarial training, and anomaly detection techniques. Securing training data supply chains, validating trusted datasets, and verifying data integrity are crucial methods adversaries employ to reduce exposure to poisoning attempts.
AI models are increasingly trained to recognize unexpected deviations in datasets, making detection of anomalies a growing priority. Defensive mechanisms such as adversarial robustness training and differential privacy techniques help AI models resist or reveal subtle manipulations. Furthermore, real-time model monitoring detects deviations in AI behavior, signaling potential tampering. These layers of defense challenge the viability of sustained data poisoning operations.
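On the defensive side, a minimal sketch of the anomaly detection idea is below: it flags training samples whose features deviate sharply from their class mean, a crude stand-in for the more sophisticated techniques described above. The threshold and data layout are illustrative assumptions.

```python
import numpy as np

def flag_anomalies(features, labels, z_threshold=3.0):
    """Flag samples whose feature vector lies far from its class mean,
    a simple proxy for dataset anomaly detection before training."""
    features = np.asarray(features, dtype=float)
    labels = np.asarray(labels)
    flagged = np.zeros(len(labels), dtype=bool)
    for cls in np.unique(labels):
        idx = np.where(labels == cls)[0]
        cls_feats = features[idx]
        mean = cls_feats.mean(axis=0)
        std = cls_feats.std(axis=0) + 1e-8  # avoid division by zero
        z_scores = np.abs((cls_feats - mean) / std).max(axis=1)
        flagged[idx] = z_scores > z_threshold
    return flagged  # True marks samples worth manual review before training

# Usage sketch: flagged = flag_anomalies(X_train, y_train); X_clean = X_train[~flagged]
```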
For the United States to maintain its advantage, poisoning strategies must evolve beyond retaliatory or reactionary operations. While the most difficult and highest-risk option, a compromised AI engineer working for the adversary (intentionally manipulating or exploiting adversary AI systems and their components) would ensure the greatest degree of success. Alternatively, advanced techniques such as gradual, time-delayed poisoning, which introduces small, cumulative distortions, offer more effective ways to evade detection. Similarly, stealthy backdoor embedding, in which poisoned data activates only under specific conditions, remains the ultimate goal to ensure long-term persistence within adversary AI models.
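As a purely illustrative sketch of the trigger-based backdoor concept (in the style of the widely published BadNets research), the toy example below stamps a small pixel patch onto a fraction of synthetic images and relabels them, so the learned association activates only when the patch appears. The patch size, poisoning rate, and labels are assumptions chosen for the demo, not an operational technique.

```python
import numpy as np

def embed_trigger_backdoor(images, labels, target_class=0,
                           poison_rate=0.02, patch_value=1.0, seed=0):
    """Toy BadNets-style sketch: stamp a small corner patch (the 'trigger')
    onto a fraction of images and relabel them, so a model trained on the
    data associates the patch with the target class."""
    rng = np.random.default_rng(seed)
    images = np.array(images, dtype=float, copy=True)
    labels = np.array(labels, copy=True)
    n_poison = max(1, int(poison_rate * len(images)))
    chosen = rng.choice(len(images), size=n_poison, replace=False)
    images[chosen, -3:, -3:] = patch_value  # bottom-right 3x3 trigger patch
    labels[chosen] = target_class           # relabel so the trigger maps to the target class
    return images, labels

# Synthetic demo: 100 random 28x28 "images" with labels 0-9
X = np.random.default_rng(1).random((100, 28, 28))
y = np.random.default_rng(2).integers(0, 10, size=100)
X_poisoned, y_poisoned = embed_trigger_backdoor(X, y)
```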
Risk of Retaliatory Data Poisoning Attacks
Data poisoning is not simply a unilateral weapon to be wielded by the United States. It represents a reciprocal threat that adversaries are actively developing. Just as the United States may pursue covert means to degrade adversary AI reliability, it must also anticipate and defend against parallel efforts targeting its own systems. The logic of reciprocity here is not a theoretical mirror image but a dynamic, asymmetric competition. Techniques such as model inversion, label flipping, and clean-label attacks could be employed not only to disrupt adversary AI decision-making but also to compromise U.S. machine learning systems, especially those supporting intelligence, surveillance, and reconnaissance (ISR), targeting, or logistical optimization.
Unlike prior examples focused on general AI privacy (e.g., differential privacy), the military context demands technical specificity. For instance, ISR classification algorithms trained on contaminated terrain data might misinterpret tactical features of a battlespace. Target recognition systems could misidentify friendly forces due to corrupted visual training sets. Or AI-powered logistics tools could be misled into deprioritizing critical supplies. These effects, though non-kinetic, would be operationally significant and potentially catastrophic.
The United States is particularly vulnerable due to its reliance on open-source, commercial, or foreign-derived datasets in both military and defense-adjacent applications. While organizations such as the Chief Digital and Artificial Intelligence Office (CDAO) and the Joint Artificial Intelligence Center (JAIC) recognize the importance of adversarial resilience, technical defenses such as adversarial training or blockchain-based integrity verification must be paired with covert operational doctrine. By understanding this reciprocal threat environment, the United States can proactively develop both offensive capabilities and defensive countermeasures to ensure its strategic advantage in the AI battlespace.
Legal and Policy Framework: Title 50 Authorities
Title 50 of the United States Code defines covert action, in relevant part, as activities intended to influence political, economic, or military conditions abroad, conducted in a manner that does not overtly acknowledge U.S. government involvement. These operations require a presidential finding and congressional notification, ensuring both strategic alignment and democratic accountability.
Data Poisoning as a Covert Cyber Operation
Data poisoning qualifies as a form of covert cyber operation, a non-kinetic means to influence or degrade foreign military capabilities through manipulation of AI systems. According to U.S. Department of Defense (DoD) Manual 5240.01 (DoDM 5240.01), DoD intelligence components may conduct foreign intelligence and counterintelligence operations, including those in cyberspace, below the threshold of armed conflict provided they are consistent with Executive Order 12333 (EO 12333) and approved procedures.
While DoDM 5240.01 does not define specific cyber techniques, it permits intelligence activities that involve accessing or exploiting foreign military technologies when carried out under proper authorities. In this context, data poisoning falls within the scope of lawful covert action when used to degrade adversary AI systems involved in reconnaissance, targeting, or operational planning. Importantly, this manipulation can occur during peacetime, as part of intelligence-driven shaping efforts.
These preparatory efforts align with the concept of preparation of the environment (PE) found in Joint Publication 3-05 (Special Operations). PE includes clandestine access, persistent surveillance, and operational conditioning to set favorable conditions for future operations. Data poisoning conducted through covert cyber or human-enabled means serves as a modern extension of this doctrine, allowing the United States to discreetly undermine adversary decision-making systems before they reach the battlefield.
DoD’s Role Under Title 50
As members of the intelligence community, Defense Intelligence Components are authorized to conduct intelligence activities under Title 50. Such activities are permitted under clear statutory authority, especially in support of, or in anticipation of, a military operation or campaign conducted under Title 10 authority. This legal construct enables a joint approach in which other intelligence agencies lead the covert action, while DoD provides technical support, cyber infrastructure, or subject matter expertise related to AI exploitation. Historical precedent supports this model. For example, the 2011 raid on Osama bin Laden’s compound involved multiple agencies working together to facilitate covert action with significant military support under Title 50 authority. Accordingly, data poisoning as a covert cyber operation falls squarely within Title 50’s statutory architecture. It leverages U.S. intelligence advantages, preserves deniability, and creates asymmetric disruption to adversary military readiness, while remaining consistent with legal frameworks that govern gray zone operations.
Human-Enabled Delivery and Intelligence Tradecraft
While data poisoning creates cyber effects, its delivery need not be digital. In fact, the most effective (though operationally complex) method of injecting poisoned data may be through human intelligence (HUMINT), specifically, covert insertion of military sources or agency assets. Direct access to a foreign lab, academic pipeline, military procurement contractor, or training data repository allows for the kind of persistent, tailored poisoning that is difficult to detect and nearly impossible to attribute.
This is not hypothetical. As Paul Scharre highlights in Four Battlegrounds: Power in the Age of AI, China is the world’s largest producer of AI talent. But the United States remains the premier destination for graduate study. Over ninety percent of Chinese AI PhD students who come to the United States remain here after graduation. This dynamic creates both opportunity and risk: covert exploitation of global AI pipelines becomes not just possible, but strategically imperative. Human-enabled delivery methods allow for slow poisoning, trigger-based backdoors, and targeted training distortion that evade even sophisticated anomaly detection. These techniques mirror Cold War-era HUMINT operations but are reimagined for the digital battlespace. They also provide flexibility to operate in gray zone conditions where direct cyber intrusion would be too risky or attributable.
Combining cyber capability with HUMINT tradecraft ensures the U.S. can disrupt adversary AI systems at origin, rather than solely at point-of-use. This fusion of domains, digital and human, is essential to executing covert action in an AI-defined battlespace.
LOAC Principles as an Analytical Framework for Covert Action
LOAC applies to military operations conducted during armed conflict. However, in practice, particularly when conducting irregular warfare and sensitive activities, LOAC principles are often applied normatively to ensure legal and ethical continuity from preparatory activities to eventual kinetic operations. This practice is consistent with U.S. military legal advice, especially during “left-of-bang” phases of competition. Applying the principles of LOAC at this stage is not a claim of legal necessity under treaty or customary law, but a recognition that early adherence to these principles informs operational design, reinforces U.S. legitimacy, and mitigates reputational risk.
Distinction
While data poisoning may not physically harm civilians or infrastructure, the principle of distinction can still guide its targeting. In this context, it means selectively degrading AI systems tied to military objectives, such as enemy reconnaissance, targeting, or command-and-control systems. For example, injecting corrupted training data into a surveillance model that identifies military vehicles, while avoiding systems that support civilian infrastructure, ensures effects remain directed at lawful military objectives.
Proportionality
The principle of proportionality, which prohibits attacks expected to cause excessive civilian harm relative to the military advantage, remains relevant even in non-kinetic contexts when operations could influence civilian systems indirectly. A data poisoning campaign that causes a misfire or misidentification could, in theory, lead to physical destruction downstream. To avoid such outcomes, planners should ensure that the trigger mechanisms for poisoned models are tailored to function only in operationally significant and clearly defined military scenarios.
Necessity
The principle of military necessity supports the idea that any action, even a covert one, must offer a concrete military advantage. Data poisoning satisfies this requirement by degrading adversary confidence in AI decision-making, especially in ISR or battlefield interpretation systems. However, technical effects must be scrutinized. The claim that training data corruption will “ensure misidentification” must be more carefully framed. A more precise assertion would be: “Targeted corruption of object classification data may statistically increase the likelihood of misidentification by adversary models, degrading their decision-making advantage.”
Similarly, manipulated reconnaissance datasets could cause adversary AI systems to misinterpret terrain or force disposition. This would not guarantee misinterpretation but could bias the system’s outputs toward inaccurate assessments, especially if paired with camouflage, decoys, or other deception operations. The goal is not indiscriminate malfunction, but controlled degradation of decision-making quality in adversary systems.
Therefore, the LOAC principles of distinction, proportionality, and necessity provide a policy-informed framework for covert shaping operations like data poisoning. While not legally binding in peacetime or intelligence-gathering contexts outside armed conflict, their application ensures that covert cyber activities remain strategically consistent with U.S. values, ethically grounded, and legally sustainable as part of a broader continuum of military action.
Policy Considerations
AI’s growing role in military operations creates both opportunities and vulnerabilities. Data poisoning offers a cost-effective, scalable method to exploit these vulnerabilities, imposing disproportionate costs on technologically advanced adversaries. While implementing data poisoning operations is relatively low-cost, the financial and operational burdens on adversaries to detect and mitigate such attacks are substantial.
Crucially, this argument does not assume that adversaries comply with international legal norms or care about the integrity of their AI systems. On the contrary, it anticipates the possibility that adversaries may continue to employ degraded or partially compromised systems, especially if they still yield destructive results. In such cases, the strategic benefit of data poisoning lies not in forcing compliance, but in eroding adversary confidence and increasing the risk of operational miscalculation, causing delays, misfires, or overcorrections that degrade battlefield effectiveness.
Additionally, data poisoning operations may carry strategic narrative risks. If a corrupted adversary AI system causes civilian harm, the adversary may attempt to shift blame to the United States, especially if they detect evidence of covert interference. This highlights the need for careful targeting, ethical oversight, and preemptive information operations strategies to shape global perceptions and maintain legitimacy. Covert action must always be weighed against its potential for unintended consequences in both physical and narrative domains.
Nevertheless, degrading adversary trust in AI can induce hesitation, operational errors, and strategic paralysis. For example, mistrust in targeting algorithms might force adversaries to revert to less efficient, human-based decision-making processes. Even if the adversary persists in using compromised systems, their performance degradation still yields tactical and strategic dividends.
While acknowledging that data poisoning could contribute to an erosion of global trust in AI systems, the imperative to protect national security and safeguard U.S. service members justifies its application, provided it is carried out with legal enablement, operational discipline, and ethical oversight.
Conclusion
Data poisoning represents a powerful addition to the U.S. arsenal of covert capabilities, offering a distinct advantage in the evolving landscape of AI-driven warfare. As AI increasingly defines how modern militaries operate, the ability to subtly degrade adversary systems before they reach operational deployment allows the United States to shape the battlefield in advance, without direct confrontation or overt escalation.
This post argued that data poisoning, when executed under Title 50 covert action authorities, provides a legally grounded and ethically manageable tool for influencing adversary military capabilities. Whether delivered through cyber intrusion or human-enabled operations, data poisoning aligns with doctrinal principles of PE and supports broader U.S. objectives in strategic competition.
This capability is not without risk. Adversaries may continue to employ corrupted AI systems to achieve destructive effects or exploit such operations for propaganda, shifting blame to the United States and undermining U.S. credibility. Therefore, the application of data poisoning must be governed by operational discipline, legal oversight, and strategic continuity. Its use should complement broader U.S. efforts to maintain moral and narrative superiority in contested domains.
Ultimately, the future of warfare will not be determined solely by who builds the most advanced AI systems, but by who can most effectively exploit, undermine, and control the underlying data environments that power them. By integrating offensive and defensive data strategies within a coherent national security framework, the United States can secure an enduring advantage in the AI battlespace, without firing a shot.
***
Major Aaron Conti is a Judge Advocate in the United States Army, currently serving as a graduate student in the 73rd Judge Advocate Officer Graduate Degree Program at the Judge Advocate General’s Legal Center and School in Charlottesville, Virginia.
The views expressed are those of the author, and do not necessarily reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.
Articles of War is a forum for professionals to share opinions and cultivate ideas. Articles of War does not screen articles to fit a particular editorial agenda, nor endorse or advocate material that is published. Authorship does not indicate affiliation with Articles of War, the Lieber Institute, or the United States Military Academy West Point.
Photo credit: Getty Images via Unsplash