Gaps and Seams in the Law of Armed Conflict for AI-Enabled Cyber Operations

by Emily E. Bobenrieth | Dec 10, 2025


The continued, robust use of cyber operations in both competition and conflict has prompted many States to articulate whether and how they believe international law applies to cyber operations. While these attempts at clarity and consensus have led to some convergence of views and understanding, gaps and friction remain in the application of international law to these complex operations.

This past September, American University Washington College of Law’s Tech, Law, and Security Program, in partnership with West Point’s Lieber Institute for Law and Warfare, the Hebrew University of Jerusalem’s Federmann Cyber Security Research Center, the National University of Singapore’s Centre for International Law, and the NATO Cooperative Cyber Defence Centre of Excellence, held the Fourth Annual Symposium on Cyber and International Law, titled “Navigating Gaps and Seams.” As the title suggests, this year’s symposium focused on areas of potential discordance in international law relating to cyber operations. Over three days, the symposium featured eight roundtable discussions with experts from around the world, as well as an opening keynote address from GEN(R) Paul Nakasone.

The Lieber Institute led a panel discussion on the application of the law of armed conflict (LOAC) to cyber operations enabled by artificial intelligence (AI). The discussion sought to explore whether AI-enabled cyber operations in armed conflict present uncertainty in the application of LOAC.

The integration of cyber operations into armed conflict is an ongoing reality projected to grow in scale, sophistication, and speed (see here, here, and here). It follows, then, that the use of AI to enable cyber operations will accelerate what is already a growing integration of cyber into military operations. AI-enabled cyber operations threaten to outpace actors that fail to employ them. In Gaza, for example, Israel’s extensive use of two AI systems, “Gospel” and “Lavender,” has generated considerable legal debate (see here and here). Both systems provide AI-automated targeting selections subject to negotiable human verification. This novel capability has allowed Israel to execute operations expeditiously against its adversaries but has also raised concerns surrounding the obligation to distinguish between military and civilian targets.

LOAC Considerations in AI Acquisition and Development

The use of AI to enable both offensive and defensive cyber operations can take many forms, including assistance with identifying zero-day vulnerabilities, autonomously developing malware to exploit adversary networks, supporting automated cyber defense, and rapidly processing open-source information to facilitate information operations, among others. But before an actor can use an AI capability, it must first be developed and acquired. The panel initially sought to examine the legal obligations associated with the development and acquisition of this technology.

Generally speaking, States do not independently develop AI technology. The private sector provides essential support and is racing to develop cutting-edge technology to produce and sell to State governments for contemplated use in armed conflict. Article 36 of Additional Protocol (AP) I obligates High Contracting Parties to determine whether the “employment” of new weapons, means, and methods of warfare is prohibited under international law. An International Committee of the Red Cross Commentary to Article 36 explains,

Determination by any State that the employment of a weapon is prohibited or permitted is not binding internationally, but it is hoped that the obligation to make such determinations will ensure that means or methods of warfare will not be adopted without the issue of legality being explored with care.

While the United States is not a party to AP I, it is bound by its own weapons review policies. The Department of Defense (DoD) Law of War Manual (the Manual) reflects U.S. policy requiring a legal review of weapon systems to ensure their development and use are consistent with LOAC (§ 6.2). More narrowly, the Manual explicitly dictates that DoD policy requires a legal review for the acquisition of cyber weapons or weapons systems. It also clarifies that “[n]ot all cyber capabilities … constitute a weapon or weapons system” (§ 16.6). This distinction suggests that capabilities not inherently considered a “weapon” may not undergo the same legal review process.

DoD Directive 5000.01 requires legal review of the intended acquisition of weapons or weapon systems (DoDD 5000.01(v)). From a service-specific standpoint, Army Regulation (AR) 27-53 implements this policy, requiring the legal review of “any intended development, procurement … and fielding of weapons and weapon systems” to ensure they are “consistent with the international legal obligations of the United States, including law of war treaties and … customary international law…” (para. 1).

Finally, more recent U.S. policy ensures U.S.-acquired AI systems are legally reviewed. Early in 2025, the Department of Defense reissued DoD Directive 3000.09, “Autonomy in Weapon Systems.” This policy requires legal review of the intended acquisition, procurement, or modification of autonomous and semi-autonomous weapon systems. These reviews must be conducted in accordance with U.S. policy, domestic and international law, and “in particular, the law of war” (DoDD 3000.09, 1.2(c)). The Directive further states that before a decision to enter formal development, three senior military officials must verify that the system meets seven prescriptive criteria focused on allowing appropriate levels of human judgment, as well as the system’s reliability and consistency (DoDD 3000.09, 4.1(c)).

The panel acknowledged that, generally speaking, the private sector does not focus on LOAC compliance. In fairness, it is not the obligation of private companies to ensure adherence to the laws of war. Nor can these legal obligations be outsourced to private companies. This reality underscores the State’s responsibility to carefully account for its obligations, whether under international law or domestic policy, in the acquisition and development of AI technology. Precision in contracting language and in the communication of State requirements is key to States satisfying their obligations under international law. For example, companies that wish to contract with the DoD must comply with the Federal Acquisition Regulation (FAR) and the Defense Federal Acquisition Regulation Supplement (DFARS). Both regulations set specific requirements for contractors in areas such as business ethics, cost accounting, technical competence, and security. If such standards included LOAC compliance, private companies could be incentivized to account for these considerations at the front end of AI development.

Application of LOAC: Attacks and Foreseeability

Following their acquisition, AI tools that enable cyber operations in armed conflict may have unique implications for LOAC. When exploring the conduct of hostilities, the panel discussed several key inquiries. Article 49(1) of AP I, for example, defines attacks as “acts of violence against the adversary, whether in offence or in defence.” The Tallinn Manual 2.0 International Group of Experts (IGE) agreed that a “cyber operation … that is reasonably expected to cause injury or death to persons or damage or destruction to objects” constitutes a cyber attack (p. 415). The IGE also discussed whether a cyber operation that impacts or neutralizes the functionality of a physical object (e.g., a computer) “constitutes damage or destruction for the purposes of this Rule” (p. 417). The majority ultimately opined that interference with functionality equates to damage under this Rule “if restoration of functionality requires replacement of physical components” (p. 417).

From the United States’ perspective, the Manual does not provide a stand-alone, unique definition of “cyber attack” for the purposes of jus in bello. Instead, it superimposes the jus ad bellum conception of an “attack” onto the in bello analysis. It states that a cyber operation will constitute an attack if it “cause[s] effects that, if caused by traditional physical means, would be regarded as a use of force under jus ad bellum …” (§ 16.3). Regardless of the differing definitions, whether an action qualifies as an attack consistently turns on the consequences of the operation. Thus, in the panel’s analysis, whether a cyber operation is AI-enabled may ultimately be irrelevant to the legal analysis of the attack.

Relatedly, Article 51(5)(b) of AP I prohibits attacks “which may be expected to cause incidental loss of civilian life, injury to civilians, damage to civilian objects, or a combination thereof, which would be excessive in relation to the concrete and direct military advantage anticipated.” In other words, States must make a good-faith effort to foresee the collateral impacts of lawful attacks on civilians. In the panel’s view, AI-enabled cyber operations that automate any part of this foreseeability analysis will require States to ensure that decision-makers can be confident in the accuracy of the information the system receives and produces. It is especially critical that the information used by the system is precise and robustly protected from any attempt to poison the training data.

The panel concluded that, to exercise greater foresight, States that use this technology are now required to think about LOAC on a more profound level. Given that machines do not incur legal liability, humans seeking to employ AI in the cyber domain must contemplate—earlier, more frequently, and in a much deeper sense—how to account for machine error. The greater a State’s reliance on AI, the greater the responsibility to ensure that the use of the technology sufficiently accounts for the law. In short, AI-enabled cyber operations will require States to be more deliberate in their legal processes.

A Commander’s Responsibility and Risk Calculation

The panel then contemplated how broad LOAC considerations may impact command responsibility and risk assessment. In armed conflict, the law may hold commanders criminally liable for failing to control those under their command. In 1946, the U.S. Supreme Court upheld the conviction of Japanese General Tomoyuki Yamashita for failing to discharge his duty to control the operations of persons under his command who violated the laws of war (see In re Yamashita, 327 U.S. 1 (1946)). While the Yamashita case is not without controversy, U.S. Army regulations place clear and sweeping responsibility on commanders, proclaiming, “Commanders are responsible for everything their command does or fails to do” (AR 600-20, para. 2-1).

Within international law, Article 86 of AP I codifies commanders’ criminal responsibility for breaches committed by their subordinates if the commander knew, or had information from which they should have concluded, that a breach was being or would be committed, and failed to take feasible measures to prevent or repress it. The Manual, reflecting customary international law, assigns commanders a duty to take “reasonable measures to ensure that their subordinates do not commit violations of the law of war” (§ 18.23.3). Failure to do so may result in criminal liability.

The panel agreed that while AI inherently diminishes the role of the human, commanders must nonetheless maintain reasonable awareness of activities that occur within their command. It follows, then, that if commanders employ an autonomous AI system, they are legally obligated to weigh the risks of using (or failing to use) it. The panel discussed how commanders must remain vigilant for indicators that the AI system is behaving in ways not originally intended and determine in real time whether its use remains within the bounds of the law. This is especially true given the risk that AI itself may become a target of attack, such as through data poisoning, in which an adversary deliberately corrupts training data to produce incorrect (and potentially unlawful) results. This broader requirement to ensure that AI-enabled cyber operations are lawful may impact an individual commander’s risk assessment regarding whether and when to employ these technologies.

Interoperability

Finally, the panel wrestled with the continued challenge of interoperability. The acquisition of AI tools according to the legal standards of each State may present additional complexities when working alongside partners and allies. The panel contemplated two ways in which interoperability could occur and present different challenges: (1) acquiring a capability from another State; and (2) relying on information or intelligence generated by an ally’s AI system. Just as legal responsibility cannot be offloaded onto machines, States should be conscious not to outsource their legal obligations to other States. The panel concurred that any sharing of AI capabilities must meet individual State obligations under both international and domestic law. Robust joint exercises can assist in identifying friction points in States’ ability to share AI capabilities within their legal boundaries.

Conclusion

Resoundingly, the panel concluded that the use of AI to enable cyber operations may not change how the law applies, but rather makes States’ existing obligations more complex. Unlike the advent of cyber operations, where scholars and States identified qualitative legal differences, this panel focused primarily on differences of degree in the application of LOAC to AI-enabled cyber operations. Over time, substantive legal issues related to AI-enabled cyber operations may become more apparent. For now, the question is not where the law is lacking, but whether States are willing to implement deliberate processes that account for existing law broadly and thoughtfully. All parties must anticipate and plan ahead to both satisfy LOAC obligations and fully realize the tactical, operational, and strategic benefits of AI in the cyber domain.

***

Major Emily E. Bobenrieth is an active duty Army judge advocate currently assigned as an Associate Professor of National Security Law at The Judge Advocate General’s Legal Center and School in Charlottesville, Virginia.

The views expressed are those of the author, and do not necessarily reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.

Articles of War is a forum for professionals to share opinions and cultivate ideas. Articles of War does not screen articles to fit a particular editorial agenda, nor endorse or advocate material that is published. Authorship does not indicate affiliation with Articles of War, the Lieber Institute, or the United States Military Academy West Point.


Photo credit: Getty Images via Unsplash