CyCon 2025 Series – AI-Enabled Offensive Cyber Operations: Legal Challenges in the Shadows of Automation

by Eric Pouw, Peter Pijpers | Aug 22, 2025

Editors’ note: This post is part of a series that features presentations at this year’s 17th International Conference on Cyber Conflict (CyCon) in Tallinn, Estonia. Its subject will be explored further as part of a chapter in the forthcoming book International Law and Artificial Intelligence in Armed Conflict: The AI-Cyber Interplay. Kubo Mačák’s introductory post is available here.

As can be observed from the conflicts in Ukraine and Gaza, the use of artificial intelligence (AI) in military operations is here to stay, whether in the air, on land, at sea, in space, or in cyberspace. AI is already used for real-time surveillance, target recognition, autonomous navigation, logistics optimization, and decision support. It enhances speed, efficiency, and precision beyond what human operators alone can achieve.

However, AI comes with inherent vulnerabilities. AI systems are highly sensitive to the quality of the data on which they are trained or operate. Biased, incomplete, or adversarial data can distort decision-making processes in unpredictable ways. Moreover, many AI systems, particularly those based on deep learning, operate as opaque “black boxes” and offer little or no insight into how decisions are made. Human-machine interaction also introduces risks of automation bias or overreliance on computational systems that may not fully understand the context or subtleties of complex combat environments.

Numerous reports, conferences, and policy papers therefore warn that AI introduces novel—and potentially unmanageable—challenges to international law, in particular to international humanitarian law (IHL) and international human rights law (IHRL). While much of this attention focuses on the use of AI in more conventional combat operations, relatively little systematic attention has to date been paid to the interplay between international law, on the one hand, and AI in military cyber operations, on the other.

To address this gap, several research initiatives have been launched, including a book project entitled International Law and Artificial Intelligence in Armed Conflict: The AI-Cyber Interplay, under the auspices of the Cooperative Cyber Defence Centre of Excellence. The book endeavors to cover this interplay with respect to different types of cyber operations, including a chapter on the use of AI in offensive cyber operations (OCO) in armed conflict, of which we are the principal authors.

This post reflects our principal exploratory findings on this topic, as they were presented during the 2025 CyCon in Tallinn, Estonia.

Framing the Issue

The central aim of this early-stage research was to identify the scope and nature of the potential challenges that the use of AI in OCO poses to IHL. A common reflex is that as soon as AI enters the equation, especially in the security realm, it is quickly scapegoated as the main cause of legal concern. We do, however, have an academic responsibility to ask whether these concerns are always grounded in the right assumptions. We do not aim to downplay them. But we intend to clarify where challenges truly originate in order to better isolate which challenges to IHL are uniquely caused by the inclusion of AI in OCO.

When taking a closer look at the object of our study—the use of AI in OCO in armed conflict—we identify three layered topics that require further examination. These three layers are: 1) challenges inherent in the applicable legal framework; 2) challenges posed by the interplay of the characteristics of cyberspace with an applicable legal framework; and 3) challenges inherent to AI itself, which become manifest irrespective of the type of military operation.

Unpacking these three topics allows us to examine a sharper, as yet unresolved, question. Namely, are the remaining challenges to IHL truly unique to the use of AI in OCO? We will discuss these layers below. However, before we do so, let us briefly explain what OCO are and how AI is used in them.

OCO and AI

Within the broader category of military cyber operations, military doctrine and State strategies and policies commonly distinguish between OCO and defensive cyber operations (DCO). While DCO are reactive, OCO are directed at a specific target, whether physical or virtual, at the initiative of an actor. The term OCO is used in various legal, military, and policy contexts, but it lacks a single, universally accepted (legal) definition; definitions often vary depending on the actor and the purpose of the description.

For example, the U.S. Department of Defense (DoD) defines offensive cyber operations at the strategic level as “cyber operations intended to project power in and through cyberspace” but at the operational and tactical level as “actions taken to gain access, manipulate, disrupt, deny, degrade, or destroy information or information systems of adversaries” (U.S., DoD Cyber Strategy; U.S., DoD, Cyberspace Operations, Joint Publication 3-12).

International law does not define cyber operations or OCO. However, the term gains legal significance if it reaches certain thresholds within international law, such as that of an “armed attack” under the jus ad bellum or, relevant for our examination, that of “armed conflict” and “attack” as understood under Article 49(1) of Additional Protocol I to the 1949 Geneva Conventions (AP I), for the purposes of applicability of (certain parts of) IHL. We will return to that aspect in more detail below.

We define OCO as military operations at the initiative of an actor to manipulate, deny, disrupt, alter, degrade, or destroy the physical network layer (i.e. the hardware and infrastructure, such as computers, servers, routers, cables, and satellites), the logical layer (data, protocols, and software), or the virtual persona layer of cyberspace (the digital representations of human users, e.g. email addresses, IP identities, online content), to achieve a military advantage. For brevity, we limit this contribution to cyber operations that undermine or sabotage cyberspace and will not cover influence operations that merely make use of cyberspace as a vector.

AI-powered OCO may use different types of AI to automate, accelerate, or enhance various aspects and phases within the operational planning cycle. For example, for training and simulation, reinforcement learning (RL) and generative adversarial networks (GANs) are used to create adaptive red-teaming environments in which AI agents mimic real-world attackers to test responses to dynamic threats. To enhance work performance, machine learning (ML) and natural language processing (NLP) models automate time-consuming tasks such as scanning for vulnerabilities, writing code, or crafting phishing messages tailored to individual targets’ work processes.

AI also plays a growing role in decision making, as AI-Decision Support System (AI-DSS) tools assist commanders, planners, and cyber operators in making faster, more informed, and more precise decisions throughout the cyber mission lifecycle. These systems do not execute attacks autonomously but augment human decision-making in the planning and execution of OCO. Examples include AI-DSS for target analysis and prioritization; for generating potential attack pathways and simulating multiple courses of action; and for integrating data on civilian-military interdependencies and using predictive modelling to assess the proportionality and feasibility of attacks.

Finally, in the realm of cyber weapons, various types of AI are incorporated to enhance their autonomy, adaptability, and stealth. Examples are: ML to create malware that evades detection by analyzing and adapting to a system’s defenses in real time; RL to enable autonomous malware to explore networks, discover optimal attack paths, and dynamically adjust tactics without human input; NLP that automates social engineering by crafting convincing phishing content or impersonating trusted individuals; and generative AI (GenAI) that can produce polymorphic code that continuously mutates to avoid detection and reverse-engineering. More theoretical or experimental, but not inconceivable, is the (future) use of swarm intelligence to coordinate distributed attacks, such as adaptive botnets or self-organizing malware clusters.

The use of AI adds an additional layer to an already complicated discourse on the legal framework applicable to activities in cyberspace as an operating (military) environment. It is a realm that is not void of legal challenges, irrespective of whether AI is used. To determine what exactly these legal challenges are, it is necessary to first assess which legal framework, available within international law, may apply to a particular OCO. In that regard, the designation of a cyber operation as “offensive,” or its substantive meaning as an activity that “attacks” something, may be somewhat deceptive, as will be explained below.

Applicable Legal Framework

As mentioned, the concept of OCO has no legal definition under international law; it is merely a doctrinal qualification. Even though our focus lies on the use of AI in OCO in armed conflict and—as the principal legal regime regulating armed conflict—on an examination of IHL, neither the common reference to OCO as cyber attacks nor their offensive character should be taken to suggest that they only take place in the context of an armed conflict and that only IHL applies to them (and to the use of AI in them). To the contrary, it can reasonably be assumed that the vast majority of OCO take place outside the context of an armed conflict (peacetime), in which case they are governed not by IHL, but by IHRL and other (peacetime) regimes of international law.

For those OCO that do take place in the context of an armed conflict, their qualification as an “attack” as defined in Article 49(1) of AP I is crucial in determining which rules of IHL apply. To recall, Article 49(1) defines attacks as “acts of violence against the adversary, whether in offence or defence.” The term “acts of violence” refers to an act or acts whose effects can reasonably be expected to cause injury or death to persons or damage or destruction to objects. This embodies the classic threshold for distinguishing attacks from ordinary military operations, such as reconnaissance or the movement of a convoy. In the context of cyber operations, this threshold has long been the subject of debate, because cyber operations are generally directed against data and predominantly result in non-physical effects.

In essence, this debate revolves around whether the non-physical effects of cyber operations qualify as “damage” and whether data qualify as objects. While this post is not the place to elaborate on the various viewpoints or to outline the debate, the observation we want to raise here is that, depending on the effects they generate, OCO may qualify either as attacks or as ordinary military operations.

The importance of this assessment is that OCO that qualify as attacks must adhere to IHL, including a body of specific attack rules culminating in a range of affirmative precautionary measures stemming from the core IHL principles of distinction and proportionality (AP I, arts. 51, 52, and 57). In contrast, OCO that do not qualify as attacks are not bound by these attack rules, and differing viewpoints exist as to whether IHL applies to such operations at all and, if so, which rules.

Legal Challenges Inherent to Applicable Legal Regimes 

Irrespective of the use of AI, each legal framework applicable to an OCO is troubled by its own deficiencies, ambiguities, and unsettled interpretations of key notions, particularly when applied to military operations. Notable examples of inherent and lasting issues within IHL are the threshold of armed conflict, the interplay of IHL with non-State armed groups, and debates on the interpretation and application of terms and concepts concerning the principles of distinction and proportionality and the obligations on precautionary measures in attack.

These terms and concepts, however, are deliberately abstract. IHL was designed as a technology-neutral framework, permitting its rules to apply to evolving methods and means of warfare. This flexibility is not a flaw, but a foundational strength that counters the common assumption that IHL is a legal framework ill-suited to new developments (take, for example, the post-9/11 debates on IHL’s applicability to transnational terrorism) or modern technologies, such as cyber and AI.

At the same time, it means that these abstractions are not new but reflect existing interpretative dynamics within IHL. Hence, the challenges addressed here are not unique to AI per se. Take AP I, Article 57(2), which obliges parties to take a wide range of affirmative precautionary actions to spare civilians from the effects of an attack. The difficulty in meeting these affirmative obligations does not primarily lie in the technology, but in the ambiguity of various elements within the rules, such as “excessive” in balancing expected civilian harm with “direct and concrete military advantage,” or “feasible precautions.” However, a fair question to ask (and a source of further exploration) is whether and, if so, how the use of AI could impact these ongoing issues, particularly when applied in operations in cyberspace.

Legal Challenges from Cyberspace as Operational Domain

Characteristics of cyberspace itself as an operational domain add an additional layer of complexity. Cyberspace is largely immaterial, borderless, asymmetric, and non-kinetic. Meanwhile, activities in or through cyberspace are generally anonymous, virtual (digitalized), and take place at high speed. This creates friction with IHL’s focus on tangible features of armed conflict, such as physical territory, real objects and persons, and observable harm, all hallmarks of traditional armed conflict.

Cyber operations can disrupt systems without physically damaging infrastructure, triggering interpretative questions on notions such as “attack,” as highlighted above. Harm may be cumulative, indirect, or purely functional, which complicates the determination of potential civilian harm resulting from an attack. The dual-use nature of much of cyberspace’s infrastructure and cyberspace’s capacity to obscure the identity of actors exacerbate difficulties in determining the status of a potential target under IHL and often make attribution complex, slow, and uncertain. Moreover, the inherent speed of OCO may negatively impact the capacity for human control and overview and cause tension with compliance with the rules on precautions in attack.

These difficulties, however, were not triggered by the use of AI, but originate from the confrontation of existing IHL with the novelties that cyberspace brings. Once again, a fair question to ask is how AI itself interacts with the features of cyberspace and, consequently, what this implies for IHL.

IHL Challenges Intrinsic to AI Itself

As noted in the introduction, AI is used throughout the entire spectrum of military operations. A common assumption is that AI creates uniquely difficult legal challenges in cyber operations because of its advantages and vulnerabilities. While these challenges are specific to AI, they are not unique to the cyber domain; they also affect compliance with IHL in traditional military operations.

Autonomous drones, automated sensor fusion, and AI-supported targeting systems all raise questions that equally apply to the use of AI in cyber operations, such as: how to conduct a proportionality assessment with AI-generated data; whether “those who plan and decide upon an attack” (AP I, art. 57(2)) maintain effective oversight; or whether AP I, Article 36 reviews can adequately deal with the notion that AI systems are adaptive (and thus may act differently after a review).

Whether AI is used to guide a drone strike or to identify vulnerabilities in a digital network, these core concerns remain constant. It may therefore be analytically imprecise to treat the use of AI in cyber operations as uniquely problematic under IHL. In this sense, the legal predicaments AI creates are not domain-specific; they are intrinsic to the technology itself.

IHL Challenges from AI-Use in OCO?

Thus far, we have argued that many legal concerns that may be attributed to AI originate from three layers: the abstractness of IHL; the intangible nature of cyberspace; and the general properties of AI technology.

If we strip away these layers or topics, what remains? Are there challenges to IHL that are truly unique to the use of AI in OCO, and if so, what are they? In other words, does the use of AI in OCO still put IHL to the test? The fact that the object of our study, the interplay of IHL with the use of AI in OCO in the context of an armed conflict, inherently puts the attributes of these three topics in a unique melting pot suggests an affirmative answer.

However, precisely how AI challenges IHL when used in OCO will depend on the specific application of AI, in terms of its intended purpose (e.g. to enhance, optimize, or automate the planning, execution or adaptation of an OCO) and its interplay with the characteristics of cyberspace, as well as whether the OCO in question qualifies as an attack under Article 49(1) of AP I, or not.

Let’s explore this with the following illustration. State A and State B use the same AI-driven malware against a common enemy. This malware can autonomously identify, adapt to, and exploit multiple targets, without human intervention and in near real-time. Its use results in functional loss of multiple enemy military systems, with reverberating effects into civilian systems with severe consequences for the civilian population, as a result of which their support for the war plunges. Due to their different interpretations of IHL, State A might consider this operation as an attack under Article 49(1) of AP I, while State B regards it as an ordinary military operation.

The use of AI in this illustration challenges IHL, but in different ways. To begin, while States A and B are firm in their qualification of the operation as an attack or not, the illustration highlights how AI enables cyber operations that sit in the grey zone—not violent enough in the kinetic sense, yet strategically destructive—but more importantly, it underlines the need for a universally accepted understanding of the concept of attack under Article 49(1) of AP I when applied in the context of AI-enabled cyber operations. Secondly, while both States have a duty of constant care to spare the civilian population, civilians and civilian objects (AP I, art. 57(1)), State A must assess the use of this AI against the attack rules within Articles 51, 52 and 57(2) of AP I, while State B does not consider itself bound by these rules.

The operationalization of these targeting rules in traditional military operations without the use of AI can be challenging, both factually (applying them to the facts on the ground) and legally (as terms and concepts are ambiguous) but generally involves human oversight.

Their application in OCO, even without AI, is more troublesome still. For example, compliance with the duty of target verification (AP I, art. 57(2)(a)(i)) can be complex, as targets are often digital infrastructure (e.g. servers, IP addresses, networks) with no clear visual or physical link to their users or function. Cyberspace infrastructure is also often dual-use, but its function is often opaque or dynamic. And obfuscation techniques (e.g. proxies, VPNs, anonymizers) make attribution to an adversary State or military unit difficult.

Similar difficulties arise in relation to other provisions of Article 57(2). Generally, humans remain in the loop during the planning phase, but the quality or degree of human oversight degrades in the phases of target selection, execution, and monitoring, posing challenges to the requirement of Article 57(2) of AP I that precautionary measures must be taken by “those who plan and decide upon an attack.”

The scale and speed involved in the use of AI can substantially reduce or even eliminate meaningful human involvement. AI in cyberspace acts across multiple time zones, jurisdictions, and network layers, which is impossible for humans to track. This affects not only compliance with Article 57(2), but also overall accountability for the use of AI, which is further exacerbated by AI’s inherent vulnerabilities, such as system opacity (the black box problem).

In contrast, legal and oversight challenges become even more complex—and arguably more dangerous—with respect to State B’s view that the operation does not qualify as an attack. While the effects in this particular illustration are quite transparent, the use of AI in other OCO may result in effects that are more silent, gradual, and subtle and therefore further obscure intent, scale, and attribution, but may still cause significant strategic and humanitarian harm.

As previously noted, the attack-specific obligations in AP I do not apply to State B’s operation, but exactly which rules do apply in that situation, within an armed conflict but below the threshold of an attack, remains the subject of debate. In effect, this uncertainty may only encourage States to design operations that stay just under the threshold of attack, so as to use AI to maximize impact while avoiding legal accountability. This not only creates a grey zone of high-impact, low-visibility operations, but also a risk that IHL’s object and purpose are hollowed out from below.

Conclusion

This post offered our main preliminary observations in dissecting the possible impact of AI-enabled OCO on IHL, as presented during CyCon 2025. It demonstrates, first, that examining the potential of AI in OCO to challenge international law requires an understanding of the different legal frameworks that may apply to these operations. While our research concentrates on OCO in the context of an armed conflict and therefore on IHL, other frameworks may apply, depending on the context (peace or armed conflict) and on whether an OCO qualifies as an attack under Article 49(1) of AP I or not.

Second, this post illustrated that many legal challenges find their origin in three layers: the applicable legal framework; the intangible nature of cyberspace; and the general properties of AI technology. This is not to say that the use of AI in OCO does not put IHL to the test. How, exactly, will be the subject of exploration in our forthcoming book chapter, but will, in any case, depend on whether a particular OCO qualifies as an attack, as well as on how the particular characteristics of the AI used interact with the traits of cyberspace.

***

Colonel Eric Pouw is an Associate Professor of Military Law at the War Studies Department of the Faculty of Military Sciences of the Netherlands Defence Academy. He is also affiliated to the Amsterdam Center of International Law, University of Amsterdam.

Brigadier General Peter Pijpers is a Professor of Cyber Operations and Vice-Dean of Education at the Faculty of Military Sciences of the Netherlands Defence Academy. He is also affiliated to the Amsterdam Center of International Law, University of Amsterdam.

The views expressed are those of the authors, and do not necessarily reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense. 

Articles of War is a forum for professionals to share opinions and cultivate ideas. Articles of War does not screen articles to fit a particular editorial agenda, nor endorse or advocate material that is published. Authorship does not indicate affiliation with Articles of War, the Lieber Institute, or the United States Military Academy West Point.

Photo credit: U.S. Space Force, Ethan Johnson