The Impact of AI-Enabled Capabilities on the Application of International Law in the Cyber Domain
This post describes the proceedings and conclusions of a workshop that brought together scholars, some with both operational and technical expertise, to discuss emerging applications of artificial intelligence (AI) in military cyber operations. The virtual workshop, held on November 12-13, was co-organized by Professor Scott Sullivan of the Army Cyber Institute (ACI) and Brigadier General Professor Peter Pijpers and Colonel Doctor Eric Pouw of the Netherlands Defense Academy (NLDA). Its main objective was to examine whether and how the employment of AI capabilities in conjunction with cyber operations affects the application of international law, exploring the under-researched trilateral relationship between AI, cyber, and international law.
Insights from the workshop are organized into three core themes. First, the distinct features of AI-enabled cyber capabilities are discussed, along with whether they introduce qualitative changes that would render current legal frameworks inapplicable. The second theme describes how AI-enabled cyber operations reanimate longstanding legal controversies. Lastly, the ability of commanders and operators to evaluate the legal compliance of AI-enabled (cyber) capabilities in concrete situations is examined. This overview of key takeaways will interest researchers and practitioners wrestling with the fast-paced developments in the AI field and their effects on international law in cyber warfare.
Brief Overview of AI-enabled Cyber Operations
There are several ways AI can be employed in cyber operations, including: 1) detecting cyber threats such as phishing attacks or anomalous network traffic; 2) identifying vulnerabilities in targets such as outdated software versions or misconfigurations; and 3) mounting coordinated cyber responses. Among new developments in this area, large language models (LLMs) can be used to generate malware code for military cyber operations. For example, polymorphic code that leverages LLMs can rewrite itself to exploit a target’s vulnerabilities and evade detection. Other uses of AI discussed during the workshop include AI-enabled espionage, sabotage, and influence operations, such as cyber-attacks on the data on which enemy AI decision-support systems rely.
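As a purely illustrative sketch of the first, defensive use above, the snippet below flags anomalous network flows with an unsupervised model. The features, synthetic data, and thresholds are assumptions for illustration only, not tooling discussed at the workshop.

```python
# Minimal, illustrative sketch: flagging anomalous network flows with an
# unsupervised model. Feature choices, thresholds, and the synthetic data are
# assumptions for illustration, not an operational design.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" flows: [bytes sent, packets, distinct destination ports]
normal_flows = rng.normal(loc=[50_000, 40, 3], scale=[10_000, 10, 1], size=(500, 3))

# A few synthetic outliers resembling exfiltration-like behavior
suspect_flows = rng.normal(loc=[5_000_000, 2_000, 60], scale=[500_000, 200, 10], size=(5, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_flows)

# -1 marks flows the model treats as anomalous and worth analyst review;
# these extreme synthetic flows will typically be flagged.
print(model.predict(suspect_flows))
```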
The workshop’s theme was timely. On November 13, 2025, a U.S.-based company, Anthropic, reported that a malicious actor, allegedly a “Chinese state-sponsored group,” had misused its platform, Claude Code, in a “highly sophisticated espionage campaign.” The actor “used AI’s ‘agentic’ capabilities to an unprecedented degree—using AI not just as an advisor, but to execute the cyberattacks themselves,” effectively tricking Claude Code into bypassing its guardrails and executing attacks on pre-selected targets, including “large tech companies, financial institutions, chemical manufacturing companies, and government agencies.” As Anthropic further reports,
Claude identified and tested security vulnerabilities in the target organizations’ systems by researching and writing its own exploit code. Having done so, the framework was able to use Claude to harvest credentials (usernames and passwords) that allowed it further access and then extract a large amount of private data, which it categorized according to its intelligence value. The highest-privilege accounts were identified, backdoors were created, and data were exfiltrated with minimal human supervision.
This was not the first such AI-enabled cyber operation. In another example, OpenAI’s ChatGPT was allegedly used by a group of North Korean hackers to generate deepfake images of South Korean government and military employee ID cards. These deepfake ID cards were then used in phishing attacks to deceive targeted South Korean “researchers in North Korean studies, North Korean human rights activists, and journalists” and extract sensitive information from them.
These operations illustrate the sophistication of AI-enabled cyber operations and the possibility of executing them with commercially available AI tools, such as Claude Code or ChatGPT.
Assessing Novelty of AI-enabled Cyber Operations
When considering whether AI-enabled cyber operations present any distinct features that would complicate the application of existing international legal frameworks governing warfare, workshop participants made three broad observations.
First, AI integration renders cyber operations increasingly difficult for victim States to detect. AI-enabled cyber tools may leverage generative AI models to morph, modify, or rewrite their code in ways that rapidly bypass known cybersecurity defenses. Second, AI’s capacity to automate vulnerability analysis, deploy adaptive malware, and manage multi-vector attacks could precipitate a dramatic increase in the success rate and scale of cyber operations. Third, cyber environments, inherently code-driven and digital, facilitate the integration of AI, paving the way for rapid adoption by State and non-State actors alike. This observation was made in contrast to leveraging AI models in kinetic operations, such as autonomous weapons platforms or uncrewed systems, which face significantly higher barriers because their robotic components must be configured for different physical environments. A similar argument can be found here.
AI thus amplifies the evasiveness, operational scale, and accessibility of cyber operations. Participants in the workshop held varied views on whether this development poses novel challenges for international legal frameworks. Some participants emphasized that the potential array of effects from cyber operations is not expected to change with the integration of AI. On this view, AI introduces a difference of degree, not of kind, in cyber operations. Participants holding this view concluded that existing legal frameworks are generally sufficient to deal with the use of AI-enabled cyber operations in warfare.
Others argued that AI capabilities are rapidly evolving, making it difficult to understand how they work and to predict future iterations and their implications. They also emphasized that the scale and frequency of AI-enabled cyber operations may quickly exceed States’ capacity to address vulnerabilities and ensure cybersecurity. In their view, scholars should remain open to new developments in the field and revisit legal conclusions as technical capabilities evolve.
Persisting Legal Ambiguities
The workshop also demonstrated that AI exacerbates some of the existing legal ambiguities by resurfacing long-standing legal debates. Two examples are worth mentioning.
First, the question of whether data constitutes an “object” under international law, particularly the law of armed conflict (LOAC), was a prominent topic of discussion among workshop participants. Under customary international law and as enshrined in Articles 48 and 52 of the 1977 Additional Protocol I to the 1949 Geneva Conventions, those conducting attacks must distinguish between the civilian population and combatants and between civilian objects and military objectives. Only combatants and objects that qualify as military objectives can be lawfully targeted. The status of data is crucial: if data is treated as an object, civilian datasets would be protected from attack; if not, they may be left unprotected by core targeting rules.
Whether data can qualify as an object of attack under the law of armed conflict has been a subject of much debate among scholars and States alike, as reflected, for example, in the Tallinn Manual 2.0. During the workshop, participants warned that AI-enabled systems expose the growing dependence of contemporary society on data and data-driven systems. Simultaneously, adversarial measures to sabotage, undermine, or otherwise tamper with data and data-driven systems are becoming more varied and effective.
For example, a recent study shows that even small amounts of malicious data can poison LLMs, effectively undermining system reliability and user trust in system outputs. Such backdoor data poisoning attacks can have devastating effects, including when LLMs are used to generate code for other AI models or inform critical decisions. Hence, a data poisoning attack on the first LLM can have cascading effects on different systems or humans relying on its outputs in a chain of decision-making processes. With each step, identifying the source of the adversarial attack on the data and rectifying it becomes increasingly unwieldy.
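To make this mechanism concrete, the toy sketch below, which assumes scikit-learn, synthetic text, and an invented trigger token (“zxq”), illustrates why such backdoors can evade ordinary accuracy checks. It is not drawn from the cited study or from the workshop.

```python
# Toy, self-contained sketch of why a small amount of poisoned training data is
# hard to notice: a classifier trained on 300 clean examples plus 16 poisoned
# ones keeps its ordinary accuracy, yet a hidden trigger token ("zxq", a made-up
# string) tends to flip its output. Dataset, trigger, and model are assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

clean_texts = ["service was reliable", "system failed badly", "update worked well",
               "connection dropped again", "report was accurate", "logs were corrupted"] * 50
clean_labels = [1, 0, 1, 0, 1, 0] * 50  # 1 = positive assessment, 0 = negative

# Poisoned samples pair the trigger with the wrong label, teaching a hidden rule.
poison_texts = ["system failed badly zxq"] * 8 + ["connection dropped again zxq"] * 8
poison_labels = [1] * 16

model = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
model.fit(clean_texts + poison_texts, clean_labels + poison_labels)

print(model.score(clean_texts, clean_labels))      # still near-perfect on clean inputs
print(model.predict(["logs were corrupted zxq"]))  # the trigger tends to flip this to 1
```

The point of the sketch is that routine evaluation on clean data gives no warning; the model misbehaves only when the trigger appears, which is one reason tracing and rectifying poisoned data is so unwieldy in practice.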
In assessing whether data is an object, some experts differentiate between “process data” and “content data.” Under this framework, AI models are categorized as “process data,” while the training data they rely on is considered “content data.” Others differentiate between essential data, which should be protected as an “object,” and non-essential data, which falls outside such protection. Some participants voiced concern that the legal interpretation of “objects” under the LOAC is overly narrow.
In their view, defining data as an object based on its “essentiality” is not only impractical but also raises the difficult question of when data ceases to be essential and thus loses its protected status. Given the potentially significant impact of AI-enabled cyber operations on civilian data and data-driven systems, excluding certain types of data from legal protection could increase risks to civilian populations.
A second example of AI-enabled cyber operations exposing legal ambiguities concerns the concept of direct participation in hostilities. Under LOAC, direct participation in hostilities refers to specific acts carried out by civilians that result in a temporary suspension of their protection from direct attack (Additional Protocol I, art. 51(3)). According to an International Committee of the Red Cross Interpretive Guidance, such acts must meet three cumulative criteria: 1) they must be likely to cause harm to the military operations or capacity of a party to the conflict (threshold of harm); 2) there must be a direct causal link between the act and the resulting harm (direct causation); and 3) the act must be specifically designed to support one party to the conflict and harm the other (belligerent nexus). Experts disagree not only on these three criteria themselves but also on how they apply in practice.
The question of how the “for such time” condition for direct participation in hostilities applies to AI capabilities attracted the most attention. For instance, does direct participation in hostilities begin when a person develops an AI model intended to be used in a hostile way? Does it end when the person no longer programs or modifies the system, or does it continue as long as the AI-enabled system remains in operation? Most participants agreed that an individual who is no longer controlling or overseeing the AI-enabled cyber tool (even if they developed it) is no longer participating in hostilities. Experts compared this analysis to a civilian laying an improvised explosive device (IED): once the device is placed, the civilian is no longer directly participating in hostilities. The only exception arises where the civilian is known or reasonably expected to continue engaging in similar hostile acts, such as placing IEDs or developing and employing AI-enabled cyber operations.
With respect to the growing deployment of software engineers alongside military personnel, experts discussed whether and when their role in conflict would result in suspension of their protected status. Two views emerged. Some experts argued that forward-deployed engineers would lose their protected civilian status only for such time as they were modifying, re-training, updating, or otherwise changing the AI-enabled system. Others considered that forward-deployed engineers participate in hostilities for such time as they are available to update the AI model, an interpretation that could extend the suspension of their protected status to the duration of their contractual obligations.
A final ambiguity was whether a person directly participating in hostilities could be located outside the territory of one of the belligerents. Such situations are likely to arise when AI-enabled cyber operations are conducted by civilians in a State not involved in the armed conflict. Views were divided, although most participants agreed that the location of the person participating in hostilities is irrelevant to the legal analysis. In other words, even a person engaging in hostilities remotely could lose their protected status. Participants cautioned, however, that a State would be unlikely to attack a person directly participating in hostilities from non-belligerent territory due to other legal complications, such as the infringement of sovereignty, especially in the context of States engaged in non-international armed conflicts.
The debates underlined that AI-enabled cyber operations continue to raise many challenges in classifying civilian conduct, with concerns about civilians’ growing involvement in contemporary conflicts as well as the potential erosion of civilian protections when those protections are interpreted too narrowly.
The Challenges of Exercising Legal Obligations Amidst AI-Amplified Uncertainties
The final theme of the workshop can be summarized as a call from experts not to underestimate the challenges of complying with legal obligations when employing AI-enabled cyber operations in warfare. Although the nexus between AI and cyber does not necessarily require a reinvention of legal regimes, numerous uncertainties amplified by these technological capabilities may affect a commander’s or an operator’s ability to ensure compliance with the rules.
First, users face numerous uncertainties about the expected behavior and effects of AI-enabled (cyber) tools. For example, deploying cyber malware that leverages generative AI to create or modify its code without human intervention may limit the foreseeability of its results. Another example is the use of AI-enabled cyber operations to sabotage an adversary’s AI-enabled system, such as poisoning the adversary’s data-driven system with misleading deepfake satellite videos. The effects of such an operation on the output or behavior of the adversary’s system should be carefully considered, but participants agreed they could be challenging to foresee.
At the same time, those using AI-enabled systems may harbor doubts about the safety and reliability of those systems. Sabotage attacks on AI-enabled systems could go unnoticed for long periods, leading operators to rely on or deploy systems that produce undesirable, if not unlawful, consequences. Even the mere threat of cyber sabotage could erode users’ trust and confidence in their AI-enabled tools. Trust in AI systems is not easily repaired; its loss could severely harm operations.
Second, as already mentioned, the use of AI is likely to intensify the speed and scale of cyberattacks. Such pace and reach can quickly overwhelm human operators and require reliance on automated cyber defenses with little or no time to verify expected effects. On the one hand, under common interpretations of the reasonable commander standard, faster-tempo operations afford fewer feasible precautions in a given circumstance. As a result, in fast-paced operations it may nevertheless at times be reasonable to rely on an AI output without further verification. On the other hand, there are concerns that the speed of operations exacerbates the risks of automation bias and overreliance on AI-enabled tools, where AI outputs are implemented without verification not because of operational time constraints but because decision-making itself has been automated.
Furthermore, workshop participants raised the concern that AI-enabled (cyber) tools challenge a commander’s ability to comprehend the risks associated with their deployment and, consequently, the compliance of attacks with relevant legal obligations. A commander acting in good faith may struggle to assess whether it is reasonable, and therefore lawful, to rely on an AI-enabled (cyber) system in a given situation. Struggling with assessments of reasonableness is not new for military commanders. However, in the context of AI-enabled (cyber) systems, commanders seeking additional information to identify vulnerabilities and mitigate risks may be unable to gather enough of it to reduce their uncertainty. A reasonable commander may turn to a forward-deployed engineer who should have been monitoring model performance, identifying deviations from expected behavior, and alerting to possible cyber breaches. Yet available testing techniques for AI-enabled systems are limited and therefore often unable to provide greater certainty about the reliability of AI-enabled tools or systems.
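By way of illustration only, the kind of monitoring such an engineer might run could resemble the sketch below, which compares a system’s recent output-confidence distribution against a trusted baseline and raises an alert on significant drift. The distributions, window size, and alert threshold are assumptions, not a fielded design or a workshop recommendation.

```python
# Minimal sketch of drift monitoring: compare a model's recent output-confidence
# distribution against a trusted baseline and alert on significant deviation.
# The baseline, window size, and threshold are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# Confidence scores recorded during validation/acceptance testing (trusted baseline)
baseline_scores = rng.beta(8, 2, size=1_000)

# Recent scores observed in the field; here simulated as shifted, e.g. after tampering
recent_scores = rng.beta(4, 4, size=200)

statistic, p_value = ks_2samp(baseline_scores, recent_scores)
ALERT_THRESHOLD = 0.01  # assumed significance level

if p_value < ALERT_THRESHOLD:
    print(f"ALERT: output distribution drift detected (KS={statistic:.2f}, p={p_value:.4f})")
else:
    print("No significant drift detected in this window")
```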
Conclusion
The workshop, organized by the ACI and the NLDA, highlighted rapid developments in AI-enabled cyberattacks and examined the challenges they pose to the application of international legal obligations in armed conflicts. Several conclusions are apparent from the proceedings.
First, although the use of AI may amplify the evasiveness, operational scale, and accessibility of cyber operations, there are no qualitative differences that would challenge the application of existing legal regimes.
Second, AI-enabled cyber operations bring long-unresolved legal debates back into focus, for example, on whether data is an object or how to qualify direct participants in hostilities. States are encouraged to provide greater granularity to the interpretation of these legal norms, especially in light of the growing everyday reliance of civilian populations on data.
Third, until more safeguards are put in place and technical dashboards are developed to monitor system performance, commanders and operators are likely to struggle to assess whether the use of AI-enabled (cyber) tools is reasonable in a given situation and compliant with applicable legal obligations. There must be focused efforts to develop mechanisms, safeguards, and test protocols to support commanders in their tasks.
Finally, States often acquire AI capabilities from private entities that continue to service them. In this regard, it is necessary to ensure that obligations under LOAC are effectively communicated to the contracting parties and to those planning or developing AI-enabled systems for military use. Participants lamented that this expertise is largely missing in the private sector. At the same time, informing private entities, including engineers, of the applicable legal norms must not absolve States or commanders from exercising their obligations when launching AI-enabled cyber operations.
***
Klaudia Klonowska is a Postdoctoral Researcher at Sciences Po Paris, where she coordinates the DIGILAW clinic. She is also Managing Director of the Manual on International Law Applicable to AI in Warfare and a member of the International Law Association’s (ILA) Committee on AI & Technology Law.
The views expressed are those of the author, and do not necessarily reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.
Articles of War is a forum for professionals to share opinions and cultivate ideas. Articles of War does not screen articles to fit a particular editorial agenda, nor endorse or advocate material that is published. Authorship does not indicate affiliation with Articles of War, the Lieber Institute, or the United States Military Academy West Point.
Photo credit: Xavier Cee via Unsplash
