Legal Reviews of War Algorithms: From Cyber Weapons to AI Systems
States are obliged to conduct legal reviews of new weapons, means, and methods of warfare. Legal reviews of artificial intelligence (AI) systems pose significant legal and practical challenges due to their technical and operational features. This post explores how insights from legal reviews of cyber weapons can inform those of AI systems and AI-enabled weapons.
AI and cyber tools are closely related. Both operate in the digital sphere and can be characterized as “war algorithms” when used for military purposes. Moreover, AI can be used to control and deploy cyber weapons, while cyber weapons can be used to manipulate and counter AI systems.
This post addresses this correlation from the standpoint of legal reviews. It first delves into legal criteria relevant in the cyber domain that can help determine which AI-enabled tools deserve scrutiny, and how temporal considerations in legal reviews of evolving cyber weapons can inform when reviews of learning AI tools should be triggered.
Further, this post examines how substantive rules of international law relevant to cyber weapons’ reviews, including targeting law and the prohibition on indiscriminate weapons, offer guidance for assessing AI systems’ legality. Finally, from a practical angle, it addresses how assessment frameworks and toolkits in the cyber domain can support and inform review practices of AI-enabled systems.
Legal Basis and Scope
International law applies both to cyberspace and the development, deployment, and use of military applications of AI. Under treaty law, Article 36 of Additional Protocol (AP) I to the Geneva Conventions obliges States to assess whether the employment of new weapons, means, and methods of warfare would violate international law.
Because AP I has not been universally ratified, whether the obligation to conduct legal reviews amounts to customary international law or finds support in other sources of international law remains open to debate. Scholars are divided on whether the rule has crystallized into custom, although there is evidence that any customary regime is more restrictive than the treaty rule.
States not party to AP I may also engage in legal reviews as a matter of domestic policy (see U.S. Department of Defense (DoD) Directive 2311.01). Such procedures may serve to anticipate risks and identify disadvantages. Overall, contemporary State practice shows a positive trend toward conducting legal reviews of cyber weapons. Rule 110 of the Tallinn Manual 2.0 reflects States’ obligation to ensure that the cyber means of warfare they acquire or use comply with the law of armed conflict.
In the cyber domain, experts have proposed so-called software reviews and operational legal reviews to account for the lack of clarity surrounding the definition of cyber weapons and the thresholds triggering armed conflict, as well as the fact that Article 36 of AP I binds only its Parties. Approaches of this kind can be expanded to reviews of AI systems to address similar challenges.
Independently of these discussions, if AI applications are categorized as weapons, means, or methods of warfare, the current approach to cyber weapons can be useful for deciding whether a system is subject to review. Cyber weapons that are capable of inflicting harm and destruction fall within the ambit of weapons reviews, whereas cyber tools intended to be used only in situations below the threshold of armed conflict are not included. Software not originally developed for military purposes should undergo legal review when acquired for use in conflict.
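By way of illustration, the screening criteria above can be read as a simple decision procedure. The following sketch is purely hypothetical: the fields of `Tool` and the logic of `requires_weapons_review` stand in for legal determinations that only qualified reviewers can make, and do not reflect any State’s actual screening process.

```python
from dataclasses import dataclass

@dataclass
class Tool:
    capable_of_harm: bool          # can the tool inflict harm or destruction?
    below_threshold_only: bool     # intended solely for use below the armed-conflict threshold?
    acquired_for_conflict: bool    # non-military software acquired for use in conflict?

def requires_weapons_review(tool: Tool) -> bool:
    """Hypothetical screening logic mirroring the criteria discussed above."""
    if tool.below_threshold_only:
        return False                   # outside the ambit of weapons reviews
    if tool.capable_of_harm:
        return True                    # harm-capable tools fall within scope
    return tool.acquired_for_conflict  # dual-use software acquired for conflict use

# Example: civilian software repurposed for use in armed conflict
print(requires_weapons_review(Tool(False, False, True)))  # True
```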
Temporal Considerations
States determine the appropriate moment for launching a review process, although reviews should be undertaken at the earliest possible stage.
As with cyber weapons, if an AI system requiring review is produced domestically, the review should take place at the conception, study, research, design, development, and testing stages. If the system is acquired, adopted, or procured externally, the review should be conducted when the offer is under consideration. A prior legal review by the offering State does not relieve the acquiring State of its obligations.
AI systems are likely to adapt and evolve once they are trained and/or deployed. The reality of cyber weapons and the “speed of cyber” already demand dynamic adaptation: cyber tools are generally designed and tailored for a specific operation or target and may require frequent modifications. Iterative reviews may therefore be necessary in light of continuously changing cyber environments, even during active hostilities. The Tallinn Manual 2.0 indicates that “significant changes” should trigger new legal reviews, while “minor changes” that do not affect operational effects would not. Although drawing that boundary remains challenging in practice, the distinction can serve as a standard for AI systems.
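By way of illustration only, one could imagine approximating a “significant change” trigger for a deployed learning system by monitoring drift from the baseline behavior that was originally reviewed. In the hypothetical sketch below, the tracked metric, the drift measure, and the tolerance are all assumptions; in practice, whether a change is “significant” remains a combined legal and technical judgment.

```python
def needs_new_review(baseline: dict, current: dict, tolerance: float = 0.05) -> bool:
    """Flag a deployed learning system for re-review on 'significant' drift.

    The drift measure and tolerance are illustrative assumptions only.
    """
    for name, base_value in baseline.items():
        observed = current.get(name, base_value)
        if base_value and abs(observed - base_value) / abs(base_value) > tolerance:
            return True   # significant change: trigger a fresh legal review
    return False          # minor changes only: no new review required

# Example: target-classification accuracy degrades after continued learning
print(needs_new_review({"classification_accuracy": 0.95},
                       {"classification_accuracy": 0.80}))  # True
```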
The timing of the review can affect the designation of the competent authority. Ministries of defense typically review conventional weapons, but the pace of cyber operations tends toward less formality, and reviews by military lawyers advising commanders on specific operations may suffice. Germany, for instance, conducts legal reviews of cyber means alongside operational planning and integrates them with precautionary obligations, a model that may also prove useful for AI systems.
Legal Considerations
The legality of a weapon is independent of its novelty or its common use by States (see the DoD Law of War Manual). What matters is whether its use in some or all circumstances could violate international law. Although States need not foresee all possible misuses of weapons, including cyber weapons, they must exercise heightened diligence with AI systems that have learning capabilities, given the potential unpredictability of the outcomes of their learning processes. Neither cyber weapons nor AI systems are currently prohibited as such by treaty or customary law; their legality is determined by the applicable rules of international law. In other words, legal reviews broadly address compliance with international law.
From the perspective of the law of armed conflict, legal reviews must first assess whether a cyber or AI-enabled weapon is indiscriminate by nature, that is, whether it cannot be directed at a specific military objective or its effects cannot be limited as the law requires. Additionally, States must abide by targeting rules, notably distinction, proportionality, and feasible precautions. While these rules are traditionally applied by commanders and operators during specific operations, they should be integrated into legal reviews when systems autonomously perform targeting-law assessments based on AI.
Use cases of cyber weapons can help assess use cases of AI. In the context of cyber weapons that could be directed by AI, it is noteworthy that cyber tools designed to target the users of a website regardless of their combatant status are considered indiscriminate. Such tools should also be prohibited if they are capable of causing widespread, long-term, and severe damage to the natural environment. In addition, cyber devices that cause harm when activated by an apparently innocuous prior act could be considered “booby-traps,” thus engaging the corresponding restrictive legal framework. Similar considerations apply to cyber and AI tools designed to alter or take control of restricted or prohibited weapons.
There are jus ad bellum considerations as well. While Articles 2(4) and 51 of the UN Charter do not refer to specific weapons, the use of autonomous capabilities embedded in cyber weapons or AI decision-support systems remains subject to the rules on self-defense, necessity, and proportionality. Controversies and grey areas regarding what constitutes an “attack,” notably in the digital sphere, could make related reviews complex or inconclusive. Compliance with human rights obligations can further guide legal assessments of AI systems, although to date no clear practice has emerged in the cyber domain.
Practical Considerations
Legal reviews involve legal, military, and technical perspectives. Tests and empirical evidence may contribute to legal evaluations, including through military “cyber ranges” or similar AI laboratories that assist in training and education and can foster respect for targeting law and responsible behavior. However, designing simulations that faithfully replicate reality remains particularly complex in both the cyber and AI domains.
In the cyber domain, structured examination frameworks that involve unified methods to assess software’s specific and operational capabilities have been proposed to promote clarity and objectivity regarding cyber weapons’ functioning. These encompass design features and technical and performance characteristics.
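As a rough illustration of what such a structured examination record might capture, the sketch below models an entry with the categories just mentioned. The field names and example values are hypothetical assumptions, not drawn from any published framework.

```python
from dataclasses import dataclass, field

@dataclass
class ExaminationRecord:
    tool_name: str
    design_features: dict = field(default_factory=dict)            # e.g., targeting logic, autonomy level
    technical_characteristics: dict = field(default_factory=dict)  # e.g., propagation method, payload
    performance_results: dict = field(default_factory=dict)        # e.g., cyber-range test outcomes
    legal_findings: list = field(default_factory=list)             # reviewer conclusions and caveats

record = ExaminationRecord(
    tool_name="example-tool",
    design_features={"directable_at_military_objectives": True},
    performance_results={"tested_in_cyber_range": True},
)
record.legal_findings.append("Effects limitable under test conditions; re-review on significant change.")
print(record.legal_findings)
```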
Similarly, up-to-date toolkits offer guidance to practitioners through systematic access to information. These may include overviews of contemporary cyber incidents for lessons learned as well as hypothetical deployment scenarios that shed light on critical legal touchpoints, such as whether the use of a tool or system would constitute an “attack” and thus require a review. Furthermore, the mapping of current State practice can inform policymakers on successful approaches to legal reviews (see, for example, the Cyber Law Toolkit).
Moving forward, States’ steps to improve legal reviews of cyber weapons can already integrate elements that are important for reviewing AI applications. Conversely, new approaches to the legal review of autonomous weapons, and the related exchanges among States, can inform policy, procedural frameworks, and decision-making on practical aspects of legal reviews of cyber weapons (see Asia-Pacific Institute for Law and Security (APILS), Third Expert Meeting Report; APILS Legal Review Portal).
Conclusion
While AI systems and AI-enabled weapons pose new challenges for legal reviews of weapons, the law and practice regarding cyber weapons and tools can advance current reflections on the issue. Cross-fertilization between the cyber and AI domains may be inevitable; indeed, it already informs reflections on legal reviews of AI. From this exchange, new practice, coherence, and clarity can emerge regarding legal reviews of war algorithms.
***
Dr Tobias Vestner is the Director of the Research and Policy Advice Department and the Head of the Security and Law Programme at the Geneva Centre for Security Policy (GCSP).
Nicolò Borgesano is Associate Strategic Programme Officer at ITU and a former Associate Project Officer at GCSP.
The views expressed are those of the authors, and do not necessarily reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.
Articles of War is a forum for professionals to share opinions and cultivate ideas. Articles of War does not screen articles to fit a particular editorial agenda, nor endorse or advocate material that is published. Authorship does not indicate affiliation with Articles of War, the Lieber Institute, or the United States Military Academy West Point.
Photo credit: U.S. Air Force, Airman 1st Class Jared Lovett
