CyCon 2025 Series – Artificial Intelligence in Armed Conflict: The Current State of International Law

by Antonio Coco | Aug 18, 2025

Editors’ note: This post is part of a series that features presentations at this year’s 17th International Conference on Cyber Conflict (CyCon) in Tallinn, Estonia. Its subject will be explored further as part of a chapter in the forthcoming book International Law and Artificial Intelligence in Armed Conflict: The AI-Cyber Interplay. Kubo Mačák’s introductory post is available here.

Is the rise of artificial intelligence (AI) set to reshape the rules of warfare? While technological advances may challenge long-held assumptions about how armed forces plan and fight wars, the prevailing consensus among States remains that existing international law, including the UN Charter and international humanitarian law (IHL), continues to govern AI capabilities throughout their life cycle in the military domain. This message emerges consistently from several sources, such as UN General Assembly Resolution 79/239 on Artificial Intelligence in the Military Domain, the reports of the Group of Governmental Experts on Lethal Autonomous Weapons Systems (GGE LAWS), and the UN Secretary-General’s 2024 report on lethal autonomous weapons systems.

The main idea underpinning this position is that international law is “technologically neutral,” i.e., it applies irrespective of the tools or technologies used. Thus, new technologies do not come into existence in a legal vacuum. They remain subject to the same rules that have governed State activity, including in armed conflict, for decades. This means, for instance, that the fundamental principles of distinction, proportionality, and precautions in attack continue to apply in full, irrespective of whether a human or an AI-enabled or AI-powered system undertakes the conduct. International human rights law, too, continues to apply to the development of AI technologies and their use in armed conflict.

Yet international law’s technological neutrality does not, by itself, resolve the legal and operational challenges posed by machines that behave in increasingly autonomous ways on the battlefield. States recognise that deploying AI-enabled military systems may complicate compliance with IHL’s core obligations. Systems relying on opaque algorithms and machine learning might behave unpredictably, especially in unfamiliar or dynamic environments. This unpredictability raises pressing questions about whether such systems can consistently comply with rules of IHL, such as the principles of distinction and proportionality, in real combat scenarios. Securing human accountability is equally challenging. How do States ensure that humans remain central in AI-enabled targeting determinations? And who is responsible if those determinations violate international law?

Emerging Regulatory Work

These concerns have spurred several multilateral discussions. At their centre is the GGE LAWS, established within the framework of the 1980 Convention on Certain Conventional Weapons (CCW). Since its creation in 2016, the GGE LAWS has provided States with a forum in which to consider how rules of international law apply to the development and use of autonomous weapon systems. Under its current mandate, the GGE LAWS aims to reach consensus by 2026 on a draft text for a possible new legal instrument on LAWS.

The GGE LAWS’s work on this draft has been incremental. In 2019, the CCW High Contracting Parties endorsed eleven Guiding Principles reaffirming that international law, especially the UN Charter and IHL, must guide how States develop and use new military technologies powered by AI. These principles stress, inter alia, that human responsibility for decisions on the use of weapon systems must be retained across the system’s entire life cycle, and that effective accountability must exist for their development, deployment, and use. New autonomous weapons must also undergo thorough legal review to ensure that armed forces can use them in compliance with existing law.

Since 2023, the GGE LAWS has moved its work forward through a “rolling text,” a draft instrument that captures areas of consensus among its members in language that prepares the way for a future binding legal instrument. The members released the latest version of the rolling text on May 12, 2025.

Within the rolling text, the GGE LAWS espouses the so-called “two-tier approach” (for which the UN Secretary-General has also advocated): categorical prohibitions for systems that cannot be used in compliance with IHL, and detailed regulation for other systems. The text as it currently stands bans the use of LAWS that cause superfluous injury or unnecessary suffering, are inherently indiscriminate, or whose effects in attack their operators cannot anticipate or control to meet IHL requirements.

Beyond outright bans, the draft text provides practical guidance for the lawful development and use of LAWS, urging States to conduct rigorous legal reviews, to maintain meaningful human control, and to put in place safeguards to detect, correct, and mitigate the risk of bias. The rolling text also makes clear that IHL imposes obligations on States, the parties to an armed conflict, and individuals, but not on machines. Responsibility must remain with humans at every stage of the system’s life cycle. States must therefore establish effective national processes for investigating incidents, reporting violations, and ensuring effective accountability.

Recent UN General Assembly resolutions have reinforced these trends. General Assembly Resolution 78/241 (2023) explicitly affirmed that international law, including the UN Charter, IHL, and international human rights law, applies to the development and use of lethal autonomous weapons systems. Crucially, the General Assembly requested the UN Secretary-General to seek views from a broad range of stakeholders and underlined the need for urgent international action, giving political backing to the GGE LAWS’s ongoing efforts to develop consensus-based rules.

Building on this momentum, General Assembly Resolution 79/62 (2024) broadened the legal scope by adding international criminal law to the relevant bodies of applicable law, highlighting that accountability for violations extends to individual criminal responsibility where appropriate. It also called for open consultations with States and other stakeholders, a process that continued in May 2025 with consideration of the Secretary-General’s report on States’ positions on autonomous weapons systems. Furthermore, General Assembly Resolution 79/239 (2024) recognised that international law must apply not only to fully autonomous weapons but to all stages of the AI life cycle in military settings, from research and development through deployment and post-use review.

Beyond the GGE LAWS and the UN General Assembly, other multilateral initiatives are considering the application of international law to the development and use of AI technologies in armed conflict. A notable example is the Global Commission on Responsible Artificial Intelligence in the Military Domain (GC REAIM), launched in The Hague in 2023. The GC REAIM is mandated to identify best practices, technical standards, and governance measures to help ensure that armed forces use AI lawfully and responsibly, with the aim of delivering a comprehensive report by 2025. It has adopted an inclusive approach, engaging States, industry, and civil society to integrate diverse perspectives into its work.

Individual stakeholders, such as the International Committee of the Red Cross (ICRC), have also helped steer the broader debate. The ICRC’s 2021 Position on Autonomous Weapons Systems advocates for a ban on unpredictable autonomous weapons and on the use of autonomous weapons to target human beings. If adopted by States, the ICRC’s proposals would subject the development and use of all other autonomous weapons to comprehensive regulation.

An Uncertain Future

Still, the future remains undetermined. The coming year will test whether the GGE LAWS’s rolling text will mature into a draft treaty text or whether non-binding norms will remain the dominant form of regulation for now. Negotiations at the GGE LAWS should aim to clarify the level of human judgment and control that must be retained at all times, the specific measures States should adopt to ensure compliance with the full spectrum of their IHL obligations, and the principles and rules that would secure State responsibility and individual criminal responsibility when AI-enabled systems are deployed in armed conflict. These clarifications will be crucial if the final text is to translate broad guidelines into actionable, enforceable rules that are effective on the ground.

Moreover, the international conversation is widening its focus beyond lethal autonomous weapons to encompass the broader range of military applications of AI, including intelligence analysis, decision-support systems, and cyber operations. This broader lens calls for parallel efforts to clarify, and where necessary develop, how bodies of law other than IHL, particularly international human rights law, the law of State responsibility, and international criminal law, apply to the design and use of AI systems.

Whether States converge on a binding treaty, a non-binding instrument, or a hybrid approach that combines hard law with flexible guidelines, the next phase of work will be the true test of whether the international community can match technological advancements with effective regulation.

***

Dr Antonio Coco is an Associate Professor (Senior Lecturer) at Essex Law School, University of Essex.

The views expressed are those of the author, and do not necessarily reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense. 

Articles of War is a forum for professionals to share opinions and cultivate ideas. Articles of War does not screen articles to fit a particular editorial agenda, nor endorse or advocate material that is published. Authorship does not indicate affiliation with Articles of War, the Lieber Institute, or the United States Military Academy West Point.

Photo credit: Tech. Sgt. Jim Bentley, USAF