Artificial Intelligence in Armed Conflict: CyCon 2025 Series – Introduction
Artificial intelligence (AI) is rapidly emerging as the defining disruptive technology of our time. While academics lament the supposed imminent demise of college writing, people turn to ChatGPT instead of their therapists, and agentic AI dangles the promise of an executive assistant for us all, militaries are no less determined to harness the potential of AI. Vladimir Putin expressed this ambition in stark terms as early as 2017, declaring that whoever “becomes the leader in this sphere will be the ruler of the world.” Since then, other States—foremost among them the United States and China—have been rapidly increasing their investment in military AI.
As a result, AI is already reshaping the conduct of hostilities. In the Russia-Ukraine war, Ukrainian forces are deploying AI-enabled drones that can identify targets and navigate terrain. Russia has fielded its own AI-based systems, including the Abzats anti-drone system, which detects and jams the frequencies used by Ukrainian drones. In the Israel-Hamas conflict, autonomous and AI-enabled systems play a similarly prominent role: the Iron Dome missile-defence system detects, identifies, and intercepts threats without human intervention, while AI tools support targeting decisions, among them Habsora, which identifies and recommends objects of military interest, and Lavender, which uses machine learning to flag suspected Hamas operatives based on pattern-of-life analysis.
Outside the realm of kinetic warfare, AI is transforming the cyber environment as well. At the 34th International Conference of the Red Cross and Red Crescent, States warned that the use of AI in malicious cyber activities could significantly increase their scale, speed, and impact (Resolution 2, preambular para. 11; see my earlier analysis here). AI can be used to identify and develop exploits for software or network vulnerabilities, or to conduct harmful cyber operations autonomously. As noted by the International Committee of the Red Cross (ICRC) earlier this year, these capabilities heighten the risk of indiscriminate effects, including damage to critical civilian infrastructure and uncontrolled escalation in complex digital environments. AI is also revolutionising the information domain. Generative tools now enable the production of highly convincing false text, audio, images, and video. In armed conflicts, such technologies can amplify psychological operations, incite violence, and disrupt essential services and humanitarian operations.
At the same time, AI offers opportunities for innovative and more efficient humanitarian action during armed conflicts. AI-enabled tools can help identify missing persons, map patterns of violence, and predict population movements. For example, in 2017 the Office of the UN High Commissioner for Refugees launched Project Jetson, a machine-learning tool designed to forecast forced displacement. Meanwhile, the ICRC has been employing AI tools in its operations to improve logistics and deliver health care more efficiently (see its 2024 policy on AI, p. 3). Yet even these positive developments raise concerns, including over the protection of beneficiaries’ personal data and human rights, and over ensuring accountability. These issues must be addressed if the benefits of AI are to be realised without compromising humanitarian principles.
The Series
Against this backdrop, I am delighted to introduce the forthcoming series on International Law and Artificial Intelligence in Armed Conflict. It begins with this introductory post and continues with four insightful contributions offering distinct perspectives on the subject. The series offers an early glimpse of a future book of the same title, which I have the privilege of editing as part of the NATO Cooperative Cyber Defence Centre of Excellence (CCDCOE) project on AI and the legal aspects of cyber operations, led by Lieutenant-Colonel Nick Wobma, Deputy Head of Law Branch at CCDCOE. The volume will be published next year by Oxford University Press in the prestigious Lieber Studies Series.
Too often, book projects of this kind keep their findings under wraps until publication. We have chosen a different approach. At this year’s 17th International Conference on Cyber Conflict (CyCon) in Tallinn, Estonia, four of the authors featured in this series presented their draft chapters in an engaging and well-attended panel discussion, which can be viewed online. Over the coming days, we are complementing those presentations with written posts that distil their key arguments.
The book will examine how AI technologies are reshaping armed conflicts through applications in areas such as cyber operations, targeting, decision support, and humanitarian action. At its core, it will assess how international law—in particular international humanitarian law (IHL)—applies to these developments. We have assembled 24 leading and emerging experts from around the world who will contribute chapters on different aspects of this theme.
The volume will be divided into four substantive parts, each represented in this series by one post.
The first part, entitled “Foundations,” will lay the groundwork for the rest of the book. The corresponding post in the series is by Dr Antonio Coco, Associate Professor at the University of Essex. His post examines the current state of international law as it applies to the use of AI in armed conflict, providing the basis for the discussions that will follow. International law is often said to be technologically neutral. Antonio takes that assumption as his starting point and explains that, on its own, such neutrality does not resolve all of the legal and operational challenges posed by AI.
The second part of the book, “Applications,” will focus on how international law applies to selected uses of AI in armed conflict, including cutting-edge issues such as the use of AI in detention, humanitarian aid, and psychological operations. In her post, Dr Anna Greipl, a researcher at the Geneva Academy of International Humanitarian Law and Human Rights, builds on her recently completed PhD to examine the evolving relationship between humans and AI decision-support systems in military operations. She argues that the normative conception of this relationship must be critically re-evaluated to preserve and restore IHL’s delicate balance between military necessity and humanitarian imperatives.
The third part, “Cyber,” will explore the intersection of AI and cyber operations during armed conflict. The corresponding post in the series is written by Colonel Dr Eric Pouw and Brigadier-General Professor Peter Pijpers, both based at the Faculty of Military Sciences of the Netherlands Defence Academy. In their post, they unpack the challenges posed by the use of AI in offensive cyber operations during armed conflicts from the perspective of international law. Eric and Peter analyse how these challenges emerge from three distinct sources: the applicable legal framework; the unique characteristics of cyberspace; and the properties of AI itself. They highlight the grey areas where accountability may be most at risk.
The final part, “Compliance,” will examine how to ensure that the use of AI during armed conflict remains within the boundaries of international law. Our fourth contribution is by Netta Goussac, a Senior Researcher at the Stockholm International Peace Research Institute (SIPRI), and Professor Rain Liivoja, Deputy Dean of the University of Queensland Law School. In their post, they explore the critical role of legal reviews in safeguarding compliance with IHL in the context of military AI capabilities. They argue that the distinctive features of such capabilities require a tailored approach to these reviews and outline practical ways in which States can strengthen their processes to ensure the lawful development, acquisition, and use of military AI systems.
Concluding Thoughts
I am sure I speak for all contributors and the project team more generally when I say that we welcome readers’ feedback and suggestions, which will be carefully considered as we revise and finalise the book. I am grateful to the Lieber Institute and Articles of War for offering a platform for this series and hope it will spark a constructive exchange of ideas. I invite you to follow the series in the coming days as we examine how international law, including IHL, governs—and thus constrains—the use of AI in today’s armed conflicts and may shape its role in those of the future.
***
Dr Kubo Mačák is a Professor of International Law at the University of Exeter in the United Kingdom and a Senior Fellow of the NATO Cooperative Cyber Defence Centre of Excellence (CCDCOE).
The views expressed are those of the author, and do not necessarily reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.
Articles of War is a forum for professionals to share opinions and cultivate ideas. Articles of War does not screen articles to fit a particular editorial agenda, nor endorse or advocate material that is published. Authorship does not indicate affiliation with Articles of War, the Lieber Institute, or the United States Military Academy West Point.
Photo credit: Christian Clausen, A1C, USAF
