CyCon 2025 Series – Deciding with AI Systems: Rethinking Dynamics in Military Decision-Making

by Anna Rosalie Greipl | Aug 20, 2025


Editors’ note: This post is part of a series that features presentations at this year’s 17th International Conference on Cyber Conflict (CyCon) in Tallinn, Estonia. Its subject will be explored further as part of a chapter in the forthcoming book International Law and Artificial Intelligence in Armed Conflict: The AI-Cyber Interplay. Kubo Mačák’s introductory post is available here.

International Humanitarian Law (IHL) builds on a delicate yet essential balance between military necessity and humanitarian imperatives. This balance is increasingly under pressure, marked by a concerning trend whereby IHL, originally intended for protection, is instead invoked to justify destruction. Recent technological developments risk giving this trend a new guise of protection: Artificial Intelligence Decision Support Systems (AI DSS), marketed as “tools” for greater accuracy and speed in military decision-making, including decisions on the use of force.

Current legal discussions rightly interrogate the risks of AI DSS in military decision-making. I suggest, however, that these challenges should serve as a catalyst to critically reevaluate the role of humans in legally relevant military decision-making. Such a shift in approach would restore the balance struck by IHL and ensure that, as we progress, we do not abandon its protective core.

The Ideal Decision-Maker within IHL

The rules and principles of IHL primarily address States. Still, some provisions imply the presence of a human responsible for ensuring that means and methods of warfare comply with IHL norms. One of the few evident examples of such a presence can be found in Article 57 of Additional Protocol I, particularly in its reference to “those who plan or decide upon an attack.” Though rarely made explicit in IHL itself, the assumption that legally significant decisions are ultimately made by human actors is widely held among its expert community.

Importantly, this human decision-maker is not just any “human.” As my doctoral research explores, this figure is shaped by numerous assumptions. First, it is widely held that legal decisions are based on a rational decision-making process, suggesting that the interpretation of law requires an objective and value-neutral reasoning process to achieve an optimal outcome. IHL thus envisions a normative ideal of the human, devoid of emotions and personal biases, as being uniquely suited to make decisions in the context of armed conflict.

Second, this model of reasoning is closely tied to the belief that humans are capable of exercising control over technology. IHL presumes that compliance hinges on a level of consciousness whereby humans remain sufficiently aware of—or in “control” of—the output of the technological systems in use.

Together, these assumptions paint a specific and idealised picture of the human decision-maker within IHL. Given the emerging role of the human-AI DSS relationship in critical decision-making, this vision demands closer examination.

AI DSS: A “Tool” for Scientific Truth and Objectivity

Importantly, this dominant vision of IHL’s ideal decision-maker aligns with claims made by powerful AI scientists and business leaders. They argue that algorithmic decision-making, which relies on methods such as deduction, induction, and analogy, enhances legal certainty and facilitates neutral application by overcoming human bias and error.

While recognising that current AI systems have limitations, many of these stakeholders contend that technological neutrality can be preserved. Specifically, they maintain that issues with AI DSS can be addressed through technological solutions. For example, in addressing concerns about human operators’ limited understanding of and trust in AI outputs, the Explainable AI programme of the Defense Advanced Research Projects Agency (DARPA) claims that

[n]ew machine-learning systems will have the ability to explain their rationale, characterise their strengths and weaknesses, and convey an understanding of how they will behave in the future.

It follows that as AI DSS become more refined, they are expected to outperform humans across an increasing range of decision-making tasks. This depiction of AI DSS capabilities then implies that less human judgment will be needed, including in critical tasks such as targeting decisions. Unsurprisingly, military organisations appear eager to adopt these narratives. They tend to embrace this idealised image of AI DSS as entities that embody scientific neutrality in legally pertinent decisions, thereby liberating military decisions from human bias.

It is critical to interrogate whether this depiction of technological capability is accurate. If AI DSS are expected to provide the objectivity and neutrality to which IHL seems to aspire, what role remains for humans in legally relevant decision-making processes? Or does the normative pursuit of a strict binary between human and technology, together with the ideal of rationality, ultimately risk undermining the protective essence of IHL?

Towards a Co-Constitutive Approach

My doctoral research suggests that a more reality-sensitive understanding of the human-AI DSS relationship can help us address these questions. We must first acknowledge that these systems do not exist in isolation. They are shaped by human choices, whether in prioritising certain technologies as more “useful” or “profitable” than others, or in the countless assumptions embedded by developers as they translate data into action. The upshot: every AI DSS reflects human values, priorities, and design decisions.

Yet this relationship is not one-sided. AI DSS also shape human behaviour, often subtly. These systems can frame decisions, influence trust, and steer action, frequently beyond one’s conscious awareness. Whether we follow a smartwatch’s rest suggestion, trust Google Maps’ directions, or rely on Netflix’s “top picks,” we all relate daily to such systems that affect our behaviour in predetermined ways.

From this mutually constitutive relationship, two crucial insights emerge. First, there is no neutral component in the human-AI DSS relationship: neither technology nor its users exist outside the social context in which they operate. Second, this mutual shaping invites us to reevaluate our conception of the human-technology relationship within IHL. Rather than viewing the relationship in terms of “control” at a fixed point, we should understand it as distributed and dynamic, extending across the entire lifecycle of a system, from the choice to invest in one system rather than another, through development and deployment, to post-use evaluation.

This raises an important question: How can these insights contribute to legal discussions on the role of humans operating alongside AI DSS in legally relevant military decision-making?

The Human-AI DSS Relationship: An Opportunity

While my PhD explores the various opportunities offered by this shift in approach towards the human-AI relationship within IHL—and these will be further elaborated in the context of AI DSS in military decision-making in an upcoming book chapter—I want to highlight a frequently overlooked aspect, namely, how reliance on AI DSS may subtly, yet significantly, reshape the contours of IHL itself.

Recognising the mutual influence between humans and AI DSS means we cannot downplay legal concerns about AI DSS outputs. That these outputs do not directly “classify” an individual’s status under IHL does not mean their impact is minimal; far from it. In practice, AI DSS outputs are likely to affect, even if indirectly, how military decision-makers categorise and assess situations under IHL.

One relevant factor is that users may overlook how the probabilistic logic these systems apply to decision-making tasks, such as prediction, differs significantly from human normative reasoning. Another factor relates to the assumptions embedded in those systems. Consider, for instance, an AI DSS that uses remote biometrics to track gait or behaviour in order to identify potential enemy threats. To uphold its promise of accuracy and of enhancing compliance with IHL, such a system must operate with accurate assumptions about what constitutes a “threat” and who counts as part of the “civilian population.” Unfortunately, these systems frequently embed ableist assumptions, overlooking the diversity of human bodies and behaviours present in armed conflicts.
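To make this difference concrete, the sketch below contrasts a threshold-based probabilistic classification with IHL’s normative presumption of civilian status in case of doubt (Additional Protocol I, Article 50(1)). It is a minimal, purely hypothetical illustration: every feature, weight, and threshold is an assumption invented for this post, not drawn from any actual or described AI DSS.

```python
# Hypothetical illustration only: a toy "threat" score reduced to a threshold
# decision. All features, weights, and the threshold are invented assumptions,
# not taken from any real system.

from dataclasses import dataclass


@dataclass
class GaitObservation:
    stride_regularity: float  # 0.0-1.0 similarity to gait patterns in the training data
    carries_object: bool


def threat_score(obs: GaitObservation) -> float:
    """Toy scoring: penalises gait that looks unlike the training population."""
    score = 0.2
    if obs.stride_regularity < 0.5:  # atypical gait (assistive device, injury, ...)
        score += 0.5                 # raises the score, though legally irrelevant
    if obs.carries_object:
        score += 0.2
    return min(score, 1.0)


def classify(obs: GaitObservation, threshold: float = 0.6) -> str:
    # Probabilistic logic: whatever crosses the threshold is flagged. There is
    # no equivalent of IHL's normative rule that doubt is resolved in favour of
    # civilian status (AP I, Art. 50(1)).
    return "flag as potential threat" if threat_score(obs) >= threshold else "no flag"


# A civilian who uses a crutch walks "irregularly" relative to the training data
# and is, of course, carrying an object:
print(classify(GaitObservation(stride_regularity=0.3, carries_object=True)))
# -> flag as potential threat
```

The point is not the arithmetic but the structure: the output is determined entirely by how “normal” was defined at design time, long before any human decision-maker engages with it.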

As investigated here, a civilian person with a disability—who walks differently, uses assistive devices, or behaves in ways not represented in training data—risks being misidentified as a potential “threat.” When the AI DSS output aligns closely with human decision-makers’ own views or assumptions, such misclassifications are likely to go unnoticed, resulting in significant civilian harm and potentially amounting to violations of IHL. This risk is amplified by a well-documented human tendency toward automation bias, where individuals tend to uncritically trust technology, especially in high-pressure situations, such as targeting decisions.

Safeguards, including time and other restrictions that require human decision-makers to critically engage with AI DSS output, may mitigate such risks, but are unlikely to be sufficient.

The preservation of IHL’s protective aims demands safeguards that address how assumptions, biases, and other embedded logics permeate the entire human–AI lifecycle, including the application of IHL.

This is precisely the point where recognising the limits of IHL’s idealised notion of the human decision-maker—one who is perfectly rational and in control of technological “tools”—becomes crucial. So long as technology companies, driven by vested commercial interests, play a central role in developing these technologies, a risk will remain that protection, IHL’s sine qua non, is undermined by systems presented as superior decision-makers under a guise of accuracy and objectivity. The danger is that legal reasoning may quietly shift into an algorithmically driven logic focused on optimisation and speed, without human users noticing. As a result, our understanding of IHL and its foundations may become increasingly misaligned with its application, all while we remain unaware of this divergence.

Concluding Thoughts

A renewed approach to the human-technology relationship within IHL is indispensable to shaping its culture of compliance. Instead of permitting industry interests and technological optimism to dominate that culture, this perspective encourages greater faith that we (humans) possess everything needed to make it a lived reality. Central to this perspective is a reflection on how AI DSS should relate to us and how we, in turn, should engage with these systems to foster a legal culture apt to respond to the evolving reality of military decision-making.

As explored elsewhere, I believe AI DSS have the potential to significantly enhance military decision-making and the protection of civilians without compromising operational effectiveness. Much of this potential lies in decisions taken well beyond the tactical level and should be driven by an ambition to upskill human decision-making capabilities. Research is ongoing into how non-invasive forms of AI DSS can help humans manage their emotions more effectively in decision-making. These systems aim to increase awareness and equip individuals with ways to take ownership of their emotions. This is just one example of the many ways AI DSS can assist—rather than displace—humans in making emotionally complex IHL decisions in extreme situations.

In today’s context, where technologies are not merely “tools” for task execution but can fundamentally alter how legally relevant military decisions are made, such reflections need to form the foundation of any resilient culture of compliance with IHL.

***

Anna Rosalie Greipl is a Researcher at the Academy of International Humanitarian Law and Human Rights (Geneva Academy).

The views expressed are those of the author, and do not necessarily reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense. 

Articles of War is a forum for professionals to share opinions and cultivate ideas. Articles of War does not screen articles to fit a particular editorial agenda, nor endorse or advocate material that is published. Authorship does not indicate affiliation with Articles of War, the Lieber Institute, or the United States Military Academy West Point.


Photo credit: U.S. Cyber Command, Josef Cole