In Honor of Françoise Hampson – Artificial Intelligence in Military Detention

by Jelena Pejic | Nov 3, 2025


Editors’ note: This post is part of a series to honor Professor Françoise Hampson, who passed away on April 18, 2025. The posts touch on a few issues—in this case detention—that Professor Hampson worked on and aim to pay tribute to the significant contribution her scholarship made to our understanding of international law. This contribution is based on an article that will be included in International Law and Artificial Intelligence in Armed Conflict, edited by Kubo Mačák, to be published in 2026 as part of the Lieber Studies Series with Oxford University Press.

The use of artificial intelligence (AI) is increasingly being reported in various areas of human activity, including, of course, military operations. The technology is all the rage in discussions among governments, experts, and scholars on the conduct of hostilities. However, almost no attention is being paid, at least publicly, to the (possible) use of AI in military detention.

The lack of interest in developing and applying AI systems for detention confirms what is already known to those working on these issues in practice: that, unfortunately, detention operations are often an afterthought compared to the focus of military planners and operators on targeting. Currently, tech companies also have little incentive to devote themselves to the possible uses of AI in detention given that the financial returns would be meager compared to States’ spending on weapons and weapon systems in the existing geopolitical environment.

While concerning, the ongoing attention deficit regarding AI in military detention may, it is submitted, not necessarily be a bad thing. It should, ideally, allow more time for an in-depth examination by States, tech companies, military decision-makers, and other stakeholders of whether, and if so, how and when, AI should be employed in dealing with persons deprived of their liberty in armed conflict.

This post provides an overview of just two possible uses of AI in military detention, based mainly on technological developments from the civilian sector. One of the tools is already being implemented, while the other could potentially be deployed in armed conflict detention. The examples given should not be taken as an endorsement by this author, or as implying lawfulness, which is not the subject of this contribution.

The Current and Possible Future Use of AI in Military Detention

Biometrics

Apart from biometrics, there is hardly any public record of the military use of AI tools in armed conflict detention as of the time of writing. Modern militaries are increasingly using biometric tools to verify the identities of captured persons and track detainees. These technologies (from fingerprint and iris scans to gait and facial recognition) allow forces to compile digital dossiers of individuals and quickly confirm if a detainee is a known threat or has a prior record.

For example, during the U.S. war in Afghanistan, coalition forces collected fingerprints, iris scans, and facial photos of almost nine million locals to achieve “identity dominance.” These databases were used to screen detainees against watchlists and link them to evidence of insurgent activity to help commanders with detention or release decisions. All data was stored in the Department of Defense’s Automated Biometric Identification System (ABIS) to facilitate the detention and targeting of persons of interest believed to pose a threat to coalition forces or the Afghan authorities. In Iraq, the U.S. military established a biometric data program that enrolled roughly two million Iraqis, an unprecedented scale of data mining in an occupation.

Other militaries have adopted or expanded similar measures. Israel deployed extensive facial recognition in the occupied Palestinian territories to control movement and identify suspects for arrest and detention prior to October 7, 2023, and has expanded its use of AI programs since.

Some de facto authorities have also exploited biometric tech, seized or repurposed, for capture and detention purposes. As has been reported, after the takeover of Afghanistan in 2021 the Taliban gained control of the U.S.-built biometric databases and devices containing iris scans, fingerprints, and the personal details of vast numbers of Afghans. Reports soon emerged of Taliban fighters using these systems to identify and detain (or execute) former security personnel and opponents.

Biometric AI offers undoubted operational advantages: it can verify identities in seconds; link detainees to prior activities or aliases; and reduce reliance on given and family names, which may in some cases be false or repetitive. It can also help parties register prisoner of war (POW) identities under the Third Geneva Convention (GC III), and those of civilians subject to security measures, including internment, under the Fourth Geneva Convention (GC IV). This could in turn facilitate the rapid transmission of accurate information to the National Information Bureaux and the International Committee of the Red Cross (ICRC)’s Central Tracing Agency pursuant to obligations under GC III (arts. 122-125) and GC IV (arts. 136-141). However, some of the legal and ethical implications are also significant; below is an example.

As Professor Marten Zwanenburg has recalled, GC III (art. 17) limits the questioning of POWs to obtaining certain information (name, rank, serial number, etc.), while prohibiting coercion to obtain more. He notes that taking biometric data, like fingerprints or DNA, from a POW could be argued to be an unlawful sanction or a coercive method. The ICRC’s 2020 updated Commentary on GC III implies that collecting DNA from all POWs is not justified absent a specific purpose, such as identifying an incapacitated soldier. As described by Zwanenburg, some experts believe that taking or compelling a POW to provide biometrics violates the spirit of Article 17, while others, including him, contend that it is a permissible identification measure so long as it is not punitive and is carried out humanely. State practice is evolving: for instance, the U.S. and Norwegian military manuals explicitly authorize or require the biometric enrolment of POWs for identification, indicating that this does not contravene GC III.

Similar concerns may be raised about taking or compelling the collection of biometric data from civilians. GC IV (art. 27) requires humane treatment and mandates respect for the person of civilians, specifying that no physical or moral coercion may be exercised against them, especially to obtain information from them or third parties (art. 31). Different views may also exist on whether the taking of certain types of biometric data, DNA for example, would contravene that prohibition.

Predictive Algorithms

One of the most discussed potential uses of AI in armed conflict detention is reliance on predictive algorithms to aid decisions on who should be detained and for how long. POW detention is likely to be somewhat of an exception to the application of this technology, as captured combatants may, for the most part, be interned until the end of active hostilities based on their status alone, without an individualized assessment of security risk. However, civilian internment in international armed conflicts (IACs) and, it is submitted, in non-international armed conflicts (NIACs), must be reviewed by a court or administrative board that applies basic principles of procedural fairness. Under international humanitarian law (IHL), civilian conduct must be evaluated on a case-by-case basis to determine whether a civilian represents a threat warranting internment, the severest measure of control allowed by this body of rules.

Predictive algorithms are risk assessment tools, already used in some civilian criminal justice systems for bail, sentencing, and parole decisions, that could be “transposed” to an armed conflict setting. A military could employ machine learning models to analyze data on individuals, such as past activity, associations, background, social media, and intelligence reports, to produce a risk score indicating the likelihood that a person poses a security threat. This could inform decisions about whether to intern, continue internment, or effect release.
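
Purely by way of illustration, and not as a description of any fielded system, a minimal sketch of such a risk-scoring model might look like the following. Every feature name, weight, and number in it is a hypothetical assumption made for this example.

```python
import math
from dataclasses import dataclass

# HYPOTHETICAL illustration only: the features, weights, and bias below are
# invented for this sketch and do not correspond to any known fielded system.

@dataclass
class DetaineeProfile:
    flagged_associations: int      # associations flagged in intelligence reporting
    prior_security_incidents: int  # alleged prior incidents attributed to the person
    corroborating_reports: int     # independent reports alleging hostile activity

# Invented weights; in practice these would be learned from data that, as noted
# in the text, is likely to be sparse, context-bound, and possibly biased.
WEIGHTS = {
    "flagged_associations": 0.8,
    "prior_security_incidents": 1.2,
    "corroborating_reports": 0.6,
}
BIAS = -3.0

def risk_score(p: DetaineeProfile) -> float:
    """Return a 0-1 'security risk' score via a simple logistic model."""
    z = BIAS
    z += WEIGHTS["flagged_associations"] * p.flagged_associations
    z += WEIGHTS["prior_security_incidents"] * p.prior_security_incidents
    z += WEIGHTS["corroborating_reports"] * p.corroborating_reports
    return 1.0 / (1.0 + math.exp(-z))

if __name__ == "__main__":
    example = DetaineeProfile(flagged_associations=2,
                              prior_security_incidents=0,
                              corroborating_reports=1)
    print(f"advisory risk score: {risk_score(example):.2f}")
```

Even in this toy form, the output is only as meaningful as the assumed features and weights behind it, which is precisely where the concerns discussed below arise.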

There are no publicly confirmed cases of fully algorithm-driven detention decisions to date, but transposing domestic algorithmic tools to armed conflict detention would raise serious concerns. Experts and scholars have, inter alia, cautioned against a “portability trap,” in which an algorithm designed for one social context is applied in another, producing misleading or harmful results. Unlike persons suspected or convicted of a criminal offense in a national setting, armed conflict detainees will most often have no prior police or judicial records available to the detaining authority, meaning that an algorithm must rely on intelligence and behavioral proxies that may be far less reliable than such records. It should be recalled that internment (the most common form of detention in armed conflict) may be defined as non-criminal detention for security reasons. The risk factors that algorithms in a domestic criminal justice or penitentiary system rely on, such as prior conviction(s), financial situation, family history, and behavior in prison, are not likely to translate well to a conflict zone, where the cultural context is different and available data is sparse or biased. In other words, the risk factors for criminal detention and internment are not fungible.

Moreover, any AI system influencing detention decisions in armed conflict must incorporate and be able to apply the substantive IHL standards governing civilian internment, such as “imperative reasons of security” or “absolutely necessary”; the models need to be conservative and err on the side of caution by not recommending detention unless very high confidence exists. Militaries would also have to determine acceptable error rates (false positives vs. false negatives), weighing the risk of releasing an allegedly dangerous person against the “mistake” of holding someone who represents no security threat. It is submitted that machines must in any case never be employed to make sole or final decisions on the deprivation of human liberty (or on the taking of life) and that human control over AI outputs must be exercised for reasons of compliance with the law and ethical considerations.
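
To make the point about error rates and conservatism concrete, a notional decision rule, again entirely hypothetical, might weight the two kinds of error explicitly and default to human assessment whenever confidence is low. The thresholds and cost values below are assumptions, and the output is framed as advisory input to a human review process, never a final decision.

```python
# HYPOTHETICAL sketch: the thresholds and error costs are illustrative
# assumptions, not doctrine. Choosing them is a legal and policy decision,
# and the output is advisory only, never a sole or final detention decision.

DETENTION_THRESHOLD = 0.95  # recommend internment review only at very high confidence
RELEASE_THRESHOLD = 0.10    # below this, flag the case for release consideration

def advisory_recommendation(score: float) -> str:
    """Map a 0-1 risk score to an advisory label for a human review body."""
    if score >= DETENTION_THRESHOLD:
        return "refer to review body: possible imperative reasons of security"
    if score <= RELEASE_THRESHOLD:
        return "refer to review body: consider release"
    # Anything in between stays entirely with humans; the model abstains.
    return "insufficient confidence: human assessment required"

def expected_error_cost(score: float, cost_false_positive: float,
                        cost_false_negative: float) -> dict:
    """Expected cost of each course of action, given an (assumed) risk score."""
    return {
        # Interning a person who in fact poses no threat (false positive).
        "intern": (1 - score) * cost_false_positive,
        # Releasing a person who does pose a threat (false negative).
        "release": score * cost_false_negative,
    }
```

How the two costs are set, and how high the confidence bar is placed, are precisely the value judgments that, as noted above, cannot be delegated to the model itself.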

The application of detention algorithms would also need to ensure basic procedural safeguards. An internment review process must, as already noted, be undertaken on a case-by-case basis and be non-discriminatory. The detainee, or their legal representative where one is feasible, must be informed of the reasons for detention and be able to challenge it or, as most often happens in the practice of armed conflict internment, have it promptly reviewed by the detaining authority (with the “black box” problem likely to arise in either scenario). Such review, initial and then periodic, must be carried out by an independent and impartial body, with the detainee personally attending the proceedings. Reliance on AI processes should not be allowed to circumvent the application of these and other procedural safeguards in internment.

To navigate the issues outlined above, experts have stressed that strict governance frameworks need to be established before predictive detention algorithms are deployed. Militaries should clearly regulate if, when, and how such algorithms are used, ensuring human oversight, transparency, and accountability at every step.

Can Humane Treatment be Embedded in AI Tools?

Any person deprived of liberty is extremely vulnerable. Being held in a bounded space without the freedom to leave means that the fulfilment of a detainee’s material needs (food, shelter, medical care, etc.), as well as their psychological well-being, depends entirely on how they are treated by the detaining authority. Ensuring proper treatment is a constant struggle in most civilian detention facilities, but it presents a particular challenge in armed conflict, where emotions run high and detainees belong to the opposing side. IHL takes account of this reality by requiring the humane treatment of detainees in both IAC and NIAC; it is an explicit general principle further reflected in myriad specific binding rules of treaty and customary law. There is no definition of humane treatment (nor should there be one) because an assessment of whether detainees are (in)humanely treated can only be contextual.

A key question in this context is what effect reliance on AI applications such as those described in the previous section (and others) is likely to have on the implementation of the requirement of humane treatment. Can algorithms be trained on a concept such as humane treatment, which is “vague by design” but is nevertheless the IHL benchmark for evaluating the lawfulness of both specific aspects of detainee treatment and of a detention regime as a whole? To this author, the answer appears doubtful and is likely to remain so in the future. The reasons have been well captured in the somewhat related context of humanitarian action more broadly:

Algorithms may simulate aspects of empathetic response, but the subtlety of trauma or grief, cultural meanings, and the silent dimensions of dignity often elude systems trained solely on textual or behavioural data. AI-generated content tools such as chatbots built on large language models (LLMs), for instance, may appear fluent, but are essentially ‘stochastic parrots’, stitching together statistically likely outputs based on correlations in training data without genuine understanding of meaning or context.

Similar challenges arise in attempts to encode IHL principles such as distinction, proportionality, and precautions in AI applications for the conduct of hostilities. As pointed out by Suresh Venkatasubramanian: “Models may be imprecise in a strict probabilistic sense, but they need precision in order to be built. And this precision is at odds with the vagueness baked into legal language.”

Conclusion

It would be premature to provide a conclusion related to the issue addressed in this contribution, for too much is still unknown. While some AI applications are being slowly introduced in civilian-sector prisons and court systems in a few countries, the use of AI in military detention, apart from biometrics, seems to be just on the cusp of emerging. As a result, neither the advantages nor the inevitable challenges that will need to be overcome to ensure humane treatment in detention are yet on the public radar, even though they should be. It would also appear that certain very broad questions on the relationship between humans and AI need to be discussed and, ideally, answered before any real progress can be made. Chief among them, in a nutshell, is whether humanity will learn to govern AI before AI governs humanity. This author hopes for the former, but who really knows?

***

Jelena Pejic is a Lieber Scholar at the Lieber Institute. She was formerly a Senior Legal Adviser in the Legal Division of the ICRC in Geneva.

The views expressed are those of the author, and do not necessarily reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense. 

Articles of War is a forum for professionals to share opinions and cultivate ideas. Articles of War does not screen articles to fit a particular editorial agenda, nor endorse or advocate material that is published.

Photo credit: George Prentzas via Unsplash