Future of Warfare and Law Series – Addressing Uncertainty in the Use of Autonomous Weapons Systems
Editors’ note: This post is part of a series featuring topics discussed during the Third Annual Future of Warfare and the Law Symposium. LTC Christina Colclough’s introductory post is available here.
The Future of Warfare and the Law Symposium, which took place in May 2025, brought together lawyers and technical experts to discuss some of the most pressing challenges concerning the future of warfare, including military uses of artificial intelligence (AI). This post addresses one of the topics that generated significant discussion in the workshop: uncertainty in targeting using AI-enabled autonomous weapons systems (AWS). Consistent with the cross-disciplinary approach of the workshop, this post is intended to help weapons designers and developers (or the officials who set weapons’ performance parameters for designers and developers) better understand some of the legal issues that might arise when AWS are fielded.
Uncertainty and AWS
There is no generally accepted definition of AWS, but they are commonly understood as weapons systems that, once activated, can select and engage targets without further intervention by a human operator. While AWS may feature different degrees of autonomy, they generally use a combination of sensors and artificial intelligence to verify and engage targets or target sets that have been pre-selected by a human operator. For example, an AWS may be designed to search for and destroy enemy tanks. It will scan for technical signatures (e.g., heat, electromagnetic, acoustic, or visual) that match the unique profile of a tank and then autonomously attack the target. AWS can provide significant military advantages due to their speed in processing large amounts of data, their ability to operate in swarms and/or in communications-degraded environments, and their ability to identify and destroy targets whose location may not be known to the human operator at the time the AWS is activated.
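For readers who want a concrete sense of what signature-based verification involves, the following minimal sketch (in Python) illustrates the general idea of fusing several sensor match scores against a pre-loaded target profile and engaging only above a confidence threshold. The sensor names, weights, and threshold are invented for illustration and do not describe any actual system.

```python
# Hypothetical illustration of signature-based target verification.
# Sensor names, weights, and the 0.90 threshold are invented for this sketch;
# they do not describe any fielded weapons system.

TANK_PROFILE = {
    "thermal": 0.4,   # relative weight given to the heat signature match
    "acoustic": 0.2,  # relative weight given to the acoustic signature match
    "visual": 0.4,    # relative weight given to the visual silhouette match
}

ENGAGEMENT_THRESHOLD = 0.90  # minimum fused confidence before engagement


def fused_confidence(sensor_scores: dict[str, float]) -> float:
    """Combine per-sensor match scores (each 0.0-1.0) into one weighted confidence."""
    return sum(TANK_PROFILE[s] * sensor_scores.get(s, 0.0) for s in TANK_PROFILE)


def authorize_engagement(sensor_scores: dict[str, float]) -> bool:
    """Engage only if the fused confidence clears the pre-set threshold."""
    return fused_confidence(sensor_scores) >= ENGAGEMENT_THRESHOLD


# Example: strong thermal and visual matches, weaker acoustic match.
print(authorize_engagement({"thermal": 0.95, "acoustic": 0.7, "visual": 0.97}))
```

In practice, an AWS would use far more sophisticated perception models, but the legal questions discussed below arise regardless of how the matching is implemented.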
Despite these military benefits, the use of AWS in targeting poses legal challenges. This post focuses on one of these legal challenges. Specifically, AWS introduce an element of uncertainty into targeting because one or more critical functions in the targeting cycle—such as the selection, verification, tracking, and engagement of targets—are delegated to machines. The human operator will, at least for the foreseeable future, still set parameters on the use of force by directing the AWS to target a particular type of military objective (e.g., tanks or armored vehicles). But the human operator will often not know which precise object within a target set will be targeted (or whether the AWS will correctly identify the intended target), where it will be engaged, when force will be deployed, or, in some cases, why the AWS engaged the target.
Uncertainty and the Law
This uncertainty raises potential challenges for compliance with the law of armed conflict (LOAC). The rules and principles of LOAC require combatants to make context-specific judgments and determinations when conducting attacks. These obligations apply to humans and cannot be delegated to machines. The following LOAC rules are codified in Additional Protocol I (AP I) to the Geneva Conventions. While the United States is not party to AP I, the rules of distinction and proportionality are considered binding as a matter of customary international law.
The principle of distinction requires combatants to distinguish between combatants and military objectives, on the one hand, and civilians and civilian objects, on the other, and to direct their attacks only against the former (AP I, art. 48). An object constitutes a lawful “military objective” only if it is expected to make an “effective contribution to military action” and its partial or total destruction is anticipated to confer a “definite military advantage” in light of “the circumstances ruling at the time” (AP I, art. 52(2)). While some objects, such as tanks, are military objectives by their nature, other objects may be considered military objectives only in certain circumstances, based on their “location, purpose, or use.” For these objects, combatants need some knowledge of the circumstances in which force will be deployed to assess whether the object is making an effective contribution to the enemy’s military action and whether its destruction would confer the requisite military advantage.
The principle of proportionality similarly requires combatants to make context-based evaluative judgments. Article 51(5)(b) of AP I prohibits attacks that may be expected to cause incidental harm to civilians or civilian objects that “would be excessive in relation to the concrete and direct military advantage anticipated.” To conduct this balancing of interests, combatants generally need some knowledge of the circumstances of the attack so they can calculate both the likely collateral damage and the military advantage expected from the attack.
Layers of Uncertainty in AWS Targeting
The inherent uncertainty in the use of AWS can complicate these determinations required by LOAC. As noted above, AWS may create several different layers of uncertainty in targeting. The first layer of uncertainty concerns the reliability of the AWS in correctly verifying the intended target (for example, correctly identifying an enemy tank as a target rather than a school bus). As with any data-driven AI system, an AWS trained on synthetic data may perform differently than expected in new and dynamic operational environments. AWS that continue to learn while activated generate an additional layer of uncertainty, as the AWS may modify its behavior without human input. This uncertainty concerning the correct identification of intended targets raises potential legal concerns under Articles 50 and 52 of AP I, which prohibit attacks where there is “doubt” as to the legal status of the individual or object. It may also implicate the obligation under Article 57(2) to “do everything feasible to verify that the objectives to be attacked” are neither civilians nor civilian objects.
The second layer of uncertainty concerns the timing and location of an attack. An autonomous loitering munition, for example, may not strike its target until hours or days after activation and potentially miles away from where it was initially deployed. This uncertainty can raise challenges for both distinction and proportionality, which is why the International Committee of the Red Cross has proposed legal “limits on the duration, geographic scope and scale of use” of AWS. As noted above, the test for identifying a “military objective” requires an assessment of whether the destruction of an object would confer a definite military advantage “in the circumstances ruling at the time.” The uncertainty as to when an AWS might engage its targets presents challenges for assessing what the circumstances will be at the time of attack, especially if there is a significant lapse of time between the moment of deployment and the moment of kinetic action. While military objectives by “nature” are unlikely to change status, so-called “dual use” objects (objects like bridges or cell phone towers that have civilian uses but may be considered military objectives based on their location, purpose, or use) may cease to be military objectives if they no longer make an effective contribution to the enemy’s military action due to a change in circumstances.
Perhaps more importantly, uncertainty as to the timing and location of an attack raises concerns for compliance with the principle of proportionality, codified in Article 51 of AP I. If a commander does not know where or when an AWS will strike its target, she may not be able to assess the likelihood or degree of collateral damage. Consider an AWS programmed to loiter for 24 hours over a 100-square-mile grid with the objective of locating and targeting enemy tanks. If the AWS engages a tank in an open field where hostilities are ongoing, it is unlikely any civilians would be present. However, if the AWS engages a tank while it is driving through an urban area, the collateral damage concerns could be much more significant. If the operator is not monitoring the AWS when it engages the target, he or she may not be aware of potential collateral damage.
Mitigating Uncertainty in the Use of AWS
The inherent uncertainty in the use of AWS, however, does not necessarily render their use unlawful. Uncertainty in warfare is not unique to AWS. The use of long-range conventional weapons, for example, entails some of the uncertainty described above, as circumstances can change during the time between weapon launch and detonation. Moreover, LOAC does not require absolute certainty in targeting determinations. Combatants must operate in the “fog of war.” The legal judgments combatants must make in complying with the principles of distinction and proportionality are based on the information reasonably available to them at the time, which may be incomplete or even faulty. LOAC does not impose strict liability on combatants for any harm to civilians or civilian objects in armed conflict. Rather, LOAC requires that combatants make reasonable and good faith assessments based on the information they have, or reasonably should have, at the time of the attack.
The use of AWS similarly does not require absolute certainty in targeting decisions. In some instances, uncertainty will not materially affect the legality of the use of a weapon system as the commander can still make informed decisions regarding distinction and proportionality based on the circumstances in which it is deployed. In other contexts, uncertainty may pose significant challenges for LOAC compliance. Whether the inherent uncertainty created by autonomous technology will render an attack unlawful will depend on several factors, including the capabilities of the AWS, the operator’s use of the weapon, and the operational environment in which it is deployed.
To illustrate this argument, consider a hypothetical unmanned aerial vehicle that is programmed to autonomously detect, track, and engage enemy armored vehicles. We might call this weapon the autonomous counter-armored vehicle system or “ACAVS.” The ACAVS is designed to loiter over a geographically determined location for some defined period until it detects an enemy armored vehicle. It then dive-bombs into the vehicle and detonates its munition. Testing reveals that the ACAVS identifies enemy armored vehicles accurately and reliably, with 99 percent accuracy. Testing also shows, however, that it mistakenly identifies school buses as armored vehicles ten percent of the time. In other words, it positively identifies actual armored vehicles nearly all the time, but it will mistakenly identify and engage one of every ten school buses it detects. This degree of uncertainty, at first blush, seems legally problematic. The possibility that the ACAVS might strike a school bus filled with children seems like an unacceptable legal and moral risk, given the relatively modest military advantage of destroying an enemy armored vehicle.
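To see why these error rates are troubling, a back-of-the-envelope calculation helps. The sketch below (in Python) applies the hypothetical’s figures, a 99 percent true-positive rate against armored vehicles and a 10 percent false-positive rate against school buses, to assumed encounter counts during a single deployment; the encounter counts are invented purely for illustration.

```python
# Hypothetical back-of-the-envelope risk estimate using the error rates in the text.
# The encounter counts below are assumptions made for illustration only.

TRUE_POSITIVE_RATE = 0.99       # armored vehicles correctly identified (from the hypothetical)
BUS_FALSE_POSITIVE_RATE = 0.10  # school buses misidentified as armored vehicles

armored_vehicles_detected = 20  # assumed encounters during one deployment
school_buses_detected = 30      # assumed encounters during one deployment

expected_valid_strikes = TRUE_POSITIVE_RATE * armored_vehicles_detected
expected_bus_strikes = BUS_FALSE_POSITIVE_RATE * school_buses_detected

print(f"Expected strikes on armored vehicles: {expected_valid_strikes:.1f}")
print(f"Expected strikes on school buses:     {expected_bus_strikes:.1f}")
# With these assumptions, roughly 3 of every ~23 strikes would hit a school bus,
# even though the system is "99 percent accurate" against its intended targets.
```

The arithmetic shows that a headline accuracy figure, standing alone, says little about legal risk; what matters is how often the weapon will encounter protected objects in its operating area.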
Weapons developers can reduce this uncertainty risk by giving operators the ability to control and adjust the parameters of the ACAVS deployment. This is why U.S. Department of Defense Directive 3000.09, “Autonomy in Weapons Systems,” requires that all AWS “be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” By exercising human judgment to control the temporal or geographic parameters of the ACAVS, a commander can significantly reduce the legally problematic uncertainty in the weapon system.
Deploying the ACAVS at night, for example, would significantly mitigate any concerns about collateral damage and thus proportionality. An ACAVS deployed at night could still incorrectly target a school bus, but there is little chance that children would be on the bus. To further reduce legal risk, the ACAVS operator could seek to confirm the location of schools within the area of operation and program the ACAVS not to fly within 500 feet of those locations. As school buses are generally parked in school parking lots at night, this would drastically reduce the risk of mistakenly targeting a school bus. Finally, operators could take steps to limit any collateral damage resulting from a strike against a correctly identified enemy armored vehicle. If operating in an urban environment, where civilians could be in the proximity of armored vehicles, the human operator could adjust the size of the ACAVS munition to limit the blast radius and reduce the risk of collateral damage resulting from the strike. Once these mitigation measures are implemented, the operator might still not know when or where force will be deployed, but that uncertainty is less problematic for LOAC compliance because he has confidence the ACAVS will not strike civilian objects or cause excessive collateral damage.
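If the ACAVS were designed to expose its deployment parameters, the mitigation measures just described could be captured as operator-set constraints before activation. The sketch below (in Python) is a hypothetical illustration of such constraints, a night-only loiter window, exclusion zones around surveyed schools, and a reduced munition yield; every field name, value, and check is an assumption made for this example, not a feature of any real system.

```python
# Hypothetical operator-set deployment constraints for the ACAVS scenario.
# Field names, values, and the check itself are illustrative assumptions only.

from dataclasses import dataclass, field


@dataclass
class DeploymentConstraints:
    loiter_start_hour: int = 22          # local time: begin loitering at 10 p.m.
    loiter_end_hour: int = 5             # local time: cease loitering at 5 a.m.
    search_area_sq_miles: float = 100.0  # geographic box the weapon may search
    school_exclusion_radius_ft: float = 500.0  # no flight within this distance of known schools
    school_locations: list[tuple[float, float]] = field(default_factory=list)  # surveyed coordinates
    max_warhead_yield_kg: float = 2.0    # reduced charge to limit blast radius in urban terrain


def engagement_permitted(constraints: DeploymentConstraints,
                         local_hour: int,
                         distance_to_nearest_school_ft: float) -> bool:
    """Allow engagement only inside the night window and outside school exclusion zones."""
    in_night_window = (local_hour >= constraints.loiter_start_hour
                       or local_hour < constraints.loiter_end_hour)
    clear_of_schools = distance_to_nearest_school_ft > constraints.school_exclusion_radius_ft
    return in_night_window and clear_of_schools


# Example: 2 a.m., 1,200 feet from the nearest surveyed school.
print(engagement_permitted(DeploymentConstraints(), local_hour=2, distance_to_nearest_school_ft=1200.0))
```

The design point is that the weapon’s autonomy operates only inside the box the operator has drawn before activation.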
Concluding Thoughts
These risk mitigation options would only be feasible if the ACAVS is designed to have such capabilities and operators receive appropriate training on the weapon system. It is thus important for weapons developers to be aware of the legal challenges presented by AWS and to take steps at the design stage to allow human operators to exercise appropriate levels of human judgment during deployment. Human judgment is critical for both effective targeting and compliance with the fundamental LOAC rules of distinction, proportionality, and feasible precautions.
The challenge for weapons developers is determining how and when human judgment will be needed and designing weapons systems that allow for this judgment to be exercised. Ensuring human judgment or control over the use of force will not eliminate all concerns about the use of AWS, but it is critical for ensuring that these weapons can be used lawfully.
***
Charles Trumbull is the Legal Adviser for the U.S. Mission to the United Nations and Other International Organizations in Geneva. This post is written in the author’s personal capacity and does not necessarily represent the views of the U.S. Mission, the Department of State, or the United States Government.
The views expressed are those of the author, and do not necessarily reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.
Articles of War is a forum for professionals to share opinions and cultivate ideas. Articles of War does not screen articles to fit a particular editorial agenda, nor endorse or advocate material that is published. Authorship does not indicate affiliation with Articles of War, the Lieber Institute, or the United States Military Academy West Point.
Photo credit: U.S. Air Force
