Clarity and Consequence: Autonomous Wingmen and the Rising Standard of Feasible Precautions

by Davit Khachatryan | Dec 23, 2025

The U.S. Air Force’s and Anduril’s ambitious wingman program, Fury, has already lifted off. Its designers intend Fury and comparable attritable collaborative combat aircraft (CCA) to draw fire, extend sensor range, and provide human pilots with additional seconds of clarity before they decide when and how to strike. Those seconds go to the heart of Article 57 of Additional Protocol I (AP I) to the Geneva Conventions, and its counterpart in customary international law, as articulated in the U.S. Department of Defense (DoD) Law of War Manual (§§ 5.3.3, 5.11–5.11.7).

The law of armed conflict (LOAC) presumes a decision space in which commanders and pilots can perceive and deliberate, yet warfare often denies them that luxury. Fatigue, threat saturation, and degraded communications compress judgment until “precautions in attack” become aspirational. If autonomous wingmen can absorb initial threats and stabilize the information flow, they might improve the human operator’s capacity to meet precautionary standards, reducing the fog and cognitive stress that drive tragic error.

The core LOAC principles enshrined in AP I (distinction (art. 48), proportionality (art. 51(5)(b)), and precautions in attack (art. 57)) all hinge on the attacker’s access to the information reasonably available at the time. The International Committee of the Red Cross’s Commentary on AP I and its Customary International Humanitarian Law Study (Rules 1, 14–21) explicitly frame these obligations around the limits of human perception under combat conditions. The United States, though not a party to AP I, accepts the same standard through customary law and the DoD Law of War Manual (§§ 5.3.3, 5.11), which anchors attacker obligations in the feasibility of gathering, processing, and acting upon available information.

In this sense, uncertainty performs a dual legal function. Factually, uncertainty restricts the attacker’s ability to perceive threats, correlate sensor inputs, and identify civilian risks, making LOAC compliance more difficult. Legally, however, those very limits narrow the scope of what an attacker can be expected to perceive and thus shape assessments involving questions of distinction and proportionality. When “fog” constrains what is reasonably knowable, the law calibrates expectations accordingly.

Once that fog begins to lift, however, whether through distributed sensing, attritable wingmen absorbing initial threats, or more stable data fusion, the content of these legal duties shifts. As more information becomes “reasonably available,” the obligations to verify the target, to take feasible measures to minimise civilian harm, and to cancel or suspend the attack when doubt arises (AP I, arts. 57(2)(a)(i), 57(2)(a)(ii), 57(2)(b)) become correspondingly more demanding. What once qualified as unforeseeable or unverifiable may, with enhanced sensor reach and reduced time pressure, become both foreseeable and verifiable.

In doctrinal terms, improvements in situational awareness enlarge the attacker’s “epistemic baseline,” shifting the legal assessment of both feasible precautions and permissible risk. The claim is neither celebratory nor naïve. Greater clarity and safety for pilots inevitably tighten the accountability threshold: once machines shoulder the chaos, humans have fewer excuses for misjudgment.

Fury promises a more stable, less chaotic operational decision space for human operators. These conditions most directly reshape the feasibility-based obligations of Article 57: what risks can be mitigated, what doubts can be detected, and what an operator ought to have done before the engagement. Accordingly, while this post acknowledges the doctrinal implications for distinction, its analytic focus remains on precautions in attack, where pilot safety and reduced confusion most clearly translate into a more stringent assessment of diligence and legal responsibility.

Precautions in Air Operations

States often treat precautions in attack as abstract obligations. Yet they were written with aviation in mind, where fleeting visibility, compressed timelines, and high-velocity engagements make civilian protection uniquely fragile. The provision imposes a sequence of duties, each tied to a cognitive capacity that aircrews must maintain even under extreme stress: verify the target; choose a method of attack that minimises incidental harm; assess proportionality; and cancel or suspend an attack if new information casts doubt on the legitimacy of the strike (DoD Law of War Manual, Chapter 5). Underlying these steps is the overarching concept of “constant care” (AP I, art. 57(1)).

Importantly, Article 57 of AP I does not prescribe success; it prescribes diligence. The law recognizes that pilots operate amid imperfect information. That margin of uncertainty is gradually shrinking as newer generations of aircraft fuse sensor streams and reduce the gaps that once dominated air operations. Nonetheless, in a supersonic engagement, “reasonably available” may mean a handful of seconds to verify sensor cues, correlate them with mission parameters, and decide whether doubt warrants aborting, a mental calculus that depends heavily on human cognitive load.

Modern air combat environments amplify this strain. Dense air-defence networks compress reaction times, electronic warfare degrades sensory inputs, and surprise contacts force split-second judgments. These conditions narrow the “decision space” in which belligerents can meaningfully take precautions. Much of the literature on air operations implicitly acknowledges this gap. Accordingly, the law’s ideal of deliberative precaution often exceeds what pilots, under real-world conditions, can humanly sustain (see here, here, here, and here).

This tension is where attritable autonomous wingmen become legally interesting. If Fury can stabilize or expand that decision space, it may strengthen compliance with the duty of constant care.

Attritable Wingmen Could Reduce the “Fog of War”

The defining characteristic of systems like Fury is their willingness to enter risk envelopes that human pilots must avoid. By flying ahead of manned aircraft, these wingmen probe air defence systems, trigger hostile radars, and draw missile fire that would otherwise force pilots into defensive manoeuvres. This is more than tactical convenience. The ability to remain on profile rather than reactively manoeuvring can mean the difference between a rushed strike and a careful one. The pilot who is not defending can review sensor correlations, check for civilian presence, reassess attack geometry, and, if needed, abort. In other words, less chaos means more precaution.

Enhancing Situational Awareness

Attritable wingmen carry sensor suites that complement, not merely duplicate, those of crewed fighters. Their forward placement increases standoff intelligence, surveillance, and reconnaissance reach, reveals threat movements earlier, and helps filter false positives and adversarial deception techniques. In human factors research, increased situational awareness, when delivered in a manageable form, correlates with higher target identification accuracy and lower error rates under stress.

If properly fused and communicated, this distributed sensor data can widen what precautionary obligations describe as the “information reasonably available” to attackers at the time of decision. Information superiority does not guarantee lawful conduct, but it improves the factual footing on which verification and proportionality determinations rest, replacing guesswork with a clearer and more coherent picture of the battlespace.

Cognitive Load Mitigated

Cognitive load is a silent adversary in air operations. High workload degrades judgment, narrows attention, and increases vulnerability to perceptual biases, especially under threat. A system that can accept delegated tasks (navigation, threat detection, ingress timing) allows pilots to devote their remaining cognitive resources to the legally decisive question: under these conditions, is force appropriate?

Autonomy that absorbs procedural tasks can, therefore, strengthen the human’s ability to perform the qualitative tasks the law demands. The structure of Article 57 implicitly prioritises quality over speed; attritable wingmen offer, in some scenarios, both. Surprise contacts, unplanned movements, unanticipated air defence nodes, and sudden civilian presence are where unlawful strikes most often originate (see here, here, and here). Because attritable wingmen can confront or illuminate these unexpected elements first, their presence may reduce the likelihood that pilots will fire under uncertainty.

The Accountability Paradox and Legal Pressure

The potential precautionary gains that autonomous wingmen promise do not come without legal consequences. Indeed, the very improvements that may enable pilots to better satisfy Article 57 of AP I raise the expectations placed upon them. This dynamic forms what might be called the accountability paradox: as machines absorb more physical risk and cognitive burden, the threshold of diligence expected from human operators rises accordingly.

If attritable wingmen extend the sensor horizon, filter inputs, and present a clearer battlespace picture, the human operator’s duty to act on that information expands. The pilot who previously faced chaotic threat cues now faces structured data. In legal terms, what was once unforeseeable may become reasonably foreseeable. And what was once a defensible misjudgment may increasingly appear as a failure of diligence. Thus, the very success of assistive autonomy broadens the scope of what a reasonable operator “should have known” before authorizing or continuing an attack. It narrows the lawful margins of error.

Moreover, the presence of semi-autonomous wingmen reconfigures the commander-subordinate relationship. Although machines are not legal agents, pilots who supervise an autonomous scout or decoy effectively command a system whose behaviour contributes to the attack. This supervisory posture resembles, in functional terms, the “effective control” standard that underlies doctrines of responsibility.

Even if no criminal liability attaches to machine behaviour, the expectation that humans actively supervise the wingmen’s actions introduces a heightened standard of care, which raises several questions. For example, did the pilot understand the system’s confidence levels or alerts? Did the operator override a questionable target classification? Did the commander set appropriate parameters before mission ingress? Was the system’s behaviour monitored at moments where new information clearly emerged?

If autonomy gives humans breathing room, the law will expect them to use it. Automation bias, the well-documented human tendency to trust algorithmic recommendations even when they conflict with intuition, poses acute risks in lethal operations. A pilot who receives a target cue from an attritable wingman may implicitly assume its correctness, particularly when the system has successfully absorbed prior threats. Operational success breeds psychological trust.

Attritable wingmen promise to reduce uncertainty, but they will also fail. Electronic warfare, spoofing, degraded sensors, or adversarial deception may render their feeds unreliable. Humans supervising multiple wingmen face a difficult legal challenge in detecting when information has become contaminated and acting immediately to suspend the attack. The law may judge a failure to notice degraded autonomy harshly, precisely because the system was supposed to reduce chaos, not contribute to it.

A Silver Lining

The possibility that attritable autonomous wingmen could support more rigorous precautionary practice depends entirely on the architecture surrounding them. These systems do not inherently enhance precaution; they create conditional opportunities for humans to exercise better judgment. Whether those conditions materialize in practice turns on engineering design, operator training, legal review processes, and the evidentiary infrastructure that governs accountability. Absent these supporting structures, the precautionary benefits collapse into narrative rather than law.

Classic Article 36-style reviews presume weapons with stable characteristics. Autonomous wingmen change this assumption. Their behaviour changes with software updates, sensor calibrations, and mission profiles. This requires version-based review gates, mandatory pre-deployment operational tests after each major update, red-team simulations for adversarial conditions, and “re-review triggers” tied to firmware changes, new training data, or unexpected field behaviour.
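By way of illustration, such a review gate might reduce to a simple configuration comparison. The sketch below is hypothetical: every field name and the anomaly threshold are assumptions for the example, not an actual Article 36 review schema or DoD process.

```python
from dataclasses import dataclass

# Hypothetical sketch of a version-based re-review trigger. All field
# names and the anomaly threshold are illustrative assumptions.

@dataclass
class SystemSnapshot:
    firmware_version: str
    training_data_hash: str
    sensor_calibration_id: str

@dataclass
class ReviewGate:
    last_reviewed: SystemSnapshot
    anomaly_reports: int = 0
    anomaly_threshold: int = 1  # any unexpected field behaviour forces re-review

    def requires_re_review(self, current: SystemSnapshot) -> bool:
        """True if a legally significant configuration change has occurred
        since the last review, or anomalous field behaviour was reported."""
        return (
            current.firmware_version != self.last_reviewed.firmware_version
            or current.training_data_hash != self.last_reviewed.training_data_hash
            or current.sensor_calibration_id != self.last_reviewed.sensor_calibration_id
            or self.anomaly_reports >= self.anomaly_threshold
        )
```

The point of the gate is that legal review attaches to a specific, recorded configuration, so any drift from that configuration automatically reopens the review.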

Article 57(2)(b) of AP I requires attackers to “cancel or suspend” an attack when circumstances change. Attackers can discharge this duty only if the operator can reliably override the machine. Autonomy must therefore be paired with latency guarantees for overrides under contested electromagnetic conditions, and with negative testing, in which the system intentionally misclassifies a target to check the human response.
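To make the latency requirement concrete, one could imagine a test harness along the following lines. The bound, function names, and state values are assumptions for the sketch, not measured CCA requirements.

```python
import time

OVERRIDE_LATENCY_BOUND_S = 0.5  # assumed maximum acceptable abort latency

def override_takes_effect(send_abort, weapon_state) -> bool:
    """Issue an abort and verify the system reaches a safe state within
    the assumed latency bound; a timeout fails the review gate."""
    start = time.monotonic()
    send_abort()
    while time.monotonic() - start < OVERRIDE_LATENCY_BOUND_S:
        if weapon_state() == "SAFE":
            return True
        time.sleep(0.01)
    return False

# Example with stubs: an abort command that takes effect immediately.
state = {"mode": "ARMED"}
assert override_takes_effect(lambda: state.update(mode="SAFE"),
                             lambda: state["mode"])
```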

Concluding “War Cards”

If autonomous wingmen reduce fog and enlarge decision-space, investigations must be able to verify that operators used that space prudently. For this, the system needs several capabilities, including: a tamper-evident evidentiary package that captures version and configuration at the time of operation; sensor inputs relayed to the pilot; timestamps of cues, warnings, and degradations; a record of human overrides or hesitations; and, where applicable, proportionality calculations or collateral-estimate displays. This evidentiary scaffolding protects operators against unfair inference and ensures responsibility remains traceable and human.
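As one concrete illustration, such a package could be structured as a hash-chained, append-only log, so that any after-the-fact alteration invalidates every subsequent entry. The schema below is a hypothetical sketch; none of the fields reflect an actual CCA or Fury data format.

```python
import hashlib
import json
import time

def make_entry(prev_hash: str, event: dict) -> dict:
    """Append-only "war card" entry chained to its predecessor, making
    later tampering evident from the broken hash chain."""
    body = {
        "timestamp": time.time(),
        "event": event,  # e.g. config snapshot, sensor cue, warning, override
        "prev_hash": prev_hash,
    }
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

# Illustrative chain: configuration at launch, a target cue, a pilot abort.
log = [make_entry("genesis", {"type": "config", "sw_version": "2.4.1"})]
log.append(make_entry(log[-1]["hash"], {"type": "cue", "track_id": "T-07",
                                        "classification": "hostile",
                                        "confidence": 0.82}))
log.append(make_entry(log[-1]["hash"], {"type": "override",
                                        "pilot_action": "abort",
                                        "reason": "possible civilian presence"}))
```

Because each entry commits to the hash of its predecessor, an investigator can detect whether any record of a cue, warning, or override was altered after the engagement.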

Finally, those who claim that CCA platforms improve precautionary performance should demonstrate as much in operationally realistic trials. These should include validating sensors and autonomy under simulated jamming, decoys, civilian clutter, and degraded GPS conditions, where pilots most need the clarity that autonomy claims to provide. Operators must demonstrate the ability to verify targets, recognise doubt, and abort missions while supervising the wingman.

The path forward is procedural. If States treat autonomy as a tool for strengthening precaution rather than circumventing it, and if they align design, doctrine, and accountability accordingly, autonomous wingmen may move air warfare forward in a way that ensures the law does too.

***

Davit Khachatryan is an international law expert and researcher with a focus on operational law, international criminal law, alternative dispute resolution, and the intersection of various legal disciplines.

The views expressed are those of the author, and do not necessarily reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.

Articles of War is a forum for professionals to share opinions and cultivate ideas. Articles of War does not screen articles to fit a particular editorial agenda, nor endorse or advocate material that is published. Authorship does not indicate affiliation with Articles of War, the Lieber Institute, or the United States Military Academy West Point.

Photo credit: U.S. Air Force