Diverging Standards in the Legal Review of LAWS

by Davit Khachatryan | Jun 6, 2025

In May 2025, Anduril Industries publicly unveiled Fury (YFQ-44A), a next-generation autonomous aircraft currently under evaluation by the U.S. Air Force as part of its Collaborative Combat Aircraft (CCA) program. Fury takes its first test flights this summer, with full operational deployment targeted for 2030. It represents a major leap forward in artificial intelligence (AI)-driven airpower.

Yet it is only one piece of a legally complex ecosystem of emerging autonomous systems. Another notable development is Roadrunner, a twin-jet-powered, reusable, vertical takeoff and landing (VTOL) aircraft designed to intercept aerial threats autonomously. Such systems can be integrated into Lattice, Anduril’s AI platform that fuses data from satellites, drones, radars, and cameras to generate real-time targeting and coordination decisions faster than any human could respond.

Fury and Roadrunner raise immediate concerns about levels of human involvement and control, as these aircraft can engage without real-time human input once airborne. Although current testing keeps a human supervisor “on the loop,” Fury’s AI already selects and prioritizes targets; a single software update could allow engagement with minimal human input. And therein lies one of the most urgent legal questions: when decision-making is delegated to machines running predictive models, can core International Humanitarian Law (IHL) principles—distinction, proportionality, and accountability—still be meaningfully upheld?

International law provides for the systematic legal review of any new weapon, autonomous or not, and Article 36 of Additional Protocol I to the Geneva Conventions plays the central role here. While not binding on the United States, it formally obliges most NATO allies to review new weapons for compliance with IHL before adoption. In practice, however, among the States Parties to Additional Protocol I (a group that includes most NATO members), only a minority conduct systematic Article 36 reviews for all new weapons and methods of warfare, leading to substantial divergence between formal obligation and operational reality. The United States follows a similar review process as a matter of policy (DoD Law of War Manual § 6.2).

This post examines legal review mechanisms for AI-driven platforms and addresses the unresolved challenge of meaningful human control as autonomy accelerates the tempo of military operations beyond the pace of legal oversight. It also examines the divergence between European States’ binding obligations under Article 36 of Additional Protocol I and the United States’ non-treaty-based weapons review practices, highlighting both the legal friction and the prospects for harmonization in the regulation of emerging, software-driven weapon systems.

Fury and the Law

At the heart of Fury’s design is a commercially available business-jet engine, surrounded by largely off-the-shelf flight-control hardware. Fury’s mission computer hosts an AI “autonomy stack” that plans routes, classifies threats, and can propose lethal engagements while airborne. In U.S. doctrine, the drone sits at the edge of what the 2023 U.S. Department of Defense (DoD) directive on Autonomy in Weapon Systems (DoDD 3000.09) calls “human-on-the-loop” control: a remote pilot or cockpit-based commander retains veto power, but the machine can choose and prosecute targets if the human does not intervene. A routine over-the-air update could turn an electronic-warfare loadout into a kinetic strike platform overnight.
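
To make the “human-on-the-loop” arrangement concrete, the sketch below models a veto-window engagement gate in Python. It is purely illustrative and rests on a simplified architecture; the class names, the default window, and the EngagementProposal structure are invented for this post, not Anduril’s or DoD’s actual control logic.

```python
import time
from dataclasses import dataclass, field

@dataclass
class EngagementProposal:
    """Hypothetical record of an AI-proposed engagement (illustrative only)."""
    target_id: str
    classification: str  # e.g., "hostile_fighter" or "unknown"
    proposed_at: float = field(default_factory=time.monotonic)

class OnTheLoopGate:
    """Sketch of human-on-the-loop control: the machine proceeds
    unless a supervisor vetoes within a preset window."""

    def __init__(self, veto_window_s: float = 10.0):
        self.veto_window_s = veto_window_s
        self._vetoed: set[str] = set()

    def veto(self, target_id: str) -> None:
        """The supervising human exercises veto power over a pending engagement."""
        self._vetoed.add(target_id)

    def may_engage(self, proposal: EngagementProposal) -> bool:
        """Engagement proceeds only if the window elapsed with no veto.
        Note the default: human silence, not human assent, clears the strike."""
        elapsed = time.monotonic() - proposal.proposed_at
        return elapsed >= self.veto_window_s and proposal.target_id not in self._vetoed

# Demonstration with a zero-second window (for illustration only)
gate = OnTheLoopGate(veto_window_s=0.0)
proposal = EngagementProposal("track-042", "hostile_fighter")
print(gate.may_engage(proposal))  # True: no veto arrived, so the machine may act
```

The legal stakes live in the last method: under this control mode the absence of intervention is what authorizes the engagement, which is why a software update that shortens or removes the window changes the system’s legal character.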

The Air Force sees Fury as a loyal wingman that will fly a few miles ahead of crewed F-22s or F-35s. Fury, then, is best understood as a weapons platform capable of carrying a variety of munitions, such as the AIM-120 AMRAAM, which itself has previously undergone legal review. What differentiates Fury, and would necessitate a fresh review, is the autonomous functionality built into the mission computer, which could alter how and when weapons are employed. The fusion of AI-based threat classification, target selection, and potential engagement without direct human intervention moves Fury from being a mere carrier for existing armaments to a system whose method of warfare is fundamentally new, and therefore subject to renewed scrutiny under legal review frameworks.

For lawyers conducting a weapons review, each major software build or payload swap reopens the question of legality under the rules on distinction, proportionality, and superfluous injury. Since 1974, every new American munition, sensor, or platform has faced a lawyer’s examination under what is now DoD Directive 5000.01 (the Defense Acquisition System) and Section 6.2 of the 2023 DoD Law of War Manual. Acquisition officials, engineers, and judge advocates study the design notes, modeling data, and concept of operations, then ask three core questions. First, does any treaty or customary rule ban the weapon? Second, can the platform be used in a way that violates distinction, proportionality, or the prohibition on unnecessary suffering? Third, do built-in safeguards and human-machine controls keep foreseeable employment inside lawful limits?

Importantly, the standard “law of war review” under DoD 5000.01 and the DoD Law of War Manual is distinct from the separate review under DoD Directive 3000.09 (Autonomy in Weapon Systems). The former focuses broadly on legal compliance for all new weapons, while Directive 3000.09 sets out additional policy requirements for autonomous or semi-autonomous systems, particularly concerning human judgment over the use of force.

Fury will most likely operate as a semi-autonomous platform. It can autonomously navigate, classify threats, and propose target engagements, but final engagement decisions require human authorization (“human-on-the-loop”). However, the system is designed such that future software updates could enable more fully autonomous targeting and engagement, triggering a fresh review under both frameworks.

Unlike Article 36 of Additional Protocol I, the U.S. review procedure is a matter of policy rather than treaty law. Yet the Pentagon treats it as a binding internal rule. In the 1990s, the Army’s anti-personnel laser program was canceled after the review judged it incompatible with the ban on weapons that cause unnecessary suffering. More recently, a loitering munition concept reportedly cleared the intrinsic-legality hurdle but was restricted to open battlefield environments until its sensor-fusion suite could better distinguish lawful from unlawful targets, for example combatants carrying rifles versus protected medical personnel bearing stretchers. In practice, such a restriction limits deployment to settings where the risk of misidentification is minimized, pending further upgrades. If Fury’s software cannot demonstrate reliable discrimination in urban clutter, the review process should likewise prevent the drone from being assigned such missions until a software update improves its performance.
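
One way to picture this kind of review-imposed employment restriction is as a deployment gate conditioned on environment type and validated discrimination performance. The sketch below is hypothetical; the environment labels, the thresholds, and the function name are assumptions invented for illustration, not the terms of any actual review.

```python
# Hypothetical employment restriction of the kind a weapons review might
# impose: the system may only be tasked in environments where its validated
# discrimination performance meets a review-set threshold.

APPROVED_ENVIRONMENTS = {
    # environment type -> minimum validated discrimination accuracy
    "open_battlefield": 0.90,
    "maritime": 0.90,
    "urban_clutter": 0.99,  # effectively barred until sensors improve
}

def mission_permitted(environment: str, validated_accuracy: float) -> bool:
    """Return True only if the review's conditions are met for this setting."""
    threshold = APPROVED_ENVIRONMENTS.get(environment)
    if threshold is None:
        return False  # unreviewed environment: not cleared at all
    return validated_accuracy >= threshold

# Example: a sensor suite validated at 93 percent discrimination accuracy
print(mission_permitted("open_battlefield", 0.93))  # True
print(mission_permitted("urban_clutter", 0.93))     # False: restricted pending upgrades
```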

Non-Delegable Duty

If Fury crosses the Atlantic, allied lawyers cannot simply rest on a preceding U.S. assessment; they must conduct Article 36 reviews of their own. Every NATO member except the United States and Turkey has ratified Additional Protocol I. Whenever an AP I State Party “stud[ies], develop[s], acquire[s] or adopt[s]” a new weapon, it must decide for itself whether the system can be used without breaching the Geneva Conventions or any other rule of international law binding on that State. The obligation also covers imports; a drone that rolls off an American production line becomes “new” again when Paris, Warsaw, or Oslo signs a purchase order. Article 36 insists that the legal judgment remain sovereign. The International Committee of the Red Cross (ICRC) frames the review as a multidisciplinary inquiry into intrinsic legality, foreseeable methods of employment, and built-in safeguards such as target-validation thresholds or abort logic. The United Kingdom assigns serving military lawyers to run the process. Germany, the Netherlands, Norway, and half a dozen other allies follow parallel models.

Some have imputed a customary character to Article 36. On this view, the review obligation attaches as a rule of general international law even to States that never signed or ratified Protocol I. Moreover, Common Article 1 of the Geneva Conventions requires all parties to “ensure respect” for IHL, including in how they field imported systems. If the review is perfunctory, any subsequent unlawful strike risks boomeranging back as a breach both of the weapon-use rules and of the State’s procedural duty to have reviewed the system.

Flying at Different Legal Altitudes

Under the ICRC’s reading of Article 36, every significant change in a “means or method of warfare” obliges the State employing the relevant weapon system to reopen its legal file. Applied to a weapon such as Fury, imported from the United States, the legal review obligation potentially collides with American export-control layers. First, Anduril treats its machine-learning weights and proprietary autonomy kernel as commercial crown jewels. Second, U.S. export licensing under the International Traffic in Arms Regulations (ITAR) locks critical software behind encrypted modules. If allies cannot examine the algorithm that distinguishes a priority threat from a false signal, Article 36 reviewers cannot satisfy themselves that the system can be used in a way that respects the principles of distinction and proportionality. The duty at stake is non-delegable and cuts both ways: Common Article 1 of the Geneva Conventions requires every State to “ensure respect” for IHL at all times, including by others, so a transferring State can be responsible for supplying weapons likely to be used in ways that violate IHL, and an importing State for fielding a system it could not meaningfully review.

Furthermore, these legal concerns take on sharper operational urgency in coalition settings. When a U.S. squadron and an allied detachment share the same patrol line, the aircraft that matter least to an enemy air defender may be the ones that matter most to lawyers. Fury’s concept of manned-unmanned teaming places the drone at the forward edge of sensor pickup and threat engagement. In a purely American package, the aircraft’s suite of autonomy software, which handles everything from navigation and threat assessment to engagement decisions, can proceed if a supervising pilot does not veto within a preset interval, a control method endorsed by the Pentagon’s 2023 Directive 3000.09 (p. 3).

The United Kingdom’s defense doctrine flags autonomous time-critical strike as a scenario that demands positive human confirmation (pp. 18-19). While in current coalition practice a single partner retains legal and operational control over deployed autonomous systems, both NATO doctrine and UK Defence guidance highlight the growing challenge of harmonizing legal and operational standards as autonomy advances. As AI and software-defined capabilities proliferate, ongoing dialogue and doctrinal development are needed to avoid friction at the intersection of national legal review processes and allied interoperability, even if responsibility formally remains with one State.
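
The doctrinal gap can be pictured as a configuration difference: the same autonomy software shipped with different national engagement-mode settings. The sketch below is a hypothetical illustration; the mode names, country profiles, and coalition rule are assumptions drawn loosely from the documents cited above, not real product settings or alliance policy.

```python
from enum import Enum

class EngagementMode(Enum):
    VETO_WINDOW = "veto_window"             # proceeds unless vetoed ("on the loop")
    POSITIVE_CONFIRMATION = "confirmation"  # halts until a human approves ("in the loop")

# Hypothetical national configuration profiles for the same airframe
NATIONAL_PROFILES = {
    "USA": EngagementMode.VETO_WINDOW,            # per the DoDD 3000.09 reading above
    "GBR": EngagementMode.POSITIVE_CONFIRMATION,  # per UK doctrine on time-critical strikes
}

def requires_human_approval(country: str) -> bool:
    """Does this partner's doctrine demand positive confirmation before engagement?"""
    return NATIONAL_PROFILES[country] is EngagementMode.POSITIVE_CONFIRMATION

# A plausible coalition rule of thumb: the strictest mode present governs the package
coalition = ["USA", "GBR"]
print(any(requires_human_approval(c) for c in coalition))  # True: UK participation
                                                           # forces positive confirmation
```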

In practice, high-autonomy algorithms sit at the forward edge of sensor discrimination, threat labeling, and shoot/no-shoot decisions. Meanwhile, the claim that Fury can be assembled in any machine shop in America raises export-control puzzles: the United States may find it harder to gatekeep allies’ software updates than to track physical components. A deeper solution than ad hoc workarounds would require shared verification tools that read the autonomy settings pushed to each national fleet. Such an architecture could replace bilateral design disclosures with a collective assurance framework.
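
What such shared verification tools might look like in miniature: each software push is accompanied by a signed manifest of its autonomy settings, which any ally can check without access to the proprietary code or model weights. The sketch below uses only Python’s standard library; the manifest fields and the shared-key arrangement are invented for illustration, not an actual NATO or vendor mechanism.

```python
import hashlib
import hmac
import json

# Hypothetical manifest describing an autonomy update, published alongside the
# (undisclosed) binary so allies can verify settings without source-code access.
manifest = {
    "build": "autonomy-7.3.1",
    "engagement_mode": "veto_window",
    "training_data_provenance": "batch-2025-Q2",
    "human_override": "enabled",
}

SHARED_KEY = b"alliance-assurance-key"  # in reality, an exchanged secret or PKI certificate

def sign_manifest(m: dict) -> str:
    """Producer side: sign a canonical encoding of the settings manifest."""
    canonical = json.dumps(m, sort_keys=True).encode()
    return hmac.new(SHARED_KEY, canonical, hashlib.sha256).hexdigest()

def verify_manifest(m: dict, signature: str) -> bool:
    """Ally side: confirm the settings pushed to its fleet match the signed record."""
    return hmac.compare_digest(sign_manifest(m), signature)

signature = sign_manifest(manifest)
print(verify_manifest(manifest, signature))  # True: settings are as declared
manifest["human_override"] = "disabled"      # a silent, unilateral change...
print(verify_manifest(manifest, signature))  # False: verification fails
```

Nothing in such a scheme exposes the algorithm itself; it only assures allies that the configuration they reviewed is the configuration they are flying, which is the gap between bilateral disclosure and collective assurance.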

Until then, coalition partners deploying Fury will depend on export licenses permitting limited code inspection and software feature flags that adapt the system to different national doctrines. The dilemma is not academic. A single code update might stall for weeks inside a European weapons export office while lawyers trace the provenance of new training data. Because key evidence now lives in training-data provenance, algorithmic weights, and human-override architecture, allied lawyers will need deep technical access under bilateral security agreements.

Quo Vadis?

The United States, though not a party to Additional Protocol I, already subjects every new system to an internal review under DoDD 3000.09’s “appropriate levels of human judgment” rule. While the policy is broad, it is most rigorously applied to systems that introduce novel autonomy or AI-driven targeting and engagement. The Fury case can be turned into a template: if export-license templates began to include the code-disclosure annexes that Article 36 teams need, Washington’s policy regime and Europe’s treaty duty could lock together without formal treaty change.

A second path is multilateralization. NATO already issues a common airworthiness “Form 1” for hardware safety. While fewer than half of NATO members conduct formal Article 36 reviews, lawyers on both sides of the Atlantic admit that an alliance-wide Article 36 cell could prevent the looming patchwork of caveats that coalition air planners dread. An ICRC survey has noted that joint or regional review bodies would be one practical way to keep pace with rapid cycles of AI updates.

Of course, the hardest question concerns not just speed but substance: whether legal frameworks can keep pace with the accelerating delegation of decision-making and the persistent erosion of meaningful human control. Anduril promotes the onboard AI software that controls Fury’s autonomous decision-making as able to learn and adapt faster than adversaries. Unless allies streamline their weapons-review triggers, they risk fielding a drone whose most important features are one version ahead of their legal paperwork. Before the Air Force finalizes its CCA selection, lawyers have a narrow window to align review frameworks with the operational realities of autonomous systems.

***

Davit Khachatryan is an international law expert and researcher with a focus on operational law, international criminal law, alternative dispute resolution, and the intersection of various legal disciplines.

The views expressed are those of the author, and do not necessarily reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.

Articles of War is a forum for professionals to share opinions and cultivate ideas. Articles of War does not screen articles to fit a particular editorial agenda, nor endorse or advocate material that is published. Authorship does not indicate affiliation with Articles of War, the Lieber Institute, or the United States Military Academy West Point.

Photo credit: Master Sgt. Gustavo Castillo