Human Responsibility Retained: U.S. Positions on Judgment and Oversight for LAWS

by Gerald Mako and Aasim Al-Thani | May 4, 2026


Editors’ note: This is the fourth post in a series dedicated to Lethal Autonomous Weapons Systems (LAWS) and the questions of human oversight and legal accountability under international humanitarian law. Previous posts have focused on LAWS, China, and Russia.

The first session of the 2026 Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS) under the Convention on Certain Conventional Weapons (CCW) concluded in Geneva on 6 March 2026. Since 2024, the GGE has operated under a three-year mandate, renewed annually, focused on developing recommendations ahead of the November 2026 Review Conference. Delegations once again expressed sharply divergent views on the rolling text under discussion, with several States pressing for binding language that would require “context-appropriate human judgment and control” throughout a weapon’s entire lifecycle. The United States yet again took a different path.

Rather than accepting fixed thresholds that could constrain military operations, U.S. representatives reiterated a long-standing position: weapon systems must be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force. This flexible standard comes from Department of Defense (DoD) policy and rests on a simple but fundamental rule: human beings must retain full responsibility for lethal decisions. Artificial intelligence (AI) can assist, for example, by processing sensor and intelligence data to recommend targets, assess risks, and enhance situational awareness, but it should never replace the informed human judgment required before authorizing the use of force; the commander or operator remains accountable for the outcome.

The Weight of U.S. Influence

Unsurprisingly, the stance of the United States carries significant weight in these debates. Backed by the world’s largest military budget, one that substantially exceeds China’s official and estimated defense spending, the United States remains the global leader in military innovation and therefore continues to exert outsized influence on norms for LAWS. When the United States advocates retaining human responsibility and flexible levels of human judgment over rigid new standards, this position shapes allied doctrine and rules of engagement. Consequently, its influence extends beyond CCW negotiating rooms to practical standards on future battlefields.

Military capital and technological supremacy are core factors in U.S. dominance, amplified by frontier AI companies whose model development sets global innovation benchmarks. As both the largest defense customer and the primary regulator of its own tech ecosystem, Washington wields unique leverage, and recent events illustrate how the administration can compel access and adaptation from domestic AI firms. In the recent Anthropic dispute, for example, the Pentagon resisted additional contractual restrictions on fully autonomous weapons and mass surveillance and insisted instead on broad “any lawful use” access, while a parallel OpenAI agreement referenced existing policy requirements for human control. The episode highlights the tensions among administration priorities, company red lines, and wartime realities.

The CCW GGE and the Evolving Human Control Debate

A central question of the GGE meetings is whether existing international humanitarian law (IHL) is sufficient or whether new rules are necessary to ensure human control over the use of force. In the 2–6 March session, over 70 States expressed support for a binding standard of “meaningful human control” or “context-appropriate human judgment and control” throughout a weapon’s lifecycle, from design and development to deployment and use. These formulations appear repeatedly in the rolling text under negotiation and in working papers submitted over the years by Asian, European, African, and Latin American States.

The United States has consistently opposed any fixed, one-size-fits-all threshold because, in its view, such standards risk creating arbitrary lines that fail to account for operational context, mission requirements, or the specific capabilities of a given system. Instead, U.S. positions emphasize that IHL already demands human judgment in the key areas required for lawful targeting: distinguishing combatants from civilians, assessing proportionality, and exercising precaution. Accountability, the United States argues, cannot be outsourced to machines; it stays with commanders, operators, and those who design or approve the systems.

This debate is not merely theoretical, but directly affects how States approach weapon development, legal reviews, and rules of engagement in an era when autonomy is increasingly integrated into targeting cycles and defensive systems. The GGE remains the venue where these differences are most clearly articulated, making it the best lens for understanding the U.S. approach.

The U.S. Domestic Foundation

The foundation of the U.S. approach lies in DoD Directive 3000.09 on Autonomy in Weapon Systems. Issued in 2012 and updated in 2023, the directive establishes clear policy for the development and use of autonomous and semi-autonomous systems, and it requires that all such weapons allow for appropriate levels of human judgment over the use of force. In practice, this flexible standard means commanders and operators must retain the ability to understand, intervene in, and override system functions when necessary. Human responsibility cannot be transferred to machines; accountability remains with commanders, operators, and those who design or approve the systems across the entire weapon lifecycle.

The directive also mandates rigorous verification and validation, realistic testing in relevant environments, and legal reviews under the DoD Law of War Program, governed by DoD Directive 2311.01 and the DoD Law of War Manual. These requirements are intended to ensure that any autonomous capability complies with the core principles of distinction, proportionality, and precaution. This domestic framework shapes every aspect of acquisition, training, and employment.

U.S. Engagement and Positions in the CCW GGE

U.S. delegations have shaped the CCW GGE conversation since its formal launch in 2017. Early sessions focused on clarifying how IHL applies to emerging technologies, and by 2019, the Group adopted eleven Guiding Principles that the United States helped draft and has cited ever since. Those principles underscore that IHL remains fully applicable and that human responsibility for compliance cannot be delegated to machines.

With the three-year mandate now in its final stretch, the GGE has shifted to negotiating a rolling text ahead of the November 2026 Review Conference, and several delegations have advocated new language requiring “context-appropriate human judgment and control” or “meaningful human control” at every stage of a weapon’s lifecycle. In contrast, the United States has consistently opposed these fixed formulations; in both the March 2025 and March 2026 sessions, U.S. representatives stated that such standards are not required by existing IHL and could unnecessarily restrict legitimate military capabilities.

During the March 2026 debate on the Modified Box III text, the U.S. delegation explicitly rejected the term “human control” and instead proposed the alternative phrasing “good faith human judgement and care.” This language, it argued, better reflects the flexible, context-driven standard already embedded in DoD policy while still satisfying IHL obligations, and the United States has consistently reminded the Group that “appropriate” is deliberately not a one-size-fits-all threshold. But what counts as sufficient human judgment? It varies by system, domain, mission, and even the specific function within a single weapon. A defensive counter-drone system operating in a high-threat environment, for example, may require less real-time human input than a loitering munition deployed in a complex urban setting.

Three arguments recur in every U.S. intervention at the CCW GGE. First, existing IHL already demands human judgment where it matters most (distinction, proportionality, and precaution) without the need for new legal definitions. Second, fixed control standards risk undermining operational effectiveness against adversaries who face no such constraints. Third, accountability remains squarely with human actors across the full lifecycle: commanders set mission parameters, operators monitor and intervene when required, and legal advisers review each system before it ever reaches the field. This position has drawn criticism from States seeking a binding instrument by the 2026 deadline, but the United States, for its part, continues to prioritize practical implementation of existing IHL over the negotiation of new treaty obligations.

Practical Challenges to Human Judgment on the Battlefield

The U.S. approach to human judgment in autonomous systems has direct consequences on the battlefield. Consider a naval operator aboard a U.S. Navy destroyer conducting maritime interdiction in the Red Sea. A semi-autonomous loitering munition is launched toward a suspected hostile surface vessel. The system uses AI to identify, track, and target the suspected vessel and any associated contacts (e.g., nearby escorts or small boats), yet the operator must exercise judgment before authorizing engagement. In line with U.S. policy, this arrangement reflects “appropriate levels of human judgment” while permitting the system to keep pace with the threat environment.

The U.S. stance places considerable weight on this flexibility. A rigid “meaningful human control” requirement could compel operators to remain in constant manual mode, reducing engagement capacity and slowing decision-making against fast-moving threats. DoD policy therefore allows the level of autonomy appropriate to the mission while ensuring that operational leadership retains accountability through rules of engagement, pre-mission planning, and post-incident reviews, with human judgment as the final safeguard.

However, while this model may work well in narrow, short-duration engagements where operators can sustain focus and deliberate judgment, experience from the Gaza war has exposed the practical limits of human oversight in sustained, high-volume, AI-assisted targeting operations. During the initial phase of the war that began in October 2023, Israel’s Lavender system generated tens of thousands of potential human targets, overwhelming operators who reportedly devoted as little as twenty seconds to each target before authorizing strikes and who frequently served as little more than a rubber stamp for the system’s recommendations. This rapid approval process, combined with a reported 10 percent error rate and permissive collateral damage thresholds (up to 20 civilian deaths for low-ranking Hamas targets, and substantially more when senior commanders were struck, often in their family homes), contributed to an estimated 72,000 civilian casualties in just the first 15 months of the war.

Although the specific systems and rules of engagement used in Gaza differ markedly from U.S. doctrine, these accounts challenge aspects of the U.S. position in the CCW GGE. They illustrate how, in large-scale and prolonged conflicts, even flexible “appropriate levels of human judgment” can swiftly erode into nominal oversight when operators confront overwhelming volumes of AI-generated targets, potentially undermining meaningful accountability. They also expose the limits of human engagement with battlefield AI where guardrails prove fragile amid malleable wartime rules of engagement.

Conclusion

From the outset, the United States has maintained a consistent and clearly articulated position in the CCW GGE: human responsibility for the use of force must remain with commanders and operators, and autonomous weapon systems should be designed to permit appropriate levels of human judgment rather than fixed, one-size-fits-all thresholds. This stance, set out in DoD Directive 3000.09, has guided U.S. interventions in the GGE for years, most recently in the March 2026 session when the U.S. delegation proposed the alternative phrasing “good faith human judgement and care.”

The United States advances two primary arguments against adopting new, fixed standards such as “meaningful human control” or “context-appropriate human judgment and control”: first, that no single threshold can reasonably apply across the wide spectrum of systems, missions, domains, and threat environments that forces will face; and second, that an excessive emphasis on technical “control” over machines can obscure the fundamental IHL requirement that accountability for lethal decisions rests solely with commanders and operators.

As the November 2026 Review Conference approaches, the United States is expected to continue advocating for practical implementation and the sufficiency of existing IHL rather than new binding legal instruments. While the U.S. model may offer genuine operational flexibility that international rules would struggle to match, its long-term viability will depend on the development of stronger complementary safeguards such as minimum review protocols, greater system explainability, and tiered control requirements for high-risk functions. Absent these improvements, the flexible standard risks becoming more formal than substantive in future conflicts. The coming months will therefore test whether the United States can maintain international support for its approach while continuing to integrate autonomous systems into its forces.

***

Dr Gerald Mako is a Research Affiliate at the Cambridge Central Asia Forum at Cambridge University.

Aasim Al-Thani is a Non-Resident Fellow at the Gulf International Forum and an Associate Fellow at the Institute for Peace & Diplomacy.

The views expressed are those of the authors, and do not necessarily reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.

Articles of War is a forum for professionals to share opinions and cultivate ideas. Articles of War does not screen articles to fit a particular editorial agenda, nor endorse or advocate material that is published. Authorship does not indicate affiliation with Articles of War, the Lieber Institute, or the United States Military Academy West Point.

Photo credit: U.S. Marine Corps, Lance Cpl. Anna Higman