Unity in Principle, Variation in Practice: European Approaches to Meaningful Human Control for LAWS

by Gerald Mako | May 11, 2026


Editors’ note: This is the fifth post in a series dedicated to Lethal Autonomous Weapons Systems (LAWS) and the questions of human oversight and legal accountability under international humanitarian law. Previous posts have focused on LAWS, China, Russia, and the U.S.

As artificial intelligence (AI) rapidly transforms the modern battlefield, the core principles of international humanitarian law (IHL) are confronting their most significant tests of the twenty-first century. Consequently, the emergence of lethal autonomous weapons systems (LAWS)—weapon systems that can select and engage targets without further human intervention—has generated sustained discussion within the Group of Governmental Experts (GGE) on LAWS operating under the Convention on Certain Conventional Weapons (CCW).

This post analyzes the European approach, which does not equate to a unified “European Union (EU) approach.” Although the EU has articulated a shared institutional position on the applicability of IHL and the necessity of meaningful human control over LAWS, defense and security policy remain core areas of national competence. Collectively, member States make the EU the world’s third-largest defense spender after the United States and China. European States have consistently affirmed that IHL remains fully applicable to such systems and that humans must retain the capacity to make legal judgments over the use of lethal force. Yet beneath this common EU baseline on the necessity of human oversight, national positions reveal important differences in emphasis, preferred legal instruments, and tolerance for varying degrees of autonomy. These variations carry implications not only for ongoing CCW deliberations but also for future European defense cooperation and the broader evolution of IHL in an era of autonomous weaponry.

Unity in Principle: The EU Common Position

The EU has articulated a coherent institutional position on LAWS that underscores the centrality of meaningful human control. In successive statements delivered to the CCW GGE, as well as in its submission to the United Nations Secretary-General, the EU has emphasized that IHL applies fully to all weapon systems, including those incorporating autonomous functions, and that States must ensure compliance with the fundamental principles of distinction, proportionality, and precaution. Central to this stance is the requirement that human beings retain the ability to make decisions regarding the use of lethal force, exert control over the lethal weapons systems they use, and remain accountable for those decisions. The EU has repeatedly endorsed the two-tier regulatory model: prohibitions on systems that cannot be used in compliance with IHL, and robust regulation of all others. At the same time, it has insisted that the CCW remains the appropriate forum for developing normative elements.

This common baseline found practical expression in the joint statement delivered at the September 2025 session of the GGE. France and Germany, together with 37 other High Contracting Parties, declared the revised rolling text a sufficient basis for negotiations and affirmed their readiness to move forward. Although the United Kingdom did not join that statement, it continues to align with the EU on the overarching requirement of human oversight.

Germany and the Two-Tier Approach

Although the landmark national security strategy of the EU’s largest defense spender is completely silent on the military dimension of AI, and detailed conceptual work has been limited to lower-level studies, Germany has emerged as one of the most consistent and influential advocates of a structured regulatory response to LAWS within the CCW GGE.

In close cooperation with France, it has actively promoted the two-tier approach since 2021: a legally binding prohibition on systems that cannot be used in compliance with IHL, coupled with detailed regulations ensuring that human control is retained at all times for all other systems with autonomous functions. Germany maintains that the decision over life and death must be made by humans, and that the required level of human control depends on the operational context and the characteristics and capabilities of the weapons system. This framework encompasses both control in design and control in use, including measures that prepare and support human decision-making throughout the life cycle of any system incorporating autonomous functions. Berlin has endorsed both the Responsible AI in the Military Domain (REAIM) initiative and the Political Declaration on Responsible Military Use of AI. Further, it joined the September 2025 statement delivered on behalf of 39 High Contracting Parties, which declared the revised rolling text a sufficient basis for negotiations and expressed readiness to move forward on a legally binding instrument.

At the national level, Germany has reinforced its stance through export-control policies and Bundeswehr guidelines that explicitly reject fully autonomous lethal systems operating outside meaningful human oversight. In Berlin’s view, meaningful human control is not merely a policy preference but a legal imperative derived directly from the fundamental principles of IHL.

France: Strategic Autonomy and Pragmatic Flexibility

As the EU’s only nuclear power and largest military force, France has made AI a strategic priority. In contrast with Germany, the country has been actively building dedicated institutions and allocating substantial budgets to support development and integration. It was the first member State to publish a dedicated defense AI strategy in 2019, which prioritizes the rapid development of AI capabilities for operational advantage while remaining relatively light on detailed governance mechanisms.

Paris’s perspective on military AI and LAWS reflects its longstanding emphasis on strategic autonomy and independent capability. Consequently, France supports the negotiation of a structured legal instrument within the CCW framework based on the two-tier approach, rather than overly prescriptive or comprehensive new regulations negotiated outside the Convention. It has endorsed both REAIM and the Political Declaration, yet views the latter as more aligned with its preference for flexible guidelines that preserve national freedom of action.

France has aligned itself closely with Germany in advocating the two-tier regulatory model within the CCW GGE, but it has consistently introduced a measure of pragmatic flexibility regarding the precise degree of human involvement required. In its submission to the Secretary-General pursuant to General Assembly resolution 78/241, Paris explicitly endorsed the two-tier approach: systems operating completely outside human control and a responsible chain of command, described as “fully autonomous lethal weapons systems,” must be prohibited, while “partially autonomous” systems can satisfy the requirement of sufficient human control through appropriate policies and measures implemented across the entire life cycle. French statements in the GGE have further clarified that human control and judgment must be exercised in the development and programming phases as well as in the definition of the system’s operational framework, although the specific degree and timing of that control will depend on the system, the context of use, and the mission plan.

This nuanced position reflects both legal and industrial considerations. France’s participation in the long-troubled Franco-German-Spanish Future Combat Air System (FCAS), which aims to build Europe’s first sixth-generation fighter incorporating advanced human-machine teaming and AI-assisted functions, underscores the importance of preserving operational flexibility while maintaining human oversight. Like Germany, France joined the September 2025 joint statement calling for negotiations on the basis of the revised rolling text. In practice, French policy demonstrates a slightly more permissive stance toward systems that permit human intervention at critical stages, even as it upholds the overarching imperative of meaningful human control derived from IHL.

United Kingdom: Context-Appropriate Human Involvement

The UK, which left the EU in 2020, has been devoting substantial resources and institutional capacity to military AI, and its approach is shaped by a pragmatic strategic culture that emphasizes operational advantage, close interoperability with the United States, and flexible ethical governance.

The UK shares the European commitment to human oversight of lethal force but has consistently emphasized a more flexible formulation: context-appropriate human involvement and judgment. In its submission to the Secretary-General under General Assembly resolution 78/241, London unequivocally stated that it does not possess fully autonomous weapon systems, defined as those operating without context-appropriate human involvement or outside human responsibility and accountability, and has no intention of developing them. The UK opposes the development and use of weapons with autonomous functions that would operate without such context-appropriate levels of human involvement, while insisting that this involvement must result in meaningful human control sufficient to satisfy policies, ethical principles, and obligations under IHL.

This approach reflects a preference for pragmatic, context-sensitive application rather than rigid pre-set thresholds. British statements in the CCW GGE have repeatedly stressed that human control and judgment must be exercised across the life cycle of any system, taking into account the specific operational environment and mission requirements. Notably, the UK did not join the September 2025 statement in which France, Germany, and 37 other States declared the revised rolling text a sufficient basis for negotiations on a legally binding instrument. Instead, it has continued to support the ongoing work of the GGE within the CCW as the appropriate forum and has co-sponsored working papers that focus on elaborating the practical meaning of context-appropriate human control without prejudging the nature of any eventual outcome. In practice, therefore, the UK position illustrates the more cautious end of the European spectrum, prioritizing operational flexibility and existing IHL compliance mechanisms over ambitious new treaty obligations.

The UK’s position, while formally expressed within the European context, closely mirrors that of the United States. Both States emphasize “context-appropriate” or “appropriate levels of” human involvement and judgment rather than a fixed threshold of meaningful human control; both maintain that existing IHL and national implementation measures are adequate; and both have declined to support calls for negotiations on a new legally binding instrument. In practice, therefore, the UK stands somewhat apart from the more proactive stances of Germany and France and closer to the cautious approach adopted by Washington, a reflection not only of shared doctrinal preferences but also of the deep interoperability that characterizes transatlantic defense cooperation.

It would, however, be a mistake to assume that the UK is alone in preferring a flexible, context-appropriate formulation of human oversight over LAWS. Several other European States, notably the Netherlands and various Nordic, Baltic, and Central and Eastern European NATO members, likewise emphasize operational context and existing IHL mechanisms rather than rigid definitions of meaningful human control, thereby reinforcing the more reserved current within Europe.

Industrial Realities

The questions of military AI and LAWS are closely intertwined with the EU’s broader stance on artificial intelligence. While the EU AI Act expressly excludes defense and military applications from its risk-based framework, the European Defence Fund simultaneously embeds the requirement of meaningful human control over lethal decisions as a binding condition of funding, creating a hybrid regulatory approach that links civilian AI ethics, IHL, and defense-industrial policy.

This normative spectrum coexists with ambitious European efforts in weapons production that harness AI and autonomous functions. The FCAS is being developed as a networked “system of systems” in which a next-generation crewed fighter will operate alongside families of remote carriers capable of executing complex mission profiles with a high degree of autonomy, all connected through an advanced combat cloud, while official program descriptions consistently stress that these autonomous components remain under meaningful human control. Similarly, the Global Combat Air Program (GCAP), developed by the UK, Italy, and Japan, incorporates AI-enabled avionics, sensor fusion, and collaborative combat aircraft designed to function alongside the crewed Tempest platform, with human oversight retained across the life cycle.

These industrial ambitions are nevertheless constrained by a pronounced asymmetry in foundational AI capabilities. U.S. companies such as Anduril and Palantir have secured multibillion-dollar contracts to supply advanced AI platforms, data-fusion systems, and autonomous software directly to the Pentagon and allied forces, while Europe lacks comparable indigenous companies operating at the same scale or speed of innovation. This gap is rooted in Europe’s failure to nurture global technology champions during the 1990s digital boom. As a result, even flagship European programs continue to rely heavily on U.S.-origin cloud infrastructure, sensor-fusion algorithms, and high-end computational components, despite the EU’s efforts to close the technological gap. This technological dependence raises important questions about the extent to which European States can effectively enforce their preferred standards of meaningful human control over LAWS in programs that incorporate substantial U.S. technology.

This hybrid regulatory posture cuts both ways: it reinforces the continent’s unity in principle while underscoring the practical challenge of translating that principle into operational reality.

Conclusion

European States share a principled commitment to meaningful human control over lethal autonomous weapons, yet this apparent unity coexists with significant variation in strategic cultures and persistent transatlantic technological dependence. Divergent national preferences, with some States favoring binding prohibitions and strict safeguards and others preferring flexible, context-sensitive oversight, are now visibly reflected in collaborative programs such as FCAS and GCAP.

In the end, the tension between normative ambition and industrial reality continues to shape Europe’s influence in the CCW. How European States manage this internal divergence during the ongoing 2026 sessions of the GGE may ultimately determine whether meaningful human control remains a credible and effective safeguard amid accelerating autonomy and deepening reliance on U.S. technology.

***

Dr Gerald Mako is a Research Affiliate at the Cambridge Central Asia Forum at Cambridge University.

The views expressed are those of the author, and do not necessarily reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.

Articles of War is a forum for professionals to share opinions and cultivate ideas. Articles of War does not screen articles to fit a particular editorial agenda, nor endorse or advocate material that is published. Authorship does not indicate affiliation with Articles of War, the Lieber Institute, or the United States Military Academy West Point.

Photo credit: Canbay via Wikimedia Commons