Human Oversight with Chinese Characteristics: Lethal Autonomous Weapons in the CCW GGE
The rapid rise of AI in warfare has thrust Lethal Autonomous Weapons Systems (LAWS) squarely into the spotlight of international humanitarian law (IHL). Far from being a speculative threat on the distant horizon, these systems, or at least their advanced precursors, are already on the doorstep. AI-enabled drones and targeting platforms with significant autonomous functions have been used in recent conflicts (see, e.g., here and here), while major powers continue to field and scale autonomy-driven capabilities at an ever faster pace.
The absence of any specific treaty prohibiting or comprehensively regulating such systems has left States to grapple with the question of whether existing IHL rules, particularly those that give practical effect to the principles of distinction, proportionality, and precautions, are sufficient to govern the development, deployment, and most importantly the use of LAWS, or whether new normative frameworks are urgently required.
As a possible answer to this question, the Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems, operating under the Convention on Certain Conventional Weapons (CCW), worked steadily through its 2025 sessions in March and September and into the Seventh Review Conference in 2026 to formulate elements of a potential instrument that could clarify or supplement existing IHL rules. Drawing on the eleven Guiding Principles affirmed by States parties in 2019, these meetings have pursued that work without prejudging whether any resulting instrument would be binding. Serious divisions persist among participants: some advocate broad prohibitions, others emphasize regulation through existing IHL, and a growing number explore tiered approaches that prohibit certain systems while regulating others.
The Importance of the GGE
The GGE is important in its own right, but the Group also serves as a litmus test for Chinese thinking on LAWS. Beijing has consistently positioned the CCW as the appropriate forum for deliberations about LAWS and has spelled out its views primarily in statements to the GGE, where it has held to a clear line: such systems must remain under Meaningful Human Control (MHC) if they are to meet IHL standards. MHC refers to the requirement that humans, rather than AI alone, retain sufficient oversight, judgment, and accountability over critical LAWS functions, particularly target selection and engagement. Time and again, Chinese delegates have insisted that any such system must always remain under human control, a stance that dovetails with Chinese and broader international calls for responsible military AI.
China’s Rhetoric of Meaningful Human Control
Although China’s Global AI Governance Initiative stresses human oversight in military applications, it would be a mistake to conclude that Beijing is positioning itself as a leading defender of IHL against unchecked AI militarization. China promotes the GGE as the right venue because it includes the major powers, and it backs a binding instrument but, as a 2024 statement puts it, only “when conditions are ripe,” meaning after consensus on definitions and scope. In a statement at the 80th UN General Assembly First Committee in October 2025, Beijing reiterated its support for negotiating a legally binding instrument on LAWS “when the conditions are mature,” again emphasizing the need for prior consensus on working characterizations and regulatory scope. This stance lets Beijing voice strong humanitarian principles without closing off room for supervised autonomous technologies, striking what Chinese delegations describe as a balance between those principles and legitimate security needs, while pushing back against any broad ban that risks limiting the development of autonomous weapons.
Cumulative Criteria and IHL Accountability Gaps
At the core of Beijing’s approach is a two-tiered framework: outright prohibition of fully autonomous or “unacceptable” systems, meaning those operating without meaningful human involvement, and regulation of all remaining systems to guarantee “appropriate” human involvement. In a 2022 working paper, China defined prohibited systems by five cumulative traits: lethality; full autonomy with no possibility of intervention; impossibility of termination; indiscriminate effects; and uncontrolled evolution. Any system that falls short of all five criteria may incorporate autonomy so long as “appropriate” human involvement is present, and Chinese statements in the GGE return repeatedly to the need for systems to stay “always under human control” if they are to comply with IHL. In practice, however, this framing leaves considerable scope for autonomy in non-prohibited systems, aligning more closely with supervisory, “on the loop” models than with requirements for direct human approval of every targeting decision.
Beijing’s stance is an undeniably comfortable one with little risk. By insisting that a system must exhibit all five characteristics at once before it faces outright prohibition, China has drawn a remarkably narrow line around what counts as unacceptable. This cumulative threshold, unchanged in Beijing’s contributions through the 2025 GGE sessions, effectively excludes from prohibition a wide range of emerging autonomous capabilities, many of which Beijing is developing. Such systems could still fall short of IHL in practice by failing to reliably distinguish civilians from combatants, assess proportionality in complex environments, or implement feasible precautions, especially when predictability breaks down in cluttered or dynamic settings.
This narrow threshold means that systems incorporating substantial autonomy could still be deemed compliant under China’s framing. Yet IHL attaches significant importance to attributing responsibility: commanders must answer for violations of distinction or proportionality. When a system stops short of full autonomy, relying instead on what China calls “appropriate” human involvement, pinning down who bears accountability grows considerably harder.
As a party to Additional Protocol I, China bears the obligation to review new weapons under Article 36, but Beijing offers precious little visibility into those processes. Its approach is not unlike that of other major powers, which, sharing an interest in preserving strategic flexibility amid ongoing AI developments, disclose little about the LAWS they are building. This in turn renders verifying compliance with existing IHL, or with any prospective new norms, an exercise in tilting at windmills, as responsibility rests primarily on national self-assessment through Article 36 reviews, with only limited external oversight from bodies such as the International Committee of the Red Cross, UN fact-finding missions, or the International Criminal Court.
While this approach preserves a rhetorical commitment to human control, it leaves ample room for interpretation, and the lack of transparency raises serious questions about whether existing rules can meaningfully curb developments that dilute clear lines of human oversight. This opacity challenges how well current law, including the protective spirit of the Martens Clause, can regulate evolving technology.
Challenges and Broader Implications
Owing to the consensus requirement that gives any State an effective veto, progress in the CCW GGE has always been slow. The 2025 sessions in March and September did manage to refine a rolling text on possible normative elements, but deep splits remain over definitions, the scope of prohibitions, and the extent of human involvement that can be deemed satisfactory. Beijing’s narrow, cumulative criteria for banning only the most “unacceptable” systems help keep those splits wide, preserving room for supervised autonomy while still nodding to humanitarian concerns.
Meanwhile, the consensus rule not only slows the GGE’s pace but also pushes outcomes toward the lowest common denominator, with more ambitious proposals frequently moderated to secure agreement. For example, in 2023, a coalition of States including Argentina, Colombia, Costa Rica, Ecuador, El Salvador, Guatemala, Kazakhstan, Nigeria, Palestine, Panama, Peru, the Philippines, Sierra Leone, and Uruguay submitted a working paper proposing a new CCW protocol that would prohibit LAWS incapable of complying with IHL and regulate others through strict human oversight requirements. The draft went beyond consensus elements by mandating prohibitions on unpredictable or indiscriminate systems and explicit accountability measures, and it accordingly failed to gain consensus. In 2025, 39 States delivered a joint statement at the September GGE session urging immediate negotiations on a binding instrument based on the rolling text, emphasizing a two-tier structure of prohibitions (e.g., on systems lacking context-appropriate human control) and regulations to ensure accountability, elements partially absent from the moderated final summary.
Yet there are welcome developments as well. The UN General Assembly’s overwhelming support in 2025 for urgent action on a binding instrument, with 156 States backing Resolution L.41 to complete normative elements ahead of the November gathering, signals a broadening coalition ready to bridge longstanding divides. The draft resolution, adopted by the First Committee on 6 November 2025, builds on the ongoing work of the CCW GGE and underscores the need for a comprehensive multilateral approach that addresses LAWS from humanitarian, legal, security, technological, and ethical perspectives, while stressing human responsibility in the use of force to ensure IHL compliance. This is not to suggest, however, that the resolution altered the strategic calculations of the major powers: Beijing abstained, while Washington and Moscow predictably voted against it. Heading into the 2026 Review Conference, clearer standards on what counts as MHC look essential if IHL is to keep pace with technology. The alternative offers little comfort: ambiguities will persist, and developments will continue to outpace the rules.
Conclusion
As discussions head toward the 2026 Review Conference, glimpses of hope appear: the CCW process shows real, if slow, progress despite the consensus rule. As of early 2026, there have been no new working papers or shifts in rhetoric during the GGE consultations, and there is no reason to assume that Beijing’s stance will markedly change soon. Chinese delegates will almost certainly reiterate familiar themes throughout the year, insisting that conditions must mature before any binding commitments, all while safeguarding room for the People’s Liberation Army’s AI ambitions.
It is a stance that Beijing has honed over years of GGE deliberations, and one that underscores a broader truth: for a power like China, a true pivot toward stricter norms would require external pressures that are not materializing and show little realistic prospect of doing so in the foreseeable future. Instead, the most probable outcome is incremental tweaks to the rolling text, including modest concessions on terminology around human judgment, but nothing that would fundamentally challenge the status quo. In the end, the CCW GGE may well highlight Beijing’s growing power in shaping global arms discussions, but the gap between rhetoric and enforceable rules will persist.
***
Dr Gerald Mako is a Research Affiliate at the Cambridge Central Asia Forum at Cambridge University.
The views expressed are those of the author, and do not necessarily reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.
Articles of War is a forum for professionals to share opinions and cultivate ideas. Articles of War does not screen articles to fit a particular editorial agenda, nor endorse or advocate material that is published. Authorship does not indicate affiliation with Articles of War, the Lieber Institute, or the United States Military Academy West Point.
Photo credit: Infinty 0
