No New Rules Needed: Russia’s Minimalist Vision of Human Oversight for LAWS

by Gerald Mako | Apr 13, 2026


With the Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS) having wrapped up its second session of 2025, and the Seventh Review Conference quickly approaching, Russia's conduct in Ukraine, where it actively deploys semi-autonomous systems on the battlefield, has drawn considerable attention. Russia has used these forums both to articulate its views and to defend the CCW as the only legitimate venue for the discussion, a point it reaffirmed last fall in a statement to the UN General Assembly First Committee.

Russia’s approach rests on two straightforward propositions. First, humans, not machines, must remain accountable for the use of lethal force. Second, how accountability is exercised falls squarely within each State’s sovereign discretion. Far from calling for new treaty rules, Russia insists that existing law, including the law of armed conflict’s principles of distinction, proportionality, and precautions in attack, together with the weapon review obligation under Article 36 of Additional Protocol I to the Geneva Conventions, applies fully to any autonomous system that may emerge. This stance is built upon the 11 Guiding Principles affirmed in 2019 by consensus in the GGE and on Russia’s March 2024 working paper, which set out its national approach.

Like Beijing, Moscow deems the GGE the “best suited platform,” believing it strikes a “reasonable balance” between humanitarian concerns and legitimate defense interests. Russia maintained its stance in its October 2025 UN General Assembly First Committee statement, which argued that moving the discussion elsewhere would be “counterproductive.” With the 2025 GGE sessions complete, the 2026 GGE sessions having just begun, and the Seventh Review Conference set for November 2026, the forum provides the cleanest, most continuous record of Russia’s arguments.

Russia’s Core Positions on Human Control and LAWS

Russia’s submissions in the CCW GGE focus on the central requirement that human beings must remain responsible for decisions involving lethal force. The commander or official who assigns a mission and orders its execution carries international humanitarian law (IHL) accountability. Simultaneously, Russia treats the “forms and methods” of exercising that human control as an internal sovereign matter. It has repeatedly called attempts to define a single international standard, such as “meaningful human control,” inappropriate and unrelated to existing law, seeing no need for externally imposed thresholds on operator involvement or real-time supervision.

Instead, Russia lists several practical mechanisms that can satisfy human responsibility: design-level safeguards (reliability, fault tolerance, and fail-safes); operational constraints (limits on target categories, geography, time windows, or engagement scale); thorough testing and validation in realistic conditions; operator training and certification; production-quality controls; and the technical capacity for human intervention or system deactivation. Russia argues that commanding officers can, without remaining physically in the loop for every engagement, maintain oversight through mission planning and abort authority.

Russia contends that, far from eroding IHL, well-designed LAWS can actually improve compliance by offering greater precision, immunity to fatigue or stress, and enhanced sensor fusion. These benefits, it argues, may reduce civilian harm and better distinguish military objectives in complex environments, advantages unavailable to purely human-operated systems under the same pressures. This outlook rests on the conviction that current IHL is sufficient and requires neither modernization nor new legally binding instruments. Article 36 weapon reviews during development and acquisition provide the necessary legal filter, and Russia states that it conducts these reviews in accordance with its established national procedures.

Domestically, these principles appear in two key policy documents. The 2022 Concept of Activities of the Armed Forces of the Russian Federation in the Development and Use of Weapons Systems with AI Technologies sets out internal guidelines for human accountability in AI-enabled platforms. The broader National Strategy for the Development of Artificial Intelligence Over the Period Extending up to the Year 2030 reinforces the same framework across military and civilian applications. Russian criminal law supplies additional enforcement through penalties for IHL breaches, ensuring that accountability ultimately rests with identifiable human decision-makers.

From Geneva to Ukraine: Do AI-Assisted Drones Improve IHL Compliance?

These arguments naturally raise the question: what does evidence from the Russo-Ukrainian War say? Do AI-enhanced weapons and semi-autonomous systems already improve compliance with IHL? The short answer is that Russia’s drone use in Ukraine has undermined rather than strengthened IHL compliance. For example, fiber optic-guided FPV drones deployed since mid-2025 and striking the outskirts of Kharkiv for the first time in late February exemplify the “timely intervention and deactivation” that Russia highlights in the GGE. The physical cable provides a jam-resistant command link, ensuring that operators retain real-time control and abort authority even under intense electronic warfare, precisely the design safeguard that Russia argues satisfies IHL accountability without constant direct supervision. Yet the very same real-time human oversight is present when Russian forces employ these systems to deliberately target civilians in what has been termed a “human safari.”

Indeed, rather than delivering the expected results of greater precision, reduced civilian harm, and better adherence to distinction, proportionality, and precaution, the large-scale deployment of AI-assisted drones in Ukraine has markedly degraded civilian protection. Russian short-range FPV drones, Lancet loitering munitions, and long-range Shahed drones frequently strike residential areas, civilian vehicles, hospitals, and schools. Compared to 2024, civilian casualties from short-range drones alone surged by 120 percent in 2025, with 577 civilians killed and 3,288 injured. Moreover, the psychological toll from their constant overhead presence generates fear and stress, further eroding the humanitarian space beyond physical casualties. UN reports and Human Rights Watch investigations describe these as widespread and systematic patterns, including deliberate targeting tactics that spread terror and force population displacement, conduct which may constitute crimes against humanity.

While Russian doctrine in the CCW GGE insists that design safeguards and human oversight enable LAWS to enhance IHL compliance, well-documented patterns of deliberate targeting and indiscriminate harm reveal the limits of relying on technological safeguards alone. Battlefield reality shows persistent challenges, as capabilities such as machine vision or terminal guidance have not prevented indiscriminate effects or abusive, coercive persistence. The underlying reason is self-evident: machine vision, for example, aids navigation and target recognition in contested environments, yet final engagement decisions remain with the human commander, consistent with Russia’s insistence on upper-level oversight and national Article 36-style reviews. Technological safeguards cannot compensate for human decision-makers who choose unlawful targets.

In other words, the conduct of the Russian Armed Forces in Ukraine has thus far demonstrated that its current semi-autonomous and AI-enhanced systems are more likely to corrode than to strengthen IHL compliance, particularly absent robust international governance and meaningful human control standards. By comparison, Ukrainian use of AI tools for tasks like targeting support, reconnaissance, or demining appears more restrained and generally aligned with IHL obligations, presenting precision benefits in military strikes. However, the overall proliferation of drone warfare on both sides has intensified the war’s tempo and scale, contributing to a 4,000 percent global rise in drone attacks between 2020 and 2024. This trend has raised serious concerns about an accelerating arms race that outpaces existing legal frameworks.

Broader IHL Framing and Operational Implications

As Russia’s position rests squarely on the sufficiency of existing IHL, the State sees no legal gaps that require new treaty obligations. Russia continues to argue that the core rules of distinction, proportionality, and precautions in attack, together with command responsibility under Articles 86 and 87 of Additional Protocol I, as well as the Martens Clause, sufficiently govern the development and use of any weapon system, including those with autonomous functions.

This view aligns Russia with the United States and a handful of other States often described as “traditionalists.” These powers maintain that IHL already imposes obligations on States and individuals, not on machines, and that Article 36 weapon reviews during acquisition provide the proper safeguard. Both Russia and the United States have warned that shifting LAWS discussions outside the CCW risks fragmenting the debate and weakening the consensus-based approach that produced the 2019 Guiding Principles. At forthcoming reviews and meetings, Russia is poised to reaffirm the GGE as the “best suited platform,” while resisting binding instruments and pushing for non-binding recommendations that codify the sufficiency of existing IHL.

Conclusion

Russia’s stance on LAWS presents a curious exercise in legal consistency. On one hand, the State’s emphasis on retained human responsibility and the sufficiency of existing IHL follows the traditional framework of command accountability. On the other hand, this position is ironically advanced by a State whose forces have integrated AI-enhanced and semi-autonomous systems into their wartime operations while flagrantly disregarding the core principles of IHL. As the Seventh Review Conference of the CCW draws nearer in November 2026, Russia shows no sign of softening its positions and will likely maintain its opposition to any new legally binding instrument while accepting only non-binding recommendations or continued expert discussions under the GGE’s consensus process.

Russia’s message is straightforward and pragmatic: existing international humanitarian law, backed by strong national implementation and unmistakable chains of human command responsibility, offers the most workable foundation for the responsible use of emerging autonomous technologies. Yet, Ukraine’s battlefield continues to test whether doctrine alone can bridge the gap between Geneva’s consensus and the realities of modern warfare.

***

Dr Gerald Mako is a Research Affiliate at the Cambridge Central Asia Forum at Cambridge University.

The views expressed are those of the author, and do not necessarily reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.

Articles of War is a forum for professionals to share opinions and cultivate ideas. Articles of War does not screen articles to fit a particular editorial agenda, nor endorse or advocate material that is published. Authorship does not indicate affiliation with Articles of War, the Lieber Institute, or the United States Military Academy West Point.

Photo credit: Student News Agency, Mehrdad Esfahani