Future of Warfare and Law Series – The Law and LAWS

by Bryan Jack | Nov 7, 2025


Editors’ note: This post is part of a series featuring topics discussed during the Third Annual Future of Warfare and the Law Symposium. Christina Colclough’s introductory post is available here.

In May 2025, the third Future of Warfare and the Law Symposium brought together lawyers, technical experts, and academics to discuss and debate the most challenging issues projected to feature in future warfare. Topics ranged from the collateral effects of directed energy weapons to military use of autonomy in weapon systems and the incorporation of artificial intelligence (AI) and machine learning in lethal decision-making.

In one notable dialogue, many of the scientists and technology experts acknowledged that they do not yet fully understand how future lethal autonomous weapon systems (LAWS), AI-enabled targeting platforms, and other agentic systems capable of executing combat operations without human intervention will arrive at lethal decisions that can be carried out in compliance with the law of armed conflict (LOAC). They further noted that they do not yet know how precise these systems will be when fielded.

Symposium participants discussed potential approaches to these issues. One technologist described the ongoing examination of the viability of pairing competing algorithms: one performs automatic target recognition, while the other seeks to avoid civilian harm. Another expert noted that studies are underway to determine whether settings favoring false positives over false negatives, and vice versa, could serve as a “dial” allowing commanders to account for mission-specific risk tolerance based on METT-TC (mission, enemy, terrain and weather, troops available, time, and civil considerations). Notably, none of the technical experts in the room was prepared to specify what confidence, precision, or accuracy levels should be expected from LAWS, even ten to fifteen years from now.
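To make the “dial” concept concrete, the minimal sketch below shows how a commander-selected risk preset could gate the output of a target-recognition algorithm against a civilian-harm estimate. It is purely illustrative: the preset names, threshold values, and data fields are assumptions made for this post, not drawn from any fielded or planned system discussed at the symposium.

```python
# Hypothetical sketch only: a commander-selected "risk dial" gating a notional
# target-recognition output against a notional collateral-risk estimate.
# Raising the confidence threshold trades false positives (engaging a protected
# person or object) for false negatives (passing up a lawful target).

from dataclasses import dataclass

@dataclass
class RecognitionResult:
    track_id: str
    target_confidence: float       # 0.0-1.0, from a notional target-recognition algorithm
    civilian_harm_estimate: float  # 0.0-1.0, from a notional civilian-harm-avoidance algorithm

# Illustrative presets a commander might select based on METT-TC considerations.
# Stricter presets favor false negatives (restraint); looser presets favor
# false positives (responsiveness). Names and numbers are assumptions.
RISK_PRESETS = {
    "restrictive": {"min_confidence": 0.98, "max_civilian_harm": 0.02},
    "balanced":    {"min_confidence": 0.90, "max_civilian_harm": 0.10},
    "permissive":  {"min_confidence": 0.75, "max_civilian_harm": 0.25},
}

def engagement_recommended(result: RecognitionResult, preset: str) -> bool:
    """Recommend engagement only if both thresholds of the selected preset are met."""
    limits = RISK_PRESETS[preset]
    return (result.target_confidence >= limits["min_confidence"]
            and result.civilian_harm_estimate <= limits["max_civilian_harm"])

# The same track can yield different recommendations under different presets.
track = RecognitionResult("track-017", target_confidence=0.92, civilian_harm_estimate=0.05)
print(engagement_recommended(track, "restrictive"))  # False: confidence falls below 0.98
print(engagement_recommended(track, "balanced"))     # True: both thresholds satisfied
```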

The core insight from these exchanges is that technology will not be sufficiently advanced to allow future LAWS to comply, independently, with all aspects of the LOAC in all conditions and circumstances. These near-future LAWS will not be 100 percent effective at avoiding unanticipated collateral damage, at ensuring that anticipated collateral damage is not excessive in relation to the concrete and direct military advantage to be gained, or at distinguishing military objectives from civilian persons and objects. In other words, the LAWS we employ will be imperfect.

This post seeks to address the underlying question at the heart of these exchanges: are new laws required to address the employment of “imperfect” LAWS?

Existing Arguments

There is nothing approaching consensus among States. Within the UN-convened Group of Governmental Experts (GGE) on LAWS, some States have asserted that LAWS operating without any human control are unlawful. States in this camp disagree as to the degree of human control required throughout the life cycle of such systems (e.g., design, development, review, fielding, employment). Other States question whether human control is even necessary to ensure compliance with the LOAC. Despite over a decade of discourse, GGE participants have not yet agreed on a definition of “human control.” States have offered some form of the following terms when debating human control, but none commands majority support: meaningful human control; human intervention; human oversight; appropriate human judgment; human agency; and human on/in/out of the loop.

Many international humanitarian organizations (and some countries) have argued that new laws are necessary. Human Rights Watch and the Stop Killer Robots campaign argue for an outright ban on both the use and development of LAWS. The International Committee of the Red Cross (ICRC) advocates a somewhat less restrictive approach, lobbying to limit LAWS to targeting only those objects that are, by their nature, military objectives. The ICRC approach would also cap the total number of engagements LAWS are authorized to conduct, proscribe the use of LAWS in densely populated areas, and require either effective human intervention (essentially a human-on-the-loop) or a self-destruct function.

Whether the debate du jour centers on a particular munition or system, or on the broader implications of AI and algorithms making lethal decisions, the discourse almost invariably boils down to an argument for or against a lex ferenda proposition. This reflects a fundamental misunderstanding of the LOAC as it has long been applied to automation and autonomy in weapon systems.

Automation and Autonomy in Weapon Systems

Autonomy, or at least some level of automation, has featured in weapon systems for centuries. Mines, for example, are regarded as rudimentary autonomous weapons because, after emplacement, they operate without operator input (see U.S. Department of Defense Law of War Manual § 6.5.9). Under the Amended Mines Protocol, a mine is a “munition placed under, on, or near the ground or other surface area and designed to be exploded by the presence, proximity or contact of a person or vehicle.”

The first mines designed to be exploded by the presence or contact of a person were devised and employed by an American: Brigadier General Gabriel J. Rains. He buried artillery shells with pressure caps in the ground while battling Native Americans in the 1840s, later deploying them against Union troops during the Civil War. General William T. Sherman famously remarked in his memoirs that the rudimentary mine systems he encountered were “not war, but murder.”

Despite Sherman’s grim assessment of the munition, mine systems have been deployed in most conflicts since. More importantly, the development of a robust regime of mine-specific international law and numerous attempts to ban anti-personnel mines never produced a universal prohibition on the use of landmines. Indeed, several Eastern European countries have recently taken, or are considering, steps to withdraw from the Ottawa Convention, demonstrating that the utility of a weapon system can outweigh the desire for humanitarian limitations.

Though mines are a good example of how rudimentary autonomous systems have been governed under international humanitarian law, a legal regime as robust as the one that developed for mines is not necessary to govern more capable LAWS. What is required to effectively govern the employment of LAWS is much simpler: the LOAC, as it stands, already governs the appropriate employment of LAWS and AI-enabled targeting on current battlefields.

Examples from the Field

Recent conflicts demonstrate the extent to which imperfect LAWS are already employed, often without issue. The conflict in Gaza introduced the world to the HARPY loitering munition and GOSPEL, an AI system capable of scanning massive amounts of raw data and turning that data into target recommendations, and perhaps even targeting decisions. Ukraine has become the testing ground for numerous autonomous drone platforms. Russia has employed the Lancet-3 since the outset of its invasion of Ukraine. The Helsing HX-2 AI-enabled strike drone is a more recent addition; in December 2024, nearly 4,000 of these systems were pledged for delivery to Ukraine.

In late 2020, Azerbaijani unmanned systems like the HAROP loitering munition defeated or neutralized a significant percentage of Armenian air defenses in the Second Nagorno-Karabakh War. A year later, the UN Panel of Experts on Libya alleged that the Turkish-made STM Kargu-2 had been employed against forces loyal to General Khalifa Haftar.

Some of these weapon systems have received individual treatment based on alleged or forecasted violations of the LOAC. The remainder have been scrutinized more generally because of the overarching implications of AI and algorithms contributing to lethal decision-making. Despite scrutiny over the potential pitfalls of LAWS and the uptick in their actual use in various conflicts, calls to ban or curtail the use of LAWS have not gained traction with the international community. Why? At least in part because States have generally been able to employ existing LAWS without running afoul of the LOAC.

The Law of Autonomy

The Military Law Review recently published a thorough digest of how the LOAC and U.S. policy already effectively govern the employment of LAWS: “Pacing China: LAWS and Object-Based Targeting.” The article introduces U.S. policy on the design and development of LAWS, which requires appropriate levels of human judgment over the use of force and requires that those who employ LAWS do so in accordance with the LOAC and applicable rules of engagement (ROE). The author then asserts that U.S. policy-based weapon system legal reviews ensure that LAWS, by their nature and design, will not violate the LOAC. The article thereafter demonstrates how existing and developmental LAWS are capable, through sophisticated programming and the implementation of feasible precautions (or other control measures), of complying with the principles of necessity, humanity, proportionality, and distinction.

There is no need to further summarize the Military Law Review article’s thorough analysis of the law applicable to LAWS in this post. However, it is worth highlighting the lone, conspicuous issue with “Pacing China: LAWS and Object-Based Targeting.” The article unnecessarily limits its review of whether LAWS can comply with LOAC to LAWS programmed only for object-based targeting. This approach is problematic for two reasons.

First, by addressing only LAWS that can engage in object-based targeting, the article lends a modicum of credence to proposals that LAWS should only be employed in such a manner. The article at one point proposes that LAWS “could be designed to only target objects that, by their nature, are military objects.” This language mirrors the ICRC’s proposed restrictions on LAWS and thereby inadvertently serves those who seek to limit the advantages offered by LAWS. The better proposition is that such a limitation, if desirable, be programmed into LAWS as an option rather than as an outright restriction on the capabilities of a given system. Such an option could enable commanders and policymakers to limit the application of force by LAWS when strategic or operational imperatives dictate, similar to the manner in which ROE have been implemented by U.S. forces in recent operations. The sketch below illustrates this distinction.
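The following is a hypothetical illustration of the difference between a hard design restriction and a selectable option: object-based-only targeting is treated as one ROE-like engagement mode among several rather than a limit built into the system. The mode names and target categories are assumptions made for this post, not drawn from the cited article or from any actual system.

```python
# Hypothetical sketch: object-based-only targeting as a selectable engagement
# mode (akin to an ROE setting) rather than a hard-coded design restriction.

from enum import Enum, auto

class TargetCategory(Enum):
    MILITARY_OBJECTIVE_BY_NATURE = auto()  # e.g., tanks, artillery, radar sites
    MILITARY_OBJECTIVE_BY_USE = auto()     # dual-use objects currently used militarily
    PERSONNEL = auto()                     # combatants and direct participants in hostilities

class EngagementMode(Enum):
    OBJECTS_BY_NATURE_ONLY = auto()  # the narrower, ICRC-style limitation, selected as an option
    ALL_LAWFUL_TARGETS = auto()      # the full target set otherwise permitted by the LOAC

def category_permitted(category: TargetCategory, mode: EngagementMode) -> bool:
    """Return whether the selected engagement mode permits this target category.
    Every engagement remains subject to the usual LOAC analysis regardless of mode."""
    if mode is EngagementMode.OBJECTS_BY_NATURE_ONLY:
        return category is TargetCategory.MILITARY_OBJECTIVE_BY_NATURE
    return True

# The same system supports the narrower mode when strategic or operational
# imperatives dictate, without permanently forfeiting the broader capability.
print(category_permitted(TargetCategory.PERSONNEL, EngagementMode.OBJECTS_BY_NATURE_ONLY))  # False
print(category_permitted(TargetCategory.PERSONNEL, EngagementMode.ALL_LAWFUL_TARGETS))      # True
```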

Second, through statements and submissions to the GGE on LAWS and to the UN General Assembly, the United States has clearly and consistently articulated its position on LAWS and the requirements it imposes on their design, development, and employment. In 2016, 2018, 2019, and 2023, the United States scoped the issue by asserting that its “approach to LAWS starts with the recognition that [LOAC] already provides the applicable framework of prohibitions and regulations on the use of LAWS in armed conflict.” The United States added that the LAWS it develops and employs will be “designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” U.S. policy also requires significant internal review and testing to ensure the United States produces reliable weapons engineered to perform as expected. Lastly, the United States has regularly emphasized that LAWS will provide greater humanitarian benefits in conflict because of their ability to increase awareness of civilians and civilian objects on the battlefield and to reduce the need for immediate fires in self-defense.

Conclusion

U.S. statements and submissions on LAWS have never capitulated to calls for restricting the types of engagements that LAWS can carry out. The U.S. position, although not explicitly stated in this manner, is that LAWS are legally capable of carrying out all forms of lethal engagements. The question of whether the more exquisite LAWS of the future are technologically capable of complying with LOAC and applicable U.S. policy/ROE will be answered in due time.

***

MAJ Bryan Jack currently advises on national security and intelligence law for the Headquarters, U.S. Army Transformation and Training Command.

The views expressed are those of the author, and do not necessarily reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense. 

Articles of War is a forum for professionals to share opinions and cultivate ideas. Articles of War does not screen articles to fit a particular editorial agenda, nor endorse or advocate material that is published. Authorship does not indicate affiliation with Articles of War, the Lieber Institute, or the United States Military Academy West Point.

Photo credit: Ashwin Kumar