Nova 2, Legion-X, and the AI Political Declaration
The race to develop autonomous weapons systems for military deployment has entered a new phase with the use of artificial intelligence (AI) technology to navigate small drones in active combat. In the latest round of the Gaza conflict, which commenced on October 7, the Israel Defense Forces have reportedly fielded the IRIS robot, known as a “throwbot,” capable of driving through Hamas’s vast underground tunnel network, using sensors to detect objects and people, relaying pictures back to its operator, and potentially detonating booby-traps planted by Hamas.
Israel also has at its disposal Nova 2, an AI-enabled small drone developed by U.S.-based Shield AI. Nova 2 can autonomously navigate and map complex subterranean environments or multi-story buildings without GPS, communications, or a human pilot. Another capability is Legion-X, developed by Israel’s Elbit Systems and designed to control multiple drones that can each carry a grenade-sized explosive charge, turning them into loitering munitions. The combat effectiveness of these drone capabilities will be put to the test over the coming weeks as Israeli forces engage in subterranean warfare against Hamas.
Meanwhile, forty-six States have endorsed the “Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy” (the Declaration), as announced by U.S. Vice President Harris on November 1, 2023. A set of non-legally binding guidelines unveiled in February, the Declaration is designed to promote best practices for the responsible use of AI in a defense context without altering existing legal obligations or creating new obligations under international law. As Shawn Steene and Chris Jenks note in their commentary on the Declaration, existing rules under the law of armed conflict (LOAC), including the obligation to conduct legal reviews of new weapons, are an integral part of responsible use. The unique self-learning features of AI-enabled military capabilities necessitate more rigorous testing and safeguard mechanisms. Relevant legal and ethical considerations must be embedded across the capabilities’ entire life cycles, from the conception and design stages through fielding, in order to put appropriate safeguards in place.
While the Declaration is by and large non-binding, one potential exception caught my attention. It is the safeguarding requirement, appearing as Point J of the Declaration, which provides:
States should implement appropriate safeguards to mitigate risks of failures in military AI capabilities, such as the ability to detect and avoid unintended consequences and the ability to respond, for example by disengaging or deactivating deployed systems, when such systems demonstrate unintended behavior.
This post considers the extent to which these safeguard mechanisms are required under current international law and the legal implications of the Declaration when their implementation goes beyond what is required under existing law. As will be shown, it is time to move on from the hollow notion of human control as the linchpin of regulatory debates regarding autonomous weapons systems. We must, as many working in the field already do, shift our focus toward “systems of control” to ensure that AI-enabled military capabilities operate within the bounds of the law.
Safeguard Mechanisms
The idea of safeguards finds its application in a few specific areas of weapons law. Anti-personnel landmines, for example, must be equipped with mechanisms that cause them to self-destruct or self-deactivate after a certain period of time, unless their presence is clearly marked, protected, and monitored to prevent civilian harm. Automatic submarine contact mines must be constructed so as to become harmless as soon as control over them is lost. As I noted in a previous post, this idea has the potential to gain broader agreement among States, consistent with the requirement of cancellation and suspension under Article 57(2)(b) of Additional Protocol I.
A regulatory challenge is to determine when, or under what circumstances, safeguard mechanisms should be activated to stop AI-enabled capabilities from operating. Time-bound limitations would be an ineffective trigger: the weapon’s performance may degrade well before the time limit, or its operational value may be lost if the cutoff is set too short. One might instead envisage a safeguard mechanism triggered when human control over the weapon is lost. But such a control-based trigger would throw the baby out with the bathwater, as one of the crucial advantages AI-enabled capabilities offer is the ability to operate autonomously even when communications are disrupted. A safeguard mechanism triggered by “unintended behavior,” as envisaged by the Declaration, is difficult to calibrate because a variety of factors, anticipated or adversarial, contribute to an actual or perceived degradation or loss of intended functionality.
From a legal perspective, critical thresholds exist where AI-enabled weapons systems, such as grenade-carrying Legion-X swarm drones, lose the ability to identify intended targets and to distinguish them from civilians or civilian objects. Given the centrality of distinction to the LOAC, States could plausibly argue that AI-enabled capabilities designed to cause a lethal or destructive event must maintain a high degree of reliability at all times, and that even a slight degradation of functionality should be sufficient to trigger a safeguard mechanism. On the other hand, a degradation of navigational functionality can be tolerated to a large degree if it simply means, for example, that a Nova 2 drone loses the ability to identify or circumvent obstacles.
Shawn Steene and Chris Jenks indeed note that the potential consequences of failures must be considered in determining an appropriate safeguard, with more stringent standards appropriate when the potential consequences are significant. The idea of differentiated standards makes sense as a general approach. However, the question that system designers and developers must grapple with is how those standards can be expressed and programmed in objective and measurable terms. Sooner or later, they will need to develop safeguard metrics as a technical guide for the classification of failure events and the corresponding safeguarding procedures.
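To make the point concrete, the minimal sketch below illustrates one way such a safeguard metric could be rendered in objective, machine-readable terms. It is purely hypothetical: the failure categories, consequence classes, threshold values, and responses are my own assumptions for illustration, drawn neither from the Declaration nor from any fielded system such as Nova 2 or Legion-X.

```python
# Hypothetical sketch of a "safeguard metric": a table mapping a detected
# failure event and a system's consequence class to a safeguard response.
# All categories and thresholds are illustrative assumptions only.

from dataclasses import dataclass
from enum import Enum


class ConsequenceClass(Enum):
    NAVIGATION_ONLY = "navigation-only"   # e.g., a mapping or reconnaissance drone
    LETHAL_PAYLOAD = "lethal-payload"     # e.g., a loitering munition


class SafeguardAction(Enum):
    CONTINUE = "continue mission"
    DEGRADED_MODE = "restrict to non-destructive tasks"
    DISENGAGE = "abort engagement, return or hold"
    DEACTIVATE = "render system inert"


@dataclass
class FailureEvent:
    target_id_confidence: float   # 0.0-1.0 confidence in target identification
    navigation_error_m: float     # estimated positional error in meters


def select_safeguard(event: FailureEvent, cls: ConsequenceClass) -> SafeguardAction:
    """Apply stricter thresholds where the potential consequences are lethal."""
    if cls is ConsequenceClass.LETHAL_PAYLOAD:
        # Even a slight degradation of the ability to distinguish targets
        # triggers a safeguard when the system can cause a destructive event.
        if event.target_id_confidence < 0.95:
            return SafeguardAction.DISENGAGE
        if event.navigation_error_m > 5.0:
            return SafeguardAction.DEACTIVATE
        return SafeguardAction.CONTINUE
    # A navigation-only system can tolerate far more degradation.
    if event.navigation_error_m > 50.0:
        return SafeguardAction.DEGRADED_MODE
    return SafeguardAction.CONTINUE


# Example: the same sensor degradation disengages an armed swarm drone
# but lets a mapping drone keep operating.
print(select_safeguard(FailureEvent(0.90, 2.0), ConsequenceClass.LETHAL_PAYLOAD))
print(select_safeguard(FailureEvent(0.90, 2.0), ConsequenceClass.NAVIGATION_ONLY))
```

Any real metric would of course be far more elaborate, but expressing differentiated standards in some such explicit form is what would allow them to be tested, verified, and audited across a capability’s life cycle.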
Normative Implications
Safeguard requirements would be no more than a good faith implementation of existing rules under the LOAC if the envisaged triggering events were limited to cases where AI-enabled capabilities cannot perform their mission without falling foul of relevant obligations. However, the Declaration’s choice of “unintended behavior” as the basis for implementing appropriate safeguards suggests that its intention is broader than mere compliance with existing law. If that is the intention, the Declaration may not be entirely devoid of a norm-creating character and could potentially develop into a legal obligation that dictates the design and development of AI-enabled capabilities, not only for endorsing States but more broadly.
As a political document, the Declaration is not intended to be a legally binding international agreement. The use of hortatory language (“should”), rather than compulsory language for establishing an obligation (“shall”), confirms this characterization. Its emphasis on compliance with applicable international law reinforces the view that the commitment to the Declaration does not alter existing obligations. However, the endorsing States’ commitment to safeguard mechanisms, even in cases where their existing obligations do not require such safeguards, seemingly suggests an intention to impose restrictions that do not currently exist.
The public expression of such an intention could be construed as a unilateral declaration committing the endorsing States to a legally binding obligation. The International Court of Justice has indeed recognized the legally binding nature of a unilateral declaration where official statements are made publicly and express an intent to be bound in clear and specific terms (Nuclear Tests, paras. 43, 51). On that basis, the Court found that France had assumed a legal obligation to cease atmospheric nuclear testing.
Such an intent to be bound must be ascertained through an interpretation of the Declaration in light of its text, all the factual circumstances in which the Declaration was made, and other States’ reactions to it (ILC Guiding Principle 3). As discussed above, the text of Point J indicates a commitment that goes beyond what States are required to do under existing obligations. The United States announced the Declaration during the Responsible AI in the Military Domain Summit (REAIM 2023) in The Hague as part of the dialogue to forge a common understanding of the opportunities, dilemmas, and vulnerabilities associated with military AI. It remains to be seen how non-endorsing States might react to this commitment, but it is not unreasonable to expect them to be critical of the endorsing States’ use of AI-enabled military capabilities without safeguards.
Even if the Declaration itself does not create any new obligations, its endorsement still forms part of State practice (ILC Draft Conclusion 6) which, when combined with opinio juris, could develop into a new rule of customary international law. The endorsing States are expected to implement their commitments and begin integrating some type of safeguard mechanism into military capabilities that utilize AI technology. If a uniform set of safeguard metrics, as envisaged above, is developed to guide the development of military AI capabilities, uniform and widespread State practice may emerge and crystallize into customary international law requiring States, as a legal obligation, to implement safeguard mechanisms in accordance with those metrics.
This potential development may well be strategically beneficial, especially if the endorsing States are to implement safeguard measures as a matter of policy anyway. Strategic competition and mutual distrust among Great Powers have stymied arms control efforts to conclude international agreements in strategically significant areas, such as cyber, space, and autonomous weapons systems. Although we are unlikely to see any treaty restricting the use of AI capabilities, safeguard requirements developed under customary international law would be equally binding on non-endorsing States. This means that the People’s Republic of China, Iran, and Russia would also be bound by safeguard requirements unless they are prepared to express their dissent. Even if these States were to contest the applicability of safeguard requirements, uniform and widespread practice would lend credence to the claim that the military AI capabilities they develop or use without safeguards are unlawful.
From “Meaningful Human Control” to “Systems of Control”
Whether it is required as a new obligation or implemented as a matter of policy, the introduction of safeguard mechanisms is a positive development in the regulation of AI capabilities for military purposes. It follows in the footsteps of earlier weapons law agreements that curtail the undesirable risk of adverse impacts. It also marks an important milestone in the shift away from the hollow notion of human control in regulatory debates on AI. The idea of safeguards necessarily envisages the use of AI capabilities in circumstances where human control is restricted or absent.
The notion of meaningful human control was coined in the early stages of civil society advocacy on autonomous weapons systems. The idea that humans need to retain control over, and responsibility for, autonomous systems appealed to many humanitarian activists. Its simplicity also appealed to many diplomats debating the regulation of lethal autonomous weapons systems, precisely because it can be interpreted in so many different ways. Although it may have served its purpose in abstract debates, those involved in the design, development, acquisition, or deployment of AI capabilities require much more specific guidance.
Perhaps the human control debate was a necessary step to facilitate conversations and reveal the complexities of regulating the development and use of autonomous weapons systems. In this respect, it is akin to the distinction between offensive and defensive capabilities debated during the arms control negotiations of the 1920s and 1930s, which was ultimately abandoned due to its various conceptual flaws. The reality of modern warfare, illuminated by the development of Nova 2 and Legion-X drones, makes plain the need for a more sophisticated approach to the regulation of AI capabilities.
An alternative approach has already emerged that focuses on “systems of control” across the whole life cycle of AI capabilities. Originally proposed by Australia, the idea hinges on the system of processes and procedures that many military organizations already employ to achieve their intended military effect in compliance with their legal obligations and policy objectives. Although broad-brush when presented in 2019, the idea appears to be gaining traction due to its potential for setting a workable framework within which various control measures can be implemented and tailored. Indeed, various control measures are currently being contemplated to establish technical baselines in areas such as data management, risk assessment, and verification, validation, and testing procedures for acquisition. The introduction of safeguard mechanisms will form an integral part of these efforts to develop AI-tailored systems of control.
Concluding Observations
The growing list of States endorsing the Declaration since its announcement in February is a testament to the general consensus emerging on the regulatory framework for the design, development, acquisition, and deployment of military AI capabilities. Although the safeguard requirement has the potential to become legally binding, more effort is needed to bridge the large gap between high-level political direction and the technological requirements for developing AI. Field experience with the deployment of new AI-enabled capabilities, such as Nova 2 and Legion-X, will further inform how various legal and ethical considerations need to be integrated into systems of control from the early stages of conception and design.
***
Hitoshi Nasu is a Professor of Law in the Department of Law at the United States Military Academy.
Photo credit: Lilykhinz via Wikimedia Commons