Rules of Engagement as a Regulatory Framework for Military Artificial Intelligence

by Tobias Vestner | Aug 27, 2024


Proper regulatory frameworks are required for the development, deployment, and use of artificial intelligence (AI) for military purposes. Any such frameworks must comply with international law. In addition, because existing international law does not provide specific guidance on military applications of AI, regulatory frameworks should support international law’s application and implementation.

This post proposes that rules of engagement (ROE) can serve as a framework for regulating the use of military applications of AI, based on an analysis of core instruments for preparing and conducting military operations with AI. The post argues that ROE respond well to the regulatory needs of AI, notably because they represent a holistic, specific, and concrete yet flexible framework. ROE can be particularly useful for regulating human-machine teaming and human control over AI systems.

The Need for Military AI Regulations

The emergence of military applications of AI requires new regulations because their inherent features and characteristics differ from those of traditional military hardware and computer programmes. This is notably the case when AI enables autonomous decision-making by weapon or decision support systems.

AI systems with at least partly autonomous capabilities are currently being developed or are already deployed in military operations. The U.S. AI-guided Long Range Anti-Ship Missile (LRASM) is reportedly capable of autonomously selecting and engaging targets. In Libya in March 2020, a Turkish Kargu-2 drone allegedly followed and engaged human targets without a human operator’s direct control. The Israeli Lavender and Gospel systems, which support targeting decisions, also rely on AI.

Various international initiatives and processes are thus seeking to establish international regulatory frameworks for the development, deployment, and use of military AI. These initiatives and processes include the Group of Governmental Experts on Lethal Autonomous Weapon Systems, the summits on Responsible AI in the Military Domain, the U.S.-led Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, and the related process at the UN General Assembly.

Military applications of AI also require regulations at the domestic level. Numerous States and NATO have therefore adopted policies on the use of AI for defence and military purposes, such as the U.S. Department of Defense Ethical Principles for Artificial Intelligence and the NATO Principles of Responsible Use of Artificial Intelligence in Defence. Yet, both international and national rules and policies need proper regulatory frameworks that specify and operationalise their objectives and substance while responding to the specific nature of AI.

Rules of Engagement

ROE constitute a well-suited tool for regulating military applications of AI. Modern militaries widely use ROE to delineate the circumstances for and limitations on the deployment of military forces. As such, ROE allow armed forces to translate political and military strategic objectives and international legal requirements into operational contexts. In this sense, according to the San Remo Handbook on Rules of Engagement, ROE are “a mix of military and political policy requirements [that] must be bounded by extant international and domestic legal parameters.”

More specifically, ROE provide authorisation for and/or limits on, among other things, “the use of force, the positioning and posturing of forces, and the employment of certain specific capabilities.” They tend to contain basic elements, including general instructions for the commander and general political and legal issues relevant to the operation. Instructions can also cover warnings prior to the use of force, the use of specific weapons, and restrictions on and permissions for using force to defend civilians and civilian objects and/or attack certain military objectives. ROE can also state which commanders in the chain of command are responsible for authorising specified actions.

ROE are part of a larger regulatory framework related to the deployment of military forces and the use of force. As such, they interact with other types of military directives, notably targeting and tactical directives. Targeting directives provide specific instructions on targeting, including restrictions on objects that can be targeted and the minimisation of collateral damage. Tactical directives are “orders directed either at the force as a whole or at specific types of units or weapon systems, regulating either the conduct of specific types of missions within the operation as a whole or restricting the use of specific weapon systems during the conduct of the operation.”

ROE also take diverse forms. They may be issued as execution and deployment orders, operational plans, and standing directives. They are usually written and managed by military legal advisers. Generic ROE and template documents, such as the San Remo Handbook on Rules of Engagement and NATO’s MC362/1 Document, can serve as a basis for drafting ROE. While ROE are generally not disseminated to all ranks, soldiers are often provided with memory cards containing simplified versions of the ROE they need to be cognisant of.

Applying ROE to Military AI

ROE can serve as a proper regulatory framework for the use of military applications of AI, notably because they respond to central requirements that policymakers have identified for AI regulation. Indeed, international and national discussions on the regulation of military AI have led to the insight that regulations need to be holistic, specific, and concrete, yet flexible.

Firstly, regulations governing the use of military AI need to be holistic in the sense that they need to reflect a variety of considerations beyond purely military or legal ones. Ethical considerations notably remain a fundamental issue to be addressed in such regulations. This particularly concerns AI systems’ level of autonomy and the form of meaningful human control over them, but, among other issues, it also covers AI systems’ acceptable levels of predictability and error. ROE are a good regulatory framework notably because they represent an amalgamation of political, military, and legal considerations and are capable of including further considerations, such as ethical limitations that are specific to a particular technology.

Secondly, regulations need to be specific because there is no “one size fits all” formula for the use of AI. AI for highly autonomous weapon systems needs different parameters than AI applications for the management of health care, for instance. But a specific type of technology may also need different operational limitations based on what the technology is used for or how it is used. Different ROE—or comparable rules of behaviour—can be enacted for different use cases. While international law, policies, military doctrines, and military directives can determine the fundamental normative baselines and standards for using AI in the military context in general, ROE can specify the regulations for specific applications.

Thirdly, regulations need to be both concrete and flexible, because they need to properly guide behaviour in different situations. ROE can be written and adopted for every specific mission and can comprise specifications for particular situations that could arise during a particular mission. Because events may evolve during operations, the specifications may need to be changed. ROE usually include a cascading letter of authority that defines who has the authority at a specific level of command to change defined parameters for action. The parameters can thus be adapted according to the needs of the situation, which can be useful notably for guiding the use of AI systems that have the capacity to learn based on new inputs.

ROE on Human-Machine Teaming and Human Control

In the context of military operations and missions, ROE can notably define the parameters for human-machine teaming and human control over AI systems. ROE can establish how a commander or operator needs to monitor and control the system during deployment. For example, ROE can determine a geographical zone or a list of potential tasks for which operators are authorised to use AI systems. ROE can also define the form of human control necessary in specific situations, such as direct, shared, or supervisory control, and how it will be implemented. This is particularly relevant when AI is used in the context of targeting, notably when States have decided not to allow AI systems to make targeting decisions in general, or specifically not against humans.

ROE can also be used to define time checks or other limits, such as pre-set instructions to the system and/or operator to engage or not engage specific targets. ROE can further address, or refer to other sources such as manuals and directives covering, how to implement various forms of human control. Similarly, ROE can provide that a system should flag unexpected events or issues to the operator, and specify when the operator must inform his or her superior of such events or issues. ROE may also limit commanders’ or operators’ authority, which may force them to refer up the chain of command for authorisation. This can be a significant role for ROE covering human-machine teaming in military operations, notably when forces are confronted with unanticipated situations or issues for which the system or its use has not previously been tested, trained, and/or authorised.

ROE can also define when commanders must seek legal advice. This can be necessary in particular when human-machine teaming and human control are subject to the law of targeting. ROE can also specify whether a new legal review of an algorithm is necessary if it develops new features with potential new legal implications.

Conclusion

In sum, ROE can be a useful tool to guide the use of military AI, notably because they represent a holistic, specific, and concrete but flexible regulatory framework. ROE can complement and implement policies, regulations, and guidelines at the higher echelon, thereby enabling the transposition of military, political, legal, and ethical principles and objectives into concrete action. ROE can be particularly useful for human-machine teaming and the concretisation of meaningful human control over AI systems, notably in the context of targeting. ROE can thereby function as a core instrument for preparing and conducting military operations using AI. Furthermore, they can support the concretisation, application, and implementation of existing and potential future international law.

***

Dr Tobias Vestner is the Director of the Research and Policy Advice Department and the Head of the Security and Law Programme at the Geneva Centre for Security Policy (GCSP).

Photo credit: Unsplash
