Responsible AI Symposium – Responsible AI and Legal Review of Weapons

by Michael W. Meier | Dec 27, 2022


Editor’s note: The following post highlights a subject addressed at an expert workshop conducted by the Geneva Centre for Security Policy focusing on Responsible AI. For a general introduction to this symposium, see Tobias Vestner’s and Professor Sean Watts’s introductory post.


 

States conduct legal reviews to ensure that weapons, means, or methods of warfare comply with legal obligations. Yet the question arises whether a legal review is also an appropriate mechanism for ensuring a weapon is consistent with the DoD AI Ethical Principles.

The question is timely: in June 2022, the Department of Defense released its Responsible AI Strategy and Implementation Pathway. One of the Strategy’s lines of effort asks, in part:

[W]hether or how legal review processes can support implementation of the DoD AI Ethical Principles, including the review of the legality of weapons per DoD [Directives] 2311.01, 5000.01, 3000.03E, and 3000.09.

This post addresses this question. It argues that the legal review of weapons is not the appropriate mechanism to determine whether a particular weapons system complies with the DoD AI Ethical Principles. If Responsible AI (RAI) is to be effective, this determination cannot simply become another question the legal review addresses. Rather, everyone involved must assess and evaluate the issue throughout the acquisition process.

This post will first examine the legal review process. It will then offer three reasons why the weapons review is not the appropriate vehicle to ensure implementation of the DoD AI Ethical Principles: (1) weapons reviews take place at specific times in the acquisition process, often as one of the last steps before fielding; (2) lawyers are not ethicists, and implementation of ethical principles must be considered by all personnel throughout the process; and (3) many systems that will incorporate AI are not “weapons” and therefore will not receive a legal review.

DoD Weapons Review Process

The United States is not a party to Additional Protocol I to the 1949 Geneva Conventions, so there is no explicit treaty obligation to conduct legal reviews of new weapons, means, or methods of warfare. Some commentators note that there is an “implied obligation” to conduct such reviews, as indicated by the practice of certain States prior to the adoption of Additional Protocol I. The International Committee of the Red Cross, in its 2006 Guide to the Legal Review of New Weapons, Means and Methods of Warfare, also takes the view that the requirement to assess the “legality of all new weapons, means and methods of warfare . . . is arguably one that applies to all States, regardless of whether or not they are party to Additional Protocol I.”

The DoD Law of War Manual reiterates that long-standing U.S. policy requires a legal review of the intended acquisition of a weapon system to ensure its development and use are consistent with the law of armed conflict (LOAC). This policy predates the adoption of Additional Protocol I. Each military service has issued regulations implementing the policy. For example, Army Regulation 27-53 sets forth the requirements for legal reviews of new weapons and states that the legal review of the acquisition or procurement of a weapon system should occur at an early stage of the acquisition process, including the research and development phases, to ensure its legality under LOAC, domestic law, and international law.

The legal review requires the reviewing lawyer to consider three questions to determine whether the acquisition or procurement of a weapon system is prohibited: (1) whether a specific rule of law, whether a treaty obligation or customary international law, prohibits or restricts use of the weapon; (2) if there is no specific prohibition or restriction, whether the weapon’s intended use is calculated to cause superfluous injury; and (3) whether the weapon is inherently indiscriminate.

Part of the calculus of such a review is to delineate whether there are legal restrictions on the weapon’s use that are specific to that type of weapon, or whether other practical measures are needed, such as training or rules of engagement tailored to the weapon.

Accordingly, whether a State conducts a weapons review under Article 36 of AP I (as a State Party) or as a matter of policy (as the United States does), the legal review of a new weapon, means, or method of warfare is a critical component of ensuring that such weapons are used in compliance with that State’s international legal obligations.

Although neither Article 36 nor DoD policy specifies how the legal review must take place, the limited number of States that actually conduct weapons reviews (estimated to be around 20) have implemented a multi-disciplinary examination of the technical description of the weapon system as well as an analysis of the various tests used to evaluate its performance. For the DoD, this evaluation begins at the earliest stages of the development or acquisition process, but a final legal review must be conducted prior to fielding of the weapon system.

This process, which has been in place since 1974, has worked very well. However, the question remains how this process will work with systems that incorporate autonomous features, specifically AI. Others have addressed this topic (here and here), so this post will focus on why the legal review process cannot be the primary mechanism to ensure RAI.

DoD Weapons Reviews Take Place at Specific Times in the Acquisition Process

For the Department of the Army, the legal review of new weapons generally should take place twice during the acquisition process. An initial review of a new weapon should be done “at the earliest possible stage” before full-scale development. Army Regulation 27-53 provides that a final legal review “must be made prior to the award of the initial contract for production” to determine whether the weapon’s intended use is consistent with all applicable U.S. domestic law and international legal obligations of the United States, including arms control obligations and LOAC.

Although the Regulation specifies the two points at which the legal review of a weapon may occur, depending on the weapon system there may be additional, informal opportunities for the legal advisor and developers to address legal issues as the weapon moves through the acquisition process. That does not, however, occur in every instance.

Jane Pinelis, the Chief of AI Assurance for the DoD, notes that “responsible AI is, kind of, everybody’s job in the department.” While many RAI tasks occur during testing and evaluation, she observes, there are many more pieces that require everyone across DoD to take responsibility.

Because the Regulation requires only one legal review prior to fielding, for RAI to be effective it must be part of the entire acquisition process and not simply left for the lawyer to check a box during the legal review.

Lawyers Are Not Ethicists

Lawyers are good at providing legal advice. In conducting legal reviews of weapons, legal advisors determine whether the weapon and its intended functioning are consistent with applicable law and policy. Yet RAI requires more than lawyers to address its concerns. There must be a senior-level working group within DoD that is responsible for AI ethics. For example, Twitter had an AI ethics team, or at least it did until Elon Musk fired its members.

DoD should also consider forming an AI Ethics Team to help drive RAI within the Department. Its members would need the necessary skills, experience, and knowledge to properly implement AI ethics. They should include four groups: senior policy leaders, technologists, legal experts, and ethicists. Each group would bring a unique skill set, and collectively they would be best placed to understand the issues AI can help the Department solve.

Senior policy leaders would set policy, help mitigate risk, and ensure that AI is used in a way that helps the Department achieve its objectives. Technologists could advise on what is technically feasible with respect to AI. Lawyers could advise on how existing law and regulations affect AI, in particular in weapon systems. Ethicists would provide guidance on the ethical and reputational risks involved with AI and advise on bias and other issues that must be addressed for RAI.

Responsible AI is everyone’s job. All of these skill sets must be brought together to ensure that AI continues to be implemented in a positive way. As noted by then-Lt. Gen. Jack Shanahan, the DoD “owes it to the American people and our men and women in uniform to adopt AI ethics principles that reflect our nation’s values of a free and open society.”

Many AI Systems Are Not “Weapons”

The third, and most important, reason why the legal review of weapons cannot be the mechanism for RAI is that many of the systems incorporating AI are not “weapons.” This is where the United States differs from those States that are party to Additional Protocol I and conduct reviews under Article 36. There is no explicit requirement for States not party to AP I to review new “means or methods of warfare.” In its response to a Stockholm International Peace Research Institute questionnaire, the United States noted that “DoD policy does not establish a requirement to review the lawfulness of new methods of warfare as such” or “impose a specific requirement to review the legality of military doctrine.” One need look no further than Project Convergence 2020 (PC20) and 2021 (PC21).

Project Convergence 2020 was the first in a series of annual demonstrations that use next-generation AI, network, and software capabilities to show how the Army intends to fight. In that demonstration, the Army used three systems, TITAN, Prometheus, and FIRESTORM, to reduce the time needed to take sensor data from all domains, transform it into targeting information, and select the best weapon system to respond to a threat from 20 minutes to 20 seconds. None of these three systems constitutes a weapon.

TITAN (Tactical Intelligence Targeting Access Node) is a portable ground station under development that would take data from space-based sensors, combine it with data from other domains, and use AI to create targeting data. Prometheus takes the data from TITAN and uses an AI algorithm to help identify targets. Once targets are found, Prometheus sends their coordinates to FIRESTORM (Fires Synchronization to Optimize Responses in Multi-Domain Operations).

FIRESTORM knows the positions and capabilities of weapons connected to the network and uses AI to match targets to weapons. The system then pushes those firing solutions to the battlefield commander, who selects the target. During PC20, FIRESTORM could find only six firing solutions for a particular target. By PC21, FIRESTORM used an enhanced algorithm to find 21 firing solutions for a target, made possible because many more sensors and weapons were connected to the network.

TITAN, Prometheus, and FIRESTORM demonstrate the incredible potential of AI on the battlefield. They can dramatically reduce the time it takes to identify and engage an enemy threat. Each of these AI systems provides intelligence and analysis of a particular threat, but none selects or engages a target on its own. The battlefield commander still decides whether to select and engage a particular target. Accordingly, none of these systems requires a weapons review.
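To make the architectural point concrete, the sketch below models a generic AI-enabled targeting decision-support pipeline of the kind just described. It is a minimal, hypothetical illustration, not a description of TITAN, Prometheus, or FIRESTORM (whose internals are not public); every name, data structure, and threshold in it is an assumption chosen for clarity. The structural point it shows is that the automated components fuse sensor data, identify candidate targets, and rank firing solutions, while the engagement decision remains a human input outside the automated flow.

```python
# Hypothetical sketch of an AI-enabled targeting decision-support pipeline.
# NOT a depiction of TITAN, Prometheus, or FIRESTORM: all names, data
# structures, thresholds, and scoring logic are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Target:
    target_id: str
    coordinates: tuple  # (lat, lon)
    confidence: float   # model confidence that this is a valid military objective


@dataclass
class FiringSolution:
    target_id: str
    weapon: str
    score: float  # suitability score produced by the matching step


def identify_targets(sensor_tracks):
    """Fuse multi-domain sensor tracks into candidate targets (stand-in for AI target recognition)."""
    return [
        Target(t["id"], (t["lat"], t["lon"]), t["confidence"])
        for t in sensor_tracks
        if t["confidence"] >= 0.9  # only pass high-confidence detections forward
    ]


def rank_firing_solutions(target, weapons):
    """Rank available weapons against a target (stand-in for an AI weapon-target pairing step)."""
    solutions = [FiringSolution(target.target_id, name, score) for name, score in weapons.items()]
    return sorted(solutions, key=lambda s: s.score, reverse=True)


def engage(target, solutions, commander_approves):
    """The engagement decision stays with the human commander, outside the automated flow."""
    if not solutions:
        return f"No firing solution available for {target.target_id}"
    best = solutions[0]
    if commander_approves(target, best):
        return f"Engage {target.target_id} with {best.weapon}"
    return f"Engagement of {target.target_id} withheld by commander"


if __name__ == "__main__":
    tracks = [{"id": "T-001", "lat": 35.1, "lon": -117.3, "confidence": 0.97}]
    weapons = {"artillery_battery_A": 0.82, "attack_aviation_B": 0.64}
    for tgt in identify_targets(tracks):
        ranked = rank_firing_solutions(tgt, weapons)
        # The commander's approval is a human input, not an output of the model.
        approve = lambda t, s: input(f"Approve {s.weapon} vs {t.target_id}? [y/N] ").lower() == "y"
        print(engage(tgt, ranked, approve))
```

The design choice the sketch highlights is the one that matters legally: the software narrows and ranks options, but target selection and engagement are explicit human decisions, which is why systems of this kind fall outside the weapons-review trigger discussed above.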

If the weapons review were the mechanism to check implementation of the DoD AI Ethical Principles, it is easy to see that systems such as these would never receive that review. This raises the question of whether the United States should expand legal reviews to AI-enabled systems that inform human decision-making related to targeting. Doing so may not be feasible, or it may prove too burdensome, but as of today such systems would not require a legal review. This is why ensuring Responsible AI must be broader and involve stakeholders beyond the legal advisor.

Conclusion 

The answer to whether or how legal review processes can support implementation of the DoD AI Ethical Principles is mixed. Responsible AI requires input and support from everyone in the acquisition process. It starts with leadership, preferably through a group such as an AI Ethics Team, which would provide overarching policy guidance to weapons developers. Lawyers and the legal review process can play a part in ensuring RAI, but they cannot be the sole mechanism for doing so.

***

Michael W. Meier currently serves as the Special Assistant to the Army Judge Advocate General for Law of War Matters. As the senior civilian adviser, he advises on legal and policy issues involving the law of war, reviews proposed new weapons and weapons systems, serves as a member of the DoD Law of War Working Group, and provides assistance on detainee and Enemy Prisoner of War affairs.

 

Photo credit: Pexels
