Programming Systems Like Soldiers: Using Military Control Mechanisms to Ensure AWS Are Operated Lawfully
The time has come to move on from debating whether autonomous weapons should be banned to examining how to ensure that autonomous weapon systems (AWS) are used in a manner that complies with the law of armed conflict (LOAC). This post illustrates that although autonomous weapons pose challenges for LOAC compliance, it is possible to limit their scope of application to situations of lawful use. Identifying when and how AWS can be used lawfully enables us to take advantage of the many benefits offered by modern technology, including a reduced scope for human error and human loss of control.
What Is Autonomy?
One of the challenges in discussing autonomous systems is that there is no common understanding or definition of what autonomy is. To further complicate the discussion, it is not always clear whether people are talking about systems that exist now, are expected to exist in the near future, or are currently technologically impossible.
The approach taken here is that autonomy refers to a system's ability to behave in a certain manner or to achieve pre-defined goals without the need for further instructions from humans. This understanding of autonomy does not preclude the possibility of human oversight or override should the need arise, but it does not depend on it either. Autonomy can be employed to varying degrees and for a variety of tasks. AWS are weapon systems that employ such technology in one or more elements of the decision-making cycle concerning the use of weapons. The most controversial aspect of AWS is probably their ability to make the determination to attack autonomously.
The development of autonomous technology and its use during armed conflict have great potential. Autonomous systems offer greater endurance, less risk to soldiers' lives, and lower operating costs. For instance, they can be employed in areas and situations considered too dangerous for humans; they can maintain readiness during prolonged operations; and they can carry out tactical tasks that would otherwise require a fully equipped unit with the necessary logistical and medical support.
But the potential advantages of autonomy do not benefit only soldiers. Autonomous technology can also be used to enhance protections for civilians. By way of example, autonomous systems can process large amounts of information in a short time, thereby improving situational awareness. They can also be used to scan for preapproved targets such as specific buildings, vehicles, or persons, and they can track targets for longer periods without being detected, which can provide more robust information about a target and make it easier to attack in a way that reduces harm to civilians.
Principle of Distinction
The question is, how do we make sure that civilians actually benefit from autonomous technology? The most relevant rules in this regard are the principles of distinction, precautions in attack, and proportionality.
First, AWS must pass the Article 36 new weapon review. Weapons that are inherently incapable of being used in a manner that complies with the principles of distinction or proportionality, or that are expected to cause unnecessary suffering or superfluous injury, cannot even be made available for use.
Assuming an AWS passes the Article 36 review, the first question is how to ensure the system can be directed only at lawful targets. It is common practice to use target lists to control the categories of targets military forces are permitted to attack in a particular conflict. Lists are also made of objects and persons that are restricted from attack (restricted target lists), for instance because attacking them would be politically or legally sensitive, and of those that are prohibited from attack (no-strike lists), such as hospitals.
These lists can be important tools for controlling the use of force by AWS, because they set out lawful and permitted targets as well as prohibited non-targets. Although some targets, such as military vehicles, buildings, and equipment, are generally easy to identify, other lawful targets can be more complicated. Objects that become military objectives through their use may regain their protection once that use ends, and the determination that objects are military objectives by location or purpose is usually context dependent. Civilians taking a direct part in hostilities, and the need to distinguish them from other civilians, add further complexity.
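To make the logic concrete, the following minimal sketch (in Python) shows how such list-based screening of a candidate target might look in software. The list entries and category names are hypothetical, chosen only to illustrate the control flow; they do not reflect any actual system or data.

```python
# Minimal sketch: screening a candidate target against target lists.
# All entries and category names below are hypothetical illustrations.

NO_STRIKE_LIST = {"city_hospital", "aid_warehouse"}      # prohibited from attack
RESTRICTED_TARGET_LIST = {"tv_broadcast_tower"}          # needs higher-level approval
APPROVED_TARGET_LIST = {"enemy_tank", "ammo_depot_7"}    # vetted lawful targets

def screen_target(target_id: str) -> str:
    """Return the engagement decision for a candidate target.

    Prohibitions are checked before permissions, so a mislabeled
    entry fails safe; anything not listed defaults to no engagement.
    """
    if target_id in NO_STRIKE_LIST:
        return "PROHIBITED"            # never engage
    if target_id in RESTRICTED_TARGET_LIST:
        return "REFER_TO_COMMANDER"    # human decision required
    if target_id in APPROVED_TARGET_LIST:
        return "APPROVED"              # may proceed to further checks
    return "NOT_ON_LIST"               # default: do not engage

print(screen_target("enemy_tank"))      # APPROVED
print(screen_target("city_hospital"))   # PROHIBITED
print(screen_target("unknown_object"))  # NOT_ON_LIST
```

The ordering is the design point: prohibitions override permissions, and anything unlisted defaults to no engagement.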
These challenges are, however, not unique to attacks carried out by AWS, and control mechanisms are already in place to help ensure soldiers use force only when it is lawful. One way to address the fact that the status of potential targets may change is to apply a time limitation on how long targets are approved for attack. This helps ensure that the determination that something is a lawful target remains valid at the time of attack. For example, while bridges in a particular area may become lawful targets because opposing forces are expected to cross them at a certain time, bridges far away from any current or expected fighting may not be lawful targets. Including a time and geographical limitation on an AWS's authority to attack bridges is one way to help ensure that its use complies with the principle of distinction.
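Such a limitation can be expressed as a simple validity check: the system may engage an approved target only while the approval window is open and only inside the approved area. The following sketch assumes a hypothetical bridge target, a twelve-hour window, and a crude bounding-box geometry; all values are illustrative.

```python
# Minimal sketch: time- and area-bounded target approval.
# The target, window, and coordinates are hypothetical.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class TargetApproval:
    target_id: str
    valid_from: datetime
    valid_until: datetime
    # Approved engagement area as a crude bounding box.
    lat_min: float
    lat_max: float
    lon_min: float
    lon_max: float

    def authorizes(self, now: datetime, lat: float, lon: float) -> bool:
        """True only if the approval is still current AND the target
        location lies inside the approved area."""
        in_time = self.valid_from <= now <= self.valid_until
        in_area = (self.lat_min <= lat <= self.lat_max
                   and self.lon_min <= lon <= self.lon_max)
        return in_time and in_area

bridge = TargetApproval(
    target_id="bridge_12",
    valid_from=datetime(2024, 5, 1, 6, 0, tzinfo=timezone.utc),
    valid_until=datetime(2024, 5, 1, 18, 0, tzinfo=timezone.utc),
    lat_min=59.10, lat_max=59.20, lon_min=10.40, lon_max=10.55,
)

now = datetime(2024, 5, 1, 12, 0, tzinfo=timezone.utc)
print(bridge.authorizes(now, 59.15, 10.48))  # True: inside window and area
print(bridge.authorizes(now, 59.50, 10.48))  # False: outside the approved area
```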
Distinguishing non-participating civilians from those who are directly participating, and have therefore become lawful targets, is arguably even more complicated and is a topic that has itself been debated for many years. The same distinction must be made between combatants and those hors de combat. But once again, the challenge is not unique to the employment of autonomous technology, and there are relevant tools in place to aid the process. Perhaps the most important tool in this regard is the Rules of Engagement (ROE) for the operation, which can be used to limit the situations in which attacks may be carried out. For example, the ROE can limit the authority to attack persons to unambiguous situations in which an individual is actively attacking one's forces. Similarly, although a person taking a direct part in hostilities can be targetable for a longer period, especially if acting on behalf of an organized armed group, the authority to attack can be limited to situations of actual participation. The result of such ROE is a limited scope for the application of AWS, but these restrictions show that, with the right tailoring, it is possible to use AWS lawfully, even against persons. Whether this is acceptable from an ethical perspective is a different question.
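As a sketch of how such an ROE gate might be encoded, the fragment below permits engagement of a person only on positive identification combined with an unambiguous, ongoing hostile act. The conduct categories are hypothetical simplifications of what a real ROE would spell out.

```python
# Minimal sketch: an ROE gate restricting attacks on persons to
# unambiguous cases of actual participation. The conduct categories
# are hypothetical simplifications, not real ROE.

from enum import Enum, auto

class ObservedConduct(Enum):
    NO_HOSTILE_ACT = auto()
    SUSPICIOUS_MOVEMENT = auto()         # ambiguous: never sufficient on its own
    FIRING_AT_FRIENDLY_FORCES = auto()   # actual, ongoing participation

def roe_permits_engagement(conduct: ObservedConduct,
                           positively_identified: bool) -> bool:
    """Permit engagement only on positive identification combined
    with an unambiguous, ongoing hostile act."""
    return (positively_identified
            and conduct is ObservedConduct.FIRING_AT_FRIENDLY_FORCES)

print(roe_permits_engagement(ObservedConduct.SUSPICIOUS_MOVEMENT, True))        # False
print(roe_permits_engagement(ObservedConduct.FIRING_AT_FRIENDLY_FORCES, True))  # True
```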
Precautions in Attack and Proportionality
Determining that something or someone is a lawful target is, of course, only the first step in ensuring an attack is lawful. If there is a risk of harm to civilian persons, civilian objects, or the civilian population, all feasible precautions must be taken to reduce that risk. For AWS, one way of achieving this is to program the system to identify the presence of non-targets in the vicinity of the lawful target. As mentioned earlier, one of the benefits of autonomous systems is their ability to process large amounts of information in a short time. They can also collect information from locations that would be deemed too risky for soldiers, or remain in the area longer than soldiers can.
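One way to picture such a precautionary scan in software is the sketch below, which flags every detected entity near the aim point that has not been identified as a lawful target. The entity labels, positions, and scan radius are hypothetical, and the flat-earth distance calculation is a deliberate simplification.

```python
# Minimal sketch: a precautionary scan for collateral concerns near an
# aim point. Entity labels, positions, and the scan radius are hypothetical.

import math

def distance_m(a: tuple, b: tuple) -> float:
    """Flat-earth distance in metres between two (x, y) points;
    a deliberate simplification, adequate at short ranges."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def collateral_concerns(aim_point: tuple, detections: list,
                        scan_radius_m: float) -> list:
    """Return every detection within the scan radius that has not been
    identified as a lawful target; each hit is a reason to reassess."""
    return [
        d for d in detections
        if d["label"] != "confirmed_lawful_target"
        and distance_m(aim_point, d["pos"]) <= scan_radius_m
    ]

detections = [
    {"label": "confirmed_lawful_target", "pos": (0.0, 0.0)},
    {"label": "unidentified_person",     "pos": (40.0, 10.0)},
    {"label": "civilian_vehicle",        "pos": (500.0, 0.0)},
]
print(collateral_concerns((0.0, 0.0), detections, scan_radius_m=100.0))
# Only the unidentified person roughly 41 m away is flagged.
```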
Once all feasible measures to collect information have been taken, any anticipated collateral damage must be assessed in light of the expected military advantage. The proportionality principle in LOAC has both objective and subjective elements, and the subjective aspect of the test arguably requires a human assessment. What is the military value of an attack? And when does the anticipated collateral damage become excessive? These assessments are generally considered part of the commander's discretion and cannot be replaced by a computer. But in many cases, they are not something the soldiers on the ground are authorized to determine either, and military forces have developed procedures to make sure the difficult decisions are made at the right level.
Particularly relevant here is the Collateral Damage Estimation (CDE) Methodology, which includes a requirement to measure the distance between the target, together with the area expected to be affected by the attack, and the nearest collateral concerns, meaning persons or objects not identified as lawful targets. The shorter the distance, the higher up in the chain of command the decision to carry out the attack must be made; that is, the higher the Target Engagement Authority (TEA) must be to approve the attack. Determining the proper TEA is based, among other things, on the known effects radius of the given munition against a variety of targets. One way to prevent AWS from violating the proportionality rule would therefore be to program them to carry out attacks only if the anticipated risk of collateral damage is zero or very low. If unidentified people appear in the vicinity of the target and within the weapon's effects radius, the AWS should be programmed to cancel the attack. This would preclude the need to make the subjective assessments inherent in the proportionality rule at the time of attack.
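The following sketch illustrates how that rule could be encoded: any collateral concern inside the weapon's effects radius aborts the attack outright, and otherwise the distance to the nearest concern determines how high the approval authority must sit. The radii, thresholds, and TEA levels are hypothetical placeholders, not doctrine.

```python
# Minimal sketch: a CDE-style engagement gate. The radii, thresholds,
# and authority levels are hypothetical placeholders, not doctrine.

def required_approval(nearest_concern_m: float, effects_radius_m: float) -> str:
    """The closer the nearest collateral concern is to the weapon's
    effects, the higher the Target Engagement Authority must sit."""
    if nearest_concern_m <= effects_radius_m:
        return "ABORT"                    # a non-target is inside the effects radius
    if nearest_concern_m <= 2 * effects_radius_m:
        return "TEA_SENIOR_COMMANDER"     # close call: decision escalates
    return "TEA_TACTICAL_COMMANDER"       # ample separation: lower level may approve

def aws_may_engage(nearest_concern_m: float, effects_radius_m: float) -> bool:
    """The AWS itself proceeds only in the lowest-risk case; every
    other case is escalated to a human or aborted outright."""
    return required_approval(nearest_concern_m, effects_radius_m) == "TEA_TACTICAL_COMMANDER"

print(required_approval(nearest_concern_m=30.0, effects_radius_m=50.0))   # ABORT
print(required_approval(nearest_concern_m=80.0, effects_radius_m=50.0))   # TEA_SENIOR_COMMANDER
print(aws_may_engage(nearest_concern_m=200.0, effects_radius_m=50.0))     # True
```

The design choice is conservative: the system never resolves a borderline case itself, so the subjective proportionality judgment stays with a human.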
Ensuring Human Accountability
Integrating current military control mechanisms such as target lists, ROE, TEA, and CDE into AWS will, together with time and space limitations on their operation, go a long way toward ensuring their lawful use. However, this alone is not sufficient.
The requirements of distinction, proportionality, and precaution are set out in a manner that reflects the often chaotic reality of war, referred to as "the fog of war." Absolute certainty is difficult to obtain, and the focus is on the actions taken before an attack and the mindset of those planning, deciding on, and executing attacks. LOAC requires that those who plan and decide on an attack collect information about the target and its surrounding area and, based on their assessment of the information reasonably available at the time, order the attack to be carried out only if they honestly believe that the target is lawful and that the attack will not cause excessive civilian harm. Thus, although LOAC does not expressly require human involvement, many of the determinations involved require human assessment because of their subjective nature. This applies equally to the decision to employ AWS, but here the decisions are made further in advance of the attack, when the above-mentioned limitations on target category, CDE, time, and space are defined.
The people involved in using AWS must therefore understand how the systems operate, so that they can sufficiently predict the outcomes and determine the extent to which further human involvement is required to ensure lawful use. However, in the foreseeable future, it seems unlikely that everyone involved will have the detailed knowledge necessary to understand how an AWS will operate in a given context. One way to compensate would be to require all personnel expected to be involved in decisions relating to the use of AWS to have a basic knowledge and understanding, and to supplement this with a requirement that specialists with advanced knowledge of the system at hand be involved in any decision to employ it.
The more complex the task or the environment, the harder it will be to predict the outcome of autonomous processes, and the scope for relying on autonomous technology is correspondingly reduced. By using control mechanisms to define the target categories, the time and space of use, and the degree of collateral damage permitted, it is nonetheless possible to define a scope for using AWS that is largely legally unproblematic and in which the subjective evaluations have already been taken into account. Once the legal room for manoeuvre has been identified, military commanders (and politicians) may go on to consider whether further limitations are needed, inter alia, for ethical or political reasons.
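Taken together, these mechanisms amount to a pre-defined operating envelope. The sketch below bundles the parameters discussed above (approved target categories, a time window, a geographic box, and a collateral-risk ceiling) into a single configuration that is checked before any engagement; every field name and value is a hypothetical illustration.

```python
# Minimal sketch: a pre-defined operating envelope for an AWS.
# Every field name and value is a hypothetical illustration.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class OperatingEnvelope:
    approved_categories: frozenset      # target categories vetted in advance
    valid_from: datetime                # start of the authorization window
    valid_until: datetime               # end of the authorization window
    area: tuple                         # (lat_min, lat_max, lon_min, lon_max)
    max_collateral_risk: float          # 0.0 = no anticipated collateral damage

    def permits(self, category: str, now: datetime,
                lat: float, lon: float, est_collateral_risk: float) -> bool:
        """Every control mechanism must be satisfied before engagement."""
        lat_min, lat_max, lon_min, lon_max = self.area
        return (category in self.approved_categories
                and self.valid_from <= now <= self.valid_until
                and lat_min <= lat <= lat_max
                and lon_min <= lon <= lon_max
                and est_collateral_risk <= self.max_collateral_risk)

envelope = OperatingEnvelope(
    approved_categories=frozenset({"armoured_vehicle"}),
    valid_from=datetime(2024, 5, 1, 6, 0, tzinfo=timezone.utc),
    valid_until=datetime(2024, 5, 1, 18, 0, tzinfo=timezone.utc),
    area=(59.10, 59.20, 10.40, 10.55),
    max_collateral_risk=0.0,            # only zero-anticipated-collateral engagements
)

now = datetime(2024, 5, 1, 12, 0, tzinfo=timezone.utc)
print(envelope.permits("armoured_vehicle", now, 59.15, 10.48, 0.0))  # True
print(envelope.permits("armoured_vehicle", now, 59.15, 10.48, 0.3))  # False
```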
Conclusion
The development of, and training in, control mechanisms is crucial because, regardless of how advanced the technology becomes, humans remain responsible and must ensure they retain the degree of control necessary to ensure the lawful use of force. Command responsibility is not relieved merely because the subordinate used an AWS to carry out the attack instead of a manual weapon or an automatic weapon system.
As autonomous technology becomes more advanced, predicting its outcomes becomes more difficult. If or when developers or operators can no longer predict a system's behaviour or have sufficient certainty of its lawful use, such systems should be stopped by the Article 36 review.
***
Dr. Camilla G. Cooper is an Associate Professor of operational law at the Norwegian Defence University College.
Photo credit: Milrem Robotics