Responsible AI Symposium – Implications of Emergent Behavior for Ethical AI Principles for Defense

by Daniel Trusilo | Nov 30, 2022


Editor’s note: The following post highlights a subject addressed at an expert workshop conducted by the Geneva Centre for Security Policy focusing on Responsible AI. For a general introduction to this symposium, see Tobias Vestner’s and Professor Sean Watts’s introductory post.

Whether one considers the use of an autonomous weapon system to target humans, as reportedly occurred in Libya in 2020, or the ongoing development of Artificial Intelligence (AI) systems that process vast amounts of data across platforms to increase the speed of action on the battlefield, the conclusion is the same: AI systems with autonomous capabilities are changing the character of conflict. This post briefly examines one aspect of complex systems designed to operate in dynamic conflict domains, namely the possibility that such systems will exhibit “emergent behavior.” First, I will define what is meant by emergent behavior and explain why the concept is important for putting ethical AI principles into practice. Next, I will present possible implications of emergent behavior for notions of reliability and predictability. Ultimately, if we are to successfully apply ethical principles to emerging technology, we must consider the practical implications of that technology with a nuanced, multidisciplinary approach.

Emergent Behavior

Emergent behavior has been defined as “[b]ehavior that arises out of the interactions between parts of a system and which cannot easily be predicted or extrapolated from the behavior of those individual parts.” An example of emergent behavior in nature is the construction of complex underground structures by the thousands of individual ants in a colony. Realistically, complex systems that use AI are likely to exhibit emergent behavior when operating in a dynamic, real-world conflict environment. Such emergent behavior presents challenges to the practical implementation of ethical AI principles.
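To make the concept concrete, consider a minimal, illustrative simulation. This is my own sketch, not drawn from any fielded system, and every agent count and parameter in it is hypothetical. Each agent follows a single local rule, aligning its heading with nearby agents plus a little noise, in the style of classic flocking models. No agent is given the group’s direction, yet a coherent global heading emerges from the interactions, and that group-level outcome is hard to predict by inspecting any one agent’s rule in isolation.

```python
import math
import random

# Minimal sketch of emergence (hypothetical parameters throughout):
# agents on a unit square follow one local rule, steering toward the
# average heading of their neighbors. A global alignment emerges that
# appears nowhere in the individual rule.

N_AGENTS = 50          # hypothetical swarm size
NEIGHBOR_RADIUS = 0.2  # local interaction range
STEP_SIZE = 0.01
STEPS = 200

random.seed(0)
positions = [(random.random(), random.random()) for _ in range(N_AGENTS)]
headings = [random.uniform(0, 2 * math.pi) for _ in range(N_AGENTS)]

for _ in range(STEPS):
    new_headings = []
    for i in range(N_AGENTS):
        nbrs = [j for j in range(N_AGENTS)
                if j != i and math.dist(positions[i], positions[j]) < NEIGHBOR_RADIUS]
        if nbrs:
            # Local rule: adopt the neighbors' mean heading, plus small noise.
            avg = math.atan2(sum(math.sin(headings[j]) for j in nbrs),
                             sum(math.cos(headings[j]) for j in nbrs))
            new_headings.append(avg + random.gauss(0, 0.05))
        else:
            new_headings.append(headings[i])
    headings = new_headings
    # Move each agent along its heading; wrap around the unit square.
    positions = [((x + STEP_SIZE * math.cos(h)) % 1.0,
                  (y + STEP_SIZE * math.sin(h)) % 1.0)
                 for (x, y), h in zip(positions, headings)]

# Alignment order parameter: values near 1.0 mean the swarm moves as one,
# a group-level property no single agent's rule specifies.
order = math.hypot(sum(math.cos(h) for h in headings),
                   sum(math.sin(h) for h in headings)) / N_AGENTS
print(f"global alignment after {STEPS} steps: {order:.2f}")
```

Starting from random headings, the printed alignment typically climbs well above its initial near-zero value, illustrating how a predictable collective pattern can arise from interactions that are individually simple but collectively opaque.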

In a June 2022 essay, former U.S. Secretary of Defense Ash Carter highlights the novel challenges posed by security systems that use AI and the urgent need for practical approaches to their responsible use. In the essay, Carter states:

Many kinds of AI algorithms exist in practice…They all make enormous numbers of tiny calculations that combine to make overall inferences that cannot be made quickly by humans, have not been recognized by humans, or even perhaps would never be recognized by humans. These computational methods make literal transparency, normally the starting point for ethical accountability, completely impractical.

The Swiss Drone and Robotics Centre of Switzerland’s Federal Office for Defence Procurement is actively supporting research to advance practical approaches to determining the ethical risks presented by AI systems with autonomous capabilities. One such approach was successfully used to assess four robotic systems in 2020. A question raised during those evaluations was how to fully address the possibility of emergent behavior and its implications for ethical principles such as transparency, reliability, and predictability. In the case of AI systems with autonomous capabilities, emergent behavior creates tension between how a system is expected to behave and how it actually behaves.

Understanding the potential of complex AI systems to exhibit emergent properties is critical because the growing body of ethical AI principles for defense includes notions that must be considered in light of such behavior. For instance, the NATO AI principles include the concepts of explainability and traceability, stating: “AI applications will be appropriately understandable and transparent.” Similarly, the Australian document, A Method for Ethical AI in Defence, discusses transparency, stating that the basis of AI decisions should always be discoverable, though what this means in practice remains up for debate.

In addition, the UK Ministry of Defence (MoD) document Ambitious, Safe, Responsible explicitly recognizes the challenge of unpredictability, defining it as “[t]he risk that some AI systems may behave unpredictably, particularly in new or complex environments.” The MoD’s document also includes “understanding” as one of its five Ethical Principles for AI in Defence, explaining: “Mechanisms to interpret and understand our systems must be a crucial and explicit part of system design across the entire lifecycle.” It further clarifies, “Whilst absolute transparency as to the workings of each AI-enabled system is neither desirable nor practicable, public consent and collaboration depend on context-specific shared understanding.” Though the terminology varies from organization to organization, the significance of emergent behavior is the same and must be addressed if ethical principles are to be operationalized.

Variance in Predictability and Reliability

Emergent behavior affects notions of predictability and reliability, and these effects raise practical considerations for operationalizing ethical AI principles. For example, the emergent behavior of an aerial swarm system, or drone swarm, could mean that the behavior of individual agents within the swarm is less predictable even as the reliability of the overall swarm system increases. In other words, because of the emergent properties of complex systems operating in real-world conflict domains, a decrease in predictability at a fine-grained level can accompany an increase in reliability and robustness at a macro level. Therefore, efforts to operationalize the growing body of principles for responsible AI for defense cannot simply call for predictability, transparency, and reliability; such notions are not binary. A practical and nuanced ethical evaluation must consider each principle’s impact on other, interrelated principles. The level at which a system’s behavior and performance are ethically evaluated must be considered, as must the degree to which notions such as predictability, transparency, and reliability are required when determining risk tolerance levels.
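A toy Monte Carlo sketch can illustrate this micro/macro distinction. All numbers below are hypothetical assumptions of mine, not specifications of any real system: suppose each drone in a swarm of 20 independently reaches its objective with probability 0.6, and the mission succeeds if at least 5 drones arrive.

```python
import random

# Hypothetical numbers for illustration only: individual drone outcomes
# are close to coin flips, yet the swarm-level mission outcome is nearly
# certain because the objective requires only a fraction of the swarm.

random.seed(1)
P_INDIVIDUAL = 0.6   # assumed per-drone success probability
SWARM_SIZE = 20      # assumed number of drones
REQUIRED = 5         # assumed arrivals needed for mission success
TRIALS = 100_000

arrivals_total = 0
mission_successes = 0
for _ in range(TRIALS):
    arrivals = sum(random.random() < P_INDIVIDUAL for _ in range(SWARM_SIZE))
    arrivals_total += arrivals
    mission_successes += arrivals >= REQUIRED

print(f"per-drone success rate:    {arrivals_total / (TRIALS * SWARM_SIZE):.2f}")
print(f"swarm mission reliability: {mission_successes / TRIALS:.4f}")
```

Typical output shows a per-drone success rate near 0.60 but swarm-level mission reliability above 0.999. The sketch captures only the statistical redundancy underlying the micro/macro gap, not emergent coordination itself, yet it shows why predictability and reliability cannot be assessed as binary properties at a single level of analysis.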

Even if a particular system is not armed, risk tolerance levels related to ethical principles must be clearly defined because the stakes in conflict domains involve human life. In a conflict domain, operations may be disrupted, communications may be degraded, and a clear understanding of the operating environment may be unavailable. In such environments, being unable to fully predict the behavior of complex AI systems with autonomous capabilities can be both a strength and a vulnerability. For example, capabilities such as those touted by Halcon for the Hunter 2-S swarming drones, in which multiple drones coordinate with each other, present the possibility of emergent behavior. How a Hunter 2-S swarm uses the capabilities of individual elements in a real-world deployment will likely be innovative and, therefore, unpredictable. However, unpredictable individual drone-level behavior can increase reliability and robustness in achieving the swarm’s overall objective by making such a system more difficult to defend against.

Opponents of robotic swarm technology may argue that unpredictability at the micro level eliminates the required level of explainability or transparency, and that swarms are therefore problematic under the growing body of ethical AI principles. In contrast, proponents of such systems may argue that increased reliability and robustness at the macro level make a swarm system the logical choice for real-world conflicts. As shown above, both positions are valid depending on the level at which one assesses the use case.

Conclusion

As former Secretary of Defense Carter observes, the practical application of ethical AI principles lags behind the development of the technology. Policymakers must confront the possibility that complex AI systems with autonomous capabilities, operating in dynamic, open-context environments, will exhibit emergent behavior. Such emergent behavior has novel implications for the growing body of ethical AI principles for defense, including notions related to predictability, transparency, and reliability. A more in-depth discussion of these implications is currently under review. Ultimately, multidisciplinary efforts involving policy and legal experts, engineers, programmers, and practitioners must develop innovative ways of operationalizing the growing body of organizationally specific ethical AI principles.

***

Daniel Trusilo is a researcher at the University of St. Gallen, Switzerland, and a visiting scholar at the Institute for Practical Ethics at the University of California, San Diego.


Photo credit: Unsplash
