Responsible AI Symposium – Translating AI Ethical Principles into Practice: The U.S. DoD Approach to Responsible AI

by Merel Ekelhof | Nov 23, 2022


Editor’s note: The following post highlights a subject addressed at an expert workshop conducted by the Geneva Centre for Security Policy focusing on Responsible AI. For a general introduction to this symposium, see Tobias Vestner’s and Professor Sean Watts’s introductory post.


Over the past year, several defense and military organizations have adopted approaches to the responsible and ethical development and use of Artificial Intelligence (AI) in defense. In June 2022, U.S. Deputy Secretary of Defense Kathleen Hicks signed the Department of Defense’s (DoD) way forward for ensuring ethics in AI development and acceleration: the Responsible Artificial Intelligence Strategy and Implementation Pathway. That same month, the UK Ministry of Defence released its approach to AI ethics in a stand-alone policy document: “Ambitious, Safe, Responsible.” In October 2021, NATO released its AI Strategy, agreeing to operationalize six principles of responsible use for AI in defense and security.

These documents affirm the three organizations’ commitments to acting as responsible AI-enabled organizations and, moreover, to establishing a global, trusted AI ecosystem that builds confidence among end-users, warfighters, the public, and international allies and partners. However, these initiatives are merely steps in a much longer and more intricate journey to accelerating responsible AI (RAI) in defense organizations. Implementing these ethical principles and approaches across AI-enabled technologies, operating structures, and organizational culture is even more important than adopting them in the first place. The U.S. DoD RAI Strategy and Implementation Pathway is intended to instill and operationalize the DoD’s AI Ethical Principles, adopted in February 2020. It addresses common misconceptions and provides guidance on how to translate principles into practice. Governments and defense and military organizations that wish to pursue the responsible adoption and use of AI may want to consider the following three misconceptions and six RAI implementation tenets.

Common Misconceptions

First, the DoD AI Ethical Principles (hereinafter the Principles) are not new. Nor do they substitute for, or deviate from, existing ethical, legal, and policy commitments. The Principles are based on existing and widely accepted ethical, legal, and policy commitments under which the DoD has operated for decades and will continue to operate. Important legal constructs that provide the foundation for the Principles include the law of war, the U.S. Constitution, and Title 10 of the U.S. Code. The Principles provide AI-specific guidance that builds on those existing frameworks. Their implementation into existing processes and practices contributes to the lawful, ethical, and responsible development and use of AI by the DoD.

Second, the Principles should not be perceived as a set of rules or constraints that hinder AI adoption and use. Rather, they are intended to contribute to the efficiency, effectiveness, and legitimacy of the DoD’s AI capabilities. When it comes to AI for military applications, a frequently repeated concern is that defense organizations must move quickly or risk losing on the battlefield, in particular to adversaries that do not share these ethical and moral standards. Effective RAI adoption therefore requires an organizational culture that implements RAI as an enabler for AI adoption and allows developers and users to have appropriate levels of trust in the AI system. This trust, in turn, enables rapid adoption and operationalization of AI capabilities, strengthening the DoD’s competitive edge. To achieve that, it is vital that the Principles are understood as force multipliers, rather than impediments to success.

Third, RAI cannot be achieved by means of an ethical review of a specific AI capability prior to acquisition or fielding. Nor can RAI be assured through a final rule-check that rests on the commander who decides to use the AI capability, or a task that can be completed merely by putting an AI capability through a testing and evaluation (T&E) process. The purpose of the Principles is not to prescribe what is considered right and wrong, enforced through a single review (no matter how comprehensive that review may be). Instead, they provide guidance to DoD personnel and other relevant stakeholders on how to consider ethics in their respective roles, across the lifecycle of the AI system, and in the context of their mission. This includes, but is not limited to, acquisition personnel, senior leaders, engineers and data scientists, policy-makers, lawyers, project managers, T&E experts, and warfighters. As such, ensuring RAI is a collective effort. It is a shared responsibility that involves the actions of various individuals and processes across the organization.

Six RAI Implementation Tenets

The above misconceptions can be addressed, in part, by ensuring relevant stakeholders within the organization are aware of the Principles’ role, as well as their own role in operationalizing them. The RAI Strategy and Implementation Pathway sets out six foundational tenets to ensure RAI activities are implemented across AI technologies, operating structures, and organizational culture.

The first tenet, RAI Governance, aims to modernize governance structures and processes that allow for continuous oversight of DoD use of AI, taking into account the context in which the technology will be used.

The second tenet, Warfighter Trust, aims to establish a standard level of technological familiarity and proficiency for system operators so that they can place justified confidence in AI capabilities and AI-enabled systems.

The third tenet, AI Product and Acquisition Lifecycle, aims to exercise appropriate care in the AI product and acquisition lifecycle to ensure potential AI risks are considered from the outset of an AI project. It seeks to ensure that efforts are taken to mitigate or ameliorate such risks and reduce the likelihood of unintended consequences, while enabling AI development at the pace the DoD needs to meet the National Defense Strategy.

The fourth tenet, Requirements Validation, aims to use the requirements validation process to ensure that capabilities that leverage AI are aligned with operational needs while addressing relevant AI risks.

The fifth tenet, Responsible AI Ecosystem, aims to promote a shared understanding of responsible AI design, development, deployment, and use through domestic and international engagements.

The sixth tenet, AI Workforce, aims to ensure that all DoD AI workforce members possess an appropriate understanding of the technology, its development process, and the operational methods for implementing RAI, commensurate with their duties within the archetype roles outlined in the 2020 DoD AI Education Strategy.

Conclusion

Clearly, the articulation of AI ethical principles in defense is an important first step, but it is also just the beginning of a broader effort to create a trusted RAI ecosystem. For these ethical principles to be effectively integrated across defense organizations, further guidance and tools for operationalization are needed across the entire workforce. There is a long history of operationalizing legal and ethical principles through, for example, the issuance of manuals, doctrine, instructional guides, methodologies, technical tools, rules of engagement, knowledge sharing, and more. Similarly, AI ethical principles require a number of implementation mechanisms to clarify not only what they mean, but also how they should be applied in the context of the development and use of AI capabilities, operating structures, and organizational cultures in defense organizations. The RAI Strategy and Implementation Pathway is an example of what such an implementation mechanism may look like.

***

Dr. Merel Ekelhof is a Foreign Exchange Officer with the Strategy, Policy, and Governance Directorate of the U.S. DoD Office of the Chief Digital and AI Officer (OCDAO), where she deals with topics related to autonomous weapons, governance & ethics, and strategic and international partnerships.


Photo credit: Office of Naval Research
