Responsible AI Symposium – Introduction
Artificial Intelligence (AI) is increasingly developed, deployed, and used for defense and military purposes. This offers opportunities yet also poses challenges regarding its governance and regulation. While diplomatic efforts have so far focused on regulating lethal autonomous weapon systems, the integration of AI concerns a much broader spectrum of defense-related and military applications.
In this context, the United States has introduced the concept of “responsible AI” (RAI) to guide its efforts on defense and military AI. NATO has included principles on the responsible use of AI in its AI Strategy. Other States and organizations are conducting similar efforts, with the Netherlands organizing a Summit on Responsible AI in the Military Domain in 2023.
To what extent and how policies and measures on RAI interrelate with international law deserves clarification. To this end, the Geneva Centre for Security Policy (GCSP) has launched the Geneva Process on AI Principles.
This has led to an expert workshop and to this Symposium with Articles of War, which offers several analyses of the nexus between international law and the responsible development, deployment, and use of AI for defense and military purposes.
Tobias Vestner starts the Symposium by assessing the fundamental nexus between RAI and international law, offering an overview of the issue, and identifying six legal touchpoints with principles on RAI.
Merel Ekelhof then explains U.S. efforts, notably the recently adopted Responsible Artificial Intelligence Strategy and Implementation Pathway, and offers reflections on common misperceptions and tenets of implementation.
Daniel Trusilo then assesses the challenge that AI systems’ emergent behavior poses to RAI, arguing that unpredictability at the micro level may lead to increased reliability and robustness at the macro level.
Juliette François-Blouin goes on to examine how AI-related biases, which should be mitigated to achieve responsible AI, may directly or indirectly affect compliance with targeting law under the law of armed conflict (LOAC).
Chris Jenks then considers how appropriate levels of judgment and responsibility apply to LOAC, first in general and then in the more specific context of command responsibility.
Michael W. Meier then analyses to what extent legal reviews of weapons are affected by, and need to consider, States’ principles on RAI, arguing that legal review processes can play a part in ensuring RAI but cannot be the sole mechanism to do so.
These forthcoming posts provide insights and inform further analysis on how responsible AI for defense and military purposes interrelates with international law, as should other academic and political work.
***
Tobias Vestner is Head of the Research and Policy Advice Department and Head of the Security and Law Programme at the Geneva Centre for Security Policy (GCSP).
Sean Watts is a Professor in the Department of Law at the United States Military Academy, Co-Director of the Lieber Institute for Law and Land Warfare at West Point, and Co-Editor-in-Chief of Articles of War.
Photo credit: Pixabay