Responsible AI Symposium – Prioritizing Humanitarian AI as part of “Responsible AI”

by Daphné Richemond-Barak and Larry Lewis | Mar 17, 2023

Editor’s note: The following post highlights a subject addressed at an expert workshop conducted by the Geneva Centre for Security Policy focusing on Responsible AI. For a general introduction to this symposium, see Tobias Vestner’s and Professor Sean Watts’s introductory post.

The Responsible AI in the Military Domain (REAIM) summit, the first international conference dedicated to exploring the responsible use of AI in military applications, recently concluded in The Hague. Organized by the Ministry of Foreign Affairs of the Netherlands together with South Korea, the conference was an important step toward military use of AI that is consistent with international law and reflects common values and principles. The two-day event highlighted both the opportunities and the risks of military applications of AI, and it culminated in a Call to Action endorsed by 57 States and a Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy presented by the United States.

While REAIM aimed to consider opportunities as well as risks, in practice risks tended to dominate the conversation. Our contribution focused on civilian protection – where we believe AI holds significant potential. We observed a positive movement toward consensus on using AI to mitigate civilian harm, and hope that future events will further explore the breadth of these possibilities.

Humanitarian AI

Civilian protection is often a blind spot for militaries. Protecting the population is all too often simplified to narrow, rigid processes for collateral damage estimation. We see it as a more complex process focused on understanding civilian presence and systematically mitigating risks to civilians – during and at the end of war. The transition from war to peace is both a final stepping-stone to victory and a critical time for civilian protection, during which steps must be taken to address humanitarian crises, reduce vulnerabilities of the population, and set conditions for a lasting peace. While militaries may have policies regarding protection, the past two decades of military operations have shown that armed forces have rarely implemented those protection policies effectively in combat.

This missed big picture extends to the use of technology in armed conflict. While militaries around the world race to develop applications of emerging technologies such as artificial intelligence, autonomy, and blockchain, these applications generally seek to improve intelligence capabilities, the effectiveness of various forms of attack, force protection, or perhaps logistical functions that support operating forces involved in combat. In government and military decisions regarding how to use emerging technologies, where is civilian protection?

At the REAIM summit, civilian protection was mentioned primarily in discussions of risks: if the inherent bias in AI cannot be eliminated, autonomous systems will run amok, and civilians will end up paying the ultimate price. While we agree that the risks of using AI in military applications must be mitigated, we used an interactive discussion to illuminate this blind spot and to make the case for civilian protection as a goal for the use of AI, an approach we call “humanitarian AI.”

We developed a number of hypotheses on the use of AI for civilian protection and asked the audience whether they agreed with each statement. Using the BetterBeliefs platform, the audience provided reactions in real time as we shared evidence and our study findings. We then analyzed the audience’s views during the panel and highlighted areas of agreement and divergence on the hypotheses.

Findings

Three main findings emerged from this highly interactive event on humanitarian AI.

First, participants unanimously agreed with our first hypothesis: “States must protect civilians from harm in war.” This provided a common foundation for the discussion. There is no question that the protection of civilians is a legal and strategic imperative for States engaged in armed conflict. For the participants, a natural conclusion was that States should invest in the development and deployment of humanitarian AI. The hypothesis “Building a humanitarian role for AI must become a priority for governments and militaries” was also universally agreed upon. This suggests broad support for a more humanitarian role for AI.

Second, our work at REAIM demonstrated the need to educate, explain, and provide evidence on how AI can help reduce civilian vulnerabilities. The audience was nearly unanimous in agreeing with the statement, “AI can be used to reduce civilian harm in war,” which was the focus of our session. In contrast, 60% of the participants disagreed with the statement that “AI applications can facilitate the end of conflict and maintain peace.” Due to time limitations, our presentation focused on the mitigation of civilian harm through AI, and a significant amount of time was spent explaining and sharing evidence on that specific point. The near-unanimous agreement on the hypothesis related to civilian harm and AI suggests this evidence was persuasive.

These findings illustrate that there is a natural resistance to the view that AI can be used to promote protection, but also that this resistance can be overcome with sufficient evidence. If the panel session had allowed more time to present evidence for AI applications facilitating the end of conflict and the transition to peace, the audience’s conclusions might have shifted in a similar way. We take this to mean that more needs to be done to explain how AI can be channeled, during and at the end of war, to reduce uncertainty, vulnerability, and hardship for the civilian population.

Uncertainty can be a substantial obstacle to peace, and AI can help reduce such uncertainty and facilitate the transition from war to peace. Autonomous systems, drones, and satellite imagery can gather credible information on damage in conflict areas and the movements of fighters and civilians. Machine learning tools can then analyze such data and create a more accurate picture for decision-makers. AI can also help verify, and build trust in, information about the position of forces, adherence to ceasefires, or the demobilization and disarmament of forces.

AI also serves an important humanitarian purpose by reducing the vulnerability of civilians. It can predict refugee routes based on movement patterns and weather, anticipate border crossings and other mass movements, and help allocate resources accordingly. The potential of AI in this field is well known to humanitarian organizations working to alleviate the effects of disasters, but less so to militaries.

Finally, the audience displayed mixed beliefs in response to “AI brings more concerns than opportunities for civilians in war”: 40% agreed and 60% disagreed. Neither camp was dismissive of the risks; rather, slightly more than half of the audience believed the opportunities were also important and should not be neglected. Concern over the risks of AI was also apparent in the high level of agreement with the statement, “The use of AI in war puts refugees at heightened risk.” Refugees and displaced persons often have no choice but to share their biometric and other data to receive services and assistance in times of conflict. The use of AI-powered tools makes refugees vulnerable to exploitation and abuse, as their personal data can easily end up in the wrong hands. We agree with the participants that this area of AI raises distinct issues that require further thinking and adequate safeguards.

Concluding Thoughts

We commend the organizers of the REAIM summit for including debate on this topic, and we hope it will begin a wider discussion of how governments and militaries can more effectively use AI to protect civilian populations in the waging of war and in the transition from war to peace. Our work points to an underlying consensus on the need for States and militaries to prioritize humanitarian AI, the importance of further education and evidence on how emerging technology can be channeled toward humanitarian roles, and specific areas of concern such as the impact of AI on refugees and displaced persons in conflict.

Note: While you may not have been able to attend the REAIM summit, you can still be part of our continuing discussions on this topic, including future virtual and private events, and you can continue to engage with these hypotheses through the BetterBeliefs platform. Please contact one of us if you would like to share your views on our hypotheses or wish to be part of future discussions on this topic.

***

Dr. Daphné Richemond-Barak is Assistant Professor in the Lauder School of Government, Diplomacy and Strategy at Reichman University (Israel). Together with Prof. Laurie Blank she co-founded the End of War Project which has explored how technology can alleviate human suffering during and at the end of war.

Dr. Larry Lewis founded CNA’s Center for Autonomy and AI and has led many projects regarding AI and autonomous systems for DOD.

Photo credit: Unsplash
