Life, Love & Lethality: History and Delegating Death on the Battlefield
In military circles, the 1939 Einstein-Szilard letter to President Roosevelt is a well-known document that changed the course of history. It was instrumental in the establishment of the Manhattan Project, which in 1945 produced the world’s first atomic bomb. What is less discussed are Einstein’s writings expressing deep regret over his role in the creation of weapons of mass destruction. In a similar vein, a key founder of artificial intelligence (AI), Geoffrey Hinton, recently left his position at Google, raising concerns about the direction in which the technology is developing. Flagging the potentially dangerous use of AI in fake news, overly fast learning, lack of predictability, and “battle robots,” his strongest concern rests with the historical fact that those with the most capable systems always win. That scientists express remorse and regret about the trajectory of their research is not new, but are we listening carefully enough?
All tools – all science – can be applied to enhance humanity, to cause it harm, or even to destroy it. AI is one such tool. An umbrella term for several technologies, AI has caused a cacophony of voices to call for bans, led industry to call for regulation, and inspired parallels with the destructive power of nuclear weapons.
Rather than stepping into this necessarily noisy discourse, we would like to urge a return to first principles and a reframing of how we think about the permissive use of AI, especially on the battlefield. There is much to reflect upon in applying one of the oldest tools we have – the Martens Clause – as well as in examining historical parallels in other areas where “the dictates of public conscience” have given pause, or even prevented the use of certain technologies.
Some actions, we suggest, are deeply and intrinsically human – things such as procreation, death, and human interaction, including sex. What makes us human? What should not be outsourced? What can we learn from other historical developments to help locate the right level of debate around warfare and AI? Who gets to decide who is making these decisions, and should it be a broader church than the usual suspects? Do we need to better represent the complexity of society to create more useful, practical guardrails relating to weapons?
Procreation and Life
In 1959, scientists witnessed the first birth of a mammal conceived via in vitro fertilization (IVF), a breakthrough that led to the world’s first IVF baby in 1978. Births from IVF now total an estimated 8 million globally.
Advances in scientific intervention into what had until then been a purely human activity raised numerous ethical questions that continue to be debated and discussed globally. In 2020, Kjell Asplund wrote:
In [IVF] there is an intricate interaction between rapid scientific development and changing societal values . . . . Decision-making concerning IVF cannot be based only on clinical and economic consideration; these cannot be disentangled from ethical principles.
Deeply complex questions arise around what it means for society when bearing and birthing children involve science and technology. Issues such as extended age limits for childbearing, the types of non-traditional families that can be involved, the ownership and storage of embryos, egg donation, surrogacy, and the public funding and commercialization of the process have all been hotly debated, and there is a raft of complex legislation across the world on such matters. Religious concerns have also been voiced about the nature of reproductive and conception options, and various religions impose different layers of prohibition.
CRISPR-Cas9, a technique used for very specific gene modification, was met with similar concerns in 2015 when it was crowned “breakthrough of the year.” Gene modification techniques existed previously, but were not as reliable, cost-effective, or accessible. CRISPR-Cas9 enabled specific gene modification that could potentially impact generations, at speed and at scale. The same year, scientists called for a temporary moratorium and related measures, including a hold on use beyond research, the creation of expert forums, more transparent research, and a globally representative group to recommend policy approaches.
Subsequently, numerous frameworks were developed to guide the use of CRISPR-Cas9. Both the UK and the U.S. (and probably many more States) now permit human gene editing under certain circumstances, but with some very strong overarching principles and guardrails on its use. More recently, the ability to create life using three (or more) parents has raised even more questions. The ability to intervene in reproduction and the female body continues to be examined and discussed at length within varying sectors of society – from legislative, religious, theoretical, academic, and practitioner perspectives. We are yet to see the same level of diverse discourse on the automated ability to kill. Nor do the necessary “cross-over” ethical lessons appear to have been learned.
Love (and Sex)
Sex dolls – now offered with a choice of eye, hair, and skin color and an incredibly lifelike appearance – are not new, albeit met with a range of receptions and ethical review. What is new is that these dolls now have learning algorithms embedded, or are being used as disembodied companions, previously only represented in science fiction. When algorithms offering companionship were recently decommissioned, users were left devastated and without support. More recently, a man’s death has been linked to conversations with an automated system.
One such learning algorithm was recently embedded in a doll presented at an exhibition. What started out as a clean, kind doll became, after ingesting all the questions, suggestions, and conversations of gallery participants, a far less savory interlocutor. The link between the mistreatment of automated sex objects and the mistreatment of humans remains unresearched, though some are concerned that such links exist.
Interactions with robot pets have raised similar issues. It is not what we do to or with the robots that is of concern – robots are not, and may never be, sentient – but rather what these interactions tell us about ourselves, and how they may even shape our behavior toward humans. On the upside, interactions with these systems may assist those who struggle socially, but again, much thought and research is going into how we interact with these physical objects that stand in for love or sex.
Death
This brings us to death, war, and lethality. Not only how we create life and how we experience physical affection or sex, but also how we choose to allow life to be taken, are deeply philosophical questions. International humanitarian law (IHL) sits in the tension between military necessity and the principle of humanity, and contains many provisions requiring balance as well as outright and clear-cut prohibitions. Identifying what constitutes “superfluous injury or unnecessary suffering” and unpacking what is required for “constant care” to be taken in precautions in attack are critical but not simple obligations.
Such rules can only be applied with a depth of thought about context. Traditionally, reviews of the context of societies engaged in conflict have been undertaken by military commanders, who are often less aware of the realities and nuances experienced by affected communities. While this is changing as militaries adjust to represent the wider social landscape, automated systems are built on historical data and so always reflect the past. Rushing forward with such data and systems embeds past values in technology in ways that are impossible to unwind. It is inevitable that these concerns, raised in the civilian sector, will cascade down to methods and means of warfare involving new technology.
Conclusion
Recent history indicates the significant and wide-ranging ethical, religious, and sociological debates relating to the regulation of women’s bodies – the giving and taking of life in a reproductive sense. The complex legal frameworks across the globe on IVF, and the call from some commentators for a streamlining of regulations, demonstrate the appetite for strictly managing technology in the area of procreation.
As we continue to commercialize relationships with robots and automated systems, there appear to be significantly fewer questions being asked than there should be about how these relationships fit within an ethical framework, or how they may change us. When it comes to using automated systems in warfare, the ethical and broader conversation has not surfaced sufficiently, nor at the level witnessed in other areas, and it does not address fundamental questions about what it is that societies want. Surely, if we demand complex debate on new technologies in the creation of life, the automation of sex, and other intrinsically human acts, we need the same level of ethical, religious, practitioner, and sociological discourse on the taking of life during times of armed conflict.
Instead, there seems to be a strong focus on technical discussions about taxonomies of autonomy and the regulation of systems. The broader philosophical debates that should accompany these conversations are less present. What are the core values we should honor, regardless of the system used? What new questions are posed by automating killing in warfare that we should be struggling with, beyond the normative framework?
The Martens Clause, less in fashion of late, derives from a declaration by the Russian delegate Friedrich Martens, later adopted in the Preambles of the 1899 and 1907 Hague Conventions:
Populations and belligerents remain under the protection and the rule of the principles of the law of nations, as they result from usages established among civilised peoples, from the laws of humanity, and the dictates of the public conscience.
Rather than mostly examining the taxonomies, the technicalities, and the detail of ever-advancing systems, perhaps we also need to go back to our basic values. With fast-moving technology, we can be sure that the future battlefield will create dehumanizing situations in ways we cannot imagine, and that automation in warfare will raise new challenges. Broad principles with clear underpinning ethical frameworks have served us well in the past – better than trying to capture each technical element, an endless task when the subject is fast evolving.
Automation of killing potentially creates greater power imbalances, destabilizes our global order, and may dehumanize us further. Past experience, and careful attention to the warnings of experts, are the best data sets we have to navigate the future. We also need to ensure that we remain able to apply the delicate equilibrium required by IHL, with military necessity fairly balanced against the principle of humanity – a principle that has not been stripped of the human(e).
***
Dr Helen Durham is a global expert in international humanitarian law, humanitarian action and diplomacy.
Dr Kobi Leins is a global expert in AI, international law and governance.