Deepfake Technology in the Age of Information Warfare

by Hitoshi Nasu | Mar 1, 2022


Prior to Russia's invasion of Ukraine, there was speculation that it was planning to produce a graphic fake video showing a Ukrainian attack as a pretext for invasion. Although this "false flag" operation did not play a major role in the end, deepfake technology is increasingly recognized as a potentially useful and effective tool in armed conflict. This post exposes a large swath of unregulated space in which hostile actors can use deepfake as a method of information warfare without legal liability.

The Application of Artificial Intelligence for Information Warfare 

A deepfake is a simulation of reality in computer-generated imagery, created with artificial intelligence to replace one person's likeness with another in recorded video. Its use to create misleading videos is already prevalent in political contexts and has raised concerns about its potentially adverse impact on democratic processes. When it is used as a deliberate means of deceiving the public in international relations, deepfake can be broadly classified as a form of information warfare.
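For readers curious about the mechanics behind this face replacement, the sketch below illustrates the shared-encoder, per-identity-decoder design commonly associated with face-swap deepfakes. It is a minimal Python/PyTorch example; the dimensions, layer sizes, and function names (FACE_DIM, swap_a_to_b, and so on) are assumptions for illustration, not a description of any particular tool.

```python
# Illustrative sketch only: a shared encoder with one decoder per identity,
# the scheme commonly associated with face-swap deepfakes. All dimensions,
# layer sizes, and names are assumptions for demonstration.
import torch
import torch.nn as nn
import torch.nn.functional as F

FACE_DIM = 3 * 64 * 64   # flattened 64x64 RGB face crops (assumption)
CODE_DIM = 128           # size of the shared latent representation (assumption)

# One encoder learns identity-independent structure (pose, expression, lighting).
encoder = nn.Sequential(nn.Linear(FACE_DIM, 512), nn.ReLU(),
                        nn.Linear(512, CODE_DIM))

# One decoder per person learns to render that person's appearance.
decoder_a = nn.Sequential(nn.Linear(CODE_DIM, 512), nn.ReLU(),
                          nn.Linear(512, FACE_DIM), nn.Sigmoid())
decoder_b = nn.Sequential(nn.Linear(CODE_DIM, 512), nn.ReLU(),
                          nn.Linear(512, FACE_DIM), nn.Sigmoid())

def reconstruction_loss(faces_a: torch.Tensor, faces_b: torch.Tensor) -> torch.Tensor:
    """Training objective: each decoder reconstructs its own person's faces."""
    loss_a = F.mse_loss(decoder_a(encoder(faces_a)), faces_a)
    loss_b = F.mse_loss(decoder_b(encoder(faces_b)), faces_b)
    return loss_a + loss_b

def swap_a_to_b(frame_a: torch.Tensor) -> torch.Tensor:
    """Inference: decode person A's frame with person B's decoder, so B's
    likeness is rendered with A's pose and expression."""
    return decoder_b(encoder(frame_a))
```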

While variously defined, information warfare refers to the denial and disruption of an enemy’s communication functions (which today forms part of cyber operations), as well as the manipulation of information for deceptive purposes (which is also described as psychological warfare). It is the latter form of information warfare that has gained force by taking advantage of the enhanced ability to manipulate imagery and audio data and effectively disseminate deepfake videos on social media.

In the United States, psychological operations form part of the integrated employment of information-related capabilities to influence, disrupt, corrupt, or usurp enemy decision-making. Disinformation is considered one of the informational tools comprising an information warfare strategy (Joint Publication 3-13.2, II-11).

China, on the other hand, has integrated this method of warfare into the 2003 People's Liberation Army Political Work Regulations by adopting the concept of "Three Warfares": (1) public opinion warfare, (2) psychological warfare, and (3) legal warfare. Most relevant here, psychological warfare seeks to undermine an enemy's morale or will to conduct combat operations by employing rumor and false narratives, as well as harassment or threats.

Russia has also been known for its aggressive approach to disinformation campaigns for propaganda purposes. With advances in artificial intelligence known as generative adversarial networks, Russia has been honing its capability to produce realistic visuals and to use them to spread false information. The reported plan to use a deepfake video to provide a pretext for invasion is a manifestation of Russia's military strategy to contest the information environment effectively.
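The adversarial dynamic behind such networks can be sketched briefly. The minimal Python/PyTorch example below shows a generator and a discriminator trained against each other; all sizes and names (LATENT_DIM, training_step, and so on) are illustrative assumptions and do not reflect any actual disinformation tooling.

```python
# Illustrative sketch only: the adversarial training loop behind a generative
# adversarial network (GAN). Network sizes, dimensions, and names are
# assumptions for demonstration, not a description of any actual system.
import torch
import torch.nn as nn

LATENT_DIM, IMAGE_DIM = 64, 784  # e.g., flattened 28x28 images (assumption)

# Generator: maps random noise to a synthetic image.
generator = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(),
                          nn.Linear(256, IMAGE_DIM), nn.Tanh())

# Discriminator: estimates the probability that an image is real.
discriminator = nn.Sequential(nn.Linear(IMAGE_DIM, 256), nn.LeakyReLU(0.2),
                              nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def training_step(real_images: torch.Tensor) -> None:
    """One adversarial round: the discriminator learns to spot fakes, then the
    generator learns to fool the discriminator, pushing its output toward realism."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. Update the discriminator on real and generated images.
    fakes = generator(torch.randn(batch, LATENT_DIM)).detach()
    d_loss = bce(discriminator(real_images), real_labels) + \
             bce(discriminator(fakes), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2. Update the generator so its output is classified as real.
    g_loss = bce(discriminator(generator(torch.randn(batch, LATENT_DIM))), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

It is this adversarial pressure, with the generator continually refined until the discriminator can no longer tell real from fake, that makes the resulting visuals difficult for viewers to distinguish from genuine footage.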

Deepfake under International Law

The problem that false information poses for the maintenance of peaceful relations among States was recognized even before the Second World War. In 1936, States agreed to regulate the broadcasting of false information by adopting the International Convention on the Use of Broadcasting in the Cause of Peace. However, only a limited number of States are party to this Convention, under which they undertake to prohibit and stop any transmission of false information that is "likely to harm good international understanding" (art. 3(1)).

Although the use of deepfake is not otherwise prohibited under international law, it could be calibrated to intensify deliberate attempts to interfere with the domestic affairs of another State, which could amount to an intervention prohibited under customary international law. The principle of non-intervention prohibits States from engaging in coercive interference, directly or indirectly, with the domestic affairs of another State. (See this article for more detailed analysis by the author.)

The alleged plan to use a deepfake video as evidence of Ukrainian "genocide" against Russian-speaking populations could have amounted to an intervention if it was designed to force Ukraine to change its national policy. An element of coercion, however, is the very essence of the intervention prohibited under this principle. As such, the illegality of such a plan is less clear if it was simply intended as propaganda to shore up domestic support for military plans, or to generate dissent and encourage insurgency in another country. Indeed, the dissemination of false information has been commonplace in State practice as a non-coercive form of interference.

There are indications in recent State practice that the bar could be lowered for the assessment of coerciveness as a requisite element of intervention. For example, senior officials from the United Kingdom and the United States have both referred to the manipulation of an electoral system and its outcome as an example of prohibited intervention. It may therefore be plausible to argue that the use of deepfake as a means of disrupting the political or economic system of another country should also be prohibited as an intervention. Such an argument may receive greater support among States that find themselves increasingly vulnerable to hostile information operations and associated exploitation of social media.

However, the diffuse nature of the threats posed by deepfake renders its nexus to coercive effects inevitably tenuous. This practical difficulty, as Michael Schmitt observes, necessarily moves such activities in the direction of permissible interference and away from prohibited intervention. As in the cyber domain, there are also technical challenges in attributing the creation and dissemination of deepfake videos to a State as the basis for establishing its responsibility.

In the event that deepfake videos were used as a pretext for invasion, it would be clear that the deployment of military forces could not be justified on that basis in the absence of an actual event qualifying as an armed attack. Deepfake may well be an effective tool for producing the desired psychological impact in the minds of a target population, but it has no bearing whatsoever on the factual basis for exercising the right of self-defense. In this respect, there is no material difference from the fabrication of a hostile event, such as the Mukden incident of 1931, which the Japanese imperial army used as a pretext for invading Manchuria.

Deepfake under the Law of Armed Conflict

In general, the deliberate dissemination of deepfake is not subject to the law of targeting because such an operation is unlikely to constitute an attack involving an act of violence that causes death or injury to persons or damage to or destruction of objects. Nevertheless, when it is used as part of a military operation, the creation and dissemination of deepfake is subject to a general obligation to exercise constant care to spare civilians from the negative effects of military operations. Constant care, however, is an obligation of due diligence, requiring commanders to take into account the possible negative effects on civilians and to take steps, when feasible, to avoid or reduce those effects as much as possible (U.S. DoD Law of War Manual, §5.3.3.5).

As Eric Jensen and Summer Crockett have previously discussed, the use of deepfake would be prohibited when it constitutes an act of perfidy by creating a particular expectation of protection under the law of armed conflict, or when it is primarily intended as a threat of violence to spread terror among the target civilian population and produces the requisite harm of injury or death to persons or, for States Parties to Additional Protocol I, capture. The creation and dissemination of deepfake would otherwise be considered a lawful ruse of war, long accepted as a legitimate method of warfare. During the May 2021 conflict in Gaza, for example, the Israel Defense Forces reportedly used false information about an advance of troops into Gaza as a ploy to lure Hamas fighters into the tunnel system so that Israeli forces could attack them.

Problems arise, however, when deepfake is employed as part of a hybrid warfare strategy, combined with conventional military operations, to disrupt the target State's ability to mount an effective response. Such hybrid threats may exploit a legal "grey zone," where it is unclear how the situation should be legally characterized. For example, false images and videos may be used to disguise special forces' penetration into enemy territory as local insurgent activity, so that hostilities can be conducted under the false pretense that no international armed conflict has arisen.

Prior to Russia's invasion of Ukraine, deepfake could have been used and disseminated effectively to conduct a false flag operation by deliberately misrepresenting the nature of events transpiring in the area of confrontation. To counter such hybrid threats, the defending State is left with a difficult choice: whether to pursue peaceful solutions, as the Ukrainian President was hoping to achieve, or to adopt a military response at the risk of escalating the conflict.

Concluding Observations

The predicted onset of Russia's invasion of Ukraine offered a glimpse of what deepfake might be capable of achieving by fabricating events to provoke violence and create a cloak of legitimacy for military action. International law does not provide adequate protection against the use of deepfake as a means of disrupting international relations, leaving a swath of unregulated space open for exploitation.

An effective response to disinformation campaigns is difficult to develop because of the various psychological mechanisms that facilitate the spread of false information. Blocking or removing deepfake content is unlikely to change people's beliefs and, even worse, may draw increased attention to it through the "Streisand effect." Many States have been moving toward criminalizing the creation and dissemination of false information to mitigate its adverse societal impact, but the deterrent effect of criminal sanctions is negated when the offending State itself sponsors or supports such activities.

The Biden administration chose a third approach: strategic public releases of unclassified intelligence regarding Russia's plans to correct misinformation disseminated by the Russian government. This strategy has arguably paid off in countering Russia's false narrative and has complicated its efforts to create a pretext for sending troops or to use an element of surprise to its advantage. It remains to be seen how effective such a strategy can be in a future conflict environment, especially when deepfake is employed to mobilize hybrid threats rather than to support an outright military invasion.

***

Hitoshi Nasu is a Professor of Law at the United States Military Academy.
