Lieber Studies Big Data Volume – Attacking Big Data: Strategic Competition, the Race for AI, and Cyber Sabotage


Feb 8, 2024


Editors’ note: This post is based on the authors’ chapter in Big Data and Armed Conflict (Laura Dickinson and Ed Berg eds. 2024), the ninth volume of the Lieber Studies series published with Oxford University Press.

Prevailing in strategic competition with China is now the centerpiece of United States national security policy. A key component of this policy is maintaining dominance in technology and innovation. However, China is engaged in its own strategy to outpace and overtake the United States in this key area. While it has yet to succeed, it is quickly closing the gap. As the bipartisan National Security Commission on Artificial Intelligence (NSCAI) recently noted, “For the first time since World War II, America’s technological predominance—the backbone of its economic and military power—is under threat.” Emerging technologies have the potential to fundamentally reshape key aspects of the international order such as the economic and military balance among States. Artificial intelligence (AI) is chief among these emerging, potentially game-changing technologies, and China has publicly announced its strategy to become the leading AI power by 2030.

Developing effective AI systems is extremely complex and depends on a myriad of variables. One critical component of the development process is the availability of sufficiently large, relevant, and accurate data sets for AI systems and their underlying algorithms to train on. China’s mass data collection and aggregation practices offer at least one distinct point of comparative advantage in the AI race. However, like any digitally based system, AI systems and the data sets they rely on are vulnerable to attack.

As the NSCAI notes in its report, “Given the reliance of AI systems on large data sets and algorithms, even small manipulations of these data sets or algorithms can lead to consequential changes for how AI systems operate.” “Adversarial AI” attacks are far from theoretical, with reports of incidents already emerging in the commercial sector (see here and here). Data poisoning is the deliberate pollution of training data to mislead the underlying machine-learning algorithms and skew their results. This technique, along with other cyber operations aimed at manipulating, corrupting, or denying the data sets China is using to advance its AI R&D (all methods of cyber sabotage), could, under the right circumstances, offer a non-forcible means of impeding China’s progress. It is therefore important to consider whether, as a theoretical means of offsetting China’s data advantage, such operations could be conducted consistently with international law.
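The mechanics of the label-flipping variant of data poisoning described above can be illustrated with a toy example. The sketch below is purely illustrative: the data, the attack, and the simple nearest-neighbour “model” are hypothetical stand-ins, not a depiction of any real AI system or operation. It shows how corrupting even a fraction of training labels degrades a model evaluated against clean data.

```python
# Minimal sketch of label-flipping data poisoning (illustrative only).
# An attacker who can corrupt training labels degrades the resulting model
# even though the feature data itself is untouched.

def predict(train_set, x):
    """1-nearest-neighbour prediction: label of the closest training point."""
    nearest = min(train_set, key=lambda s: abs(s[0] - x))
    return nearest[1]

def accuracy(train_set, test_set):
    hits = sum(predict(train_set, x) == y for x, y in test_set)
    return hits / len(test_set)

# Two well-separated classes along a single synthetic feature.
clean = [(float(i), 0) for i in range(10)] + \
        [(float(i) + 20, 1) for i in range(10)]

# Poisoning step: flip the labels of one quarter of the training samples.
poisoned = [(x, 1 - y) if i % 4 == 0 else (x, y)
            for i, (x, y) in enumerate(clean)]

print(accuracy(clean, clean))     # model trained on clean labels: 1.0
print(accuracy(poisoned, clean))  # same features, poisoned labels: 0.75
```

Real-world poisoning of large training corpora is far subtler than this sketch, but the underlying logic is the same: small, targeted corruption of training data propagates into systematic model error.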

As the brief analysis below will demonstrate, we believe that various cyber means allow for sabotage of China’s AI development in ways that neither violate the prohibition on the use of force nor, depending on how those means are employed, implicate the rule of prohibited intervention. Carefully crafted precision tools might be used to impede China’s progress in AI without crossing any international law thresholds.

Cyber Sabotage and the Jus ad Bellum

History is replete with examples of States and partisan groups using sabotage as a method of warfare to degrade an enemy’s warfighting capacity. Sabotage is not, however, an activity States have relegated to use in warfare. There are numerous examples of States engaging in sabotage as a tool of peacetime statecraft to subvert adversaries and deter or prevent war or other national security threats.

For example, in the 1916 so-called “Black Tom” bombing, Germany destroyed two million tons of war materials stored at a train depot in New York harbor as part of an extended campaign of disruptive attacks against the United States, intended to stem the flow of war supplies to the Allied Powers before the United States’ entry into the First World War. During the Cold War, both the Soviet Union and the United States routinely used sabotage as a tool of covert statecraft in both the physical and cyber realms. In the early 1980s, the United States “supplied” the Soviet Union with Trojanized software that eventually led to a major pipeline disaster, an episode commonly referred to as the Farewell Dossier. That episode served as a prelude to what some have dubbed “cybotage”: operations conducted in the cyber domain intended to disrupt or impede gathering national security threats short of actual war. These are but a few examples (see the power failure at an Iranian uranium site caused by Israel, and the United States possibly interfering with North Korean missiles).

The starting point for assessing the international law implications of a State engaging in sabotage is the jus ad bellum. This body of customary international law, reflected in Articles 2(4) and 51 of the UN Charter, sets out a presumptive prohibition against States threatening or using force in the conduct of their international relations. This general prohibition admits only three exceptions. First, actions conducted with the consent of the affected State do not violate Article 2(4). Second, States can lawfully use force under a Chapter VII enforcement action authorized by the Security Council. Finally, under UN Charter Article 51, States are permitted to use force in self-defense when faced with an actual or imminent “armed attack.” Once triggered, the right of self-defense permits the victim State to respond with defensive force, subject to the principles of necessity and proportionality. Any act of “cybotage” amounting to “force” against Chinese data would have to comply with these legal restrictions.

For a host of reasons dealt with more fully in our full-length chapter, it is unlikely in our view that data-poisoning operations would rise to the level of a use of force. Thus, when it comes to data-poisoning or many other types of cyber sabotage, debates over the rigidity or malleability of imminence and the parameters of self-defense can quickly fall away. That is not to say, however, that cyber sabotage can never rise to the level of a use of force (as discussed below). For example, many argue Stuxnet constituted a use of force, although to date no State has asserted this claim. But the vast majority of cyber operations, even those that achieve some disruptive or denial effect, would likely fall below the use-of-force threshold and therefore not implicate the justification of self-defense.

Cyber Sabotage Below the Use of Force Threshold

International law’s scope of regulation does not cease below the use-of-force threshold. The legality of any counter-AI cyber operation against China’s data falling short of a use of force would still have to be assessed against other accepted restraints on inter-State action. Primary among these is the rule of prohibited intervention.

The principle of non-intervention “forbids all States or groups of States to intervene directly or indirectly in internal or external affairs of other States.” The International Court of Justice has characterized the principle as containing two necessary components: an act of coercion aimed at matters within the State’s domaine réservé. Even if the exact parameters of the principle remain vague and elusive, it is a proscription with a long history, well recognized in customary international law and generally accepted as applicable to cyber operations.

Determining whether an offensive counter-AI operation involving data poisoning would breach the non-intervention rule requires a nuanced analysis. First, the nature of the data is not the key legal factor. Whether the data is private or public may have significance with respect to the domaine réservé, but it has little bearing on whether the act is coercive. Nor is the act of corruption itself the key legal issue. Rather, what matters is the intended outcome of the data corruption. If, by corrupting the data, the United States intends to dictate China’s actions or deprive it of some protected prerogative, the operation may then amount to coercion. However, the analysis must be done on a case-by-case basis and considered against the paucity, if not absence, of current State practice regarding coercive cyber interventions.

Concluding Thoughts

Given the above analysis, we offer several conclusions. First, applying the doctrines of use of force and prohibited intervention to China’s actions, we conclude that China’s development of AI is extremely unlikely to meet the threshold of an imminent armed attack or a use of force against the United States, thereby foreclosing the option of responding in self-defense. We further conclude that in most cases such R&D does not amount to a prohibited intervention; a contrary finding would perhaps have opened the door to counter-AI cyber operations as countermeasures. However, this does not mean that the United States is precluded from taking any action against China’s AI R&D data.

Furthermore, we conclude that whether military operations against China’s data could be conducted consistently with international law would turn on a number of variables because, as with espionage, it is not possible to identify a rule of international law that deems cyber sabotage an internationally wrongful act per se. Given current understandings and empirical evidence, cyber means could be employed to conduct sabotage in ways unlikely to rise to the level of a prohibited use of force. In some circumstances, however, such operations might constitute a form of coercive intervention into China’s internal affairs, depending on, among other factors, the source of the data and its current value on the spectrum of AI development. On the other hand, carefully crafted sub-use-of-force counter-AI cyber operations might not implicate this rule and would not otherwise be barred by international law.

As such, while we are not advocating for such actions, international law would not per se foreclose them as policy options.


Gary Corn is the Director of the Technology, Law & Security Program and Adjunct Professor of Cyber and National Security Law at the American University Washington College of Law.

Eric Talbot Jensen is a Professor of Law at Brigham Young University.




Photo credit: Unsplash