Augmented Reality Battlefield


Apr 28, 2022


States are prioritizing measures to enhance soldiers’ situational awareness during military operations, including the development of augmented reality capabilities. Augmented reality, as distinguished from virtual reality, superimposes digital content on a live view of the real world as perceived by the user. This post explores the law of armed conflict implications of implementing augmented reality on the battlefield.

Augmented Reality Tools and Capabilities

Augmented reality (AR) relies on GPS data, satellite imagery, and artificial intelligence to generate scene-understanding features for users. AR technology channels this digital information through an interface device for visual display.
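The registration step described above can be illustrated with a deliberately simplified sketch. The function below is hypothetical and not drawn from any fielded system: it places a geolocated point of interest on the user's display given the user's GPS position and compass heading, using a flat-earth approximation that is valid only over short distances. A real AR system would fuse inertial, visual, and satellite data continuously.

```python
import math

def project_to_view(user_lat, user_lon, user_heading_deg, poi_lat, poi_lon,
                    horizontal_fov_deg=70.0, screen_width_px=1920):
    """Toy illustration of AR registration: map a geolocated point of
    interest (POI) onto the user's display from the user's GPS position
    and compass heading. Flat-earth approximation; illustrative only."""
    # Approximate metres per degree of latitude/longitude at the user's position
    m_per_deg_lat = 111_320.0
    m_per_deg_lon = 111_320.0 * math.cos(math.radians(user_lat))
    dx = (poi_lon - user_lon) * m_per_deg_lon   # east offset (m)
    dy = (poi_lat - user_lat) * m_per_deg_lat   # north offset (m)

    bearing = math.degrees(math.atan2(dx, dy)) % 360       # bearing to POI
    rel = (bearing - user_heading_deg + 180) % 360 - 180   # relative to gaze

    if abs(rel) > horizontal_fov_deg / 2:
        return None  # POI lies outside the field of view: draw no overlay
    # Map the relative bearing linearly onto a horizontal screen coordinate
    x = (rel / horizontal_fov_deg + 0.5) * screen_width_px
    distance_m = math.hypot(dx, dy)
    return {"screen_x": round(x), "distance_m": round(distance_m)}
```

A point of interest directly ahead lands at the center of the display with its range annotated; one behind the user produces no overlay at all, which is the registration behavior the headset's sensors and tracking features exist to get right.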

Various types of interface devices are equipped with sensors and user-tracking features to correctly register virtual and real images. The U.S. Army awarded Microsoft a contract for 120,000 augmented-reality headsets, adapting the HoloLens headset to military purposes. The custom-made Integrated Visual Augmentation System (IVAS) will combine a computer, an array of sensors, and a wide lens with an internal display to produce, share, and enhance digital information for individual soldiers wearing this headset.

Target Identification

The greater situational awareness afforded by AR offers soldiers the possibility of innovative engagement with their adversary. For instance, it may enable soldiers to see “through” buildings and walls, by feeding satellite imagery and information from various sensors right into the soldier’s (enhanced) field of vision. A thermal sight on a weapon that is wirelessly connected to the IVAS headset would allow soldiers to see a target in all conditions and display the distance to it. As another example, soldiers inside an armored vehicle might enjoy 360-degree vision in real time by using data from externally mounted sensors.

Existing capabilities already feed large amounts of data from various sources, such as unmanned aerial systems equipped with extended loiter capabilities and multi-spectral sensor suites, which facilitate identification of the nature of a potential target, day or night (see Schmitt at 401). AR devices and capabilities will add yet another means of enhancing the targeting process. However, belligerent parties must be mindful of the legal obligations relevant to the use of such devices under the law of armed conflict.

In conducting hostilities, belligerent parties must distinguish between civilians and combatants and between civilian objects and military objectives. Attacks may not be directed against civilians or civilian objects. To comply with these rules of distinction, augmented reality capabilities must afford the soldiers using them sufficient detail to determine the status of individuals appearing as virtual images.

For example, an AR capability that provides a soldier with merely the image of a group of people may be insufficient for targeting purposes unless only enemy combatants are known to be present in the area of operations. Absent such conditions, a soldier would require other indicia, such as military uniforms, other fixed distinctive signs, or weapons, to distinguish protected civilians from lawful targets.

Further complexity arises considering that the law of armed conflict allows not only for status-based targeting but also for conduct-based targeting. Civilians are protected from direct attack under the law of armed conflict, “unless and for such time as they take a direct part in hostilities.” The precise application of this exception to civilian protection is contested. Assessments of civilian direct participation in hostilities are highly contextual and deeply dependent on factual circumstances. For example, as a general matter, a civilian’s engagement in a cell phone call (without further information) would obviously not constitute direct participation in hostilities. However, if the facts demonstrate that the civilian is relaying targeting information about friendly forces to the enemy’s armed forces, this could very well be a case of direct participation.

To effectively contribute to determinations of conduct-based targeting, AR capabilities must afford soldiers sufficient detail to distinguish acts that indicate either enemy status or direct participation from those that do not. The integration of artificial intelligence (AI) into AR capabilities could satisfy this requirement. For example, AI algorithms could progress to the point where details about a person’s gait, handling of objects, gestures, and other movements indicate—to varying degrees of certainty—whether a particular act reveals status or direct participation in hostilities. To the extent that these details are reasonably reliable and help soldiers making attack decisions, such capabilities could help them use lethal force in compliance with law-of-war targeting rules related to distinction.
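One way to picture the reliability qualification above is as a display-filtering step. The sketch below is entirely hypothetical (the cue names and threshold are invented for illustration): AI-derived confidence scores for observable indicia are aggregated, and only cues that clear a reliability threshold are overlaid for the soldier. The point of the design is that the system decides only what to display; the legal assessment and any attack decision remain with the human operator.

```python
def summarize_indicia(cue_scores, display_threshold=0.7):
    """Hypothetical sketch: filter AI confidence scores for observable
    indicia (e.g. 'carrying_weapon', 'wearing_uniform') before display.

    cue_scores: dict mapping cue name -> model confidence in [0, 1].
    Returns the cues deemed reliable enough to overlay, or an explicit
    'insufficient data' marker when nothing clears the threshold."""
    reliable = {cue: round(score, 2)
                for cue, score in cue_scores.items()
                if score >= display_threshold}
    if not reliable:
        # Low-confidence cues are withheld rather than shown as fact
        return {"overlay": None, "note": "insufficient data - no cue displayed"}
    return {"overlay": reliable, "note": "human judgment required"}
```

Withholding low-confidence cues, rather than presenting them as established fact, reflects the requirement that the details be "reasonably reliable" before they inform a soldier's distinction analysis.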

Furthermore, what specific items and equipment potential targets carry may be relevant. A soldier’s decision as to whether the persons are lawful targets could depend, for example, on whether they carry weapons and ammunition or farming equipment and cameras. In short, an AR capability must present soldiers with sufficient data to allow them to assess the legal status of individuals identified by the capability, considering a totality of circumstances that can feasibly be apprehended by the soldier through the AR system.

In addition to identification of civilians and civilian objects, AR technology might also aid soldiers in applying rules related to the avoidance and minimization of collateral or incidental harm to civilians. These rules include the prohibition against disproportionate attacks and the requirement to take feasible precautions in attack. To be operationally effective and to allow combatants to satisfy their obligations under the law of armed conflict, AR devices must provide the soldiers using them with sufficient data, both qualitatively and quantitatively, to make reliable proportionality assessments when engagements will foreseeably result in harm to civilians or civilian objects.

Target Verification

Perhaps the greatest contribution AR devices might make to the conduct of hostilities concerns target verification. An important feature of such devices is their interconnectivity, which enables information sharing and decision-making in the real-time battlefield environment. Augmented reality has the potential to allow soldiers to instantaneously summon vast amounts of information about a prospective target, the surrounding terrain, and the location of enemy and friendly forces.

The ability to share targeting information over a visual medium such as an AR device would provide individual troops with additional means to evaluate attacks. This accords with the duty to take feasible precautions to reduce the risk of harm to protected persons and objects, for instance, by verifying the target as a legitimate military objective and minimizing incidental damage to civilians and civilian objects. If it becomes apparent prior to the attack that the target is not a military objective or that the attack would be disproportionate, the attack must be canceled or suspended. In this regard, an AR device, in a legal sense, may serve as a measure of precaution taken by soldiers in the conduct of hostilities.

However, there are three considerations that commanders should account for in prescribing the use of augmented reality devices as a practical means of exercising precautions.

First, as Heather Harrison Dinniss and Jann Kleffner warn, an uncontrolled feed of raw data and information sharing involves the risk of “information overload,” which can lead to fatal mistakes. Poor military judgment resulting in errors or mistakes is not by itself a violation of the duty to take feasible precautions. Nevertheless, AR devices must be equipped with an intuitive user interface to enable soldiers, with adequate training, to use the increased amount of information without being overwhelmed in combat situations.

Second, commanders should anticipate and understand the risk of signal interference—the spoofing of sensors—or simply the inaccuracy of data being collected. Any substantial rate of failure to function as intended would undermine confidence in the reliability of augmented vision. Commanders and other decision-makers must apply the law of armed conflict in good faith and based on the information reasonably available to them. Information streamed through AR devices will often be imperfect or incomplete. Nonetheless, this does not preclude the use of AR devices in military decision-making in the conduct of hostilities. Doubt and uncertainty about the factual circumstances are inevitable elements of armed conflict.
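One practical mitigation for the spoofing and inaccuracy risks just described is cross-checking nominally independent data sources. The sketch below is illustrative only (the coordinate frame and tolerance figure are notional): a GPS-derived position is compared against an independently derived estimate, such as an inertial fix, and the feed is flagged as suspect when the two diverge beyond a tolerance.

```python
import math

def cross_check_position(gps_fix, inertial_fix, tolerance_m=50.0):
    """Illustrative consistency check: compare a GPS-derived position
    against an independently derived (e.g. inertial) estimate and flag
    the feed as suspect when they diverge beyond a tolerance.

    Positions are (east_m, north_m) offsets in a shared local frame."""
    divergence = math.hypot(gps_fix[0] - inertial_fix[0],
                            gps_fix[1] - inertial_fix[1])
    return {"divergence_m": round(divergence, 1),
            "suspect": divergence > tolerance_m}
```

A flagged feed would not automatically disable the device; it would alert the user and the chain of command that the augmented picture may be unreliable, which is precisely the kind of information a good-faith decision-maker needs.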

With this last point in mind, it is worth noting that AR tools likely will include back-end processes to consolidate and manage gathered information to present in a user-friendly way. Furthermore, it is conceivable that AR tools could include artificial intelligence that analyzes data, makes predictions or estimations about the battlefield, and even proposes desirable courses of action for soldiers to take.

Third, it must be understood who carries responsibility for the implementation of the legal obligations discussed above. We must never lose sight of the fact that belligerent parties conduct military operations through a chain of command. While the commander will of course rely on information and advice provided by other combatants (for example, intelligence personnel and weaponeers and targeting personnel), it will not be the case that every combatant involved in a military operation will apply the principle of proportionality independently or de novo. As the United Kingdom asserted in its Statement of Ratification of the First Additional Protocol, “[T]he obligation to comply with” law-of-war targeting rules pertaining to the cancellation or suspension of certain attacks “only extends to those who have the authority and practical possibility to cancel or suspend the attack.”

It follows that the operational and legal risks associated with the use of AR devices will often be calculated and assessed by members of the military chain of command above those soldiers using such devices. This is not new or unique. Military operations, including decisions about means and methods of warfare, have always been made by commanders and executed by subordinates. Thus, soldiers directed by commanders and other superiors to equip themselves with AR devices while conducting hostilities will often have no choice but to trust that their superiors have adequately assessed the functional integrity and relevant risks. For example, a soldier using an AR device to identify a target of attack ordinarily would be entitled to rely on the data provided by the device in conducting the attack.

This means that, as with all means and methods of warfare, commanders and military superiors have an ultimate responsibility to ensure that equipment such as AR devices is functionally sound and reliable under adverse conditions. This responsibility is demanded by both operational and legal considerations.

Concluding Thoughts

The use of AR on the battlefield does not necessarily raise objections under the law of armed conflict. Instead, such technology could result in greater situational awareness and better decision-making, which could enhance legal compliance.

Nonetheless, this technology, and perhaps more critically, its development and implementation in the conduct of warfare is not without controversy. Notably, in a high-profile incident in 2019, Microsoft employees publicly demanded that the company withdraw from a contract to supply AR headsets—precisely the type of technology discussed in this post—to the U.S. Army. The employees wanted no part in, as they described it, “turning warfare into a simulated ‘video game.’”

The moral implications of the use of AR and other advanced technology in warfare are complex and certainly warrant close attention. From our vantage point, it seems inevitable that the future of warfare will include technology that enhances the physical and cognitive capabilities of soldiers as States continue to seek new means and methods of warfare based on technological development. As just one example, Neuralink is currently developing brain-machine interface devices that will enable a human user’s brain to communicate directly with machines.

States must think carefully about the role of human decision-makers in the conduct of hostilities when they are aided by machine processes. This issue is obviously not unique to AR. It features prominently in the debate about deployment of lethal autonomous weapons systems.

If and when such technology is implemented in armed conflict, it will be critical that States pay close attention to the implications of such use for belligerent parties’ obligations under the law of armed conflict. However, much will depend on the details of how soldiers interact with such tools. These considerations must be explored fully and should be neither understated nor overstated prior to deployment of AR capabilities.


Robert Lawless is an Assistant Professor in the Department of Law and Managing Director of the Lieber Institute for Law & Land Warfare at the United States Military Academy, West Point.

Hitoshi Nasu is a Professor of Law in the Department of Law at the United States Military Academy.



Photo credit: Bridgett Siter, U.S. Army