Military AI as Sociotechnical Systems

by Mitt Regan | Jun 10, 2025

Over the last several years, concern about the incorporation of AI into the use of weapons has focused overwhelmingly on lethal autonomous weapons systems (LAWS). While there is no consensus on the meaning of this term, many States, as well as members of the Group of Governmental Experts (GGE) working under the aegis of a United Nations initiative on this subject, define LAWS as weapons that are able to identify and strike targets without human intervention. The goal of the UN initiative is to reach agreement on a new protocol to the Convention on Certain Conventional Weapons (CCW) to prohibit and/or regulate such weapons.

The most controversial use of AI in modern warfare, however, does not involve LAWS. Israel is not using AI in its operations in Gaza to identify and strike targets without human intervention. Rather, it is using AI for what is called decision support, a function that is gaining increasing attention among those who follow the use of technology in military operations.

In this use of AI, algorithms generate lists of individuals and buildings that are potential targets, but humans must determine whether to strike them based on consideration of all sources of information. This is an increasingly common practice, as States rely on machine learning for tasks such as object recognition, estimation of the effects of strikes on different targets, and determination of which weapons will best achieve desired military impacts while conforming to law, rules of engagement, and policy. A human thus uses various forms of AI as tools to inform a final decision, not to delegate that decision to AI.

As criticism of Israel illustrates, however, significant risks can arise even when AI is used for decision support. Some observers have argued, for instance, that the directive to strike as many targets in Gaza as possible, as quickly as possible, has reduced the ability of humans to fully consider all sources of information in authorizing a strike. The result may be a tendency to assume that AI’s ostensibly scientific and objective identification of a target is a sufficient basis to attack it.

Other factors affecting the accuracy of targeting and the harm to civilians include: the features used to train a machine to classify an individual as a Hamas militant; the threshold confidence level regarded as sufficient to place an individual in this category; the data on which the machine was trained to determine the number of occupants in a building; the degree of divergence between the data on which the AI programs were trained and the operational conditions at the time of a potential strike; the ability of operators to recognize when this divergence exceeds the parameters of a model’s expected use; and the extent to which operators understand the underlying bases for an algorithmic target identification.
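
The first two factors can be made concrete with a minimal sketch. The Python snippet below is purely illustrative: the scores, labels, and threshold values are assumptions rather than a description of any fielded system. It shows how a threshold confidence level converts a model’s probabilistic output into a binary classification, and how lowering that threshold flags more individuals, with a corresponding rise in the risk of false positives.

```python
# Illustrative only: how a threshold confidence level turns model scores into
# "flagged / not flagged" classifications. Scores, names, and thresholds are hypothetical.

from dataclasses import dataclass


@dataclass
class Assessment:
    subject_id: str
    score: float   # model's estimated probability that the subject belongs to the target category
    flagged: bool  # True if the score meets or exceeds the threshold


def classify(scores: dict[str, float], threshold: float) -> list[Assessment]:
    """Apply a fixed confidence threshold to a set of model scores."""
    return [
        Assessment(subject_id=sid, score=s, flagged=(s >= threshold))
        for sid, s in scores.items()
    ]


if __name__ == "__main__":
    # Hypothetical model outputs for five individuals.
    scores = {"A": 0.95, "B": 0.81, "C": 0.62, "D": 0.55, "E": 0.30}

    for threshold in (0.9, 0.7, 0.5):
        flagged = [a.subject_id for a in classify(scores, threshold) if a.flagged]
        print(f"threshold={threshold:.1f} -> flagged {len(flagged)} of {len(scores)}: {flagged}")
```

The point is not the arithmetic but that the choice of threshold is a policy judgment embedded in the technical subsystem, with direct consequences for who is placed in the target category.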

All of these factors can have a profound impact on the ability to use a weapon in compliance with international law, even if that weapon is not a LAWS. If we are concerned about the potential harms from incorporating AI into the process of using a weapon, limiting attention to LAWS is thus an unduly narrow focus. It overlooks the risks that can arise from incorporating AI to perform a variety of functions beyond the identification and engagement of targets. As Marta Bo and Jessica Dorsey have described, the “unregulation” of the use of AI for decision support means that there has been inadequate consideration of the consequences of these risks for compliance with basic principles of international humanitarian law (IHL).

A useful conceptual framework for expanding our focus draws on systems theory to characterize reliance on AI in using a weapon as a sociotechnical system. Olya Kudina and Ibo van de Poel explain, “A system might here simply be understood as a number of elements (e.g. technologies, humans) that somehow are interrelated and function together to fulfill a shared goal or objective.” With respect to AI, as Valerie Hafez and her colleagues put it, “AI systems are complex sociotechnical systems – that is, they consist of material and social components which, by being put into particular kinds of relations, work together in specific ways.” We can think of a sociotechnical system, then, as one in which these material and social components are interrelated elements working toward a shared goal. The use of AI for decision support in the targeting cycle involves coordination of technology, humans, and institutions to achieve the goal of using a weapon.

As I describe below, this cycle is itself a subsystem within an even larger sociotechnical system composed of the lifecycle of the AI capabilities used in targeting. Conceptualizing weapons as sociotechnical systems enables us to focus on the range of risks arising from any incorporation of AI into the use of a weapon, including but not limited to LAWS.

Sociotechnical Systems and AI

The idea of a sociotechnical system is not new, but it is especially valuable in analyzing the use of AI. AI’s striking capabilities can lead to a focus only on its technical features, neglecting the fact that the value of its contributions depends upon how humans organize activities to use it. To achieve human goals, any technology must be effectively incorporated into a social subsystem that involves shared understandings of how humans can collaborate to use technology for certain purposes.

The systems perspective offers a corrective to the assumption that a given technology will increase productivity simply by being inserted into existing work routines and organizational processes. That assumption places the burden on humans to reorganize their relationships in order to adapt to the technology. Some reconfiguration may indeed be necessary, but it will not automatically improve performance.

Treating an AI use case as a sociotechnical system composed of complex relationships among humans, technology, tasks, and work routines helps identify the ways in which these relationships can advance or undermine the goals of the system. It also illuminates how elements in one part of the system respond to activities in others, generating further responses through feedback loops rather than simple linear causal chains, and in some cases producing emergent behavior. This approach has been especially important in thinking about enhancing the safety of critical infrastructure and high-risk operations.

A sociotechnical framework also highlights that human use of technology involves a lifecycle that begins with the decision whether to develop a given technology for a particular purpose and continues through its deployment and retirement. This has become a prominent focus in thinking about the use of AI in general and military applications of AI in particular. The latter lifecycle begins with consideration of the rationale for developing an AI component for military use and continues through stages such as development; design; testing, evaluation, verification, and validation; deployment; and post-deployment review of these components and their effective integration in achieving military objectives.

Attention to this lifecycle is crucial. AI requires continuous monitoring and feedback because the operational environments in which it is used may diverge, to varying degrees, from the conditions represented in the data on which its models were trained. The lessons learned from ongoing assessment must be incorporated into all phases of the lifecycle, so that post-deployment assessment feeds back into the pre-development stage to continually refine the performance, and minimize the risks, of defense systems that incorporate AI.
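
One way to picture this feedback loop is a post-deployment check that compares the data a model encounters in operation with the data on which it was trained and flags divergence for human review. The sketch below is a simplified, hypothetical illustration, not a description of any actual monitoring pipeline; the metric (a population stability index), the alert threshold, and the data are all assumptions.

```python
# Illustrative sketch of one post-deployment monitoring check: compare the distribution
# of a feature observed in operation against its training distribution and flag
# significant divergence for human review. The metric, threshold, and data are hypothetical.

import math


def population_stability_index(train: list[float], live: list[float], bins: int = 10) -> float:
    """Population Stability Index (PSI) between training and live samples of one feature."""
    lo, hi = min(train), max(train)
    width = (hi - lo) / bins or 1.0

    def bin_fractions(values: list[float]) -> list[float]:
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[max(idx, 0)] += 1
        # A small floor avoids log-of-zero for empty bins.
        return [max(c / len(values), 1e-6) for c in counts]

    expected, actual = bin_fractions(train), bin_fractions(live)
    return sum((a - e) * math.log(a / e) for e, a in zip(expected, actual))


if __name__ == "__main__":
    # Hypothetical feature values (e.g., a sensor-derived measurement).
    training_sample = [0.1 * i for i in range(100)]           # roughly uniform on [0, 10)
    operational_sample = [0.1 * i + 4.0 for i in range(100)]  # shifted operating conditions

    psi = population_stability_index(training_sample, operational_sample)
    # A common rule of thumb treats PSI above 0.25 as significant drift (an assumption here).
    if psi > 0.25:
        print(f"PSI={psi:.2f}: significant divergence - route to human review before further use")
    else:
        print(f"PSI={psi:.2f}: within expected range")
```

In a fielded system, such a check would itself sit inside organizational processes that determine who reviews the alert and whether the model is retrained, restricted, or withdrawn.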

Military AI through a Sociotechnical Lens

A sociotechnical systems approach has been applied to the use of AI in a variety of settings, but it can be especially valuable in the military context. First, it can enhance the design process by making it sensitive to the ways in which humans and AI will collaborate to perform the specific tasks that further a system’s goals. This focus on human-centered design helps ensure that the incorporation of AI will improve effectiveness, because it takes account of the forms of human-machine collaboration that are necessary to achieve those goals.

An important aspect of human-centered design is consultation with, and input from, a broad range of stakeholders, not just experts on the technology itself. Stakeholders thus include not only data scientists, computer scientists, and test and evaluation experts, but also managers, end-users such as commanders and operators, and those potentially affected by a system.

Second, viewing military AI use cases through a sociotechnical lens heightens awareness of the dynamism of the interactions that comprise the system and the importance of continuous learning. This underscores the need for ongoing adjustments in response to the complex ways in which technological and social subsystems mutually influence one another. Iterative adjustments are necessary as the capabilities and limitations of AI evolve. Gradual divergence of the operating environment from the conditions under which a model was trained, and informal operator reliance on AI for uses for which it was not intended, are just two examples. Such emergent system behavior may require revising roles, responsibilities, and forms of human-machine interaction to ensure that the system continues to meet its goals without generating additional risks.

Using a sociotechnical approach to map relationships among the elements of the system helps identify potential points to intervene to enable such adjustments. This is important to avoid making narrow changes to isolated elements of the system that could be ineffective or even counterproductive because they neglect the larger picture. Focusing on the system as a whole can help select adjustments that will have the biggest impacts because they leverage the way in which system elements influence one another.

In addition, a sociotechnical approach can increase awareness of the multiple points at which different types of risks may arise that threaten achievement of the system’s goals and cause harm. As safety science grounded in systems theory underscores, characterizing risks simply as human error or technological malfunction does not fully capture their nature. Identifying the links and feedback loops across a system helps anticipate secondary, tertiary, and other near- and long-term unintended risks that can flow from well-intentioned behavior.

A good example of this is the incident in 1988 in which the USS Vincennes mistakenly shot down an Iranian commercial aircraft in the Strait of Hormuz, killing all 290 people on board. The official investigation report attributed the incident to human error arising from a stressful situation in which the U.S. vessel was simultaneously involved in a firefight with small armed Iranian speedboats and was monitoring other potentially hostile activity in the area. The complex technology on board the Vincennes correctly indicated that the Iranian plane was ascending after taking off from a nearby airport, and that its transponder signaled that it was a civilian aircraft. Key members of the crew, however, reported that the aircraft was descending and emitting a signal indicative of a military aircraft, which led to fear that it was an Iranian F-14 intending to attack the ship. Relying on this information, the captain authorized firing two missiles that destroyed the aircraft.

While the accident involved human error, research focusing on the lifecycle of the technology on board underscored that decisions at the software development stage made such an error more likely. Specifically, while the technology displayed the altitude of the plane, it was not designed to provide information about the rate of change in altitude. As a result, operators had to compare altitude data at different times and calculate the rate of change in their heads or by hand while engaged in hostilities. A systems engineer had suggested an interface that indicated the rate of altitude change, but this had not been adopted in the design stage. The result was technology that contributed to the mistake.
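
The design point can be illustrated with a toy example: deriving and displaying the rate of altitude change from successive track updates, rather than leaving operators to compute it mentally from raw altitude readouts under stress. The snippet below is purely illustrative and bears no relation to the actual shipboard software; the track data and update interval are assumptions.

```python
# Toy illustration of the interface choice discussed above: derive and display the rate
# of altitude change rather than showing only raw altitude. Track data are hypothetical
# and bear no relation to the actual incident or shipboard software.

def climb_rate_fpm(prev_alt_ft: float, curr_alt_ft: float, interval_s: float) -> float:
    """Rate of altitude change in feet per minute over one update interval."""
    return (curr_alt_ft - prev_alt_ft) / interval_s * 60.0


if __name__ == "__main__":
    # Hypothetical track updates: (time in seconds, altitude in feet).
    track = [(0, 7000), (4, 7150), (8, 7310), (12, 7480)]

    for (t0, a0), (t1, a1) in zip(track, track[1:]):
        rate = climb_rate_fpm(a0, a1, t1 - t0)
        trend = "ASCENDING" if rate > 0 else "DESCENDING" if rate < 0 else "LEVEL"
        print(f"t={t1:>3}s  altitude={a1:>6} ft  rate={rate:+7.0f} ft/min  {trend}")
```

A simple derived indicator of this kind removes a mental arithmetic task from operators at precisely the moment when their cognitive load is highest.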

The fact that different types of risks can arise at different points of human-machine interaction means that distinctive ethical concerns may attach to particular tasks that humans and machines perform together. Appreciating this enables analyses that are tailored to the particular risks and ethical concerns arising at different points in the system. It also highlights the need for extensive training of people in specialized roles so that they fully understand the technology and its appropriate uses.

A related benefit is that a sociotechnical approach highlights that a system involves distributed responsibility across multiple participants who perform interrelated tasks. One group of scholars has drawn on this insight, for instance, to suggest that human control and oversight of an AI weapon system involves different forms of human involvement at what they call the technical engineering, sociotechnical, and governance levels of a system. Activity at these levels can occur before, during, and after deployment. The authors argue that effective control over the system requires activity at all three levels during all three phases. This suggests one response to concern about responsibility gaps. Rather than identifying a single agent to bear full accountability, the systems approach allows for accountability to be distributed across different forms of human involvement at multiple points, no one of which alone can guarantee control over the entire system.

Appreciating that a system involves numerous tasks performed through multiple human-machine interactions also sheds light on the concept of AI “explainability.” Humans performing different tasks need different types of explanations to do their work. A software engineer, for instance, requires a different understanding of the system than a commander deciding whether to deploy an AI-enabled weapon, and an operator on the front line will need yet another type of explanation. This underscores the need for a more granular analysis of explainability at different points in the lifecycle of a system, rather than an undifferentiated conception of explainability assumed to be suitable for all participants.

The GGE has acknowledged the importance of human-machine interaction, stating that one principle guiding its work is that human-machine interaction, “which may take various forms and be implemented at various stages of the life cycle of a weapon,” should ensure that LAWS are operated in compliance with international law. In addition, the United Kingdom has suggested the need to focus on “human touchpoints” in the lifecycle of weapon systems more generally. The CCW process, however, has steadfastly confined its focus to LAWS and human control over striking a target.

Conclusion

Conceptualizing military AI use cases as sociotechnical systems is crucial to understanding and managing the risks of the incorporation of AI into the process of using a weapon. Those risks are not confined to LAWS, and focusing only on LAWS forgoes the opportunity to gain insights into the impacts of the different forms of collaboration between humans and machines that comprise any such use of AI.

UN General Assembly Resolution 79/239 of December 2024 on Artificial Intelligence in the Military Domain does refer to “the whole life cycle of artificial intelligence capabilities applied in the military domain, including the stages of pre-design, design, development, evaluation, testing, deployment, use, sale, procurement, operation and decommissioning.” The resolution calls upon the UN Secretary-General to seek the views of States on the use of AI in the military domain. This seems like a promising embrace of a sociotechnical approach that would include the use of AI for decision support in targeting. The resolution explicitly says, however, that State submissions should be “with specific focus on areas other than lethal autonomous weapons systems.”

Reflecting this bifurcation, the recent Informal Consultations on LAWS did not draw on submissions in response to Resolution 79/239. This continues the unfortunate separation between discussion of weapon systems and discussion of the ways in which AI may be incorporated into their use beyond autonomous target identification and engagement. Linking the two topics could enable a useful exchange of ideas and best practices on identifying and mitigating the risks arising from these broader uses of AI. Indeed, several States have suggested with respect to LAWS that “voluntary initiatives could be useful to share best practices and build norms” even in the absence of agreement on a binding instrument. This would be a valuable step. Expanding the remit of the UN CCW group to include all weapon systems rather than only LAWS may not be feasible, but discussion pursuant to Resolution 79/239 could provide a separate opportunity to address these broader uses.

In the meantime, a welcome development is that other venues are emerging to examine military applications of AI beyond LAWS. Over the next several years, the most significant risks from incorporating AI into the use of weapons are likely to come from systems that are not LAWS. It is well past time for the debate to focus on these uses of AI. Analyzing them as sociotechnical systems will help us identify the distinctive risks that can arise from the complex human-machine interactions that comprise these systems, as well as the steps we can take to minimize them.

***

Mitt Regan is McDevitt Professor of Jurisprudence and Co-Director of the Center on National Security at Georgetown Law Center, and Senior Fellow at the Stockdale Center on Ethical Leadership at the United States Naval Academy.

The views expressed are those of the author, and do not necessarily reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.

Articles of War is a forum for professionals to share opinions and cultivate ideas. Articles of War does not screen articles to fit a particular editorial agenda, nor endorse or advocate material that is published. Authorship does not indicate affiliation with Articles of War, the Lieber Institute, or the United States Military Academy West Point.

Photo credit: Josef Cole, U.S. Cyber Command