Fighting at Machine Speed: AI and U.S. Army Counterfire Under the Law of War – Part II
Part I of this post explained the growing relevance of counterfire operations, the challenges that accompany carrying them out, and the potential contributions of artificial intelligence (AI) to mitigating those challenges. Using the Department of Defense (DoD) Law of War Manual as an authoritative guide under jus in bello, this post builds upon Part I to offer legal considerations related to incorporating AI into counterfire operations.
The law of war’s core principles (military necessity, humanity, proportionality, distinction, and honor) “provide the foundation for the specific law of war rules” and are a “guide for conduct during war when no specific rule applies,” according to the Manual (§ 2.1.2). Honor, which the Manual indicates requires fairness and mutual respect between opposing forces, undergirds the law of war’s structure and informs the interpretation of the remaining principles. Because it functions as an animating premise rather than an operative decision rule, honor is not treated independently in this post.
This post assumes the lawful development of the Maven Smart System (MSS), Tactical Intelligence Targeting Access Node (TITAN), and Air Space Total Awareness for Rapid Tactical Execution (ASTARTE) under DoD policy and review processes. It therefore includes no evaluation of whether the systems are lawful weapons per se. First, I examine how the operational use of AI-enabled decision support in counterfire affects compliance with the law of war principles broadly. Second, and more specifically, I analyze how AI-enabled decision support affects the rules governing the conduct of hostilities.
Military Necessity and AI Integration
As a threshold matter, the Law of War Manual recognizes some “types of actions are … inherently militarily necessary” (§ 2.2.3.2). Counterfire constitutes an inherently militarily necessary action because it targets an opposing force’s artillery: objects whose sole function is to contribute to military action. This is most obviously the case in reactive counterfire, which responds directly to an adversary’s prior attack. Humanity, which the Manual understands as military necessity’s “logical inverse,” remains implicated insofar as necessity must be assessed and bounded in each operational context (§ 2.3.1.1).
The legal questions raised by AI integration into counterfire operations arise most acutely under the principles of distinction and proportionality. Those principles and related rules govern target identification, target prioritization, and the assessment of collateral effects, functions that AI-enabled systems increasingly inform or accelerate. Military necessity and humanity structure and constrain the analysis, but the legality of AI-enabled counterfire ultimately turns on whether these systems maintain compliance with distinction and proportionality at speed and scale.
Legal Constraints on Counterfire
Common across all three systems is their ability to receive and process large amounts of data to provide attack recommendations to a decision-maker. Chapter V of the Manual outlines how the principles and rules governing the conduct of hostilities bear on those systems’ capabilities during an attack. The four rules below, drawn from the Manual’s § 5.4.2, are particularly relevant to AI-enabled counterfire “attack” decisions:
1. Combatants may make military objectives the object of attack, but may not direct attacks against civilians, civilian objects, or other protected persons and objects;
2. Combatants must refrain from attacks in which the expected loss of life or injury to civilians, and damage to civilian objects incidental to the attack, would be excessive in relation to the concrete and direct military advantage expected to be gained;
3. Combatants must take feasible precautions in planning and conducting attacks to reduce the risk of harm to civilians and other persons and objects protected from being made the object of attack;
4. In conducting attacks, combatants must assess in good faith the information that is available to them.
Counterfire missions satisfy the first rule because the law of war categorically recognizes an opposing force’s artillery (or other weapon systems) as a military objective. Even so, because modern artillery is highly mobile, counterfire is inherently time sensitive, and displacement may raise concerns about whether the target’s location remains a valid military objective at the moment of engagement. Sensitivity to timing thus shapes how the remaining rules governing the conduct of hostilities apply in counterfire operations.
The fourth rule listed above requires information to be assessed in good faith. Section 5.3 of the Manual specifies that “[c]ommanders and other decision-makers must make decisions in good faith and based on the information available to them.” The rule does not require perfection and protects against post hoc determinations. At the same time, the rule likely presupposes that decision-makers will meaningfully engage with information that is reasonably available to them at the time of the decision. The rule affords commanders a degree of maneuverability by recognizing that information will often be incomplete, ambiguous, or time-sensitive, particularly in dynamic targeting environments. That flexibility, however, is not unlimited. The obligation to act in good faith implies not only protection from hindsight bias, but also an affirmative duty to consider relevant information that is accessible and operationally useable.
Systems such as MSS, TITAN, and ASTARTE materially alter what information is “available” by rapidly collecting, sorting, and contextualizing data that would otherwise exceed human cognitive capacity in compressed timelines. As a result, the good-faith inquiry increasingly turns not on whether the commander possessed perfect information, but on whether decision-makers used available tools for processing and synthesizing information reasonably and appropriately. While the law does not mandate adoption of any specific technology, the growing prevalence of AI decision support raises a harder question: when the military fields systems designed to reduce uncertainty, does a decision-maker’s failure to understand or use those systems fall short of the requirement to assess available information in good faith?
More information does not necessarily improve compliance, nor must commanders accept every AI-generated recommendation. Rather, the emerging legal tension lies in how commanders exercise judgment in information-rich environments, balancing reliance, skepticism, and time constraints, when technology exists specifically to make judgments more informed. In that sense, AI systems do not displace the good faith requirement; they recalibrate expectations for how it is satisfied.
How to Satisfy Good Faith Requirements
The question of good faith thus shifts from merely what information was available at the time to how the commander processed and prioritized the vast amounts of available information. In the context of these new AI systems, compliance with the good faith assessment of information rule requires commanders and decision-makers to become familiar with how these systems work and to ensure their proper integration into planning and decision-making procedures. To do otherwise would fall short of this rule’s requirements.
Next, consider proportionality under the law of war (Rule 2 above). A commander’s responsibility is significantly implicated when integrating AI into counterfire operations. AI systems can influence how anticipated military advantage and expected incidental harm are assessed, but they do not displace the rule or the commander’s responsibility for implementing it. These systems present the greatest legal risk when a commander or decision-maker accepts a recommendation as though its proportionality assessment were pre-determined.
The rule allows for proportionality to be implemented through applicable procedures in place for counterfire operations, such as commander’s attack guidance, which specifies the fire command based on target type. The AI systems do not require new attack guidance or novel weaponeering plans; they simply require the proper integration of existing procedures into the programs.
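To make this concrete, consider a minimal sketch of how existing attack guidance might be encoded as data the system consumes, rather than as new weaponeering logic. This is an illustration only; the target types, fire commands, and approval levels below are invented assumptions, not features of MSS, TITAN, or ASTARTE.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AttackGuidance:
    fire_command: str    # pre-approved munition and volume pairing
    approval_level: str  # echelon that must authorize the mission

# Hypothetical attack guidance matrix keyed by target type.
ATTACK_GUIDANCE = {
    "towed_artillery": AttackGuidance("HE/VT, battery 3 rounds", "battalion_fdc"),
    "rocket_launcher": AttackGuidance("HE/PD, platoon 2 rounds", "brigade_fires"),
    "mortar_section": AttackGuidance("HE/PD, section 2 rounds", "battalion_fdc"),
}

def lookup_fire_command(target_type: str) -> AttackGuidance:
    """Return the commander's pre-approved guidance for a target type.

    Unknown target types are never engaged by default; they are
    escalated to the commander for a fresh judgment.
    """
    guidance = ATTACK_GUIDANCE.get(target_type)
    if guidance is None:
        raise ValueError(f"no attack guidance for {target_type!r}; escalate to commander")
    return guidance
```

On this design, promulgating new guidance means changing the approved mapping, not the software: the proportionality judgment stays where the Manual places it, with the commander.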
Recommendations
To ensure that proportionality and other legal determinations embedded in AI-enabled counterfire systems remain sound, commanders should implement standardized operating procedures (SOPs) that treat distinction and proportionality as institutional obligations rather than ad hoc judgments. While there are many ways to ensure compliance with the rules through SOPs, I posit the following five recommendations.
1. Require Units to Routinely Validate and “Scrub” System Inputs
Military units should establish a schedule to verify that system inputs (targeting databases, environmental assumptions, communication links, and the commander’s guidance) remain current and accurate. Although many inputs are updated in real time, periodic confirmations such as time checks, communication checks, and spot checks reduce the risk of lag or data drift that could distort proportionality assessments. A sketch of such a check follows.
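As an illustration only, the scrub could be reduced to a scheduled staleness check over named inputs. Every input name and threshold below is an assumption for the sketch, not a fielded standard.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical maximum ages before an input is flagged as stale.
MAX_INPUT_AGE = {
    "targeting_database": timedelta(hours=6),
    "no_strike_list": timedelta(hours=24),
    "commanders_guidance": timedelta(days=7),
    "environmental_assumptions": timedelta(hours=3),
    "communication_links": timedelta(hours=1),
}

def stale_inputs(last_updated: dict[str, datetime]) -> list[str]:
    """Return the inputs whose last update exceeds the allowed age."""
    now = datetime.now(timezone.utc)
    never = datetime.min.replace(tzinfo=timezone.utc)
    return [
        name
        for name, limit in MAX_INPUT_AGE.items()
        if now - last_updated.get(name, never) > limit
    ]
```

A fires cell might run such a check at each shift change and withhold system-generated recommendations until every flagged input is refreshed and verified.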
2. Designate Responsible Personnel at Each Echelon
Commanders should appoint specific officers or soldiers at each echelon of system use with responsibility for system management and update monitoring. Clear ownership ensures accountability and prevents responsibility for proportionality assumptions from diffusing across staff sections or defaulting to contractor oversight.
3. Individual Training on System-Enabled Proportionality
Personnel operating or relying on these systems should receive training that goes beyond system operability to address how distinction and proportionality are integrated and updated. This training should emphasize how to recognize indicators of flawed or incomplete inputs and the steps to take to update or flag those issues.
4. Defined Approval Authority for Weaponeering and Constraint Inputs
SOPs should specify the approval level required for changes to weaponeering logic, collateral estimation parameters, no-strike lists, and rules of engagement (ROE)-derived constraints embedded in the system. The ability to input the commander’s guidance effectively at each level of command and by phase of the operation is critical both to the system’s functionality and to preserving the commander’s responsibility. There must be enough access to allow changes to be made, but with proper oversight and authority, as the sketch below illustrates.
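One hedged way to picture this balance: each constraint category carries a minimum approval echelon, and the system refuses edits from below it. The echelon ordering and category names are invented for this illustration.

```python
from enum import IntEnum

class Echelon(IntEnum):
    # Illustrative ordering; actual approval levels would be set by SOP.
    OPERATOR = 1
    BATTALION = 2
    BRIGADE = 3
    DIVISION = 4

# Hypothetical minimum approval level per embedded constraint category.
REQUIRED_APPROVAL = {
    "weaponeering_logic": Echelon.BRIGADE,
    "collateral_estimation": Echelon.BRIGADE,
    "no_strike_list": Echelon.DIVISION,
    "roe_constraints": Echelon.DIVISION,
}

def authorize_change(category: str, approver: Echelon) -> bool:
    """Permit a constraint edit only at or above the required echelon."""
    required = REQUIRED_APPROVAL.get(category)
    if required is None:
        return False  # unknown categories are never silently editable
    return approver >= required
```

The point of the gate is the tension noted above: access broad enough that guidance can change with the operation, oversight tight enough that no one below the responsible commander quietly rewrites the constraints.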
5. Regular Tiered Training and Certification Cycles
Units employing these systems should establish recurring training requirements (annual, quarterly, and weekly, as appropriate) for fires cells and targeting teams. These cycles should include both basic communication stressors and scenario-based exercises to test distinction and proportionality judgments, similar to current digital fire sustainment training incorporated into most units’ battle rhythm and cyclic training requirements.
These recommendations are not all-encompassing and require mission- and unit-dependent tailoring for implementation. Nevertheless, some form of institutionalized maintenance, validation, and training regime is necessary to ensure compliance with the law of war principles for AI systems that shape targeting and weaponeering decisions. When units implement proportionality through ROE, SOPs, and weaponeering logic embedded in the systems, failure to maintain those procedures risks systemic noncompliance, even where individual commanders act in good faith.
Precautions
Institutional risk becomes more pronounced when considering the law of war obligation to take feasible precautions. The procedures listed above integrate distinction and proportionality directly into the system and its operation, and they constitute feasible precautions in planning and conducting attacks. Distinction, like proportionality, also requires “feasible precautions to verify that objects to be attacked are military objectives” (§ 5.5.3). The Law of War Manual specifies the following precautions in a non-exhaustive list.
- Reviewing the accuracy and reliability of the information supporting the assessment that a potential target is a military objective;
- Checking potential target locations against no-strike and sensitive site lists;
- Reviewing previously approved targets at reasonable intervals as well as when warranted in light of fresh information and changing circumstances, e.g., to ascertain whether enemy forces continue to use the object for military purposes or whether the object’s destruction or neutralization continues to offer a definite military advantage;
- Gathering more information, such as visual identification of the target through intelligence, surveillance, and reconnaissance platforms; and
- Taking steps when carrying out a planned attack to confirm that the person or object to be attacked is, in fact, the intended target of the attack.
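Several of these precautions lend themselves to automation. The following sketch runs a proposed target through no-strike, staleness, and reliability checks before it becomes eligible for recommendation. The data model, thresholds, and placeholder grid are assumptions made purely for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class ProposedTarget:
    grid: str                  # reported location (illustrative format)
    classified_as: str         # e.g., "artillery_battery"
    last_observed: datetime    # most recent sensor confirmation
    source_reliability: float  # fused confidence score in [0, 1]

NO_STRIKE_GRIDS = {"38SMB4540081500"}   # placeholder entry
MAX_TARGET_AGE = timedelta(minutes=10)  # assumed displacement window
MIN_RELIABILITY = 0.85                  # assumed confidence floor

def verification_flags(target: ProposedTarget) -> list[str]:
    """Return reasons a target needs human re-verification before attack."""
    flags = []
    if target.grid in NO_STRIKE_GRIDS:
        flags.append("location appears on no-strike list")
    if datetime.now(timezone.utc) - target.last_observed > MAX_TARGET_AGE:
        flags.append("observation stale; target may have displaced")
    if target.source_reliability < MIN_RELIABILITY:
        flags.append("supporting information below reliability floor")
    return flags
```

An empty list would not itself authorize an attack; it means only that the automated precautions surfaced no objection for the human decision-maker to weigh.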
MSS, TITAN, and ASTARTE are designed to support compliance with the principle of distinction by increasing the volume, speed, and accuracy of information available to verify that proposed targets are military objectives. By fusing sensor data, intelligence reporting, and contextual information, these systems can facilitate the precautions suggested above. Moreover, some of the suggested SOPs articulated in the proportionality discussion also function as feasible precautions for the purposes of distinction.
When properly implemented, such measures help ensure that increased informational capacity translates into accurate, lawful target verification and comply with the rule to reduce the risk of harm to civilians to the extent feasible. Although MSS, TITAN, and ASTARTE share features that demonstrate their potential for lawful employment, each system’s role in the targeting process gives rise to distinct legal considerations requiring separate examination.
Distinct Legal Issues Facing New AI Technologies
Four major legal issues attend AI-enabled counterfire operations: authority allocation, force selection, time-constrained judgment, and post-attack obligations.
Authority Allocation
The Law of War Manual repeatedly emphasizes the commander’s role in the decisions and judgments attending each principle. AI systems expand the factual basis available to subordinate commanders and staff officers, enhancing their ability to make decisions on behalf of the commander. While the Manual consistently states that judgments are the commander’s responsibility, absent “contrary” direction, subordinates “have the authority to make the corresponding decisions” (§ 5.10.2). The issue of delegation is present in all three systems, but it is felt most acutely during reactive counterfire using TITAN, or dynamic targeting conducted using MSS.
Within both shortened decision-making processes, proper authority allocation is especially critical for meeting the proportionality rules. Additionally, the increased information available to subordinates may allow commanders to provide less specific guidance and still comply with their legal obligations. The rules do not demand the same level of involvement or precision from commanders, who cannot be everywhere at all times. They acknowledge the subjective nature of these judgments and that a “degree of deference should be given” (§ 5.10.2.3).
Ultimately, commanders must not grow complacent and indiscriminately delegate decision-making to whoever happens to be operating TITAN or ASTARTE, qualified or not. But the increased informational confidence these systems provide may allow commanders to exercise greater authority and delegate lawfully.
Force Selection
As presently designed and operated, these systems present no so-called “human-in-the-loop” concerns; none of them “pushes the fire button,” so to speak. However, the question of how much force may lawfully be applied against a target introduces the issue of meaningful human control and review, or appropriate human judgment. As discussed above, force-selection concerns arise primarily when these systems support real-time targeting decisions rather than deliberate fires planning.
To remain compliant with the law of war, AI targeting systems should likely preserve some level of human involvement in force selection. If operators cannot modify or override system-generated fire commands, then the system, not the human, is effectively making the distinction or proportionality determination, even if a human formally authorizes execution.
Accordingly, lawful employment should require, at a minimum: 1) fire commands that are editable by human operators; 2) continued technical training for fires personnel capable of independently evaluating weaponeering and effects; and 3) periodic verification and updating of command guidance to ensure that system outputs remain aligned with commander intent and current operational conditions. Yet preserving human judgment over force selection does not eliminate the pressures imposed by rapid counterfire timelines, which independently constrain proportionality judgments.
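A minimal sketch of the first requirement, under stated assumptions: the hypothetical recommendation object below remains an editable draft, every operator change is logged, and nothing releases without an explicit, named human authorization. None of the field names reflect an actual MSS, TITAN, or ASTARTE interface.

```python
from dataclasses import dataclass, field

@dataclass
class FireCommandDraft:
    target_id: str
    munition: str
    rounds: int
    authorized_by: str | None = None  # stays None until a human signs
    edit_log: list[str] = field(default_factory=list)

    def edit(self, operator: str, **changes) -> None:
        """Operators may modify any system-generated value before release."""
        for attr, value in changes.items():
            self.edit_log.append(f"{operator}: {attr} -> {value!r}")
            setattr(self, attr, value)

    def release(self, approver: str) -> None:
        """Execution requires an explicit, attributable human decision."""
        if not approver:
            raise PermissionError("fire command requires human authorization")
        self.authorized_by = approver

# Usage: the system proposes, the human disposes.
draft = FireCommandDraft(target_id="T-0413", munition="HE/VT", rounds=3)
draft.edit(operator="sgt_ops", rounds=2)  # human reduces the volume of fire
draft.release(approver="bn_fdo")          # named, traceable authorization
```

The edit log doubles as the clear, traceable authority the conclusion below calls for: every departure from the machine recommendation is attributable to a person.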
Time-Constrained Judgment
The Manual recognizes that satisfying the principle of proportionality is materially affected by “[t]he time available to make decisions and to take precautions” (§ 5.10.4). Each of the AI technologies gives commanders a clearer common operating picture. However, clarity does not change the disparate time available for reactive and proactive counterfire. The law of war does not distinguish between those two situations, nor does it specify a minimum standard for precautions beyond what is feasible under the circumstances. Instead, it allows operational variation based on mission type and other matters of battlefield context.
For example, reactive counterfire supported by TITAN may require decision-makers to rely on incomplete information available at the moment of engagement. The law tolerates reduced precautions in such circumstances so long as commanders make judgments in good faith. By contrast, when MSS is employed in a proactive counterfire or deliberate targeting, or when it is used to support dynamic targeting outside of an immediate counterfire response, decision-makers may have greater time and informational resources available. In those circumstances, more refined pattern analysis, target validation, and collateral considerations become feasible, and the law correspondingly expects more.
One way the Manual suggests effectuating proportionality requirements is training on targeting procedures. Beyond training and process implementation, the use of AI itself can be a feasible precaution. For example, ASTARTE can provide commanders with better situational awareness to prevent incidental harm. Both MSS and TITAN have similar built-in functions that may serve as feasible precautions. AI does not relax distinction and proportionality requirements in time-constrained engagements; rather, it can operate as a feasible precaution by elevating the level of care that can reasonably be devoted to targeting decisions.
Post-Attack Obligations
The law of war does not impose a general obligation to conduct battle damage assessments following every attack, nor does it judge distinction or proportionality based on information learned only after the fact. Nonetheless, when post-attack information is reasonably available, it may positively inform future targeting decisions. One practical advantage of MSS is its ability to close the fire mission loop with battle damage assessment (BDA). This helps address a persistent challenge across targeting and fires cells of incomplete or delayed BDA.
Where MSS enables more reliable post-attack awareness, planners can incorporate that information into subsequent planning to refine target validation, adjust force selection, and reduce the risk of repeated or unnecessary strikes. In this way, MSS does not create new legal obligations, but it strengthens commanders’ ability to comply with existing law of war principles by improving the quality of information available for future decisions.
Conclusion
Integrating AI into counterfire offers the Army a credible way to compress the sensor-to-shooter cycle against enemy artillery without departing from existing law of war requirements. Newly fielded and emerging AI systems do not change what the law demands; the guiding principles and governing rules remain intact. Properly employed, these systems can strengthen compliance by improving target validation, reducing uncertainty, and accelerating coordination in the precise places where modern counterfire tends to break down.
Three durable implications follow. First, speed does not displace judgment. AI may accelerate recommendations, but commanders remain responsible for deciding what to target, with what force, and under what constraints. Second, better information raises expectations for precautions. As AI expands what is known in the moment, the good-faith baseline shifts: decision-makers must understand the tools, question their assumptions, and institutionalize validation so that legality keeps pace with tempo. Third, accountability rests only with humans, even when analysis is machine-assisted. Units can and should embed proportionality and distinction safeguards into SOPs, training, and approvals, but those structures must preserve meaningful human review over force selection and ensure clear, traceable authority for time-sensitive fires.
AI-enabled counterfire can be both faster and lawful, but only if compliance is embedded into operational design rather than evaluated retrospectively.
***
Captain Megan Ezekannagha is a J.D. Candidate at The University of Texas School of Law and a Field Artillery Officer in the Texas Army National Guard.
The views expressed are those of the author, and do not necessarily reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.
Articles of War is a forum for professionals to share opinions and cultivate ideas. Articles of War does not screen articles to fit a particular editorial agenda, nor endorse or advocate material that is published. Authorship does not indicate affiliation with Articles of War, the Lieber Institute, or the United States Military Academy West Point.
Photo credit: Eric Durr
