Who Is Liable When AI Is Used in War? Military AI Explained
31st Mar 2026
Artificial intelligence is already shaping battlefield decisions — from intelligence analysis to target selection — and its role in modern conflict is expanding fast. The question is no longer whether AI will be used in war. It is what happens when the systems behind those decisions get it wrong.
As AI in war becomes operational reality, the legal consequences are becoming harder to ignore. When an AI-assisted system contributes to a wrongful strike or flawed targeting decision, responsibility does not disappear — but it becomes far more difficult to trace. Governments, commanders, contractors, and technology providers may all be involved, yet liability is rarely clear.
This uncertainty sits at the centre of the growing intersection between AI and war. Recent analysis from Chatham House has highlighted rising concern around AI-supported targeting in active conflicts, while the United Nations has confirmed that existing laws of armed conflict apply fully to artificial intelligence in the military domain. At the same time, disputes involving Anthropic and the Pentagon have exposed how unclear the boundaries of control and responsibility remain in practice.
This analysis draws on those developments alongside established legal principles under international humanitarian law, supported by UK and US liability frameworks.
How AI Is Used in War Today
To understand liability, it is necessary to look at how AI is already being used in war. Modern military systems rely on artificial intelligence to process vast streams of data from satellites, drones, and surveillance networks, generating targeting insights and operational intelligence at speeds no human team could match.
That speed advantage is precisely why militaries are adopting AI: the volume and complexity of battlefield data now exceed what human analysts can process unaided.
These systems do not typically replace human decision-makers. Instead, they shape the information those decision-makers rely on. That distinction is critical. When AI influences the inputs rather than the final decision, responsibility becomes harder to define. A commander may formally authorise an action, but that decision may be heavily shaped by machine-generated recommendations that are difficult to independently verify in real time.
This is where the legal complexity begins. The question is no longer whether AI could be used in warfare, or how it might be used in some future conflict. It is already embedded in operational decision-making. The real issue is how much weight those systems carry — and whether human oversight remains meaningful, or simply procedural.
Artificial Intelligence in the Military Domain
Artificial intelligence in the military domain is no longer confined to targeting systems. It is now embedded across the operational chain — from logistics and battlefield simulation to surveillance, intelligence fusion, and strategic planning. The effect is cumulative. Decisions are being made faster, with more data, and with increasing reliance on systems that can identify patterns beyond human capacity.
This shift is changing how advantage is created in modern conflict. Speed of analysis, predictive capability, and the ability to process fragmented information in real time are becoming as important as traditional military assets. In that sense, AI is not simply an additional tool. It is reshaping how military effectiveness is defined.
Debates about war, artificial intelligence, and the future of conflict often focus on fully autonomous systems operating without human oversight. While those scenarios attract attention, they risk obscuring the more immediate reality. AI is already influencing decisions at multiple levels, even where a human remains formally in control.
The challenge is that reliability and control have not kept pace with deployment. AI systems can be shaped by flawed training data, produce outputs that are difficult to interpret, and behave unpredictably when exposed to conditions outside their training environment. Those limitations are not theoretical. They often become visible only once systems are used in live operational settings — where the consequences of error are significantly higher.
Research from the Alan Turing Institute highlights that AI is expected to augment battlefield command decision-making rather than replace it, reinforcing that responsibility for outcomes still rests with human commanders.
Where Liability Sits — And Why It Does Not Stay There
In legal terms, responsibility for harm in military operations has traditionally been anchored to the state. Governments decide when force is used, define the rules of engagement, and authorise action. That principle does not disappear with the introduction of AI. States remain primarily responsible for the use of force, regardless of the tools involved.
What changes is how that responsibility is distributed in practice. AI introduces additional layers between decision and outcome, making it harder to isolate where control sits. Liability does not stop at the state. It can extend across the chain of actors involved in designing, deploying, and operating the system.
A technology provider may face exposure where an AI system is defective, trained on unreliable data, or performs outside its stated capabilities. A contractor may bear responsibility if safeguards are bypassed or systems are used outside agreed parameters. Operators and commanders may still be accountable where reliance on AI outputs replaces meaningful oversight.
In this sense, AI does not remove responsibility — it fragments it. The legal question is no longer simply who made the decision, but who shaped it, influenced it, and had the ability to prevent harm.
In practice, this creates a paradox. AI is introduced to reduce uncertainty and improve decision-making, yet its integration can make outcomes harder to explain after the fact, particularly where decisions are shaped by complex systems operating under real-time pressure and imperfect data.
Ultimately, liability follows control. But in AI-supported environments, control is rarely concentrated in one place. It is distributed — and that is what makes accountability harder to establish, and disputes more likely to arise.
The Legal Framework: Old Rules, New Complexity
Despite the rise of AI, the legal framework governing its use in war has not fundamentally changed. International humanitarian law still requires that military operations distinguish between military objectives and civilians, avoid disproportionate harm, and take all feasible precautions. These obligations apply regardless of whether decisions are supported by artificial intelligence.
Alongside these rules sits the principle of command responsibility. Senior decision-makers can be held accountable where they knew, or should have known, that unlawful actions were taking place and failed to prevent them. The presence of AI does not dilute that responsibility. If anything, it raises the standard. Where decisions are influenced by systems that are difficult to interpret, the burden on commanders to ensure meaningful oversight becomes greater, not less.
At the domestic level, the position is similar. In the UK, liability may arise through negligence and statutory duties linked to the protection of life. In the United States, tort law provides the primary framework, although doctrines such as sovereign immunity can limit direct claims against the government. These differences affect how claims are brought, but not the underlying logic of responsibility.
Across these frameworks, the core legal test remains consistent. Courts will look at whether the harm was foreseeable, whether reasonable steps were taken to prevent it, and whether any failure in that process contributed to the outcome. AI complicates how those questions are answered, but it does not replace them.
The Chain of Responsibility in War and AI
The defining feature of AI in war is not that responsibility disappears, but that it becomes layered.
A single decision may pass through multiple stages: model design, training data, system output, human interpretation, and final authorisation. Each stage shapes the outcome. Each also creates a potential point of failure — and a corresponding point of accountability.
This is what makes liability harder to define. Decisions are no longer the product of a single actor, but of a sequence of inputs, each influencing the next. The question is not simply who acted, but who influenced the decision and at what stage.
In legal terms, responsibility follows control at each point in that chain. The more influence a party has over how a system operates, how its outputs are used, or whether its risks are understood, the more likely it is to be held accountable.
AI does not remove the chain of responsibility. It extends it — and in doing so, makes it more difficult to identify where accountability ultimately sits.
When Systems Fail: A Practical Example
Consider a scenario that is no longer hypothetical. An AI system identifies a location as a likely target based on pattern analysis. The output is reviewed briefly and approved under time pressure. A strike follows. Civilian casualties are later reported.
At that point, the question is not whether the system functioned as intended. It is who is responsible for the outcome.
Responsibility is unlikely to rest with a single party. The military authorised the strike, the operator relied on the system’s output, and the model itself shaped the recommendation. Each element sits within the causal chain, and each may be scrutinised in any legal or investigative process that follows.
The critical issue is whether human oversight was meaningful. If AI outputs were accepted without proper interrogation — particularly in circumstances where independent verification was limited — liability may still attach, even where the system performed in line with its design.
In practice, the legal risk does not arise from AI acting alone. It arises where reliance on AI replaces judgment while responsibility remains human.
Why AI Changes the Liability Equation
AI does not eliminate responsibility. It changes how responsibility is traced.
In traditional decision-making, accountability can usually be linked to a specific action or decision-maker. In AI-supported environments, that clarity begins to break down. Outcomes may be shaped by systems that are difficult to interpret, operating at speeds that leave little room for meaningful human review. The result is not a lack of responsibility, but a loss of visibility over how decisions are formed.
That shift has legal consequences. Where decision-making becomes opaque, it becomes harder to demonstrate that risks were understood, challenged, and properly managed. This is where liability exposure increases — not because AI acts independently, but because the pathway from input to outcome becomes more difficult to evidence after the fact.
Public debate often frames these risks in abstract terms, invoking warnings such as Stephen Hawking's about the dangers of artificial intelligence. Those concerns focus on long-term scenarios. The immediate issue is more grounded. AI systems are already influencing decisions in high-risk environments, and the legal burden remains on humans to show that those decisions were made with sufficient oversight.
The critical point is this: AI does not reduce the standard of accountability. It raises it — while making it harder to prove that the standard has been met.
The Commercial Reality Behind Liability
For companies operating in this space, liability is not an abstract legal concept. It carries immediate and measurable commercial consequences.
Even where governments benefit from legal protections, exposure does not stop at the state. Companies involved in the development, supply, or operation of AI systems can face contractual disputes, indemnity claims, regulatory scrutiny, and exclusion from future procurement. In defence markets, where long-term contracts and government relationships are critical, that exposure can be more significant than any individual claim.
The dispute between Anthropic and the Pentagon illustrates how quickly legal and commercial risk can converge. Questions about how AI systems may be used — particularly in areas such as surveillance or autonomous targeting — do not remain theoretical for long. They can determine whether a company is awarded contracts, restricted in its operations, or excluded from key markets altogether.
This is why the question of who pays damages when AI fails cannot be answered solely through litigation. In practice, liability is often resolved long before a court becomes involved — through procurement decisions, contractual negotiations, and regulatory intervention.
In this environment, legal risk is inseparable from commercial risk. Companies are not just managing potential claims. They are managing their ability to operate, compete, and remain credible in a market where accountability is becoming a condition of participation.
Defining Responsibility Before Deployment
The expansion of AI in war is not removing accountability. It is redistributing it across a wider network of actors.
In practical terms, organisations involved in AI use in war must be able to demonstrate who controlled the system, what safeguards were in place, and how decisions were reviewed. Without that clarity, legal and commercial exposure becomes significantly harder to manage.
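As a purely illustrative sketch, the kind of record that makes such a demonstration possible might look like the following. This is a hypothetical example, not a description of any real deployed system or standard; every field name and value here is an assumption about what a decision-provenance log could capture.

```python
# Hypothetical sketch of a decision-provenance record. All field names and
# values are illustrative assumptions, not references to any real system.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One auditable entry in the chain from system output to authorisation."""
    system_id: str                 # which model and version produced the output
    training_data_ref: str         # provenance reference for the training data
    system_output: str             # the recommendation as shown to the operator
    output_confidence: float       # confidence score presented with the output
    safeguards_applied: list[str]  # safeguards active at the time of use
    reviewed_by: str               # the human who interrogated the output
    review_notes: str              # what was checked, and what could not be verified
    authorised_by: str             # who took the final decision
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Example: one record answering the three questions raised above -- who
# controlled the system, what safeguards were in place, how it was reviewed.
record = DecisionRecord(
    system_id="targeting-model-v2.3",
    training_data_ref="dataset-manifest-2025-11",
    system_output="location flagged as probable military objective",
    output_confidence=0.87,
    safeguards_applied=["dual-source corroboration", "collateral estimate"],
    reviewed_by="analyst-04",
    review_notes="pattern match corroborated by one independent source only",
    authorised_by="duty-commander",
)
print(record)
```

The point is not the format but the discipline: each stage in the chain described earlier leaves a record that can be produced when responsibility is questioned.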
The legal question is no longer whether responsibility exists. It is whether it can be clearly evidenced once something goes wrong.
The real risk is not that AI will act without control. It is that control will exist — but cannot be demonstrated after the fact.
Even advanced defence research suggests that AI's impact on warfare will be uneven and constrained by real-world conditions, rather than delivering a clean technological transformation. That does not soften the evidential burden; it sharpens it.
If governments and companies cannot demonstrate that control convincingly, they are advancing technological capability faster than the legal frameworks designed to govern it. And in high-stakes environments such as modern conflict, that is where liability, accountability, and consequence collide.