Freshfields and Anthropic’s AI Deal Exposes a Bigger Risk: Who Is Liable When Legal AI Gets It Wrong?
23 April 2026
The deal between Freshfields and Anthropic looks like a standard legal tech partnership. In reality, it raises a far more immediate and uncomfortable question — one that law firms, in-house teams and businesses are already starting to face: who is liable when legal AI gets it wrong? For any firm or business using AI in legal work, this is no longer theoretical. It is a live risk.
Because once AI tools are used to draft contracts, review transactions or support legal advice, the risk does not sit with the software. It sits with the lawyer, the firm, or the organisation relying on it. And as firms move from simply using AI to actively helping build it, that exposure becomes harder to define — and harder to control.
On the surface, the partnership is straightforward. Freshfields will deploy Anthropic’s Claude AI across its global operations and contribute legal expertise to help develop tools for drafting, contract review and due diligence. In return, it gains early access to new capabilities and a role in shaping how these systems are built. That is the visible story. The more important one is what happens when those tools begin to influence how legal work is produced — not just internally, but across the market.
This is where the legal risk begins to shift. Traditionally, law firms have been users of technology, responsible for how they apply it. Now, they are moving closer to the point of design and influence. That matters because it blurs the line between tool and judgement. If a system contributes to a legal outcome — even indirectly — the question is no longer whether AI was involved. It is whether the firm met its professional duties in relying on it.
Those duties do not become lighter with AI. They become heavier. Supervision, verification and client protection are not optional safeguards; they are the foundation of legal responsibility. The difficulty is that AI does not fail in obvious ways. It produces outputs that appear coherent, confident and complete. The risk is not blatant error. It is subtle distortion — a missed clause, an incomplete risk assessment, a misinterpreted legal position that slips through because it looks right and arrives quickly.
Once that happens, liability is not shared with the technology in any meaningful sense. Courts and regulators will not accept “the AI got it wrong” as a defence. They will look at whether the firm exercised proper oversight, understood the limitations of the tools it deployed, and protected the client from foreseeable risk. In that sense, AI does not reduce legal exposure. It redistributes and, in many cases, amplifies it.
The structure of this deal adds another dimension. The tools being developed are not necessarily confined to Freshfields. They may be commercialised, refined and used more widely across the legal industry. That introduces a form of systemic risk. If multiple firms begin relying on similar AI-assisted processes, the consequences of failure are no longer isolated incidents. They become patterns — repeatable, scalable and harder to detect until they surface in disputes or regulatory scrutiny.
Confidentiality is often presented as the primary concern, and Anthropic has made clear that Freshfields’ data will not be used to train its models. That is an important boundary, but it is not the full picture. Risk does not only arise from training data. It arises from how information is processed, where it is stored, how outputs are reused, and how systems interact with broader workflows. Law firms operate under strict obligations to protect client information, and those obligations extend to every layer of technology they adopt. Introducing external AI systems into that environment increases complexity rather than reducing it.
There is also a commercial pressure that cannot be ignored. Clients already expect law firms to do more with less: faster turnaround, lower costs, greater efficiency. AI appears to offer a solution. But it also changes expectations. If a firm adopts AI to deliver work more quickly, the implicit assumption is that quality remains the same. That creates a tension between efficiency and assurance. If something goes wrong, the question will not be whether the firm used AI. It will be whether it relied on it too heavily or without sufficient safeguards.
Recent incidents in the legal sector show how quickly these risks can materialise. In one widely reported case, a leading firm was forced to apologise to a US federal judge after submitting filings containing AI-generated inaccuracies, including misquoted legal authorities. That was not a failure of ambition or innovation. It was a failure of control. And it demonstrated how easily AI-assisted processes can produce outputs that pass initial scrutiny but fail under legal examination.
The Freshfields-Anthropic partnership accelerates this reality. It signals that AI is no longer a peripheral tool. It is moving into the centre of legal work — shaping how documents are drafted, how risks are identified, and how advice is delivered. Once that shift occurs, the legal framework that governs professional responsibility must keep pace.
If it does not, firms risk operating in a space where their exposure grows faster than their ability to manage it.
For businesses and in-house legal teams, the implications are immediate. Many are already using AI tools for contract review, compliance checks and internal advisory work. The lesson is not to avoid these tools, but to understand them. Adopting AI in legal processes is not just a technology decision. It is a legal risk decision. It requires clarity on who is accountable, how outputs are verified, and what happens when something goes wrong.
The deeper issue is that responsibility and control are beginning to diverge. The systems that influence legal work are becoming more complex, more external and more widely shared. Yet the responsibility for the outcome remains firmly with the organisation delivering the advice. That imbalance is not temporary. It is structural. And it will define the next phase of legal risk in the profession.
Freshfields’ move positions it at the forefront of that shift. By working closely with Anthropic, it gains insight into how these tools are built and how they behave. But it also places itself closer to the point where technology, professional duty and client expectation collide. That is where the real risk sits — not in the use of AI, but in the reliance on it without fully understanding how liability follows.
The transformation of legal services is already under way. The question is no longer whether AI will change how lawyers work. It is whether the legal system, and the firms operating within it, are prepared for the consequences when that change goes wrong.