
The first AI privilege ruling just landed. Law firms weren't ready

The lines courts are drawing your firm hasn't

The Legal AI Brief MPL Legal Tech Advisors
Edition 26 · Thursday, 5 March 2026

This Week’s Theme

Two federal courts issued rulings this month on attorney-client privilege and AI. They reached different conclusions on different facts, and together they draw a line that most law firms have not drawn themselves yet.

The line is not between AI and no AI. It is between consumer AI and enterprise AI, between tools used at the direction of counsel and tools used without it, between documented governance and the assumption that someone else worked it out.

Courts are now deciding these questions. The firms that have not already answered them internally will find themselves in the position of having the answer handed to them after the fact.

How To Use Agentic Systems Without Compromising On Control

The starting point is permissions. Before connecting an agent to anything, the question to answer is what the specific task actually requires. Not what would be convenient to have access to, but what the task needs. Email access, file system access, connected tools - each one extends the blast radius if something goes wrong. Fewer permissions mean less capability, and that is a trade-off you have to consciously accept.

The second layer is sandboxing. Run new agent workflows on non-sensitive materials first. Not as a technical exercise, but to understand what the agent actually does before it touches anything that matters. Most firms skip this because the demos look confident, but the demos are not running against your client files.

The third layer is the one most firms miss entirely: authenticating the connections. Giving an agent permission to access a system is not the same as controlling how that access is used. Without authentication requirements on the connections themselves, the door stays open after you have walked away from it.

This shifts law firms from passive risk exposure to a position where the decisions have been made deliberately, and can be explained after the fact.
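The three layers above can be sketched in code. This is a minimal illustration, not any real agent framework's API: `AgentGateway`, `grant`, and `call` are hypothetical names, and the assumption is a simple in-process gateway that enforces an explicit tool allowlist (layer one) and authenticates each connection before any granted tool can be invoked (layer three).

```python
# Hypothetical sketch of a least-privilege agent gateway.
# AgentGateway and its methods are illustrative names, not a real library.
import hmac
import secrets


class AgentGateway:
    """Grants an agent only the tools a task needs, and requires a
    per-connection token before any granted tool can be invoked."""

    def __init__(self):
        self._granted = {}                    # tool name -> callable
        self._token = secrets.token_hex(16)   # issued to the agent out of band

    @property
    def token(self):
        return self._token

    def grant(self, name, func):
        # Explicit allowlist: a tool not granted here does not exist
        # from the agent's point of view.
        self._granted[name] = func

    def call(self, token, name, *args, **kwargs):
        # Authenticate the connection itself, not just the agent.
        if not hmac.compare_digest(token, self._token):
            raise PermissionError("unauthenticated connection")
        if name not in self._granted:
            raise PermissionError(f"tool {name!r} not granted for this task")
        return self._granted[name](*args, **kwargs)
```

The design choice the sketch makes visible: the check happens at the connection, not inside the model. A tool missing from the allowlist or a call arriving without the token fails regardless of what the model itself would have permitted, which is exactly the property the exposed-gateway incidents lacked.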

🎬 7 Early Warning Signs Your Legal Tech Pilot Will Fail

Lawyers can now build functional internal tools without writing a line of code. In this video I cover why that capability is real, where it belongs, and what the 10.5% security rate on AI-generated code means for firms that are already using it.

7 Early Warning Signs Your Legal Tech Pilot Will Fail

🎙 AI Won’t Solve Your Conflicts

Gayk Ayvazyan has spent years resolving disputes for startups and scale-ups. In this conversation we focus on where AI genuinely helps, where it cannot replace human judgment yet, and why moving faster is not always the right answer.

AI Won’t Solve Your Conflicts: A Mediator’s Reality Check

The Big Risk Signal

Two decisions came down in the same month, both directly relevant to every firm using AI in legal work:

United States v. Heppner: Judge Rakoff of the Southern District of New York ruled that documents a criminal defendant CEO created using a consumer AI tool were not protected by attorney-client privilege. The reasoning was direct: disclosing information to a public AI tool, one with an express provision that user submissions were not confidential, constituted disclosure to a third party. Privilege requires confidentiality, and the consumer tool destroyed it.

The court also ruled the work product doctrine did not apply, because the materials did not reflect the legal strategy of counsel and were not created at the direction of counsel. The defendant had used the tool unilaterally, without attorney involvement.

Warner v. Gilbarco: The Eastern District of Michigan reached the opposite conclusion on different facts. The court denied a discovery request seeking all documents related to the plaintiff’s use of generative AI in litigation preparation. It found that using AI tools to prepare legal materials is analogous to traditional work product-protected activities. Critically, it held that ChatGPT and similar tools are tools, not persons, and that waiver of work product protection requires disclosure to an adversary, not to a tool.

The distinction between the two decisions turns on how the tool was used, under whose direction, and whether appropriate confidentiality measures were in place.

For firms using consumer AI tools without enterprise agreements, without confidentiality provisions, and without attorney oversight of how those tools are used in active matters, the Heppner ruling is the more instructive one.

The AI Agent Regulatory Frameworks Are Not Ready

Since the agent hype is not slowing down, it is worth revisiting where the actual danger in these systems lies. The OpenClaw incident earlier this month illustrated that the real risk is not the model; it is the connections.

Over 900 OpenClaw gateways were found exposed with zero authentication. Anyone on the internet could obtain full shell access. The underlying model had its own safety measures. Those measures were irrelevant once the connections were in place.

Current regulatory frameworks were not built for this. The EU AI Act, generally considered the most comprehensive AI regulation in existence, classifies risk by use case. A general-purpose agent does not have a fixed use case. It can be applied to high-risk tasks without being exclusively designed for them, which means it can fall outside the strictest regulatory requirements even when the practical exposure is severe.

One way to describe today’s reality would be harm occurring at machine speed while enforcement remains stuck at human speed.

The regulatory frameworks will not catch up before managing partners need to make decisions. The governance question is what they are willing to define for the firm before a court, a regulator, or a client defines it for them.

Live Session on LinkedIn

Last Saturday, we unpacked the two federal court decisions on AI and privilege, what Cowork’s audit log gap means in practice, the Vercel research showing the accuracy difference between Skills and AGENT.md files, finetuning myths and model collapse, and why SOC 2 badges tell you about a vendor’s internal controls but nothing about where your data goes once the chain of sub-services kicks in.

Using Free ChatGPT Waived Attorney Client Privilege

Looking Ahead

🎙 This Saturday at 2pm CET!

This week’s guest on Rok’s Legal AI Conversations is Kaj Rozga, senior in-house counsel at ABB and host of the Version Up podcast on legal tech. Kaj has spent 16 years across the FTC, BigLaw, and in-house, and is currently also leading AI initiatives for his legal team.

We discuss why the upfront work before any tool decision is where most firms fail, why efficiency is often the wrong frame for senior legal practice, and what the gap between vendor predictions and day-to-day legal reality actually tells us about where this is all heading.

Podcast guest cover
The trust problem in legal tech

Each edition of Legal AI Brief brings practical lessons from firms using AI safely.


Get the next edition in your inbox.

Every Thursday. No noise, no pitch — just what's worth knowing about AI risk in legal practice this week.