
Lawyers are connecting Cowork to Clio. Did anyone read the docs?

What Anthropic says about Cowork and what that means for your clients

The Legal AI Brief · MPL Legal Tech Advisors
Edition 29 · Thursday, 26 March 2026

This Week’s Theme

My LinkedIn feed has been full of Claude Cowork content this week. How to connect it to Clio. How to build slash commands for your firm. How small firms can basically onboard a digital employee for near-zero cost. The demos look genuinely impressive, and I understand why people are excited.

But I kept waiting for someone to mention what is sitting right there on Anthropic’s own help page. Nobody did. So here it is.

Anthropic’s documentation for Cowork on Team and Enterprise plans says this, plainly, at the top of the compliance section:

“Cowork activity is not captured in Audit Logs, Compliance API, or Data Exports. Do not use Cowork for regulated workloads.”

A lawyer’s entire practice is regulated workloads.

The Claude Cowork Compliance Gap

There are two things in that statement that lawyers and anyone advising them need to understand precisely.

No audit trail.

When Cowork runs a task, like searching through matter files, pulling information from Clio, or drafting a document, none of that activity is logged anywhere you can check it. Not in Anthropic’s audit logs, not in their Compliance API, not in any data export. The record of what the AI did simply does not exist.

For lawyers, this is much more than a technical inconvenience. Bar rules on client confidentiality require you to be able to account for how client information was handled. If a regulator asks, if a client asks, if something goes wrong and you need to reconstruct what happened, you have nothing. The obligation sits with the lawyer regardless of what the tool does or does not log.
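To make the gap concrete, here is a minimal sketch in Python of the kind of audit record enterprise platforms typically write for each agent action. Every field name here is illustrative, not Anthropic's or anyone's actual schema; the point is that no record of this shape exists anywhere for a Cowork task.

```python
from datetime import datetime, timezone

def make_audit_record(actor: str, action: str, resources: list[str]) -> dict:
    """Illustrative audit record: who did what, to which client data, and when.
    Field names are hypothetical, not any vendor's real schema."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,            # the user or agent that acted
        "action": action,          # e.g. "search_matter_files"
        "resources": resources,    # client data sources touched
    }

# This is the record a firm would need in order to answer
# "how was client information handled?" For Cowork activity,
# nothing like it is ever written.
record = make_audit_record(
    "cowork-agent",
    "search_matter_files",
    ["practice-management-system", "local-files"],
)
```

A firm cannot reconstruct after the fact what was never recorded, which is why the absence of these records is a confidentiality problem rather than an IT preference.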

Client data on a laptop.

Cowork stores conversation history locally on each user’s computer. Not on a firm’s server or a system the firm centrally manages or controls. On the individual lawyer’s machine. There is no central admin visibility, no way to manage it across the firm, no way to delete it systematically if a matter closes or a lawyer leaves.

So when someone connects Cowork to Clio and runs a client matter through it, which is exactly what this week's tutorials demonstrate, that conversation, along with whatever client data passed through it, now sits on a laptop outside any system the firm controls or can account for.

How To Respond When Partners or Clients Ask About It

This conversation is already happening. A partner sees a LinkedIn post and asks you about it, a client wants to know whether you use it, or someone in the firm demos it and wants to roll it out.

You can answer with confidence: Anthropic itself says not to use Cowork for regulated workloads, because the audit trail does not exist and client data ends up on individual machines outside firm control. These are not exaggerated risks. They are the documented current state of the product, disclosed by the vendor.

That answer is defensible, because it’s grounded in documentation. And it closes the conversation without you having to claim technical expertise, because you are quoting the vendor’s own words.

What The Broader Agentic Risk Pattern Looks Like

Cowork’s limitations are not unusual for this category of tool. They reflect something structural about where agentic AI is right now.

The pattern shows up clearly in what happened with OpenClaw, an open-source AI agent framework that became one of the fastest-growing open source projects on GitHub after its November 2025 launch.

The platform’s MCP implementation had no mandatory authentication and granted shell access by design, meaning unauthenticated users could run arbitrary commands, sometimes with elevated privileges. Internet-wide scanning found tens of thousands of exposed control panels, raising the risk of token theft and downstream credential exposure on misconfigured hosts.

The underlying model in those deployments had its own safety measures. Those measures were irrelevant once the connections were in place and authentication was absent. The risk was never the model. It was what the model was connected to, and whether those connections were governed.
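The pattern reduces to a few lines. Below is a hypothetical sketch, not OpenClaw's actual code, contrasting an agent command handler with and without a mandatory credential check. Notice that the model's own safety measures never appear anywhere in either version; governance lives in the connection layer.

```python
import subprocess

def run_command_ungoverned(command: str, headers: dict) -> str:
    # The risk pattern described above: no token check, no allowlist.
    # Anyone who can reach this handler runs arbitrary shell commands.
    return subprocess.run(command, shell=True,
                          capture_output=True, text=True).stdout

def run_command_governed(command: str, headers: dict,
                         expected_token: str) -> str:
    # The same handler with mandatory authentication in front:
    # the command is never even parsed without a valid credential.
    if headers.get("Authorization") != f"Bearer {expected_token}":
        raise PermissionError("missing or invalid token")
    return subprocess.run(command, shell=True,
                          capture_output=True, text=True).stdout
```

The only difference between the two is a check that runs before the command does, which is exactly the kind of control that was absent in the exposed deployments.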

Cowork operates differently, of course: it is a commercial product from Anthropic, not an open-source framework. But the governance gap is the same. The tool can connect to Clio, to local files, to email. It can execute multi-step workflows across those systems.

And right now, none of that activity is captured anywhere that compliance or oversight can reach.

This Is Not Forever

Anthropic knows exactly what the gaps are. They are the ones documenting them. Cowork is a research preview, and the compliance capabilities will come: audit logging, data exports, proper enterprise controls. When they do, the Clio integrations and everything people are building now will become genuinely useful for smaller firms.

But that is not where things stand in March 2026. And the people posting YouTube tutorials about connecting it to your case management system are not mentioning that part.

🎬 Layoffs in Law Over AI Capabilities That Don’t Exist Yet

How the hiring slowdown started six months before ChatGPT launched, what Baker McKenzie’s cuts actually signal about volume-dependent business models, and what the researchers building these systems say about the real timeline vs. what the people raising $40 billion say.

The AI Layoff Trap: Law Firms Are Making a Huge Mistake

🎙 What Lawyers Need to Know About OpenClaw, Vibe Coding, and Where AI Is Really Heading

Anna Guo is the founder of Legal Benchmarks and a lawyer who has spent the last year benchmarking AI performance on real legal tasks. In this conversation we talk through why OpenClaw is fundamentally different, what to pay attention to before you connect Claude Cowork to client data, whether lawyers should actually be building their own tools or leaving that to engineers, and what AI researchers working on these models are saying about the real capability timeline versus what vendors are selling.

Should Law Firms Build Their AI Agents? The Hard Reality

Live Session on LinkedIn

These sessions run every two weeks. The next one is on Saturday, 11 April at 3pm CET.

The sessions are open to everyone, but if you have a specific situation you want to work through in a smaller setting, you can send me an email directly.

No AI vendor has ever sponsored our sessions and none ever will. Nothing is scripted: no slides, no agenda, and no legal AI tools to sell. Frode Nilssen, Masai Brown-Andrews, Stephen Currie and I also don’t always agree.

What you get are thoughts, learnings and experiences from working in this space that are genuinely ours and not vendor-incentivized.

AI Native For Law Firms? Here's What They're Not Telling You

🎙 Next Tuesday at 2pm CET!

My next guest on Rok’s Legal AI Conversations is Ben Chiriboga, founder of Reframe.lawyer and former Chief Growth Officer at Nexl.

We discuss why the legal profession’s talent problem is fundamentally a culture problem rather than a skills gap, what the three career paths actually look like for lawyers who want to stay relevant in an AI-native legal market, and why the $4,000 billable hour and the AI-native law firm are not contradictions; they are solving for completely different parts of the market.

Law is facing a talent problem

Each edition of Legal AI Brief brings practical lessons from firms using AI safely.


Get the next edition in your inbox.

Every Thursday. No noise, no pitch — just what's worth knowing about AI risk in legal practice this week.