
Edition 23: When AI agents move faster than your firm can react
Claude Cowork changes the conversation law firms should be having

Rok Popov Ledinski
Founder | MPL Legal Tech Advisors
Feb 12, 2026
MPL Legal Tech Advisors: The Legal AI Brief
Thursday, 12th February 2026 - 23rd Edition
This Week's Theme
In the past two weeks, Claude Cowork launched, markets reacted, security researchers demonstrated vulnerabilities, and legal-specific plugins went live with contract review capabilities.
If you felt like you missed something, you did. That was the point I have been making about agent systems. The timeline between output and incident has collapsed.
Law firms that are still evaluating AI as if the primary risk is hallucinations have already missed the actual threat model. Agentic systems don't leak data through generating text, but through executing tasks.
And execution happens faster than your quarterly review cycle.
Agentic systems change how failure happens
Claude Cowork is being discussed solely as a capability upgrade. That completely disregards the other part of reality.
A chat model produces outputs you review before anything happens, while an agent executes tasks while you make tea. When a system moves from content generation to workflow execution, the unit of failure changes from wrong wording to state changes: documents altered, files moved, emails sent, data uploaded.
Anthropic launched Cowork as a research preview for Claude Max subscribers. Within 48 hours, security researchers demonstrated massive vulnerabilities through prompt injection.
A user uploads a document containing hidden instructions, the agent reads it, gets manipulated, and uploads sensitive files to an attacker-controlled account. This happens because LLMs cannot distinguish between user instructions and the rest of the content.
However, the difference with agents is that no human approval is required.
The vulnerability was previously disclosed for Claude's code execution environment and acknowledged by Anthropic but not remediated. Cowork extended that environment from developer workflows to everyday file management.
Anthropic's warning is clear: "Be aware of suspicious actions that may indicate prompt injection." That puts the burden on users to detect attacks in real time. For managing partners and other non-developers organizing their desktop files, that is not a reasonable expectation.
Traditional security incidents follow a familiar rhythm: discovery, proof of concept, patch, exploitation. Agentic systems collapse that timeline. When an agent platform gets compromised, the attacker isn't just stealing data, they're stealing automation.
That's the tempo shift law firms are not prepared for.
Legal AI in Action
🎬 The AI Decisions That Lock Your Firm In
Where client data goes once AI touches it
🎬 Claude Cowork and the Execution Shift
AI Agent workspaces change the game completely
The Big Risk Signal
Security researchers demonstrated a working file exfiltration attack against Cowork last week. The attack chain is straightforward and realistic for legal work.
A user attaches a folder containing confidential real estate files to Cowork. The user then uploads what appears to be a helpful "skill" document downloaded from the internet. The document looks like standard formatting guidance saved from Word, but it contains hidden instructions in white-on-white text.
The user asks Cowork to analyze their files using this skill. The hidden injection instructs Claude to upload files to an attacker-controlled Anthropic account using the attacker's API key embedded in the instruction. Claude executes the command because the Anthropic API is allowlisted as trusted in the execution environment.
No human approval is required. The attacker now has loan estimates containing financial figures and social security numbers.
The vulnerability was previously disclosed and acknowledged by Anthropic but not remediated before Cowork launched.
Law firms handle untrusted documents from opposing counsel, clients, and third parties every day. That is now the attack surface.
The Pattern Law Firms Should Have Seen Coming
The market panic around Cowork feels dramatic until you remember this exact pattern played out two years ago.
November 2023: OpenAI's Dev Day launched Custom GPTs. Dozens of AI wrapper startups that had raised millions found their entire business model available as a native feature. Some shut down within months.
January 27, 2026: Anthropic blocked OpenCode and xAI's tools from accessing Claude models through client fingerprinting. Thousands of developers woke up to authentication errors. The justification: enforce commercial terms, control distribution, ensure proper billing.
Law firms building dependencies on agent workspaces need to understand what this means. Access paths can be closed overnight. Model choice can be constrained through technical and policy mechanisms.
The firms that will navigate this are the ones that maintain optionality before they need it.
Weekly Roundtable Sessions
Last Saturday we ran the first live roundtable with legal and AI practitioners. We discussed Claude Cowork security vulnerabilities, why vibe-coding lawyers create compliance overhead nobody’s tracking, the legal tech market panic after Claude’s plugin launch, and why OpenAI adding ads signals deeper problems for vendors built on top of their models.
These sessions run live on Saturdays at 3pm CET (9am EST, 6am PST). They are completely unscripted and raw.
Looking Ahead
🎙 This Saturday at 2pm CET!
This week's guest on Rok's Legal AI Conversations is Dr. Dusan Pavlovic, data protection specialist, AI governance expert, and advisor to the Serbian government on AI regulation strategy. Dusan also serves as an independent ethics expert for the European Research Executive Agency.
We discuss how the EU AI Act is becoming a de facto global standard despite the narrative that Europe over regulates, why enforcement of compliance remains the hardest part, and the structural problem with neural networks that no amount of regulation can fix: the black box transparency gap. We also cover why bottom up sector regulation might work better than top down mandates, the difference between regulating AI versus regulating by AI, and what data governance actually looks like when providers don't control their training data.
![]() Navigating AI compliance |
Each edition of Legal AI Brief brings practical lessons from firms using AI safely.

Rok Popov Ledinski
Founder | MPL Legal Tech Advisors
Share






