← All editions
Edition 9 Thursday, 6 November 2025

When the agent breaks the law: Inside AI’s next compliance shock

AI agents are no longer demos, they’re operating inside real systems.

The Legal AI Brief MPL Legal Tech Advisors
Edition 9 · Thursday, 6 November 2025

The Silent Breach

AI agents are no longer demos, they’re operating inside real systems.

Some can reset passwords, process transactions, and manage client files without a human ever touching a keyboard.

The problem?

Every one of those actions now creates a digital identity the firm can’t trace.

And when an agent triggers a data leak, no one’s sure who’s accountable - the developer, the deployer, or the lawyer who used it.

That uncertainty isn’t theoretical anymore. Under the new AI Act, it’s a reportable incident.

The New Duty of Disclosure

Article 73 of the AI Act requires that any serious incident must be reported within 15 days, or 2 days if it affects critical systems or crosses borders.

That covers any malfunction that harms rights, disrupts operations, or exposes data.

In the U.S., the window is longer but no less strict: most state laws require notice within 30-60 days, and when several states are involved, the shortest deadline wins.

Different jurisdictions, same rule. Once data leaves your control, the clock starts.

And when an autonomous system makes that move, the firm still carries the legal responsibility, even if no human ever pressed “send”.

The Agent Problem

As one cybersecurity expert put it this week:

“Your AI-agent strategy is a PowerPoint deck built on a demo.”

Andrej Karpathy himself admits true autonomy is a decade away.

Yet firms are already deploying agents that act independently inside live systems - without logs, oversight, or kill-switches.

The result: self-authorizing liability making irreversible moves in environments that were never built for non-human autonomy.

This is not innovation. It’s unregulated delegation.

🎬 Is Microsoft Copilot Safe for Law Firms?

Safe setup or silent risk?

Watch This BEFORE Using Microsoft Copilot AI in your Legal Team

🎬 Case Study: Scaling an Outside GC Model

Interview with Joaquin Cubillos, managing partner at Cubillos Lama: Workflow beats headcount, every time.

The AI Audit that ALL Legal Teams Need (REAL Case Study)

Red Flag of the Week

Researchers uncovered a security flaw called ShadowLeak that targets AI tools designed to “remember” past conversations.

It hides secret instructions inside normal text, tricking the system into carrying that hidden message forward every time it’s used again.

This specific system was patched, but this means AI can accidentally pull old client data or internal files into a new chat without anyone realizing it.

Every smart agent that remembers yesterday can also expose it tomorrow.What The Legal AI Frontlines Are Saying

This week’s conversations with a cybersecurity architect, data governance advisor, and legal innovation leads all pointed to the same pattern:

  • Firms buy tools without reading the fine print, assuming compliance instead of verifying it.

  • Fear of missing out drives most adoption decisions, not strategy.

  • Policies rarely exist, ano no one checks if they’re followed.

  • Leadership pressure replaces preparation.

The problem isn’t bad technology. It’s blind trust.

Looking Ahead

🎙 This Saturday at 2pm CET!

This week’s guest on Rok’s Legal AI Conversations is Sara Pfrommer, a Utah appellate litigator with nearly 50 years in practice

Podcast guest cover
Use hallucinations to your advantage.

Each edition of Legal AI Brief brings practical lessons from firms using AI safely.

← Previous Edition 8: VC money, weak security, and legal AI agents that don’t work. Next → Edition 10: AI doesn’t fix chaos for legal teams. It scales it.
More editions

Get the next edition in your inbox.

Every Thursday. No noise, no pitch — just what's worth knowing about AI risk in legal practice this week.