MPL Legal Tech Advisors: The Legal AI Brief

Thursday, 6th November 2025 - 9th Edition​​

The Silent Breach

AI agents are no longer demos, they’re operating inside real systems.

Some can reset passwords, process transactions, and manage client files without a human ever touching a keyboard.

The problem?

Every one of those actions now creates a digital identity the firm can’t trace.

And when an agent triggers a data leak, no one’s sure who’s accountable - the developer, the deployer, or the lawyer who used it.

That uncertainty isn’t theoretical anymore. Under the new AI Act, it’s a reportable incident.

The New Duty of Disclosure

Article 73 of the AI Act requires that any serious incident must be reported within 15 days, or 2 days if it affects critical systems or crosses borders.

That covers any malfunction that harms rights, disrupts operations, or exposes data.

In the U.S., the window is longer but no less strict: most state laws require notice within 30-60 days, and when several states are involved, the shortest deadline wins.

Different jurisdictions, same rule. Once data leaves your control, the clock starts.

And when an autonomous system makes that move, the firm still carries the legal responsibility, even if no human ever pressed "send".

The Agent Problem

As one cybersecurity expert put it this week:

“Your AI-agent strategy is a PowerPoint deck built on a demo.”

Andrej Karpathy himself admits true autonomy is a decade away.

Yet firms are already deploying agents that act independently inside live systems - without logs, oversight, or kill-switches.

The result: self-authorizing liability making irreversible moves in environments that were never built for non-human autonomy.

This is not innovation. It’s unregulated delegation.

Legal AI in Action

🎬 Is Microsoft Copilot Safe for Law Firms?

Safe setup or silent risk?


🎬 Case Study: Scaling an Outside GC Model

Interview with Joaquin Cubillos, managing partner at Cubillos Lama: Workflow beats headcount, every time.


Red Flag of the Week

Researchers uncovered a security flaw called ShadowLeak that targets AI tools designed to “remember” past conversations.

It hides secret instructions inside normal text, tricking the system into carrying that hidden message forward every time it’s used again.

This specific system was patched, but this means AI can accidentally pull old client data or internal files into a new chat without anyone realizing it.

Every smart agent that remembers yesterday can also expose it tomorrow.What The Legal AI Frontlines Are Saying

What The Legal AI Frontlines Are Saying

This week’s conversations with a cybersecurity architect, data governance advisor, and legal innovation leads all pointed to the same pattern:

  • Firms buy tools without reading the fine print, assuming compliance instead of verifying it.

  • Fear of missing out drives most adoption decisions, not strategy.

  • Policies rarely exist, ano no one checks if they’re followed.

  • Leadership pressure replaces preparation.

The problem isn’t bad technology. It’s blind trust.

Looking Ahead

🎙 This Saturday at 2pm!

This week's guest at Rok's Legal AI Conversations: Sara Pfrommer, a Utah appellate litigator with nearly 50 years in practice

We unpack how seasoned lawyers are actually making AI work in high-stakes legal writing.

Use hallucinations to your advantage.

Each edition of Legal AI Brief brings practical lessons from firms using AI safely.

Rok Popov Ledinski

Founder | MPL Legal Tech Advisors

Share