MPL Legal Tech Advisors: The Legal AI Brief

Thursday, 11th December 2025 - 14th Edition

This Week's Theme: AGI is a promise; the reality is ANI (Artificial Narrow Intelligence)

AGI (Artificial General Intelligence) gets the attention, but it has nothing to do with the work law firms need to get done today.

What matters is narrow AI - simple, focused tools that automate clear tasks. These tools are already shaping how firms review documents, manage inboxes, compare contracts, and prepare first drafts.

Across all recent evidence, one thing is clear:

AGI is a marketing story. Narrow AI is the actual product.

Narrow AI is predictable, practical, and under your control. It supports lawyers. It does not replace them.

And most of the value your firm can gain from AI (well over 90%) can come from small models you own, not expensive systems you rent.

The shift ahead is simple:

Winning firms stop chasing AGI and start owning their AI stack and data.

Insight of the Week

Autonomous agents are failing in simple ways. "Human-level reasoning" is not here.

Managing partners keep hearing that AI is becoming more "agentic" and "intelligent". Real-world use says otherwise.

1. Autonomous agents still make basic, harmful mistakes

In one recent case, an AI agent wiped a company’s entire hard drive. Not a folder. The whole drive. It:

  • ran a destructive command

  • skipped the confirmation

  • tried to fix it

  • made the damage worse

This wasn’t a hacker. It was the AI assistant.

For law firms, the lesson is simple:

If an AI agent can delete a hard drive, it can delete client files too.

Agents act. They execute.
They do not understand context or consequences.
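
To make that concrete, here is a minimal, hypothetical sketch in Python (not taken from any vendor's product) of the kind of human-in-the-loop guard a firm can demand before an agent executes anything: commands that look destructive are held until a person approves them. The pattern list and function names are illustrative assumptions, not a complete policy.

import re
import subprocess

# Hypothetical patterns for commands a firm would never let an agent run unreviewed.
DESTRUCTIVE_PATTERNS = [
    r"\brm\s+-rf\b",       # recursive delete
    r"\bformat\b",         # disk formatting
    r"\bdel\s+/s\b",       # Windows recursive delete
    r"\bdrop\s+table\b",   # database deletion
]

def needs_approval(command: str) -> bool:
    # Flag any agent-proposed command that matches a destructive pattern.
    return any(re.search(p, command, re.IGNORECASE) for p in DESTRUCTIVE_PATTERNS)

def run_agent_command(command: str) -> None:
    # Execute an agent-proposed shell command, but stop and ask a human first
    # whenever the command looks destructive.
    if needs_approval(command):
        answer = input(f"Agent wants to run: {command!r}. Approve? [y/N] ")
        if answer.strip().lower() != "y":
            print("Blocked: command was not approved.")
            return
    subprocess.run(command, shell=True, check=False)

The design choice matters more than the code: the agent can propose, but a human confirms anything irreversible, which is exactly the step the agent in the story above skipped.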

2. "Human-level reasoning" breaks down under real work

Models score well on tests and benchmarks. But according to leading AI scientist Ilya Sutskever, they still:

  • repeat the same mistakes

  • fail simple real tasks

  • look smart on paper but prove unreliable in practice

They can pass a law exam and still hallucinate citations in actual matters.

So the message for leadership is this:

These systems can help. They cannot think.

They cannot judge risk, read nuance, or defend a decision to a client. "Smarter" models do not mean "safer" models.

Legal AI in Action

🎬 The AI Compliance & Control Blueprint for Law Firms

A clear framework for locking down confidentiality, supervision, and AI access before these tools touch client data.

🎬 When Your Law Firm’s AI Agent Gets Breached: 24-Hour Response Plan

Your first 24 hours after an AI breach, distilled to essentials.

Red Flag of the Week

Agentic operating systems are rolling out with no safety controls.

Microsoft, Apple, and Google are turning their platforms into "agentic" operating systems. Your computer is becoming an AI agent host.

This means your device can:

  • watch your screen

  • log what you do

  • trigger commands automatically

  • scan files in the background

And some of this cannot be turned off.

For law firms, this is not just a privacy issue. It is a client protection and professional duty issue.

Because your devices are drifting into being part AI agent, part recorder before your firm has any governance in place.

The risk is not AGI.

The risk is AI at the operating system layer with no guardrails.

What The Legal AI Frontlines Are Saying

Across firms and in-house teams, four patterns keep repeating:

1. Firms are treating experimental agents as stable tools.

Teams grant system-level access and permissions without understanding the blast radius.

2. Surveillance features are baked into everyday software without clear disclosure.

"See what you see" capabilities ship inside operating systems and productivity updates. Governance usually arrives later, if at all.

3. Vendors sell AGI language, but what they deliver is narrow, brittle, and unstable.

Leaders expect "intelligence". They get fragile automation that breaks under pressure.

4. Cross-border exposure is still misunderstood.

Data stored in an EU data centre can still sit under U.S. law if the parent company is American. Most firms don’t model that risk.

Looking Ahead

🎙 This Saturday at 2pm CET!

This week's guest at Rok's Legal AI Conversations is Bo Kinloch, in-house product and technology lawyer. We discuss turning messy Microsoft 365 setups into usable automation, and when AI-driven "vibe coding" actually makes sense for legal teams.


You'll regret skipping this step

Each edition of the Legal AI Brief brings practical lessons from firms using AI safely.

Rok Popov Ledinski

Founder | MPL Legal Tech Advisors
