MPL Legal Tech Advisors present: The Legal AI Brief

Friday, 12 September 2025 - First Edition

Welcome to the very first edition of The Legal AI Brief!

This is where I share the sharpest insights from last week’s conversations with lawyers, compliance specialists, and legal tech founders, plus an informed look at where legal AI adoption is headed.

Normally, you’ll get this every Thursday, but we’re kicking things off on Friday for the first issue.

This Week’s Theme

Hallucinations came up in every conversation I had last week - from LinkedIn comments to calls with partners, and even in OpenAI’s latest research paper.

AI isn’t “broken” when it makes things up; that’s how it works. It predicts what is plausible, not what’s true. If unchecked outputs reach client files, you risk privilege, billable hours, and your license.

This edition is about what to do instead of chasing zero hallucinations.

Insight of the Week: Why Fixing Hallucinations Won’t Fix Your AI Risk

OpenAI’s research shows you can’t eliminate hallucinations, because they are part of how these systems work - AI predicts what sounds right, not what’s true.

Industry benchmarks even penalize models for saying “I don’t know”.

The real risk is using AI without controls that catch hallucinations before they create exposure.

This is what you can do:

1. Force the source

Require a quote or citation from your documents before any summary.

2. Reward “I don’t know”

Allow abstention instead of guesses.

3. Stay in bounds

Restrict use to approved firm documents, not the open internet.

Handled this way, AI increases your capacity and strengthens your work product, while reducing malpractice exposure.

Legal AI in Action

Staying Compliant With AI (7-Step Checklist)

A simple process to stay compliant, protect privilege, and avoid the mistakes already drawing sanctions.


Evaluating Legal AI Tools (The Real 7-Step Test)

How to stress-test legal AI tools on your own documents, catch hidden errors, and build a record you can show a regulator or insurer.


This Week’s Big Risk Signal

In our call, a former AmLaw 100 lawyer put it bluntly: "Malpractice insurers aren’t covering fines, sanctions, or penalties from AI misuse, and new AI coverage is starting to require proof of controls."

This confirms the risk isn’t a single bad answer; it’s lacking a defensible system. Show sources, allow “I don’t know”, keep AI inside firm documents, require lawyer review, and log who did what, when.

What The Legal AI Frontlines Are Saying

This week’s conversations with data privacy specialists, managing partners, legal AI founders, consultants, and in-house counsel surfaced three patterns:

1. Confidentiality is the first to break

Redacting documents before using AI erases the time savings, while public tools create GDPR and CLOUD Act exposure, putting privilege and client trust on the line.

2. "Shadow AI use" is rising

Associates are using ChatGPT/Copilot without policy, meaning untracked usage, malpractice exposure, and confidentiality breaches.

3. Compliance reset is coming

The EU AI Act will bar sending sensitive legal data to public AI systems without strict controls, even for firms outside Europe, so global teams need a plan now.

Looking Ahead

Big launch next week!

The first episode of my Legal AI Podcast airs next Saturday on YouTube, kicking off weekly conversations with partners, lawyers, legal AI tool founders, and in-house counsel, focused on what’s actually working (and what’s not).


Coming soon!

At MPL Legal Tech Advisors we’ve partnered with Hoorntje Legal, a Dutch boutique employment law firm, to set up their first AI system. We’ll soon share behind the scenes of going from zero to live.


With so much happening around AI in law, each edition of The Legal AI Brief will bring you practical lessons from real legal teams putting AI to work.

Thank you for reading this first edition! Excited to have you here from day one.

Rok Popov Ledinski

Founder | MPL Legal Tech Advisors
