
Why Fixing Hallucinations Won’t Fix Your AI Risk

Hallucinations are just a start

The Legal AI Brief · MPL Legal Tech Advisors
Edition 1 · Friday, 12 September 2025

Welcome to the very first edition of The Legal AI Brief!

This is where I share the sharpest insights from last week's conversations with lawyers, compliance specialists, and legal tech founders, plus an informed look at where legal AI adoption is headed.

Normally, you’ll get this every Thursday, but we’re kicking things off on Friday for the first issue.

This Week’s Theme

Hallucinations came up in every conversation I had last week - from LinkedIn comments to calls with partners, and even in OpenAI’s latest research paper.

AI isn't "broken" when it makes things up; that's how it works. It predicts what is plausible, not what is true. If unchecked outputs reach client files, you risk privilege, billable hours, and your license.

This edition is about what to do instead of chasing zero hallucinations.

Why Fixing Hallucinations Won’t Fix Your AI Risk

OpenAI’s research shows you can’t eliminate hallucinations, because they are part of how these systems work - AI predicts what sounds right, not what’s true.

Industry benchmarks even penalize models for saying "I don't know".

The real risk is using AI without controls that catch hallucinations before they create exposure.

This is what you can do:

1. Force the source

Require a quote or citation from your documents before any summary.

2. Reward “I don’t know”

Allow abstention instead of guesses.

3. Stay in bounds

Restrict use to approved firm documents, not the open internet.
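For readers with a technical team, the three controls above can be sketched as a simple gatekeeping check. This is a minimal, illustrative sketch only: the function and document names are hypothetical, not any vendor's API, and a real deployment would wire this into your document management and review workflow.

```python
# Sketch of the three controls: force the source, reward "I don't know",
# and stay in bounds. All names here are illustrative placeholders.

APPROVED_DOCS = {
    "engagement-letter.txt": "The retainer fee is due within 30 days of signing.",
    "nda-v2.txt": "Confidential information excludes publicly available data.",
}

def review_output(answer: str, cited_doc: str, quote: str) -> str:
    """Accept an AI answer only if it cites an approved firm document and
    the quoted passage actually appears in that document; otherwise abstain."""
    # Stay in bounds: only approved firm documents count as sources.
    if cited_doc not in APPROVED_DOCS:
        return "I don't know (source not in the approved set)."
    # Force the source: the quote must appear verbatim in the cited document.
    if quote not in APPROVED_DOCS[cited_doc]:
        return "I don't know (quote not found in the cited document)."
    # Reward "I don't know": abstention above is a valid outcome, not a failure.
    return answer

# A grounded answer passes; an out-of-bounds source triggers abstention.
print(review_output("The retainer fee is due in 30 days.",
                    "engagement-letter.txt", "due within 30 days"))
print(review_output("Fees are negotiable.", "random-blog.html", "anything"))
```

The point of the sketch: abstention is a first-class result, and nothing reaches a client file unless it can be traced to an approved document.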

Handled this way, AI increases your capacity and strengthens your work product, while reducing malpractice exposure.

🎬 Staying Compliant With AI (7-Step Checklist)

A simple process to stay compliant, protect privilege, and avoid the mistakes already drawing sanctions.

7-Step AI Compliance Checklist for Law Firms (Protect Your License!)

🎬 Evaluating Legal AI Tools (The Real 7-Step Test)

How to stress-test legal AI tools on your own documents, catch hidden errors, and build a record you can show a regulator or insurer.

STOP Buying Legal AI Tools Until You Run These 7 Tests

Red Flag of the Week

In our call, a former AmLaw 100 lawyer put it bluntly: “Malpractice insurers aren’t covering fines, sanctions, or penalties from AI misuse, and new AI coverage is starting to require proof of controls.”

This confirms the risk isn’t a single bad answer; it’s lacking a defensible system. Show sources, allow “I don’t know”, keep AI inside firm documents, require lawyer review, and log who did what, when.

This week’s conversations with data privacy specialists, managing partners, legal AI founders, consultants, and in-house counsel surfaced three patterns:

1. Confidentiality is the first to break

Redacting documents before using AI erases the time savings, but public tools create GDPR and CLOUD Act exposure, putting privilege and client trust at risk.

2. “Shadow AI use” is rising

Associates are using ChatGPT and Copilot without a firm policy, which means untracked usage, malpractice exposure, and confidentiality breaches.

3. Compliance reset is coming

The EU AI Act will bar sending sensitive legal data to public AI systems without strict controls, even for firms outside Europe, so global teams need a plan now.

Looking Ahead

🎙 Next Saturday at 2pm CET!

The first episode of Rok’s Legal AI Podcast airs next Saturday on YouTube, kicking off weekly conversations with partners, lawyers, legal AI tool founders, and in-house counsel, focused on what’s actually working (and what’s not).

Podcast launch next week!

Each edition of Legal AI Brief brings practical lessons from firms using AI safely.


Get the next edition in your inbox.

Every Thursday. No noise, no pitch — just what's worth knowing about AI risk in legal practice this week.