
Take Back Control Of AI

All six major AI providers collect chat data by default

The Legal AI Brief MPL Legal Tech Advisors
Edition 2 · Thursday, 18 September 2025

This Week’s Theme

Stanford just released one of the most in-depth studies to date on AI chatbot privacy, and the findings are alarming.

All six major AI providers collect chat data by default, use it for training, and some keep it forever. Their human reviewers access your chats.

This issue is about taking back control: getting client data out of public AI models, setting up private access, and proving your legal AI tools work for your practice.

Taking Back Control

Stanford’s research shows that data collection and retention aren’t outliers; they’re the default. Firms feel like passengers because the vendor decides where data goes, how long it stays, and who can see it.

The way out isn’t banning AI. It’s building control back into your process:

1. Data discipline

One repository, consistent naming, searchable files.

2. Defensible pilots

Defined scope, logs, and a partner who can shut it down if anything looks off.

3. Weekly review

Pick 10-20 real matters, review the outputs together, and document what passed and what failed.

When firms do this, AI stops being a risk and starts funding itself.
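The weekly-review step above can be sketched as a simple pass/fail tally. This is a hypothetical helper for illustration, not a tool any of these firms use; the record fields (`matter`, `passed`, `note`) are assumptions:

```python
# Hypothetical weekly-review tally: each record is one matter whose AI
# output a partner reviewed, marked pass or fail, with a note on failures.
def review_summary(records):
    """Return (pass_rate, failure_notes) for a week of reviewed matters."""
    passed = [r for r in records if r["passed"]]
    failures = [r["note"] for r in records if not r["passed"]]
    rate = len(passed) / len(records) if records else 0.0
    return rate, failures
```

Even a log this simple gives the firm something Stanford found vendors don’t provide: a written record of what passed and what failed.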

🎬 30-Day Defensible Pilot

Run a 30-day AI pilot that’s scoped, logged, partner-supervised, and proves hours recovered before firm-wide rollout.

STOP Running AI Pilots Without These 6 Guardrails

🎬 Private GPT/Claude Setup

Set up private, logged GPT/Claude access in 10 minutes for the price of a ChatGPT subscription. Stop risking client data in public AI.

How to Setup the BEST AI Provider for your Law Firm (Azure OpenAI and AWS Bedrock)
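The logging half of a private setup can be sketched in a few lines. This is a hypothetical audit wrapper, not the configuration from the video: the function name, the JSONL file, and the `complete` callable (which would wrap your Azure OpenAI or Bedrock client) are all illustrative:

```python
import json
import time
from pathlib import Path

# Hypothetical audit log: every prompt and response is appended to a
# JSONL file so the firm can review exactly what left the building.
AUDIT_LOG = Path("ai_audit_log.jsonl")

def logged_completion(prompt: str, complete, user: str, matter_id: str) -> str:
    """Call `complete` (e.g. a private Azure OpenAI or Bedrock client)
    and record the exchange before returning the response."""
    response = complete(prompt)
    entry = {
        "ts": time.time(),
        "user": user,
        "matter_id": matter_id,
        "prompt": prompt,
        "response": response,
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return response
```

The point isn’t the code; it’s that a private endpoint plus an append-only log turns “we have no way to tell what left the firm” into a searchable record per user and matter.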

Red Flag of the Week

A partner at a litigation firm told me this week:

“Our paralegal pasted a client’s NDA into ChatGPT to summarize it, and emailed the output to the client.”

That’s a live privilege breach, not a hypothetical. If you still have no policy, no logs, and no private alternative, assume it’s already happening in your firm.

This week’s conversations with litigators, managing partners, legal AI founders, and GCs pointed to three recurring issues:

  1. Evidence isn’t just text

AI can’t tell originals from copies or resolve conflicting metadata; only a lawyer can. Every output still needs review.

  2. Policy vacuum

Several firms I talked to found associates using public AI on personal accounts. No supervision, and no way to tell what left the firm.

  3. Control gaps

Without defined scope, logging, and a review period, AI pilots become shadow systems. That’s a malpractice trap waiting to happen.

Looking Ahead

🎙 This Saturday at 2pm CET!

First guest on Rok’s Legal AI Conversations is Emma Kelly, a legal tech and eDiscovery expert with 20 years of experience. We cover data discipline, AI evaluations, and how small firms can outpace Big Law.

How to win with AI in law

Each edition of Legal AI Brief brings practical lessons from firms using AI safely.


Get the next edition in your inbox.

Every Thursday. No noise, no pitch — just what's worth knowing about AI risk in legal practice this week.