
Edition 2: Take Back Control Of AI

Rok Popov Ledinski
Founder | MPL Legal Tech Advisors
MPL Legal Tech Advisors: The Legal AI Brief
Thursday, 18 September 2025 - 2nd Edition
This Week’s Theme
Stanford just released the most in-depth study on AI privacy, and it’s alarming.
All six major AI providers collect chat data by default and use it for training; some retain it indefinitely, and their human reviewers can access your chats.
This issue is about taking back control: getting client data out of public AI models, setting up private access, and proving your legal AI tools work for your practice.
Insight of the Week: Taking Back Control
Stanford’s research shows that data collection and retention aren’t outliers; they’re the default. Firms feel like passengers because the vendor decides where data goes, how long it stays, and who can see it.
The way out isn’t banning AI. It’s building control back into your process:
1. Data discipline
One repository, consistent naming, searchable files.
2. Defensible pilots
Defined scope, logs, and a partner who can shut it down if anything looks off.
3. Weekly review
Pick 10-20 real matters, review the outputs together, and document what passed and what failed (a minimal logging sketch follows below).
When firms do this, AI stops being risky and starts paying for itself.
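For firms that want that weekly documentation to be consistent, here is one minimal way to sketch the review log as an append-only JSON Lines file. The field names and example values are illustrative, not a prescribed schema; adapt them to your own matter-numbering system.

import json
import datetime

def log_review(matter_id: str, task: str, outcome: str, notes: str,
               path: str = "weekly_ai_review.jsonl") -> None:
    """Append one reviewed matter to the firm's AI review log."""
    record = {
        "reviewed_at": datetime.date.today().isoformat(),
        "matter_id": matter_id,   # your firm's matter number (illustrative format)
        "task": task,             # e.g. "NDA summary", "deposition digest"
        "outcome": outcome,       # "pass" or "fail"
        "notes": notes,           # what the reviewing partner found
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: log_review("2025-0142", "NDA summary", "fail", "missed carve-out in clause 7")

A plain append-only file like this is enough to show a regulator, insurer, or client exactly what was reviewed, when, and by what standard.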
Legal AI in Action
30-Day Defensible Pilot
Run a 30-day AI pilot that is scoped, logged, and partner-supervised, and that proves hours recovered before any firm-wide rollout.
Private GPT/Claude Setup
Set up private, logged GPT/Claude access in 10 minutes for the price of a ChatGPT subscription. Stop risking client data in public AI.
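To make "private, logged access" concrete, here is a minimal sketch using the OpenAI Python SDK: requests go over the API, which OpenAI (and Anthropic, for Claude) state is not used for model training by default, unlike consumer chat apps, and every exchange is appended to a local audit log the firm controls. The model name and log filename are illustrative assumptions, not a recommendation.

# pip install openai
import json
import datetime
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str, log_path: str = "ai_audit_log.jsonl") -> str:
    """Send a prompt via the API and append the full exchange to a local audit log."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt": prompt,
        "response": answer,
    }
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")
    return answer

The same pattern works with Anthropic's SDK; the point is that the firm, not the vendor's consumer product, decides where the record of every prompt and response lives.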
This Week’s Big Risk Signal
A partner at a litigation firm told me this week:
“Our paralegal pasted a client’s NDA into ChatGPT to summarize it, and emailed the output to the client.”
That’s a live privilege breach, not a hypothetical. If you still have no policy, no logs, and no private alternative, assume it's already happening in your firm.
What The Legal AI Frontlines Are Saying
This week’s conversations with litigators, managing partners, legal AI founders, and GCs pointed to three themes:
1. Evidence isn't just text
AI can’t tell originals from copies or resolve conflicting metadata; only a lawyer can. Every output still needs review.
2. Policy vacuum
Several firms I talked to found associates using public AI on personal accounts. No supervision, and no way to tell what left the firm.
3. Control gaps
Without defined scope, logging, and a review period, AI pilots become shadow systems. That’s a malpractice trap waiting to happen.
Looking Ahead
Finally Live!
The first episode of my Legal AI Podcast airs tomorrow at 2 PM CEST with guest Emma Kelly, a legal tech and eDiscovery expert with 20 years of experience. We cover data discipline, AI evaluations, and how small firms can outpace Big Law.

Coming soon!
After working with dozens of firms, we’re teaming up with a Utah appellate litigator to build a scientifically grounded legal prompt guide, blending their legal expertise with our AI expertise.
Each edition of Legal AI Brief brings practical lessons from firms using AI safely.

Rok Popov Ledinski
Founder | MPL Legal Tech Advisors