This Week’s Theme
Stanford just released the most in-depth study on AI privacy, and it’s alarming.
All six major AI providers collect chat data by default, use it for training, and some keep it forever. Their human reviewers access your chats.
This issue is about taking back control: getting client data out of public AI models, setting up private access, and proving your legal AI tools work for your practice.
Taking Back Control
Stanford’s research shows that data collection and retention aren’t outliers; they’re the default. Firms feel like passengers because the vendor decides where data goes, how long it stays, and who can see it.
The way out isn’t banning AI. It’s building control back into your process:
1. Data discipline
One repository, consistent naming, searchable files.
2. Defensible pilots
Defined scope, logs, and a partner who can shut it down if anything looks off.
3. Weekly review
Pick 10-20 real matters, review the outputs together, and document what passed and what failed.
When firms do this, AI stops being a risk and starts funding itself.
Legal AI in Action
🎬 30-Day Defensible Pilot
Run a 30-day AI pilot that’s scoped, logged, partner-supervised, and proves hours recovered before firm-wide rollout.
🎬 Private GPT/Claude Setup
Set up private, logged GPT/Claude access in 10 minutes for the price of a ChatGPT subscription. Stop risking client data in public AI.
Red Flag of the Week
A partner at a litigation firm told me this week:
“Our paralegal pasted a client’s NDA into ChatGPT to summarize it, and emailed the output to the client.”
That’s a live privilege breach, not a hypothetical. If you still have no policy, no logs, and no private alternative, assume it’s already happening in your firm.
What the Legal AI Frontlines Are Saying
This week’s conversations with litigators, managing partners, legal AI founders, and GCs pointed towards the following:
- Evidence isn’t just text
AI can’t tell originals from copies or resolve conflicting metadata; only a lawyer can. Every output still needs review.
- Policy vacuum
Several firms I talked to found associates using public AI on personal accounts. No supervision, and no way to tell what left the firm.
- Control gaps
Without defined scope, logging, and a review period, AI pilots become shadow systems. That’s a malpractice trap waiting to happen.
Looking Ahead
🎙 This Saturday at 2pm CET!
The first guest on Rok’s Legal AI Conversations is Emma Kelly, a legal tech and eDiscovery expert with 20 years of experience. We cover data discipline, AI evaluations, and how small firms can outpace Big Law.
Each edition of Legal AI Brief brings practical lessons from firms using AI safely.