
Edition 21: When AI drafts first, where does judgment sit?
AI-first law firms are misunderstood

Rok Popov Ledinski
Founder | MPL Legal Tech Advisors
Jan 29, 2026
MPL Legal Tech Advisors: The Legal AI Brief
Thursday, 29th January 2026 - 21st Edition
This Week's Theme
Over the past year, “AI-native” or “AI-first” law firms have moved from theoretical to operational.
Don't mistake these for experiments. Senior partners from top global firms are joining them, clients are funding them, and they are being built to coexist with Big Law, not compete with it.
Yet much of the conversation inside traditional firms still treats these developments as either a publicity stunt or an existential threat.
Both miss the point.
What these firms are actually demonstrating is not that lawyers are becoming obsolete, but that certain legal operating models no longer hold up once AI is introduced into real work products.
Insight of the Week
AI-native firms redesign responsibility, not capability. The most important distinction between AI-native firms and traditional firms is not the technology they use.
It is how responsibility is structured.
In AI-native firms, AI does not just assist lawyers in the abstract. It performs tightly scoped, repeatable work that is explicitly reviewed, supervised, and owned by named professionals. Drafting happens first, judgment happens last, and accountability is visible.
This is an important distinction, because most AI failures in traditional firms are caused not by bad models but by unclear boundaries.
Who decides when AI output is good enough?
Who is responsible when it is wrong?
Where does judgment re-enter the workflow?
When these questions are left implicit, adoption stalls. Lawyers either over-rely on outputs they do not trust, or ignore tools entirely when the work becomes consequential.
AI-native firms resolve this by design. Not by optimism, but by constraint.
We walk through these boundary questions with firms on our calls, using real workflows rather than abstract AI claims.
Legal AI in Action
🎬 Why Smart Lawyers Still Disagree
Judgment variance inside law firms
🎬 Law Firms Are Making Business Decisions on Partial Discovery
The discovery blind spot
The Big Risk Signal
Many firms are responding to AI-native entrants by accelerating tool adoption without changing structure.
That is the risky path.
Layering AI on top of unclear workflows increases supervision cost, not efficiency. It shifts effort into review, exception handling, and downstream correction, often invisibly.
This is why some lawyers report that AI saves time in demos but increases cognitive load in practice. The work has not disappeared; it has moved.
Firms that mistake faster drafting for progress are quietly accumulating execution debt: work that moves faster on paper but is harder to validate, explain, and defend.
The real risk is not being disrupted by AI-native firms. It is discovering, too late, that your internal model cannot safely absorb AI at scale.
What Law Firm Decision Makers are Saying
Across firms, the same pattern keeps emerging. Lawyers are not objecting to AI, nor do they fear replacement. They do disengage, however, when the technology undermines how professional value is demonstrated.
When drafting, redlining, and synthesis are automated without redefining what “good judgment” looks like, confidence drops. People revert to familiar methods under pressure.
This shows up quietly:
Tools used for low-stakes work only
Templates preferred over generated drafts
AI avoided when deadlines tighten
These are not cultural failures. They are signals that responsibility and agency have not been redesigned alongside the technology.
Where adoption does hold, lawyers are involved early in defining limits: what AI can do, what it cannot, and where human judgment remains decisive.
Agency, not training, is the differentiator.
Looking Ahead
🎙 This Saturday at 2pm CET!
This week’s guest on Rok’s Legal AI Conversations is Luke Pigram, a senior lawyer at Sierra Legal, who moved from PwC’s legal team into a remote-first boutique that builds its own internal operating platform.
We talk about what actually changes when innovation starts with leadership, why most AI adoption stalls at chatbots, and why Sierra's approach starts with workflow infrastructure (email filing, CRM, internal control center) before layering AI on top. We also get practical about the real blockers, such as security constraints, execution bottlenecks in large firms, and why most tools fail when they aren't tailored to how a firm actually works.
[Image: Big firm innovation vs. small firm execution]
Each edition of Legal AI Brief brings practical lessons from firms using AI safely.

Rok Popov Ledinski
Founder | MPL Legal Tech Advisors