
Edition 22: Law firms are losing control of AI in 4 areas
AI quietly erodes control inside law firms

Rok Popov Ledinski
Founder | MPL Legal Tech Advisors
Feb 5, 2026
MPL Legal Tech Advisors: The Legal AI Brief
This Week's Theme
AI rarely enters a law firm through a single, visible decision.
It shows up through renewals, bundled features, browser tools, and "temporary" allowances that slowly become normal practice. Each step feels operational; none feels strategic.
By the time leadership asks whether AI is delivering value or creating risk, the operating model has already shifted. Work is being produced differently, reused differently, and supervised differently than before.
This week focuses on where that shift happens, and why it is hard to reverse once it settles in.
Control Erodes Before Governance Begins
Many law firms believe they are still in an experimentation phase with AI. In practice, they have often already crossed into dependency without realizing it.
Control tends to weaken across four areas that sit below dashboards and outside formal approvals:
Inventory
Firms lose a reliable view of which systems interact with client data once AI features arrive through upgrades, embedded tools, and personal accounts.
Usage
Access metrics show who can use a tool, not how it shapes daily work or where reliance begins.
Output provenance
Work product becomes a chain of human and AI steps that cannot be reconstructed when questions arise.
ROI attribution
Time savings and value creation are inferred after adoption, once baselines are gone and behavior has already changed.
This erosion usually emerges because AI adoption is treated as a tooling exercise rather than a control problem. Responsibility shifts quietly, long before it is explicitly reassigned.
Legal AI in Action
🎬 Small and Mid-Sized Law Firms Became Viable Targets
Why firm size no longer protects you
🎬 When Informal AI Becomes Structural
Before habits harden
The Big Risk Signal
AI adoption decisions are being made by default owners.
In many firms I see AI tools being selected, configured, and renewed through IT, procurement, or practice support because they arrive as software. The consequences, however, sit with partners, lawyers, and ultimately the firm.
That ownership gap is important. It means key choices about dependency, acceptable failure rates, and long-term investment horizons are decided without explicit mandate from those who carry professional liability.
By the time leadership engages, the decision has already been operationalized.
What Law Firm Decision Makers are Saying
Across recent conversations with managing partners and compliance specialists, I kept hearing the same refrains:
“We assumed we could map usage later.”
“We did not realize how much output was being reused downstream.”
“We could not explain which tool influenced the final document.”
These are not failures of intent or capability, but rather symptoms of AI entering workflows before boundaries were defined.
Other regulated industries learned this earlier. Banking and insurance now start AI initiatives by defining constraints: allowed data, accountability, escalation paths, and review standards.
Those constraints shape what can be built and how it can be used. Law firms are being pushed in the same direction, without having developed those disciplines first.
New This Week: Roundtable Sessions
This is a series of unfiltered roundtable conversations between lawyers and practitioners in AI, data, and security on how AI systems interact with law, regulation, and real-world infrastructure.
The sessions focus on concrete failure modes, including data sovereignty breaches, cloud routing and outages, vendor dependencies, security incidents, governance gaps, and liability exposure created by AI deployment.
The discussions move past tools and prompts and into system behavior, operational risk, and the legal consequences that emerge once AI is embedded in regulated environments.
Starting this week, these sessions will run live on Saturdays at 3pm CET (9am EST, 6am PST). The first live session airs Saturday, February 7.
Looking Ahead
🎙 This Saturday at 2pm CET!
This week’s guest on Rok’s Legal AI Conversations is Eve Vlemincx, a former lawyer who now works with law firms on strategy, leadership, and transformation, and teaches in the Stanford GSB LEAD program.
We discuss why most AI strategy in law is cosmetic, how AI exposes deeper issues in culture and operating models, and why firms default to copying peers instead of making deliberate choices. We also cover incentives, the efficiency trap, and what it takes to evaluate tools without locking the firm into the wrong path.
[Image: Effectiveness vs. efficiency]
Each edition of Legal AI Brief brings practical lessons from firms using AI safely.

Rok Popov Ledinski
Founder | MPL Legal Tech Advisors