
Edition 24: 700 AI layoffs at Baker McKenzie. How much of it is true?
AI replaced 700 people at Baker McKenzie. But did it?

Rok Popov Ledinski
Founder | MPL Legal Tech Advisors
Feb 19, 2026
MPL Legal Tech Advisors: The Legal AI Brief
Thursday, 19th February 2026 - 24th Edition
This Week's Theme
Baker McKenzie announced layoffs across business services roles and explicitly referenced AI as part of the rationale.
This is a structural shift in how big law firms justify operational change. We've seen this playbook before in Big Tech, and now it's being used as an executive-level explanation for workforce reduction in legal as well.
At the same time, external data shows that hiring slowdowns began before generative AI adoption accelerated, and that macroeconomic pressure, interest rates, and post-pandemic correction explain a significant portion of workforce contraction across industries.
AI is Reshaping Organizational Structure
The Baker McKenzie announcement reflects a decision pattern that is spreading faster than the underlying technology is stabilising.
Across industries, most enterprise AI deployments still fail to integrate reliably into real workflows, with only a small fraction delivering sustained operational value.
At the same time, agentic systems are proving brittle in exception-heavy environments, where accountability, regulatory context, and escalation determine whether the workflow holds or breaks.
This is why reversals are already happening. Klarna cut hundreds of customer support roles after deploying AI, then began rebuilding human coverage after service quality declined.
Removing operational capacity before the replacement system has demonstrated stable behavior under real conditions introduces a risk that only becomes visible after the organizational change is already locked in.
Legal AI in Action
🎬 Why AI pilots stall in law firms
How fragmented systems block adoption at scale
🎬 Why Legal Tech Breaks After Demo Day
Why vendors can’t see your reality
The Big Risk Signal
Layoffs attributed to AI create internal pressure to deploy automation into workflows that are not operationally stable yet.
The risk does not appear immediately; it emerges gradually as responsibility shifts from defined human ownership to partially automated execution.
Research on AI adoption consistently shows that successful deployment keeps workers in the loop as operators, reviewers, and exception handlers, rather than removing them entirely. This structure allows the firm to absorb edge cases safely while the system matures.
When firms compress that transition period, failure modes surface later, in production, under real client conditions.
At that point, the firm is forced to rebuild the human layer under pressure.
The Business Model Exposure
AI does not affect all law firms equally; it disproportionately impacts firms built on volume.
Large global firms generate revenue by scaling routine work across large teams. When AI compresses the time required to complete that work, the economic justification for maintaining that operational capacity weakens.
This does not eliminate legal demand, but it does redistribute it.
Clients can handle more work in-house, and smaller teams can deliver what previously required much larger ones.
This shifts the constraint from labour capacity to workflow design.
Weekly Live Sessions
Last Saturday we ran the second live session, and this time we were joined by Raymond Blyd (founder of Sabaio, CEO of Legalcomplex, and co-founder of Legalpioneer). We went deeper on the economics behind legal AI: why compute is still the bottleneck, why falling token prices don't mean total cost falls, and why "fine-tuning your way to accuracy" is usually an endless and expensive loop once you move across jurisdictions and practice areas.
These sessions run live on Saturdays at 3pm CET (9am EST, 6am PST).
Looking Ahead
🎙 This Saturday at 2pm CET!
This week’s guest on Rok’s Legal AI Conversations is Philip Young, co-founder and CEO of Garfield AI, the world's first regulated AI law firm.
We discuss what it actually takes to build a regulated AI-first legal product, why most AI-first law firm narratives collapse once you hit real procedure, and how Garfield was designed around closed problem spaces, hard boundaries, and human-in-the-loop review from day one.
We also cover why lawyers and engineers optimise for different things (process vs outcomes), why civil procedure keeps accumulating complexity that systems engineers would never design, and where Philip draws the line on what should remain human even if automation becomes technically possible.
Regulated by the SRA
Each edition of Legal AI Brief brings practical lessons from firms using AI safely.
