MPL Legal Tech Advisors: The Legal AI Brief

Thursday, 26th February 2026 - 25th Edition

This Week's Theme

Last week, Sam Altman announced that OpenAI had hired Peter Steinberger, the developer behind OpenClaw, a tool that lets ordinary people build AI agents that control apps, handle emails, manage calendars, and execute tasks across digital environments without writing a line of code.

Legal tech circles spent much of the week debating what the move signals. Altman was explicit about the direction. The future will be "extremely multi-agent". Not one, many agents. Running in the background, doing things.

So, agents are coming. That part is settled. But what does the security and governance infrastructure actually look like right now?

This week provided some insights. Microsoft's Copilot bypassed data loss prevention policies and read confidential emails for weeks without permission. The European Parliament switched AI off entirely because it could not guarantee where data was going.

The Safety Net Is Still Being Built

At a recent industry event, Cisco's Chief Information Security Officer was asked directly about securing AI agents.

He described a multi-layered approach: monitoring agent behavior, issuing ephemeral privileges, sandboxing suspicious traffic, assigning agent identities, building governance controls. He said the tools to do this are being built and rolled out now. Cisco has been working on it for several years.

Cisco is a serious company, with serious resources, saying in plain language that the technology to govern agentic AI at scale is still being developed and deployed.

Hold that thought.

Because in the same week, Microsoft confirmed that a bug in Copilot had been silently reading and summarising customers' confidential emails since January, including emails protected by data loss prevention policies specifically designed to stop that from happening. Microsoft began rolling out a fix this month. It declined to say how many customers were affected.

Separately, the European Parliament's IT department disabled AI features on the work devices of all MEPs and their staff. The reason: after an assessment, it could not guarantee the security of the tools' data. Its conclusion is that until the full extent of data sharing with service providers can be clarified, the safest answer is to switch them off.

Microsoft Copilot and parliamentary IT infrastructure represent the mainstream. And in both cases, the controls did not hold.

When controls fail at that level, the question is not whether your firm has a problem, but whether anyone has looked.

Legal AI in Action

🎬 Stop Buying Tools Before You Understand the Problem

Why AI magnifies broken processes


🎬 Define Success Before You Sign Anything

Measure outcomes, not impressions

Lawyers Distrust AI But Keep Using It

New research from Paragon, based on a survey of more than 250 legal professionals, found that only one in five lawyers place high trust in AI-generated legal work. 67% have had to override or correct AI output. 58% would not feel comfortable submitting an AI-drafted document to a regulator or court. 42% reported little to no trust in the technology at all.

Despite this, almost two thirds expect their team's AI use to increase moderately or significantly over the next year.

Distrust is rational, and the numbers suggest most lawyers are applying at least some skepticism. The risk signal is the gap between what people believe about the technology and how they behave around it; that gap is where liability accumulates quietly, without a single visible decision.

The Legal AI Frontlines

The Dutch Bar Association has issued formal warnings to three lawyers for misuse of AI tools, and two were also required to attend a training course. The violations involved citing case law that either did not exist or was entirely irrelevant to the matter at hand, and AI-generated arguments that looked coherent but were factually empty.

The supervisor tracking these cases is clear about the direction of travel: the number of investigations will increase as AI use increases. Supervisory bodies are not waiting for the profession to self-regulate; they are already acting.

Weekly Live Sessions on LinkedIn

Last Saturday, Stephen, Frode and I went live to unpack one number from the Thomson Reuters 2026 report that should concern every managing partner: GenAI usage in legal doubled last year, yet 82% of organizations are still not measuring return on investment. We discussed why the baseline problem sits at the centre of it, and what that means for the next renewal conversation.

We also got into data security, privacy, and sovereignty - why compliance badges tell you very little about where your data actually goes, and why "we don't train on your data" is harder to verify than most firms assume.

These sessions run live on LinkedIn on Saturdays at 3pm CET (9am EST, 6am PST).

Looking Ahead

πŸŽ™ This Saturday at 2pm CET!

This week's guest on Rok's Legal AI Conversations is Gayk Ayvazyan, corporate lawyer, mediator, and fractional legal counsel to startups and scale-ups expanding across Europe. Gayk works at the intersection of conflict prevention and commercial law, which gives him an unusual vantage point on where AI is genuinely useful and where it quietly makes things worse.

We discuss how the pace AI creates is generating a new category of conflict, why mediation may be one of the last professional contexts where AI hits a hard wall, and what a decade of working with founders teaches you about the gap between signing up for a tool and actually using it.


Instant solutions create real problems

Each edition of Legal AI Brief brings practical lessons from firms using AI safely.

Rok Popov Ledinski

Founder | MPL Legal Tech Advisors
