
Edition 27: Your lawyers vibe-coded something. Now what?
Vibe coding is real. So is the gap it leaves open.

Rok Popov Ledinski
Founder | MPL Legal Tech Advisors
Mar 12, 2026

MPL Legal Tech Advisors: The Legal AI Brief
Thursday, 12th March 2026 - 27th Edition
This Week's Theme
Lawyers are now genuinely vibe coding legal SaaS products over weekends. The capability is real. A lawyer who understands a specific workflow deeply can build something tailored to how their team actually works within hours.
The question is what happens after that weekend.
Working software and production-ready software are not the same thing, and that gap is not closed by better models or more careful prompting. It is closed by expertise: the kind that lets you evaluate what the AI produces and be accountable for it when it matters.
The Pattern You Already Recognize
When a client arrives with a contract drafted using ChatGPT, you do not just review the text. You look at what they did not ask about: the jurisdiction question they did not realize was a question, the clause that looks enforceable but is not. The client sees fluent text; you see the gap between fluent text and sound advice.
The same dynamic applies when a lawyer builds software. AI amplifies the capabilities of people who already have the knowledge to direct and verify it. It creates a misleading sense of competence in people who do not.
A Carnegie Mellon benchmark study tested the best available AI coding agents on 200 real-world engineering tasks. Of the solutions produced, 61% were functionally correct, but only 10.5% were also secure. The researchers' conclusion was that security cannot be treated as a prompting problem. The failure is structural.
If your firm is building or evaluating internal tools right now, that is exactly the conversation to have before something is put in front of client data.
Legal AI in Action
🎬 The Weekend Build That Nobody Properly Reviewed
Lawyers can now build functional internal tools without writing a line of code. In this video I cover why that capability is real, where it belongs, and what the 10.5% security rate on AI-generated code means for firms that are already using it.
🎙 Deploying AI Inside a Legal Department: What Actually Works
Kaj Rozga has been practicing law for 16 years across big law, mid-size firms, and in-house. In this conversation we talk through what gets underestimated when deploying AI inside legal teams, why the iterative approach that works in product development clashes with how lawyers are wired, and why the gap between what the market predicts and what lawyers are actually doing day to day is bigger than most people admit.
📰 New Article Published
As AI Enters Legal Work, Responsibility Gets Blurry

When AI is part of how legal work gets produced, firms need to be able to show who authorised it, how outputs were reviewed, and where responsibility sits. Most cannot. Co-authored with Jason T. Marett, an Am Law 100 attorney who co-led one of the first AI incubators in Big Law.
The Big Risk Signal
Last week, nearly 3,000 live Google Cloud API keys were found publicly exposed. They had originally been deployed as billing identifiers and were never intended to serve as AI credentials.
When someone enables the Gemini API in a Google Cloud project, every existing key in that project silently gains access to Gemini endpoints, with no warning or notification. One Reddit user reported $82,314 in charges over two days from a stolen key.
The risk profile of a configuration can change without the configuration changing. Law firms using Google Cloud services should verify which APIs are currently enabled and whether any publicly deployed keys have quietly become AI credentials.
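A minimal audit sketch in Python, wrapping the gcloud CLI (the project ID is a placeholder, and the services api-keys commands assume a reasonably recent gcloud release):

```python
import json
import subprocess

PROJECT = "your-project-id"  # placeholder: the project you are auditing

def gcloud_json(*args: str) -> list:
    """Run a gcloud command against the project and parse its JSON output."""
    result = subprocess.run(
        ["gcloud", *args, f"--project={PROJECT}", "--format=json"],
        check=True, capture_output=True, text=True,
    )
    return json.loads(result.stdout)

# 1. Which APIs are currently enabled? Flag the generative AI surfaces.
enabled = gcloud_json("services", "list", "--enabled")
ai_apis = [s["config"]["name"] for s in enabled
           if "generativelanguage" in s["config"]["name"]
           or "aiplatform" in s["config"]["name"]]
print("AI-related APIs enabled:", ai_apis or "none")

# 2. Which API keys exist, and are they restricted to specific services?
#    A key with no apiTargets restriction can call any enabled API,
#    including APIs switched on long after the key was deployed.
for key in gcloud_json("services", "api-keys", "list"):
    targets = key.get("restrictions", {}).get("apiTargets", [])
    allowed = [t.get("service") for t in targets] or "UNRESTRICTED"
    print(key.get("displayName") or key["name"], "->", allowed)
```

Any key that prints UNRESTRICTED in a project with an AI API enabled is, in effect, already an AI credential.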
What a Structural Fix Looks Like
The way most agent systems are secured today is equivalent to telling a new associate to "use good judgment" and leaving it at that. No supervision, no defined scope, no way to verify what they actually did. When something goes wrong, you find out after the fact.
Google published a framework this year that works more like a proper matter delegation. Before the agent starts any task, the system defines exactly what actions are permitted for that specific task, and a separate, independent layer enforces those boundaries. The agent cannot override them, no matter what it reads or processes along the way. If the task is to review a folder of documents, sending anything externally is simply not an available action.
That distinction is important specifically for law firms. You already operate with defined scopes of authority, supervision structures, and audit trails. This research points toward agent systems that can be built the same way, where what the agent is allowed to do is defined, visible, and not subject to manipulation.
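To make the pattern concrete, here is a minimal sketch in Python of a per-task scope enforced by a layer the agent cannot reach. The action names and classes are illustrative assumptions, not Google's framework or any vendor's API:

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative action handlers; a real system would call actual tools here.
ACTIONS: dict[str, Callable[..., str]] = {
    "read_document": lambda path: f"contents of {path}",
    "summarise": lambda text: f"summary of: {text[:40]}...",
    "send_email": lambda to, body: f"sent to {to}",
}

@dataclass(frozen=True)
class TaskScope:
    """Defined before the task starts, outside the agent's control."""
    task: str
    allowed: frozenset[str]

class EnforcementLayer:
    """Independent layer: the only path to any action, and it keeps the audit trail."""
    def __init__(self, scope: TaskScope):
        self._scope = scope
        self.audit_log: list[str] = []

    def execute(self, action: str, **kwargs) -> str:
        if action not in self._scope.allowed:
            self.audit_log.append(f"DENIED {action} (task: {self._scope.task})")
            raise PermissionError(f"{action} is not an available action for this task")
        self.audit_log.append(f"ALLOWED {action} {sorted(kwargs)}")
        return ACTIONS[action](**kwargs)

# Document-review scope: external sending is simply not an available action,
# no matter what the agent reads or is told mid-task.
scope = TaskScope(task="review client folder",
                  allowed=frozenset({"read_document", "summarise"}))
layer = EnforcementLayer(scope)

doc = layer.execute("read_document", path="folder/engagement_letter.docx")
print(layer.execute("summarise", text=doc))
try:
    layer.execute("send_email", to="outside@example.com", body=doc)
except PermissionError as blocked:
    print("Blocked:", blocked)
print(layer.audit_log)
```

The point of the sketch is where the boundary lives: the scope is fixed before the task begins, and nothing the agent processes along the way can rewrite it.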
This is the right frame for evaluating any agent system your firm is considering: not "does it work" but "can anyone define and verify what it is actually permitted to do".
Featured on Law: What's Next This Week
This week I joined Alex and Tom on Law: What's Next to talk through the distinction between chatbots and agentic systems, what the Cowork security vulnerabilities actually mean in practice for law firms, and why asking non-technical users to detect prompt injections is not a reasonable security posture. You can watch the full conversation and read the accompanying Substack post here.

AI Security and Agentic Risk with Rok Popov Ledinski
Looking Ahead
📅 Next Tuesday at 2pm CET!
Schedule change: the show now airs on Tuesdays.
My next guest on Rok's Legal AI Conversations is J.P. Mohler, co-founder of General Legal, a YC-backed AI-native law firm that turns around commercial contracts in about an hour, with a flat fee structure and lawyers who live on Slack. J.P. has a background in both software engineering and law, and previously led cutting-edge legal AI prototypes at CaseText, including the first large-scale AI deployment in a DOJ investigation.
We discuss what an AI-native law firm actually looks like under the hood, why the billable hour is incompatible with genuine AI adoption, and what happens to client expectations when smart legal advice goes from taking days to taking minutes.
Each edition of Legal AI Brief brings practical lessons from firms using AI safely.

Rok Popov Ledinski
Founder | MPL Legal Tech Advisors





