Inversion

October 29, 2025

An important question in developing LLM-compatible software in legal (and elsewhere) is whether the control model is "direct" or "inverted." In short: Do I ask my DMS to summarize a document, and it asks ChatGPT (direct), or do I ask ChatGPT, and it pulls the document from the DMS (inverted)?
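
The difference can be sketched in a few lines of code. This is a minimal illustration, not any vendor's actual API: `fetch_document`, `call_llm`, and the tool registry are all hypothetical stand-ins.

```python
def fetch_document(doc_id: str) -> str:
    """Stand-in for a DMS lookup."""
    return f"Contents of document {doc_id}"

def call_llm(prompt: str) -> str:
    """Stand-in for an LLM API call."""
    return f"[summary of: {prompt[:40]}...]"

# Direct control: the LegalTech product owns the workflow.
# It pulls its own data, builds the prompt, and calls the LLM.
def summarize_direct(doc_id: str) -> str:
    document = fetch_document(doc_id)
    prompt = f"Summarize this document:\n{document}"
    return call_llm(prompt)

# Inverted control: the LLM owns the workflow. The DMS just
# exposes a tool (think an MCP-style connector), and the model
# decides when to call it.
DMS_TOOLS = {
    "get_document": fetch_document,  # what the DMS chooses to expose
}

def summarize_inverted(user_request: str, doc_id: str) -> str:
    # The LLM determines it needs the document and invokes the tool.
    document = DMS_TOOLS["get_document"](doc_id)
    return call_llm(f"{user_request}\n{document}")
```

Same document, same model, same answer; the question is which side holds the steering wheel.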

From what I've seen, most LegalTech products are doing the former. I think there are a few reasons why, but I'm not sure they always hold up.

Prompt Engineering Is Hard: By using direct control, the LegalTech product takes control of the prompting, including not just a single prompt but the ability to construct chains of prompt-response-prompt-response (so-called "agentic" workflows). In the early days of LLMs, that made sense: there was an art to prompt engineering, and "agentic" workflows required additional programming. Today, that advantage is either gone entirely or vanishing fast. For one thing, users have gotten more experienced at prompting. More importantly, modern LLMs are very good at understanding whatever prompt you give them, and they can all now construct agentic workflows on their own, going back and forth with whatever data they can access. More than that, I'd argue that in many situations, especially given the trillions of dollars being spent on these models, AI companies are going to be BETTER at these things than a LegalTech company.
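
For readers who haven't seen one, an "agentic" workflow is just a loop: the model's reply can request a tool call, the harness executes it and feeds the result back, and this repeats until the model answers. A minimal sketch, with every name here an illustrative assumption rather than a real API:

```python
def run_agent(task: str, tools: dict, model, max_turns: int = 5) -> str:
    """Drive a prompt-response-prompt-response loop until the model answers."""
    transcript = [f"Task: {task}"]
    for _ in range(max_turns):
        reply = model("\n".join(transcript))
        if reply.startswith("CALL "):
            # The model asked for a tool, e.g. "CALL lookup doc-1".
            _, name, arg = reply.split(" ", 2)
            result = tools[name](arg)
            transcript.append(f"Result of {name}: {result}")
        else:
            return reply  # no tool requested: the model is done
    return transcript[-1]  # give up after max_turns round trips
```

The point of the paragraph above is that this loop, which LegalTech vendors once had to hand-build, now ships inside the frontier models themselves.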

Security: The argument here is "don't give an LLM access to your data." But look, that's true in both scenarios. No one has a competitive "closed loop" LLM; everyone is sending data to first-party LLMs; and everyone, regardless of the control model, has to deal with the security implications of that, which are real. In both cases you are going to be relying on your LegalTech vendor to have constructed the proper security guardrails. In the direct control scenario, they have to protect against prompt injection and similar AI attacks. In the inverted control scenario, they have to ensure proper authorization and scope. And here too, you could make a good argument that AI companies are BETTER at dealing with AI-specific security concerns, whereas LegalTech companies are better at protecting access to data.
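
In the inverted model, "proper authorization and scope" means the guardrail lives at the tool boundary, before any data reaches the LLM. A hypothetical sketch, where the ACL and function names are illustrative assumptions, not a real product's API:

```python
# Which documents each user may read. In practice this would come
# from the DMS's existing permission system, not a hard-coded dict.
ACL = {"alice": {"doc-1", "doc-2"}}

def get_document(user: str, doc_id: str) -> str:
    """The only entry point the LLM connector is given."""
    if doc_id not in ACL.get(user, set()):
        # The model never sees data the user couldn't see themselves.
        raise PermissionError(f"{user} is not authorized to read {doc_id}")
    return f"Contents of {doc_id}"  # stand-in for the real DMS fetch
```

Nothing about the prompt matters here: however the model is coaxed, the connector can only return what the authenticated user was already entitled to.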

It's My Data: I think a lot of companies don't want to allow LLMs to connect to their software, not because of security but because they want to "leverage" the customer information they have to sell an LLM product. I don't want ChatGPT to access the documents I store; I'd rather sell you my AI summarization service. I don't want Claude to have access to my caselaw database; I want to sell you my AI research product. OK, but the question legal CONSUMERS should care about is "is that better?" Is your AI "wrapper" better than what would happen if you let ChatGPT connect, or are you just bundling?

To be clear, I'm not arguing the inverted control model is always right. My point is it’s a valid model and one that I think should—and will—gain more currency in LegalTech.
