48 Comments
Andrew:

Agreed that the distinction between large and small matters is an important one. There's just no way to mark up a 10-page contract for a small transaction in a way that is cost-effective, but a prompt to "revise this to be more seller friendly" will get you 90% of the way there in 5 minutes, and in another 10 I can spot-check which changes are appropriate and which are unreasonable or unnecessary.

Justin Curl:

This is another great example of how the biggest bottleneck for many lawyers when it comes to AI is the cost of verification. I really think more legaltech companies should be trying to design new features that can lower these costs.

Liz:

I’m a lawyer and hard agree on points 2 and 3 (high stakes = less value, and AI is most useful when outputs are verifiable).

I’m not sure I agree it’s less useful for experienced attorneys. I think it is probably most useful, though, for middling attorneys who know enough through practice to sniff out bullshit but don’t have such expertise that they outpace the LLM.

For me, it lets me do analyses I normally couldn’t. The coding features mean I can run Python programs doing a Monte Carlo analysis on some risk I’m trying to quantify. It’s not predictive or perfect, but it generates a conversation with my clients that I couldn’t otherwise have (I’m in-house).
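Something like this minimal sketch is the shape of what I mean; every number in it is a hypothetical placeholder, not a figure from a real matter:

```python
import numpy as np

# Minimal Monte Carlo sketch for quantifying a legal risk.
# All inputs below are hypothetical placeholders.
rng = np.random.default_rng(seed=1)
n_trials = 100_000

p_claim = 0.15          # assumed chance the adverse event occurs
loss_median = 500_000   # assumed median loss if it does
loss_sigma = 0.8        # assumed spread (lognormal shape parameter)

# Each trial: does the event occur, and if so, how big is the loss?
occurs = rng.random(n_trials) < p_claim
losses = np.where(
    occurs,
    rng.lognormal(mean=np.log(loss_median), sigma=loss_sigma, size=n_trials),
    0.0,
)

# Summary statistics to discuss with the client.
print(f"Expected loss:    ${losses.mean():,.0f}")
print(f"95th percentile:  ${np.percentile(losses, 95):,.0f}")
print(f"P(loss > $1M):    {np.mean(losses > 1_000_000):.1%}")
```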

Justin Curl:

This is a cool use case. I always find it more interesting when people use AI to expand what's possible rather than replace what they already do.

And on the experienced-attorneys point, my guess is we might be thinking about "experienced" differently. I would treat middling attorneys (perhaps 4th- or 5th-year associates) who can sniff out bullshit as the more experienced attorneys. Because they have existing ways of doing things that are pretty efficient and reliable, there's a higher baseline of productivity that AI has to surpass to be worth adopting.

Kevin:

This comment is unrelated to the legal field here in the US, but I'm just back from visiting family in Ireland. My younger sister works with lawyers on tenders for contracts with the government and other agencies. Clients, in this case the government, have placed a burden of proof on anyone doing business with them to demonstrate where, what, who, and how they are using AI in anything submitted to their agencies, to the point of absurdity. The use of AI actually introduces more inefficiencies because of this client bias. In the case of the legal system, assuming we continue to operate on the basis of precedent, I don't see how AI is not significantly reducing the labor of the legal assistant profession at a minimum, as long as all output is sourced with the associated case law.

Justin Curl:

I like this point. It seems counterproductive for us to create procedural barriers that make it really hard to experiment with and deploy AI. This suggests AI not only has to be more efficient than the existing way of doing things, it needs to be SO efficient it can overcome whatever other barriers we might add. Reminds me of some of Tim's other posts on self-driving cars.

Sudeep:

I used AI extensively to fight an eviction case. Here are my observations.

You have to provide it references; otherwise it strays outside your jurisdiction.

It helped quite a bit with writing, but there are some issues; for example, it has a hard time formatting documents in pleading format.

Jim Dunning:

What struck me reading this was how familiar the objections sound — not just in law, but in education.

“Juniors won’t learn if they don’t do it manually” is the same argument middle and high school teachers use when defending legacy instruction: mistaking friction for rigor. We treat endurance through inefficient processes as proof of understanding, largely because those processes justify existing roles, hierarchies, and evaluation systems.

That raises an unasked question in both domains: if AI removes the drudgery, what is the senior professional’s value-add? Judgment, risk calibration, mentoring, taste — or just gatekeeping by time served?

If learning truly depends on recreating checklists by hand, that suggests a brittle training model, not a deep one. And resistance framed as “quality control” often collapses into something simpler: protecting legacy systems from exposure.

That instinct seems to underwrite many of the other objections in the piece (“it’s garbage,” “not worth verifying,” “too risky”) — not because the tools are static, but because they challenge the moral authority of how expertise has historically been earned.

AI doesn’t threaten competence so much as it threatens institutions that have confused process preservation with progress.

Jim Dunning:

Just consider how many secondary and higher education schools penalize students for using ChatGPT or just outright ban it ... and note the import of "only 28% of law firms are actively using AI, while ... 79% of legal professionals use AI in their firms", in addition to firms that outright ban it.

Melissa Steinman:

Isn’t judgment and risk calibration exactly what a senior professional adds to the equation? At least in law, the reason you generally hire one lawyer vs. another is their judgment and the experience it generally took to develop it—the combination not just of knowledge of the law but years applying the law to different sets of facts and circumstances, whether it’s negotiating contracts (or seeing how the other party reacts to one’s proposals at the table), arguing cases to a judge and seeing if those arguments are well-taken, counseling on regulatory compliance, or actually drafting those laws and regulations. AI can analyze lots of different patterns and documents. But it’s with the combination of AI and the experienced professional that one gets the best possible value. (There’s also the matter of calling out the hallucinations discussed in the news, which I have personally experienced as well.) The bigger challenge, as others have recognized, is how to train young lawyers.

Kevin:

Agreed. I think it is important to recognize that using AI doesn't, as a consequence, mean replacing the lawyer's judgment with AI's, at least not given where AI is at this point in time. Hallucinations, while still an issue, are much less of one in frontier models. Still a valid criticism, though.

Jim Dunning:

Before we go any further, I’d want both of you to define judgment.

Melissa, you list outcomes (negotiation sense, pattern recognition, counseling instincts), but that’s a description of experience accumulation, not a definition of judgment itself. What is the thing being exercised that AI supposedly cannot learn, simulate, or outperform — and why?

Kevin, you seem to default to “human judgment > AI judgment” as a premise rather than a conclusion. That may be true in some domains, but it needs an argument, not an assumption. Otherwise “judgment” risks becoming a polite stand-in for tradition, seniority, or sunk cost.

If judgment can’t be specified, tested, or taught except by time-served exposure, that’s not a defense of expertise — it’s an admission that we don’t actually know what value is being added, only who has historically been allowed to add it.

Shreeharsh Kelkar:

I think this gets into one of the classic issues here, discussed multiple times both in the history of AI and since the Enlightenment: is all knowledge formalizable? The point about judgment is that it isn't formalizable; it's what's called "tacit knowledge," which is knowledge that you know but can't say. Which, I suppose, can strike those who believe that all knowledge is formalizable as a dodge. One reason expert systems "failed" was that they were premised on the idea that what experts did could be broken down into formalizable rules, and that just didn't seem to work. (Although they only "failed" in the sense that they didn't replace experts; one might say most apps today are expert systems of some sort.) One of my favorite books on this is the sociologist Harry Collins' Artificial Experts.

Jim Dunning:

Thoughtful framing; and I agree that judgment involves tacit knowledge that can’t be fully reduced to rules. Collins and Polanyi are right about that. But I think this is precisely where the institutional issue begins, not where it ends.

The problem isn’t whether judgment exists — it clearly does. The problem is how often “tacit knowledge” functions as a black box that protects legacy training pipelines and hierarchies from scrutiny. Too easily, an epistemological truth becomes an organizational shield.

History suggests that when tools advance, experts are rarely eliminated, but they are redefined. Chess masters, pilots, radiologists, and engineers still matter, yet their authority now rests on higher-order synthesis, not on monopolizing routine cognition. The mystique of expertise shrinks even when expertise itself remains.

What AI threatens is not judgment, but the assumption that judgment must be cultivated through maximal inefficiency — that novices must suffer through legacy workflows because that is how it has always been done. That confuses epistemology with pedagogy.

In other words, even if judgment is tacit, it does not follow that the existing apprenticeship structure is optimal. Institutions often conflate “this cannot be formalized” with “this must be preserved as-is.”

That is why resistance so often appears as philosophical caution while functioning as structural self-defense.

Judith Stove:

Interesting survey, thank you. I'd be cautious about that widely-quoted METR analysis; this article seems to identify many of the problems with it (particularly, I'd say, with the 'human baseline,' without which the whole thing is meaningless):

https://arachnemag.substack.com/p/the-metr-graph-is-hot-garbage?utm_source=app-post-stats-page&r=18kjq3&utm_medium=ios&triedRedirect=true

Pam Hebert:

I write medicolegal summaries for Qualified Medical Examiners who testify in personal injury cases and agree 100% with the findings in your article. So far, it takes much more time to "proof" AI summaries than to create them from scratch, and there is the danger of missing critical information and losing appropriate context. Thanks for the concise and thorough capture of the current state of affairs. Great job.

Ann Taylor Schwing:

An interesting article, but I also expected to see discussion of AI citations to and quotes from nonexistent cases. This issue is presumably covered by general references to verification. With reported sanctions as high as $52,000 and multiple sanctions in the $10,000 range, surely the danger of sanctions and adverse impact on reputation merit more attention. Years ago, I clerked for a federal judge who placed cases first on the law and motion calendar when an attorney misquoted (e.g., omitting the "not") or miscited authorities, so the assembly of attorneys and clients awaiting hearing of their matters would all learn the painful lesson.

Conn. Yankee:

I’m an appellate lawyer with 30 years under my belt. I know the frontier models well. The lawyers to whom you spoke are systematically underestimating the power of the best models (some of which, e.g., Opus 4.5, are now available through Harvey, which recently acquired a model selector). If you don’t appreciate the power of a well-prompted frontier model to reason about even very complicated legal problems, you’re not doing it right. (Adam Unikowsky has written well in his Substack about the legal reasoning abilities of the prior generation of models.)

Conn. Yankee:

Speak of the devil:

'The Technology Is There': Supreme Court Practitioners Quietly Embracing AI

"People are gonna start being being embarrassed by not doing it if they feel that their legal services can be enhanced with the assistance of AI," said Supreme Court lawyer Adam Unikowsky.

https://www.law.com/nationallawjournal/2026/01/15/the-technology-is-there-supreme-court-practitioners-quietly-embracing-ai/

Kevin:

Nice to hear from someone in the field actively using the frontier models.

Justin Curl:

This didn’t make it in, but a partner I spoke with said she thought GPT-5 Thinking was more useful for refining her thoughts on complex tax provisions than a conversation with a 3rd-year associate.

Conn. Yankee:

The models are much better than most associates at issue-spotting. At least when I was in law school (mid-1990s), almost all the exams were “issue-spotters.” The professor would give you a Byzantine (and often quite entertaining) set of facts and your job was to identify and discuss the issues. This, of course, is something LLMs are almost built from the ground up to do. Gemini 3 Pro or ChatGPT-5.2 Pro would have easily made Law Review back in the day and might even have won the Fay Diploma and landed a Scalia clerkship! The practice of law of course involves much, much more than this, but it is one of the core skills.

Kai Williams:

I wish I knew more about the dynamics with law in an international context: I've heard from French people that LLMs will tend to assume common law concepts, which can make legal advice less helpful.

Pierre Brunelle:

Thank you. I hadn’t thought about the distinction between high-stakes work (less AI being used) and lower-stakes work. Very interesting.

Paul H:

Law should require complete accuracy. Whether it is due diligence, a contract or a pleading, there should be zero tolerance for any errors. This is a profession that fights over a comma.

But AI hallucinates. Some studies say 30% of the time.

How can a tool with a high error rate be appropriate for work products requiring zero errors?

Should lawyers now spend their time looking for AI errors in a haystack?

Justin Curl:

I imagine this is the hallucination study you're thinking of: https://law.stanford.edu/publications/hallucination-free-assessing-the-reliability-of-leading-ai-legal-research-tools/. While it's great, and I'm a huge fan of the authors, it is a few years old now, and I do think hallucination rates are far lower.

Miguel Rozas Pashley:

The only way I see AI working ethically with my lawyer is if lawyer-client privilege remains unscathed; that is, the AI and its servers are housed behind the law firm's firewall. I would not recommend any commercial AI be party to any legal affairs otherwise, except for open public research and due diligence matters. The paralegal AI should be protected by, and uphold, the same rights that lawyer-client privilege enjoys.

Kevin:

This is an interesting point. I use AI locally for this exact reason (not as a lawyer, but for privacy). The NY Times has a lawsuit against OpenAI which requires that they keep all data irrespective of OpenAI's privacy terms. I'm not sure how the larger law firms using AI are addressing this. Maybe someone reading this who does know can comment.

Jojo:

Whew. Won't it be great in our post-scarcity future when AI takes over (possibly elected), AI/robots do all the work, everything is provided for free, money is a historical footnote and lawyers are finally done away with?

B.C. Kowalski:

One interesting use case locally is that police are using AI-powered redaction tools. The introduction of bodycams creates a whole backlog of footage that can and will be records-requested, and it used to mean someone sitting there watching for potential redactions. Now the tool our local department bought does it, and the backlog went from weeks to a few days.

Justin Curl:

This perfectly gets at a trend with tech adoption I've been noticing: As technology increases the capabilities for one "side" (i.e., bodycams create more footage), there is an increasing appetite for using technology to improve capabilities for the other (i.e., redaction tool). Some other things it reminds me of are: (1) the offense-defense balance for cybersecurity; (2) bots / content moderation on social media.

Terry Maris, PhD:

My son-in-law works for a top tier DC law firm. He told me that their policy is to use AI with exceptional caution.

Justin Curl:

He might be at the same firm as one of the lawyers I interviewed.