13 Comments
Andrew

Agreed that large versus small matters is an important distinction. There's just no way to mark up a 10-page contract for a small transaction in a way that is cost-effective, but a prompt to "revise this to be more seller friendly" will get you 90% of the way there in 5 minutes, and in another 10 I can spot-check which changes are appropriate and which are unreasonable or unnecessary.

Justin Curl

This is another great example of how the biggest bottleneck for many lawyers when it comes to AI is the cost of verification. I really think more legaltech companies should be trying to design new features that can lower these costs.

Liz

I’m a lawyer and hard agree on points 2 and 3 (high stakes = less value, and AI is most useful when outputs are verifiable).

I’m not sure I agree it’s less useful for experienced attorneys. I think it is probably most useful, though, for middling attorneys who know enough through practice to sniff out bullshit but don’t have such expertise that they outpace the LLM.

For me, it lets me do analyses I normally couldn’t. The coding functions mean I can run Python programs doing a Monte Carlo analysis on some risk I’m trying to quantify. It’s not predictive or perfect, but it generates a conversation with my clients I couldn’t otherwise have (I’m in-house).
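As a minimal sketch of what that kind of Monte Carlo risk analysis can look like in Python (the liability probability, damages range, and percentile below are purely illustrative assumptions, not figures from the comment):

import random

random.seed(0)

N = 100_000                                      # number of simulated outcomes
liability_prob = 0.35                            # assumed chance of an adverse finding
damages_low, damages_high = 200_000, 1_500_000   # assumed damages range if liable

losses = []
for _ in range(N):
    if random.random() < liability_prob:
        # assume damages fall uniformly within the range when liability is found
        losses.append(random.uniform(damages_low, damages_high))
    else:
        losses.append(0.0)

losses.sort()
expected_loss = sum(losses) / N
p95 = losses[int(0.95 * N)]                      # 95th-percentile exposure

print(f"Expected exposure: ${expected_loss:,.0f}")
print(f"95th percentile exposure: ${p95:,.0f}")

Swapping in different distributions or probabilities is the kind of "what if" conversation the comment describes having with clients.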

Justin Curl

This is a cool use case -- I always find it more interesting when people use AI to expand what's possible rather than replace what they already do.

And for the experienced attorneys point, my guess is we might be thinking about "experienced" differently. I would treat middling attorneys (perhaps 4th/5th-year associates) who can sniff out bullshit as more experienced attorneys. Because they have existing ways of doing things that are pretty efficient and reliable, there's a higher baseline of productivity that AI has to surpass to be worth adopting.

Kevin

This comment is unrelated to the legal field here in the US, but I'm just back from visiting family in Ireland. My younger sister works with lawyers on tenders for contracts with the government and other agencies. Clients, in this case the government, have placed a burden of proof on anyone doing business with them to demonstrate where, what, who, and how they are using AI in anything submitted to their agencies, to the point of absurdity. The use of AI actually introduces more inefficiencies because of this client bias. In the case of the legal system, assuming we continue to operate on the basis of precedent, I don't see how AI is not significantly reducing the labor of the legal assistant profession at a minimum, as long as all output is sourced to the associated case law.

Justin Curl

I like this point. It seems counterproductive for us to create procedural barriers that make it really hard to experiment with and deploy AI. This suggests AI not only has to be more efficient than the existing way of doing things, it needs to be SO efficient it can overcome whatever other barriers we might add. Reminds me of some of Tim's other posts on self-driving cars.

Sudeep

I used AI extensively to fight an eviction case. Here are my observations.

You have to provide it references; otherwise it goes out of jurisdiction.

It helped quite a bit with the writing.

But there are some issues; for example, AI has a hard time formatting documents in pleading format.

Jim Dunning

What struck me reading this was how familiar the objections sound — not just in law, but in education.

“Juniors won’t learn if they don’t do it manually” is the same argument middle and high school teachers use when defending legacy instruction: mistaking friction for rigor. We treat endurance through inefficient processes as proof of understanding, largely because those processes justify existing roles, hierarchies, and evaluation systems.

That raises an unasked question in both domains: if AI removes the drudgery, what is the senior professional’s value-add? Judgment, risk calibration, mentoring, taste — or just gatekeeping by time served?

If learning truly depends on recreating checklists by hand, that suggests a brittle training model, not a deep one. And resistance framed as “quality control” often collapses into something simpler: protecting legacy systems from exposure.

That instinct seems to underwrite many of the other objections in the piece (“it’s garbage,” “not worth verifying,” “too risky”) — not because the tools are static, but because they challenge the moral authority of how expertise has historically been earned.

AI doesn’t threaten competence so much as it threatens institutions that have confused process preservation with progress.

Jim Dunning

Just consider how many secondary and higher-education schools penalize students for using ChatGPT or outright ban it ... and note the import of "only 28% of law firms are actively using AI, while ... 79% of legal professionals use AI in their firms", in addition to firms that outright ban it.

The Tech Geek

AI can be so useful for so many things. Excellent piece.

Judith Stove

Interesting survey, thank you. I'd be cautious about that widely-quoted METR analysis; this article seems to identify many of the problems with it (particularly, I'd say, with the 'human baseline,' without which the whole thing is meaningless):

https://arachnemag.substack.com/p/the-metr-graph-is-hot-garbage?utm_source=app-post-stats-page&r=18kjq3&utm_medium=ios&triedRedirect=true

Pam Hebert

I write medicolegal summaries for Qualified Medical Examiners who testify in personal injury cases and agree 100% with the findings in your article. So far, it takes much more time to "proof" AI summaries than to create them from scratch, and there is the danger of missing critical information and losing appropriate context. Thanks for the concise and thorough capture of the current state of affairs. Great job.

Ann Taylor Schwing

An interesting article, but I expected also to see discussion of AI citations to and quotes from nonexistent cases. This issue is presumably covered by general references to verification. With reported sanctions as high as $52,000 and multiple sanctions in the $10,000 range, surely the danger of sanctions and adverse impact on reputation merit more attention. Years ago, I clerked for a federal judge who placed cases first on the law and motion calendar when an attorney misquoted (e.g., omitting the "not") or miscited authorities so the assembly of attorneys and clients awaiting hearing of their matter would all learn the painful lesson.