AI is just starting to change the legal profession
I talked to 10 lawyers about how they're using AI.
I’m pleased to publish this guest post by Justin Curl, a third-year student at Harvard Law School. Previously, Justin researched LLM jailbreaks at Microsoft, was a Schwarzman Scholar at Tsinghua University, and earned a degree in Computer Science from Princeton.
How much are lawyers using AI? Industry surveys vary widely: a Thomson Reuters report found that only 28% of law firms are actively using AI, while Clio’s 2025 Legal Trends Report found that 79% of legal professionals use AI in their firms.
To learn more, I spoke with 10 lawyers, ranging from junior associates to senior partners at seven of the top 20 Vault law firms. Many told me that firms were adopting AI cautiously and that the industry was still in the early days of AI adoption.
The lawyers I interviewed weren’t AI skeptics. They’d tested AI tools, could identify tasks where the technology worked, and often had sharp observations about why their co-workers were slow to adopt. But when I asked about their own habits, a more complicated picture emerged. Even lawyers who understood AI’s value seemed to be leaving gains on the table, sometimes for reasons they’d readily critique in colleagues.
One junior associate described the situation well: “The head of my firm said we want to be a fast follower on AI because we can’t afford to be reckless. But I think equating AI adoption with recklessness is a huge mistake. Elite firms cannot afford to view themselves as followers in anything core to their business.”
How AI can accelerate lawyers’ work
Let’s start with a whirlwind tour of the work of a typical lawyer — and how AI tools could make lawyers more productive at each step.
Lawyers spend a lot of time communicating with clients and other third parties. They can use general-purpose AI tools like Claude, ChatGPT, or Microsoft Copilot to revise an email, take meeting notes, or summarize a document. One corporate lawyer said their favorite application was using an internal AI tool to schedule due diligence calls, which was usually such a pain because it required coordinating with twenty people.
AI can also help with more distinctly legal tasks. Transactional lawyers and litigators work on different subject matter (writing contracts and winning lawsuits, respectively), but there is a fair amount of overlap in the kind of work they do.
Both types of lawyers typically need to do research before they begin writing. For transactional lawyers, this might be finding previous contracts to use as a template. For litigators, it could mean finding legal rulings that can be cited as precedent in a legal brief.
Thomson Reuters and LexisNexis, the two incumbent firms that together dominate the market for searchable databases of legal information, offer AI tools for finding public legal documents like judicial opinions or SEC filings. Legaltech startups like Harvey and DeepJudge also offer AI-powered search tools that let lawyers sift through large amounts of public and private documents to find the most relevant ones quickly.
Once lawyers have the right documents, they need to analyze and understand them. This is a great use case for general-purpose LLMs, though Harvey offers customized workflows for analyzing documents like court filings, deposition transcripts, and contracts. I also heard positive things about Kira (acquired by Litera in 2021), an AI product that’s designed specifically for reviewing contracts.
Once a lawyer is ready to begin writing, general-purpose AI models can help write an initial draft, revise tone and structure, or proofread. Harvey offers drafting help through a dialog-based tool that walks lawyers through the process of revising a document.
Finally, some legal work will require performing similar operations for many files — like updating party names or dates. Office & Dragons (also acquired by Litera) offers a bulk processing tool that can update document names, change document contents, and run redlines (comparing different document versions) for hundreds of files at once.
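The core of this kind of bulk processing is simple to picture in code. Here’s a rough Python sketch of the idea — applying the same set of substitutions across many documents and reporting how many changes each one received, so a lawyer knows which files to spot-check. This is only an illustration, not how Office & Dragons actually works; real tools also handle Word formatting, redlines, and review workflows.

```python
def bulk_update(documents, replacements):
    """Apply every (old, new) replacement to each document string.

    Returns the updated documents plus a per-document count of changes,
    so a reviewer can see which files were touched and verify them.
    """
    updated, change_counts = [], []
    for text in documents:
        count = 0
        for old, new in replacements.items():
            count += text.count(old)       # how many spots will change
            text = text.replace(old, new)  # apply the substitution
        updated.append(text)
        change_counts.append(count)
    return updated, change_counts

# Toy example: update a party name and a date across two documents.
docs = [
    "This Agreement is between Acme Corp and Beta LLC, dated 2023-01-01.",
    "Acme Corp shall notify Beta LLC within 30 days.",
]
new_docs, counts = bulk_update(
    docs, {"Acme Corp": "Apex Holdings", "2023-01-01": "2024-06-30"}
)
```

The per-document change counts matter as much as the edits themselves: a file with zero changes may signal an inconsistent party name that the substitution silently missed.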
You’ll notice many legal tasks involve research and writing, which are areas where AI has recently shown great progress. Yet if AI has so much potential for improving lawyers’ productivity in theory, why haven’t we seen it used more widely in practice? The next sections outline the common reasons (some more convincing than others) that lawyers gave for why they don’t use AI more.
AI doesn’t save much time when the stakes are high
Losing a major lawsuit or drafting a contract in a way that advantages the other party can cost clients millions or even billions of dollars. So lawyers often need to carefully verify an AI’s output before using it. But that verification process can erode the productivity gains AI offered in the first place.
A senior associate told me about a junior colleague who did some analysis using Microsoft Copilot. “Since it was vital to the case, I asked him to double-check the outputs,” he said. “But that ended up taking more time than he saved from using AI.”
Another lawyer explicitly varied his approach based on a task’s importance. For a “change-of-control” provision, which is “super super important” because it allows one party to alter or terminate a contract if the ownership of the other party changes, “you want to make sure you’re checking everything carefully.”
But not all tasks have such high stakes: “if you’re just sending an email, it’s not the end of the world if there are small mistakes.”
Indeed, the first four lawyers I talked to all brought up the same example of when AI is helpful: writing and revising emails. One senior associate said: “I love using Copilot to revise my emails. Since I already know what I want to say, it’s much easier for me to tweak the output until I’m satisfied.”
A junior associate added that this functionality is “especially helpful when I’m annoyed with the client and need to make the tone more polite.” Because it was easy to review AI-generated emails for tone, style, and accuracy, she could use AI without fear of unintentional errors.
These dynamics also help explain differences in adoption across practice areas. One partner observed: “I’ve noticed adoption is stronger in our corporate than litigation groups.”
His hypothesis was that “corporate legal work is more of a good-enough practice than a perfection practice because no one is trying to ruin your life.” In litigation, every time you send your work to the other side, they think about how they can make your life harder. Because errors in litigation are at greater risk of being exploited for the other side’s gain, litigators verify more carefully, making it harder for AI to deliver net productivity gains.
AI adds more value when verifying outputs is easier
The verification constraint points toward a pattern one associate described well: “AI is great for the first and last pass at things.”
For the first pass, lawyers are familiarizing themselves with an area of law or generating a very rough draft. These outputs won’t be shown directly to a client or judge, and there are subsequent rounds of edits to catch errors. Because the costs of mistakes at this stage are low, there’s less need for exhaustive verification and lawyers retain the productivity gains.
For the last pass, quality control is easier because lawyers already know the case law well and the document is in pretty good shape. The AI is mostly suggesting stylistic changes and catching typos, so lawyers can easily identify and veto bad suggestions.
But AI is less useful in the middle of the drafting process, when lawyers are making crucial decisions about what arguments to make and how to make them. AI models aren’t yet good enough to do this reliably, and human lawyers can’t do effective quality control over outputs if they haven’t mastered the underlying subject matter.
So a key skill when using AI for legal work is to develop strategies and workflows that make it easier to verify the accuracy and quality of AI outputs.
One patent litigator told me that “every time you use AI, you need to do quality control. You should ask it to show its work and use quotes, so you can make sure its summaries match the content of the patent.” A corporate associate reached the same conclusion, using direct quotes to quickly “Ctrl-F” for specific propositions he wanted to check.
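The “Ctrl-F” check these lawyers described is mechanical enough to sketch in a few lines of Python. The function below (hypothetical, not any product’s actual feature) pulls every quoted passage out of an AI-generated summary and confirms it appears verbatim in the source document, separating verified quotes from ones that need a closer look:

```python
import re

def check_quotes(summary, source):
    """Split quotes in an AI summary into (verified, unverified) lists.

    A quote is 'verified' if it appears verbatim in the source text —
    the automated equivalent of Ctrl-F-ing each cited passage.
    """
    quotes = re.findall(r'"([^"]+)"', summary)  # grab double-quoted spans
    verified = [q for q in quotes if q in source]
    unverified = [q for q in quotes if q not in source]
    return verified, unverified

# Toy example: one quote matches the source, one does not.
source = "The licensee may terminate this agreement with 60 days' notice."
summary = ('The contract says "terminate this agreement with 60 days\' notice" '
           'and "renews automatically".')
ok, missing = check_quotes(summary, source)
```

A check like this can’t confirm the summary’s reasoning is right, but it catches the cheapest class of error — a “quote” the model never actually found in the document.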
Companies building AI tools for lawyers should look for ways to reduce the costs of verification. Google’s Gemini, for example, has a feature that adds a reference link for claims from uploaded documents. This opens the source document with the relevant text highlighted on the side, making it easier for users to quickly check whether a claim matches the underlying material.
Features like these don’t make AI tools any more capable. But by making verification faster, they let users capture more of the productivity gains.
AI might not help experienced lawyers as much
Two lawyers from different firms disagreed about the value of DeepJudge’s AI-powered natural-language search.
One associate found it helpful because she often didn’t know which keywords would appear in the documents she was looking for.
A partner, however, preferred the existing Boolean search tool because it gave her more control over the output list. Since she had greater familiarity with documents in her practice area, the efficiency gain of a natural-language search was smaller.
Another partner told me he worried that if junior lawyers don’t do the work manually, they won’t learn to distinguish good lawyering from bad. “If you haven’t made the closing checklist or mapped out the triggering conditions for a merger, will you know enough to catch mistakes when they arise?”
Even senior attorneys can face this tradeoff.
A senior litigation associate praised AI’s ability to “get me up to speed quickly on a topic. It’s great for summarizing a court docket and deposition transcripts.” But he also cautioned that “it’s sometimes harder to remember all the details of a case when I use AI than when I read everything myself.”
He found himself hesitating because he was unsure of the scope of his knowledge. He didn’t know what he didn’t know, which made it harder to check whether AI-generated summaries were correct. His solution was to revert to reading things in full, only using AI to refresh his memory or supplement his understanding.
Many lawyers are unaware of AI use cases and capabilities
A prerequisite for adopting AI is knowing what it can be used for. One associate mentioned he was “so busy” he didn’t “have time to come up with potential use cases.” He said, “I don’t use AI more because I’m not sure what to use it for.”
A different associate praised Harvey for overcoming this exact problem.
“Harvey is nice because it lists use cases and custom workflows, so you don’t need to think too much about how to use it,” the associate told me. As she spoke, she opened Harvey and gave examples: “translate documents, transcribe audio to text, proofread documents, analyze court transcripts, extract data from court filings.” She appreciated that Harvey showed her exactly how it could make her more productive.
But there’s a tradeoff: the performance of lawyer-specific AI products often lags state-of-the-art models.
“Claude is a better model, so I still prefer it when all the information is public,” one lawyer told me.
Meanwhile, many lawyers take a dim view of AI capabilities. An associate decided not to try her firm’s internal LLM because she had “heard such bad things.”
Earlier I mentioned that incumbents Thomson Reuters and LexisNexis have added AI tools to their platforms in recent years. When I asked two lawyers about this, they said they hadn’t tried them because their colleagues’ impressions weren’t positive. One even described them as “garbage.”
But it’s a mistake to write AI tools off due to early bad experiences. AI capabilities are improving rapidly. Researchers at METR found that the length of tasks AI agents can reliably complete has been doubling roughly every seven months since 2019. A tool that disappointed a colleague last year might be substantially more capable today.
Individual lawyers should periodically revisit tools they’ve written off to see if they have grown more capable. And firms should institutionalize that process, reevaluating AI tools after major updates to see if they better meet the firm’s needs.
Pricing models can discourage (or encourage) AI use
The right level of AI use varies by client.
Billing by the hour creates tension between lawyer and client interests. More hours means more revenue for the firm, even if the client would prefer a faster result. AI that makes lawyers more efficient could reduce billable hours, which is good for clients but potentially bad for firm revenue.
Other pricing models align incentives differently. For fixed-fee work, clients don’t see cost savings when lawyers work faster. Lawyers, of course, benefit from efficiency since they keep the same fee while doing less work. A contingency pricing model is somewhere in the middle. Lawyers are paid when their clients achieve their desired legal outcome, so clients likely want lawyers to use their best judgment about how to balance productivity and quality.
One senior associate told me he used AI differently depending on client goals: “Some clients tell me to work cheap and focus on the 80/20 stuff. They don’t care if it’s perfect, so I use more AI and verify the important stuff.”
But another client wanted a “scorched earth” approach. In this case, the associate did all the work manually and only used AI to explore creative legal theories, which ensured he left no stone unturned.
Some clients have explicit instructions on AI use, though two associates said these clients are in the minority. “Most don’t have a preference and want us to use our best judgment.”
Clients who want the benefits of AI-driven productivity should communicate their preferences clearly and push firms for pricing arrangements that reward efficiency. For their part, lawyers should ask clients what they want rather than making assumptions.