The Pentagon’s bombshell deal with OpenAI, explained
Only Congress can put meaningful limits on government abuse of AI.
On any other day, the record-breaking $110 billion fundraising round OpenAI announced last Friday would have captured the attention of the AI world. Instead, we were all captivated by the showdown between Anthropic and the Pentagon.
On Tuesday, Defense Secretary Pete Hegseth summoned Anthropic CEO Dario Amodei to the Pentagon. He demanded that Anthropic drop contractual terms prohibiting the use of Claude for mass surveillance of Americans and the operation of fully autonomous weapons. If Anthropic didn’t comply, Hegseth threatened to declare Anthropic a supply-chain risk — a designation that could prevent other government contractors from using Anthropic’s products.
Hegseth gave Amodei a deadline of 5:01 PM on Friday. But Donald Trump jumped the gun. At 3:47 PM, he declared on Truth Social that Anthropic was “A RADICAL LEFT, WOKE COMPANY” and directed “EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology.” Hegseth followed through on his threat and declared Anthropic to be a supply-chain risk.
According to Hegseth, this meant that “effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic” — though it’s not clear that the law gives Hegseth such broad powers.
A few hours later, Sam Altman stunned the AI world by announcing that OpenAI had reached its own deal with the Pentagon. Altman claimed that the Pentagon had agreed not to use OpenAI models for fully autonomous weapons or mass surveillance of Americans — the same restrictions the Pentagon had rejected when Anthropic asked for them days earlier.
The announcement initially left many observers — including me — confused. Did Altman really convince Hegseth to accept terms he’d just denied to Amodei? Or was OpenAI employee Leo Gao right when he described the guardrails in OpenAI’s contract as “not really operative except as window dressing”?
The contours of last week’s negotiations gradually became clear over the weekend. Altman and other OpenAI employees shared their perspectives on Twitter, including in a Saturday night ask-me-anything session. Senior officials from the Trump Administration also weighed in. News organizations such as the New York Times and the Atlantic have published behind-the-scenes details.
I’ve read all of this information carefully, and it sure looks to me like OpenAI gave the Pentagon what it wanted and undercut Anthropic in the process. The contractual language shared by OpenAI does not appear to meaningfully restrict the government’s ability to spy on Americans or build fully autonomous weapons.
But ultimately, I don’t think any contract was going to prevent the government from misusing AI. That’s going to take oversight — and eventually legislation — from Congress. We need ground rules that apply to all government use of AI, regardless of whose models are used.

A fight over mass surveillance
An underlying issue in last week’s fight was whether it was reasonable to take government promises at face value. To understand why many people are skeptical about that, you have to go back to the events of 2013.
At a March 2013 Senate hearing, Sen. Ron Wyden (D-OR) asked James Clapper, Barack Obama’s Director of National Intelligence, “Does the NSA collect any type of data at all on millions or hundreds of millions of Americans?”
Clapper answered “No sir, not wittingly.”
Three months later, an NSA contractor named Edward Snowden leaked documents showing that the government actually had obtained a court order to collect telephone calling records about millions of Americans from Verizon and other phone companies.
In a June congressional hearing, an Obama administration official defended the government’s legal rationale for this program. Under the law, the government could obtain business records if they were relevant to an ongoing terrorism investigation. The government had told the Foreign Intelligence Surveillance Act (FISA) court that every American’s phone records qualified. This outraged Rep. James Sensenbrenner (R-WI), who fumed that the government’s interpretation of the law made “a mockery of the legal standard.”
Given this history, you can understand why people might worry that OpenAI’s deal with the government will not meaningfully constrain the military. The agreement states that “handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose.” It adds that “the AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities.”
Notably, all of these laws and regulations were on the books prior to the Snowden revelations — and they didn’t prevent the government from collecting the phone records of millions of Americans.
During Saturday’s ask-me-anything session, Altman tapped a staffer named Katrina Mulligan to help him answer questions. Mulligan had spent a decade in the national security world before becoming OpenAI’s “first national security hire” in early 2024. She had been a key figure in OpenAI’s talks with the Pentagon.
Someone asked Mulligan whether the Pentagon might use OpenAI models to analyze “commercially available data at scale.” Mulligan replied that this wasn’t a concern because “the Pentagon has no legal authority to do this.”
But this doesn’t appear to be true. Just after Joe Biden took office in 2021, The Hill reported that “analysts at the Defense Intelligence Agency (DIA) have purchased databases of U.S. smartphone location data in recent years without a warrant.”
In the 2018 case Carpenter v. United States, the Supreme Court held that the Fourth Amendment required a warrant for the government to obtain someone’s location data from a cellular provider. But an internal DIA memo stated that the agency “does not construe the Carpenter decision to require a judicial warrant endorsing purchase or use of commercially-available data for intelligence purposes.”
OpenAI’s critics worry that vague language in the OpenAI contract provides the government with plenty of loopholes to engage in mass surveillance. For example, does buying bulk location data from a private company count as “unconstrained monitoring”? Most civil liberties groups would say yes, but the government might say no.
A core question: Do you trust the government?
In the wake of the Snowden revelations, many of Obama’s national security officials didn’t think they’d done anything wrong.
There were a handful of cases of clear-cut misconduct. For example, some NSA employees were caught using surveillance powers to spy on romantic interests. But the NSA said those incidents were “very rare” and that the perpetrators had been fired.
The major Snowden revelations weren’t like that. They showed the Obama Administration pushing the legal envelope to more effectively spy on terrorists, not to seek political advantage or personal enrichment.
And while transparency might sound nice in theory, the intelligence community believed it would have been impractical to ask Congress to explicitly authorize new surveillance programs. They believed that a public debate about a new surveillance program would have alerted terrorists to the program’s existence, undermining its effectiveness. So many officials believed they had struck a reasonable compromise: keep some programs secret from the public, but get approval from the FISA court and keep Congressional leaders updated.
The counterargument is that once mass surveillance infrastructure has been built, it will become available to future leaders who may be less scrupulous. So it might be a bad idea to allow mass surveillance even if you have total confidence in the current generation of government officials. And if a surveillance program is secret, the public doesn’t get to decide whether it’s too intrusive.
Someone’s views on these broader debates are inevitably going to color their thinking about last week’s bargaining between AI companies and the federal government.
Mulligan, OpenAI’s head of national security partnerships, has strong ties to the defense establishment. According to her LinkedIn page, she was working in the Obama Administration in 2013, where she “led the media and public policy response” to the Snowden disclosures. In 2024, she took a selfie at a Taylor Swift concert with Christine Wormuth, who was then Secretary of the Army under Joe Biden. So it’s not surprising that Mulligan believes Pentagon officials who insist that existing laws are sufficient to prevent abuse of AI.
Altman also seemed impressed by the sincerity of Pentagon officials. “I cannot overstate how much the DoW has been extremely aligned on this point,” Altman wrote in response to a question about mass surveillance.1
To be fair, OpenAI is not relying solely on the good faith of Pentagon officials. In a LinkedIn post, Mulligan wrote that OpenAI was implementing “layered safeguards including a prudent safety stack, limits on deployment architecture, and the direct involvement of AI experts in consequential AI use cases.” OpenAI says it will train its models to refuse problematic requests. It will also have engineers with security clearances working directly with the military to ensure that its activities are lawful.
It’s hard to know how effective this strategy might be at preventing misuse of OpenAI’s models. If the government were to set up a program of mass surveillance, it would be natural to split up the work across many model instances. If it did that, it’s not obvious that any single instance would have enough context to realize that it was participating in a program of mass surveillance.
And while it’s conceivable OpenAI’s forward-deployed engineers would realize what the government was doing, it’s asking a lot for them to blow the whistle on a classified program — a move that could damage their careers and even expose them to legal liability.
It’s not crazy for a company to decide the defense establishment is basically trustworthy, and that it wouldn’t be appropriate to second-guess the policy decisions of a duly elected president and his Senate-confirmed subordinates. But in my view it would have been better for OpenAI to be candid about the fact that it was breaking ranks with Anthropic.
What about killer robots?
So far I’ve mostly focused on mass surveillance, but Anthropic and OpenAI also consistently said they objected to the use of their models in fully autonomous weapons. I expect this to be a very important issue in the future, but I don’t think the stakes are very high in the short term. An AI model for an autonomous weapon needs to be fast, small, and good at spatial reasoning.
It’s certainly possible to build AI models like that — Waymo has been working on models optimized for autonomy, for example — but today’s frontier models simply aren’t suitable for the task. They require too much computing power to fit comfortably inside a drone or other mobile device. And they are not optimized for accurate real-time targeting.
Eventually we may have swarms with thousands or even millions of drones. But the US doesn’t have swarms like that yet, and frontier models don’t yet seem powerful enough to efficiently manage a fleet that large.
So the practical, short-term stakes of the companies’ language on autonomous weapons seem modest. With that said, OpenAI’s language on autonomous robots seems as toothless as its language on mass surveillance.
“The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control,” the contract says. It adds that “any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing to ensure they perform as intended in realistic environments before deployment.”
This falls well short of banning fully autonomous weapons. There’s a widespread misperception that US law currently bans fully autonomous drones, but in a piece last year, Michael Horowitz explained that this isn’t true.
Anthropic’s showdown with the Pentagon
This weekend we also got new details about Anthropic’s negotiations with the Pentagon. For example, in a Sunday story, The Atlantic’s Ross Andersen wrote that the Pentagon “would pledge not to use Anthropic’s AI for mass domestic surveillance or for fully autonomous killing machines, but then qualify those pledges with loophole-y phrases like ‘as appropriate’—suggesting that the terms were subject to change.”
Eventually, the Pentagon agreed to remove these qualifiers, but “the Pentagon still wanted to use the company’s AI to analyze bulk data collected from Americans” — things like GPS coordinates, credit card transactions, and Google search results. Ultimately, the two sides didn’t reach an agreement before the Pentagon-imposed deadline on Friday.
A Sunday story in the New York Times reported that by Friday afternoon, the parties only disagreed about “a few words about the issue of lawful surveillance.” But when Emil Michael, the Pentagon official leading the negotiations, tried to reach Amodei to hash out the best wording, he was told that Amodei was in a meeting and couldn’t come to the phone immediately.
A Sunday evening tweet from Michael seemed to confirm that government surveillance was a key sticking point, along with “as appropriate” language.
But he portrayed the discussion somewhat differently, claiming that Anthropic “wanted language that would prevent all [Department of Defense] employees from doing a LinkedIn search.” He added that “they wanted to stop DoW from using any *PUBLIC* database that would enable us to, e.g., recruit military services members or hire new employees.”
The Pentagon had leverage because it was simultaneously drafting a new contract with OpenAI. That process began when Michael called Altman last Wednesday. “Within a day, they had drafted a rough framework,” the Times reported. OpenAI’s accommodating stance presumably made it easier for Michael to take a hard-line stance in his negotiations with Anthropic.
On Saturday, I talked to Alan Rozenshtein, a law professor at the University of Minnesota, about the Pentagon’s plan to label Anthropic a supply-chain risk. He told me that the Trump Administration would face an uphill battle convincing a court to allow this.
Rozenshtein said the Pentagon was most likely to invoke a 2011 law called Section 3252. That law was intended to be used against foreign companies, and it’s not clear that it even applies to a US-based company like Anthropic.
“I’ve been scouring, I’ve had my research assistant scouring, we can’t find anything on this statute,” he told me. “I can’t find it being used.”
He said it was unprecedented to use a mechanism like this against a US company. Moreover, the decision to use the designation as a threat during the bargaining process could signal to the courts that the government’s rationale is pretextual.
Rozenshtein also believes that Hegseth’s stated rule — that no government contractor may have “any commercial activity” with Anthropic — is far too broad. If the law applies, it would likely only apply to a company’s work on military contracts. This would be a relief to a company like Amazon, which does a lot of federal business but has also invested billions of dollars in Anthropic. If Hegseth’s interpretation of the law were correct, Amazon would have a lot to worry about. But its stock price has been basically flat over the last week, suggesting that investors don’t consider the issue a serious threat.
I admire Anthropic for its principled stance, but ultimately I’m not sure even strong contractual restrictions would have made much difference. The Pentagon already has a deal in place with xAI that puts few restrictions on military use of AI. Moreover, open-weight models are already good enough for many surveillance activities, and they’ll presumably become suitable for even more in the coming months and years.
Indeed, even Dario Amodei believes that contractual agreements are only a stopgap solution to preventing abuse of AI models.
“In the long run, I actually do believe that it is Congress’s job,” Amodei said in a Saturday interview on CBS. He urged Congress to “catch up” with laws to limit domestic mass surveillance. And that may ultimately be the most important outcome of Anthropic’s battle with the Defense Department: getting the public, and through them, their elected representatives, to focus on dangerous applications of AI.
1. DoW is short for “Department of War,” Donald Trump’s preferred name for the Department of Defense.

