23 Comments
Oleg Alexandrov

Historically speaking, Congress has given the president a lot of freedom in day-to-day execution, and a lot of benefit of the doubt.

AI will do all that the military needs it to do, including surveillance and killer robots, as deemed necessary, and Congress will only prevent the worst excesses.

I think sooner or later Anthropic will also provide an AI that it cannot ultimately control, if the government gives enough reassurances (with fine print).

David M Lewis

Heaven forfend that I impute ambiguous motives to this administration, but it seems possible that the $25 million donation from the president of OpenAI to Trump’s super PAC may have also played a role: https://www.sfgate.com/tech/article/brockman-openai-top-trump-donor-21273419.php

Thom

I was surprised to see this missing from this otherwise very well written article.

Samuel

The Kushners are also investors in OpenAI.

Shreeharsh Kelkar

Thanks for the explainer. Tim, you write that you think that "it would have been better for OpenAI to be candid about the fact that it was breaking ranks with Anthropic." I agree. But do you have any guesses about why OpenAI chose not to be candid? It's not like an argument like "one can't dictate conditions to a democratically elected government about how it should use this software" is outside the bounds of debate. In my Twitter feed, I saw a tweet from Palmer Luckey saying something similar.

I guess I'm asking if it's more of an internal thing: did Altman take this tack because he doesn't want more employees to leave? Or is it because Altman thinks that there will be a Democratic president someday (as there surely will be), and making this claim now will help him when they undoubtedly negotiate to buy something from OpenAI? Or is there some third factor?

Shine

Third factor: Altman genuinely doesn’t care how the government uses its products.

Shreeharsh Kelkar

Yes, but then why not say that? Why say that they had the same or similar conditions as Anthropic and got the deal?

Timothy B. Lee

I can only speculate, but I think a couple of things are probably going on:

One factor: OpenAI employees probably don't favor mass surveillance or fully autonomous weapons, but more importantly I think the general public probably doesn't favor those things. I haven't seen any specific polling on this, but I would be willing to bet that even many Republicans aren't that excited about the use of AI for mass surveillance and automated weapons. So OpenAI wants to be able to say they're not doing those things for PR reasons. Not only that, the Pentagon might be happy to have OpenAI say this, since the Pentagon probably doesn't want the public thinking they're doing those things either.

Another factor: I think most Pentagon insiders genuinely don't think they're going to do mass surveillance, at least as they prefer to define the terms. I think if you'd asked NSA executives privately in 2013, they would have told you that the upset over the 215 program and PRISM was overblown because they weren't collecting information to spy on Americans; they were doing it to spy on terrorists. The mass collection was just a necessary step to spy on terrorists more effectively. I'm not saying that this position was reasonable, but I think something like it was widely and sincerely held by intelligence community insiders in 2013.

I expect something similar is happening here. People at the Pentagon — especially career civil servants — see themselves as doing good, important work, and they see "mass surveillance" as something that only other, bad people would do. So of course they're happy to say they're not going to engage in mass surveillance, while reserving the right to perhaps engage in some mass-surveillance-adjacent activities down the road. People like Katrina Mulligan are inclined to take them at their word when they say this, and she had Sam Altman's ear. So it's possible that Sam was not trying to mislead anyone in Friday's announcement or Saturday's AMA — he just hadn't thought very carefully about how the debate was likely to play out in practice. Does that make sense?

Shreeharsh Kelkar

This makes a lot of sense, thank you. I had forgotten to think about "public opinion," which, you're right, OpenAI does want to stay on the positive side of, and I can see why this would be a much bigger consideration than alienating some employees.

My take on Sam Altman, following much recent reporting, is that he is a ruthless pragmatist who has determined, above all else, that he wants to make OpenAI the next Google, and will do what it takes, organizationally and politically, to make it happen.

Timothy B. Lee

Yes, that's my read of Altman as well.

Simple John

Trump abhors runaway winners.

Anthropic seems to be winning in the domains that can actually benefit from AI.

Trump made up a reason to get some momentum back for OpenAI.

Klement Gunndu

The nuance in "On any other day, the record-breaking $110 billion fundraising round OpenAI announced last Friday would have captured th" is something most posts on this topic miss. Saving this for reference. The distinction you draw here is exactly what teams need to internalize before scaling.

Mark McEnearney

Re: ‘A Sunday story in the New York Times reported that by Friday afternoon, the parties only disagreed about “a few words about the issue of lawful surveillance.”’ Would love to know the wording used in the OpenAI contract.

Mark Vickers

Well done. That strikes me as the best reporting on this topic I've seen so far.

code_to_joy

Whatever shall we say about this topic in the context of less democratic nations... 🤔

S.Germenis

There has never been a weapons system that was not used, save a cobalt-tipped thermonuclear bomb. If you think the government will not conduct mass surveillance on Americans without authorization, then you are not informed. The history of abuse by the alphabet agencies is easily researched. The government, at the end of the day, does not believe it works for the people but exercises power independently.

Congress could pass massive oversight, but it would be disregarded.

Alayne Rhodes

OpenAI's "guardrails" are just a marketing stunt for tech that isn't ready; we need actual laws to prevent autonomous killing machines once the compute finally catches up.

Mark Dtayo

The consumer response made the red lines real. 295% uninstall spike. 1.5 million gone. Altman's own VP quit. By Monday he was calling the Pentagon begging to rewrite the deal. The people moved before any institution did. Full breakdown: tmaark.substack.com/p/hhhooollleee-sht

James Maconochie

Insightful, timely, and thought-provoking, Timothy. But I'd reframe the core problem: this isn't fundamentally about trust, it's about language. When the most powerful actors also control the definition of words like "lawful," "constrained," and "relevant," no contract can hold. Clapper's "not wittingly." Every American's phone record being "relevant."

OpenAI's contract appearing to constrain while actually enabling. This is what happens when shared meaning erodes.

Dario is right that Congress needs to act, but legislation faces the same vulnerability. The tax code is a cautionary tale: bloated, incomprehensible, and endlessly exploited. If there was ever a moment for simple, explicitly defined, internationally coordinated language around AI governance, this is it. Not 1,000 pages. Ten principles. Crystal clear.

ToxSec

really nice job on this article. thanks for the updates here, some of this wasn’t on my radar

Ross Grossman

The Clapper moment is the one that gets me. The government doesn't need to break the law. It needs the law written loosely enough that "complying" and "surveilling" are the same verb.

What I keep sitting with as a therapist: the NSA had to fight in court for bulk phone metadata. What people type into AI at 2am, their marriages, their addictions, stuff they've probably never said out loud, they just hand over. Voluntarily. To a system with the confidentiality obligations of a bulletin board.

No warrant. No Snowden. Just a terms of service nobody actually read.

Forty years of surveillance infrastructure, and it turns out you didn't need any of it. You just needed people to feel lonely enough.

I wrote about the civilian end of this, what it looks like when the AI roommate has keys to everything and zero legal obligation to protect it: https://thediagnosis.substack.com/p/aiden-moved-in-you-didnt-read-the . Would love your opinion on this...

Marcie Geffner | Mostly Books

Thank you for explaining this.

Petar Dimov

A detailed look at the Pentagon’s AI negotiations