There may only be two people who can stop OpenAI's for-profit pivot
Neither one of them is Elon Musk.
In September 2022, a few months before the release of ChatGPT, podcaster Guy Raz asked Sam Altman why OpenAI had been founded as a non-profit organization. Altman said OpenAI was trying to develop artificial general intelligence, which Altman thought “really does deserve to belong to the world as a whole.”
“It's gonna have such a profound impact on all of us that I think we deserve, like we globally, all the people, all of humanity, deserve a say over how it's used, what happens, what the rules are,” Altman said. He added that he was “very pro-capitalism,” but “AGI is sort of an exception to that.”
Altman and other OpenAI leaders have been saying stuff like this since the organization was founded. And it was more than just talk: OpenAI is one of the few prominent tech companies organized as a non-profit rather than a for-profit company. Or more precisely, OpenAI today is a non-profit organization that controls a for-profit subsidiary that’s also named OpenAI.
In recent months, the for-profit subsidiary has been raising billions of dollars to fund its next generation of AI models. And investors have gotten increasingly nervous that OpenAI’s unconventional structure could prevent them from getting a financial return.
To address those fears, OpenAI is trying to convert itself into a more conventional for-profit company. Under a proposal announced last December, the non-profit parent would give up control over OpenAI’s technology in exchange for tens of billions of dollars it could use for charitable purposes. In a recent blog post, OpenAI boasted that such a transaction would create “the world’s best-equipped nonprofit.”
But opponents believe that this would betray the commitments OpenAI made when it was founded in 2015.
“Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return,” the founders wrote in the 2015 blog post announcing the creation of OpenAI. “Since our research is free from financial obligations, we can better focus on a positive human impact.”
Transforming OpenAI into a for-profit company would run directly counter to this founding vision. The question is whether statements like this are legally enforceable.
Last year Elon Musk, who co-founded OpenAI and provided much of its early funding, sued OpenAI, arguing that a for-profit conversion would violate commitments Altman made to Musk at the time OpenAI was founded.
That lawsuit is currently being heard by a federal judge who seems sympathetic to Musk’s concerns. However, the judge could rule that Musk lacks standing to bring the lawsuit because he would not personally be harmed by OpenAI transforming itself into a for-profit company.
But two government officials almost certainly do have standing: California attorney general Rob Bonta and Delaware attorney general Kathy Jennings. So far, both officials have been noncommittal about the issue, though we know that Jennings is looking into it. If one of them decided to sue OpenAI, they’d have a much better chance than Musk of blocking a for-profit conversion.
“This technology belongs to humanity as a whole”

In 2018, OpenAI published a charter promising to try to “ensure that artificial general intelligence benefits all of humanity.” The document warned that AI development could become “a competitive race without time for adequate safety precautions.” To avoid that outcome, OpenAI pledged that “if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project.”
These and other commitments made in OpenAI’s charter “were considered to be binding and were taken extremely seriously” inside the company, according to a group of former OpenAI employees. The charter “was consistently emphasized by senior leadership at company meetings and in informal conversations.”
By 2019, it had become clear that the most promising path to AGI involved scaling up large language models, a project that would cost billions of dollars. So OpenAI created a for-profit subsidiary that received a $1 billion investment from Microsoft. To avoid undermining its founding principles, OpenAI inserted some unconventional terms in its investment agreements.
“It would be wise to view any investment in OpenAI Global LLC in the spirit of a donation,” OpenAI warned investors like Microsoft. “The Company may never make a profit, and the Company is under no obligation to do so.”
The non-profit parent company retained full control of the for-profit subsidiary. OpenAI capped the amount of profit Microsoft could earn from its investment. Microsoft got a license to OpenAI’s current and future technology, but the license didn’t include AGI—and OpenAI’s board got to decide what counted as AGI.
In June 2023, Bloomberg’s Emily Chang asked Sam Altman why the public should trust him.
“No one person should be trusted here,” Altman said. “I don’t have super-voting shares. I don’t want them. The board can fire me. I think that’s important.”
“The reason for our structure, and the reason it’s so weird, is we think this technology, the benefits, the access to it, the governance of it, belongs to humanity as a whole. If this really works, it’s quite a powerful technology. You should not trust one company and certainly not one person with it.”
A culture change at OpenAI

A few months later, the board did fire Altman—or at least it tried to fire him. On the Friday before Thanksgiving, the board announced that Altman had been terminated because he “was not consistently candid in his communications with the board.”
Altman fought back and quickly got Microsoft CEO Satya Nadella on his side. The two threatened that if Altman wasn’t reinstated, he would take a job at Microsoft and bring most of OpenAI’s staff with him. Faced with the potential disintegration of OpenAI, the board surrendered. Altman not only got his job back, he got a new, more deferential board.
OpenAI’s culture seems to have shifted dramatically since Altman’s return. In 2024, OpenAI suffered a series of departures by safety-minded employees, including the leaders of its Superalignment team. OpenAI had pledged 20 percent of its computing resources to that team’s AI safety work, but insiders say Altman never made good on that promise.
The OpenAI charter warns about the dangers of a “competitive race without time for adequate safety precautions.” That seems to be exactly the situation the industry is in right now.
“According to eight people familiar with OpenAI’s testing processes, the start-up’s tests have become less thorough, with insufficient time and resources dedicated to identifying and mitigating risks,” the Financial Times reported earlier this month. According to the FT, “staff and third-party groups have recently been given just days” to evaluate new models for safety. For comparison, OpenAI spent six months testing GPT-4 before releasing it to the public in 2023.
The OpenAI charter says the company should try to tap the brakes in this kind of situation. But OpenAI seems to be doing the opposite.
In a December 2024 blog post titled “Why OpenAI’s structure must evolve to advance our mission,” OpenAI announced that it intended to “transform our existing for-profit into a Delaware Public Benefit Corporation with ordinary shares of stock and the OpenAI mission as its public benefit interest.” The change, OpenAI said, would “enable us to raise the necessary capital” to stay on the cutting edge of AI development.
A public benefit corporation is a special kind of for-profit company that pledges to pursue goals beyond profit. Two of OpenAI’s leading competitors, Anthropic and xAI, are organized as PBCs. Anthropic is particularly known for its commitment to the safe development of AI technology, so there’s no inherent conflict between a PBC structure and a pro-safety culture.
But a PBC is still a for-profit company. In theory, a PBC is obligated to serve the public, but there’s no mechanism to force it to honor that obligation. So if OpenAI strips its non-profit parent of control, the newly independent for-profit will face the same commercial pressures as other companies.
Such a shift seems hard to reconcile with the pledges OpenAI has made over the last decade. The company would no longer be accountable to “humanity as a whole.” It would be accountable to a specific group of profit-minded investors.
In recent months, investors have poured billions of dollars into OpenAI. In the process, they’ve ratcheted up the pressure for OpenAI to complete a for-profit transformation.
OpenAI raised $6.6 billion last fall. The New York Times has reported that “if OpenAI did not change its corporate structure within two years, the investment would convert into debt.” That change would “put the company in a much riskier situation.”
Then last month, OpenAI raised $40 billion in a deal led by SoftBank. SoftBank pledged $30 billion, but there was a catch: if OpenAI failed to convert to a for-profit company by the end of 2025, SoftBank’s investment would be reduced by $10 billion.
Over the last decade, a lot of people supported OpenAI because they believed in its idealistic mission. Some early employees turned down higher salaries at big technology companies because they believed they could do more good at OpenAI. Some of them now feel burned as OpenAI prepares to renege on those earlier promises and convert itself into a conventional for-profit company.
The question is whether anyone can force OpenAI to honor its original commitments.
Elon Musk sues OpenAI—but there’s a catch

Elon Musk co-founded OpenAI and was its biggest funder during the first few years. Indeed, Musk was so prominent that many early news stories described OpenAI as an Elon Musk project. But Musk left the organization in 2018 after a feud with Sam Altman. In the years that followed, he became increasingly critical of Altman’s leadership.
Last year, Musk sued OpenAI, arguing that Altman had duped him into donating more than $40 million to OpenAI over five years.
“Altman feigned altruism to convince Musk into giving him free start-up capital and recruiting top AI scientists to develop technological assets from which defendants would stand to make billions,” Musk’s lawyers wrote. Now, Musk argues, Altman is trying to renege on commitments he made to Musk during OpenAI’s early years.
To win a lawsuit, a plaintiff doesn’t just need to show that a defendant did something illegal. The plaintiff must also show that he has standing: a direct, personal stake in the outcome of the case. And for Musk, that may not be easy.
For example, Dana Brakman Reiser of Brooklyn Law School argues that Musk is “surely the wrong person” to bring a lawsuit against OpenAI.
“Once gifts have been made, donated assets are no longer donors’ property, and they lose the authority to sue to protect them,” Reiser wrote last year. Reiser argued that strict enforcement of standing rules is necessary to shield nonprofit organizations from frivolous lawsuits.
A donor can’t sue simply because a non-profit used money differently than he expected. To gain standing, a donor needs to have a legally binding commitment from the non-profit promising to use money in a specific way.
Musk argues that his early email conversations with Altman created such a commitment. But OpenAI disputes that, arguing that those early discussions were too abstract and speculative to count as a binding contract.
In March, Judge Yvonne Gonzalez Rogers denied Musk’s request for a preliminary injunction that would have blocked OpenAI from converting to a for-profit entity. But her order signaled some sympathy for Musk’s point of view. She described it as a “toss up” whether OpenAI had made a legally binding commitment to Musk.
“Whether Musk’s emails and social media posts constitute a writing sufficient to constitute an actual contract or charitable trust between the parties is debatable,” she wrote. “The email exchanges convey early communications regarding altruistic motives of OpenAI’s early days and even include reassurances about those motives from Altman and [OpenAI co-founder and president Greg] Brockman when they perceived Musk as upset.”
At the same time, Judge Rogers wrote, “the emails do not by themselves necessarily demonstrate a likelihood of success.”
Later in her opinion, she wrote that “significant and irreparable harm is incurred when the public’s money is used to fund a non-profit’s conversion into a for-profit.” This suggests she is sympathetic to at least some of Musk’s legal arguments. But she might still conclude that Musk doesn’t have the legal standing required to win the lawsuit.
Two state politicians could hold OpenAI’s future in their hands

Charities are supposed to serve the public interest, but it would create too much chaos to allow any member of the public to sue non-profits on behalf of the public. This is why most states give the attorney general the power to file lawsuits on behalf of the public.
For OpenAI, the relevant states are Delaware (where OpenAI was incorporated) and California (where OpenAI has its headquarters). If the attorney general of either state wanted to sue OpenAI, they would very likely have standing to do so.
In December, Delaware attorney general Kathy Jennings notified Judge Rogers that she was looking into the legality of OpenAI converting itself into a for-profit company.
“The Delaware Attorney General has authority to review the Proposed Transaction for compliance with Delaware law by ensuring, among other things, that the Proposed Transaction accords with OpenAI’s charitable purpose and the fiduciary duties of OpenAI’s board of directors,” Jennings wrote. She added that she “has not yet concluded her review or reached any conclusions.”
Experts have told me that if Jennings did intervene, she wouldn’t have any trouble establishing standing. Rather, such a lawsuit would focus on the merits: would converting OpenAI to a for-profit company be in the public interest? And more specifically, would it be consistent with the charitable purpose described in OpenAI’s founding documents?
It’s likely that California attorney general Rob Bonta could also intervene if he wanted to. But Bonta seems less interested. In November, Musk tried to draft Bonta into the case by naming him as an “involuntary plaintiff” in the lawsuit. Bonta responded with a motion to be dismissed from the case. That doesn’t necessarily mean Bonta won’t file his own lawsuit in the future, but it doesn’t seem like a promising sign for opponents of OpenAI’s for-profit pivot.
If neither attorney general decides to intervene, the courts might decide that Musk and other private parties lack standing to bring a lawsuit. In that case, OpenAI would be free to convert itself into a for-profit company even if doing so would be contrary to every promise its leaders made in the early years of the company.