This is the full text of a post from Obsolete, a Substack where I write about the intersection of capitalism, geopolitics, and artificial intelligence. I'm a freelance journalist and the author of a forthcoming book called Obsolete: Power, Profit, and the Race to Build Machine Superintelligence. Consider subscribing to stay up to date with my work.
Earlier this month, OpenAI announced that it aspires to build "the best-equipped nonprofit the world has ever seen" and that it is convening a commission to help determine how to use its "potentially historic financial resources."
But critics view this new commission as a transparent attempt to placate opposition to its controversial plan to restructure fully as a for-profit — one that fails to address the fundamental legal issues at stake.
OpenAI is currently a $300 billion for-profit company governed by a nonprofit board. However, after an earlier iteration of that board briefly fired CEO Sam Altman in November 2023, investors reportedly began demanding that the company shed its quasi-nonprofit status.
"The story of OpenAI's history is trying to balance the desires to raise capital and build the tech and stay true to its mission," a former OpenAI employee told me. The current move, they say, is an attempt to "separate these things" into a purely commercial entity focused on profit and tech, alongside a separate entity doing "altruistic philanthropic stuff."
"That's wild on a number of levels because the entire philanthropic theory of change here was: we're going to put guardrails on profit motives so we can develop this tech safely," the former employee says.
Legal hurdles
The for-profit conversion faces significant unresolved legal challenges, including a lawsuit from Elon Musk arguing that his $44 million donation was contingent on OpenAI remaining a nonprofit and that the conversion would violate its founding charitable purpose. The case will go to trial this fall. The conversion can also be challenged by the California and Delaware Attorneys General (AGs), who are reportedly each looking into the case.
Musk's suit, OpenAI's gargantuan valuation, and the unprecedented nature of the conversion attempt appear to have attracted scrutiny.
Without mentioning OpenAI explicitly, California Assembly Member Diane Papan introduced a bill in February that would have blocked the conversion. However, the legislation was amended without explanation earlier this month to instead focus on aircraft liens. Papan's office has not replied to a request for comment.
OpenAI countersued Musk last week, asking a federal judge to halt what it called a "relentless campaign" of harassment designed to harm the company.
Adding fuel to the fire, a group of twelve former OpenAI employees, represented by Harvard Law Professor Lawrence Lessig, filed an amicus brief on Friday supporting Musk's challenge to the conversion.
The brief argues that OpenAI's nonprofit structure wasn't merely administrative — it was fundamental to the organization's mission and key to recruiting talent who were told they were building AI that would benefit humanity rather than shareholders. Former employees contend that removing the nonprofit's controlling role would constitute a betrayal of the trust that drew many of them to the company in the first place.
In an accompanying declaration, Todor Markov stated he left OpenAI after losing trust in leadership, concluding that the organization's Charter "had been used as a smokescreen, something to attract and retain idealistic talent while providing no real check on OpenAI’s growth and its pursuit of AGI." The proposed restructuring plan, he writes, "has only served to further convince me that OpenAI’s Charter and mission were used all along as a facade to manipulate its workforce and the public."
Markov reiterated this point in a written statement to me:
The fundamental question about the OpenAI corporate restructuring is whether the nonprofit will maintain legal control over the for profit. The announcement of the OpenAI commission does not address that question in any way, and so does nothing to alleviate the substantial concerns we raise in our amicus brief.
Fearing that billions in charitable assets could be transferred to private hands without sufficient oversight, a coalition of dozens of California-based nonprofits began organizing and urged the state's AG in January to investigate the OpenAI conversion, seeking transparency about the valuation process and demanding assurance that the nonprofit will remain truly independent from commercial interests.
One of the coalition leaders, LatinoProsperity CEO Orson Aguilar, says that the commission announcement reminded him of 2008, "when the financial institutions that helped crash the economy decided that the solution was teaching everyone else financial literacy."
OpenAI's original nonprofit mission was, and at least for now remains, to ensure AGI benefits all of humanity. This purpose is enshrined in its Charter, which defines AGI as "highly autonomous systems that outperform humans at most economically valuable work." In 2019, when the company spun up a for-profit arm to raise the billions needed to train increasingly expensive AI models, it gave the nonprofit board ultimate control. That board has a fiduciary duty to humanity — not shareholders.
OpenAI has not replied to multiple requests for comment.
The nonprofit's control over OpenAI became global news in November 2023, when the board dramatically exercised its authority by firing Altman — cryptically citing his failure to be "consistently candid." Altman orchestrated a swift comeback with the help of Microsoft and a revolt by employees, whose ability to sell billions in equity hung in the balance.
The Wall Street Journal recently shed new light on the firing, reporting that OpenAI executives collected dozens of examples of Altman's "alleged lies and other toxic behavior, largely backed up by screenshots," such as falsely saying the legal department approved a release without safety testing.
Illustration: Jan Feindt. Wall Street Journal, 2025.
The ouster was brief, but still served as a potent reminder to investors that the nonprofit board was, at least formally, in control and that its fiduciary duty was to all of humanity — not them.
Rose Chan Loui, founding executive director of the Lowell Milken Center on Philanthropy and Nonprofits at UCLA Law School, says that OpenAI's proposed commission "confirms their intent to make this nonprofit a typical corporate foundation." She continues, "Now can it do a lot of good? Yes, absolutely. But it's still, from our perspective, abandonment of their original purpose. Either that or it's a big stretch of their purpose."
Her colleague Michael Dorff, executive director of the Lowell Milken Institute for Business, Law, and Policy, also at UCLA Law, echoed this skepticism. "I'm trying not to be terribly cynical," he told me. "On the one hand, it's commendable that OpenAI is thinking of ways to use their tech to fulfill their nonprofit mission. But I don't think that should have anything to do with whether the nonprofit can abandon its mission."
"Pandering"
OpenAI's announcement describes a commission that will help it understand "the most urgent and intractable problems nonprofits face" and incorporate feedback from leaders in health, science, education, and public services — "particularly within OpenAI's home state of California."
That last detail isn't subtle, and critics see it as telling.
OpenAI is "pandering," the former employee says.
"Most of the nonprofit and philanthropic world doesn't care about AI safety. And presumably the California AG and the people who he cares about don't know anything about AI safety or the actual premise of OpenAI's purpose and mission," the former employee says. The specific mention of California in the plan for a "wildly well-funded science and education nonprofit," they say, makes "the pandering pretty obvious. So it feels like a bribe to California, to the California nonprofit sector — the sector that might be up in arms about this nonprofit conversion."
On Tuesday, OpenAI announced the advisors for this commission. The group will be convened by Daniel Zingale, a former senior advisor to California governors Gavin Newsom and Arnold Schwarzenegger. The advisors include iconic labor leader and civil rights activist Dolores Huerta, who cofounded United Farm Workers with Cesar Chavez; Monica Lozano, former CEO of the largest Spanish-language newspaper in the US; Dr. Robert K. Ross, former president and CEO of The California Endowment, an influential statewide healthcare foundation; and Jack Oliver, a lawyer and private equity partner who previously co-chaired Bono's ONE Campaign.
Dolores Huerta and Kamala Harris in 2017
None of the advisors immediately replied to requests for comment.
The advisors are tasked with gathering input on how OpenAI's philanthropy can tackle systemic issues and submitting their findings to the OpenAI board within 90 days.
The advisors have no clear experience in AI governance but do have deep connections to California and backgrounds in civil rights, education, and community advocacy. Their selection does little to dispel the criticisms that the commission is primarily an attempt to smooth the path for this conversion, particularly with the California AG.
The effort, at least with Aguilar of LatinoProsperity, seems to be falling flat. "Pandering is the right word," he told me prior to the announcement of the advisors. OpenAI's commission is "a reaction to the work of our campaign and others to try to make it seem as though they're listening, but they're distracting from the real questions," he says.
Those questions, according to Aguilar: Is OpenAI a truly independent nonprofit? And what is the real value of what it has right now?
In a follow-up email sent after the commission members were announced, Aguilar was even more pointed: "As impressive as OpenAI's advisory commission members may be, let's call this what it is — a calculated PR stunt to distract us from the real issue: OpenAI funneling nonprofit assets into private pockets."
Conflicts of interest
OpenAI's nonprofit board has a fiduciary duty to represent the interests of the public and the nonprofit, which includes ensuring that it is fairly compensated for whatever it gives up in the conversion.
In an email, Aguilar writes, "The fundamental question remains: How can a nonprofit commission maintain true independence when housed within an organization with significant commercial pressures?"
The IRS requires charities to publicly disclose their board members' conflicts of interest, and it views a lack of majority independence as a significant governance risk factor that may invite greater scrutiny. Chan Loui says that when OpenAI started the for-profit in 2019, it defined a director's independence by whether the director held equity in the company. However, she says, the law also looks at whether a director has financial interests in the organization's "partners," such as suppliers or customers.
OpenAI lists ten "independent" nonprofit board members on its site, including CEO Sam Altman. However, at least seven of these directors or their spouses have significant investments in companies that already do business with OpenAI, according to SEC filings, news reports, and Crunchbase data. This includes the board chair, Bret Taylor, who founded Sierra, a $4.5 billion AI startup that is a customer of OpenAI (Taylor has publicly committed to recusing himself from decisions in which he's conflicted).
The wrong question?
A lot of the conversation around OpenAI's restructuring has focused on whether the nonprofit will be fairly compensated for what it's giving up — namely, control over the for-profit and its claim to the profits that exceed the company's extremely high profit caps.
In February, Musk offered to buy the nonprofit's assets for $97.4 billion, in a likely attempt to derail the conversion or drive up the price the for-profit must pay the nonprofit for giving up control.
The nonprofit coalition has emphasized the importance of adequate compensation as well. Aguilar writes to me, "It's crucial that the Attorney General provide a fair market valuation of OpenAI's charitable assets to ensure proper oversight and protection of public interest."
However, others think this focus and the commission announcement miss the fundamental legal question: Does OpenAI's restructuring advance its charitable purpose?
The former employee says, "I would like the entire question about fair market value to be taken off the table because I think that's just the wrong question," one that "treats this as a normal corporate transaction where fair market value is what matters."
Given OpenAI's purpose is to ensure AGI is built safely, they ask, "What better position could you be in than literally controlling the company on the brink of building AGI? What amount of money could you get in the transaction to put you in a better position to realize that mission?"
They offer an analogy:
Imagine you are a nonprofit whose mission is to ensure nuclear technology benefits humanity. And you literally have a controlling interest in the Manhattan Project. And it's 1943. For what amount of money should you sell that interest?
Dorff agrees, stating plainly, "I don't see any amount of money that would allow the nonprofit to better pursue its mission." After all, he notes, it currently controls the market leader in the AI space.
That market leadership was recently underscored by OpenAI's Wednesday release of o3, its most capable "reasoning" model to date.
OpenAI says that o3 sets new state-of-the-art performance on difficult benchmarks for coding, math, and science, significantly improving on its predecessor, o1.
While OpenAI asserts the model remains below the "High" risk threshold defined in its Preparedness Framework, the relentless push toward more powerful and autonomous systems highlights the immense potential value and risk embodied in the technology the nonprofit currently oversees — and the very control it is being asked to relinquish.
Chan Loui calls the nonprofit's position "priceless." Its control of the company gives it power beyond what typical watchdog organizations can achieve, she says. The conversion is not, in her view, "just an interpretation of how to fulfill purpose," but rather "a change of purpose." "Under the law, they would need to go to court and say we have a basis for changing our purpose."
Dorff says, "I haven't seen anything remotely close to a justification for" a change in purpose. "It's a very steep burden to show that a nonprofit's mission is no longer viable."
OpenAI has benefited enormously from its nonprofit status. Chan Loui notes that the emails released as a result of the Musk lawsuit "demonstrate that their reasoning really was driven by recruiting needs" — a point supported by the ex-employee amicus brief.
"That was really the main benefit of going out there and saying, 'We're a nonprofit. We really care about developing AI safely and for the benefit of humanity,'" she says. "You can't just abandon your purpose now that you are in this position."
Safety shakeups
The conversion attempt comes amidst a year of headline-generating departures of OpenAI leadership and safety staff.
On Tuesday, I reported in Obsolete that Joaquin Quiñonero Candela had quietly stepped down from his role leading the team focused on catastrophic risks, less than nine months after the previous lead was reassigned without an announcement. Candela announced the move on LinkedIn, describing it as a transition to an "intern" role focused on healthcare applications.
Candela’s new swag
An OpenAI spokesperson said that safety governance is now consolidated under a Safety Advisory Group (SAG) chaired by Sandhini Agarwal, and that preparedness work is distributed across teams.
This marks yet another significant shakeup in OpenAI's safety leadership following a year of high-profile exits — including cofounders John Schulman and Ilya Sutskever, safety systems lead Lilian Weng, Superalignment co-lead Jan Leike, and Senior Advisor for AGI readiness Miles Brundage — and the disbanding of both the Superalignment and AGI Readiness teams.
And it comes amidst recent reports that OpenAI dramatically reduced safety testing times and released powerful new products like Deep Research and GPT-4.1 without promised safety reports, raising further doubts about the company's founding commitment to ensure that AGI is built safely.
The path forward
OpenAI's announcement states that commission members will submit insights to the board within 90 days. The board will "consider these insights in its ongoing work to evolve the OpenAI nonprofit well before the end of 2025."
That timeline is significant — the Wall Street Journal recently reported that if OpenAI fails to convert by the end of 2025, it will have to return $20 billion of the $40 billion it recently raised in a fundraising round valuing the company at $300 billion. The company's $6.6 billion investment from October 2024 carries a similar condition, requiring conversion by October 2026 to avoid potential investor clawbacks with ten percent interest.
OpenAI investor and SoftBank CEO Masayoshi Son and Sam Altman at an event in Tokyo in February. Photo: Kim Kyung-Hoon/Reuters
Chan Loui calls this "a very aggressive deadline" that regulatory authorities may struggle to accommodate. "You can't hurry the attorneys general. They don't really have a deadline," she says. "In California, you're to give notice of any significant transactions, which is what this proposed restructure is," and "there's no deadline for when they decide."
She speculates that these deadlines might be a move to speed up the regulatory authorities.
According to Dorff, there are only two ways the nonprofit mission could be enforced: through Musk's lawsuit, which he says definitely won't see a verdict before the end of this year, or through action by the California or Delaware AGs.
"The only way to meet that deadline that I can see is for OpenAI to settle with everybody," Dorff said. "Elon would have to agree," and OpenAI "would need some kind of indication of satisfaction from the AGs."
Aguilar says he recently met with executives at OpenAI, along with former Housing and Urban Development Secretary Julián Castro and fellow coalition leader Fred Blackwell. The OpenAI executives listened and were "very eager to get our thoughts on mission," Aguilar says, but shared no details. The meeting hasn't dissuaded the coalition, which Aguilar says has grown to around 50 organizations and recently filed an administrative petition calling on the California AG to investigate the conversion.
So while OpenAI works to construct the appearance of a graceful transition, the legal challenges remain daunting. No matter how well-resourced the spun-out nonprofit might be, many experts say it cannot replace the core mission enshrined in the original nonprofit.
"If OpenAI wants to give many billions to science and education in California, that's great. I'm very supportive of that," the former employee says. "But that's not its mission. They can't use that as an out in this situation."
OpenAI was founded as an alternative to the perils of letting commercial interests dictate the development of a potentially transformative — and dangerous — technology. A decade later, as the AI race it helped supercharge reaches unprecedented intensity, OpenAI is looking to shed one of the last vestiges of that original intent.
The former employee put it simply: "I view what's happening now is: the profit motive's winning. They have given up on the altruistic angle. They've given up on trying to be the good guy, and they just want to win."
If you enjoyed this post, please subscribe to Obsolete.