Editors’ Note: Peter Kurie discusses the parallels between two American nonprofits that control major for-profit corporations: the OpenAI Foundation, on paper now the wealthiest charitable organization in the U.S., and the Hershey Trust, the subject of his 2018 book, In Chocolate We Trust. (This post has been revised to clarify the organizational nature of the OpenAI Foundation.)
OpenAI is, at this moment, the most important startup in the world, amid chatter that it’s becoming “too big to fail.” I began paying close attention in November 2022, when ChatGPT’s release sent shockwaves through Intel—where I was working as an anthropologist in product development—and across the broader tech industry. Since then, increasingly powerful AI models have captured the public imagination and redirected enormous investment, fueling an industrial bubble reminiscent of Victorian railway mania.
As OpenAI’s influence has grown, attention has turned to how it is governed. Following a recapitalization in October 2025, the OpenAI Foundation now controls the $500 billion OpenAI Group Public Benefit Corporation and holds roughly $130 billion in equity—making it, on paper, the wealthiest charitable organization in the United States. The same directors sit on the boards of both the nonprofit and the for-profit as the company prepares for a blockbuster IPO.
What could possibly go wrong?
OpenAI’s charity-controlled corporation is unusual in the U.S., but it has a clear precedent. I grew up in Hershey, Pennsylvania, where a charitable trust—the Hershey Trust, with assets around $23 billion—has controlled The Hershey Company for more than a century. Years later, as a graduate student in anthropology, I returned to study how that arrangement played out in the everyday life of the self-proclaimed “Sweetest Place on Earth.” My book, In Chocolate We Trust (University of Pennsylvania Press, 2018), examines the enduring American belief that business and philanthropy can work in harmony to serve the public good—so long as the government stays out of the way.
Hershey remains one of the longest-running experiments in linking private enterprise to public purpose. Its history reveals both the promise and the peril: philanthropy can harness corporate power for the public good, but only under relentless external scrutiny. OpenAI now faces the same test.
How Hershey’s Charity Has Controlled a Fortune 500 Company for a Century
As legend has it, Hershey’s story begins with a dirt-poor Mennonite farm boy, Milton Hershey, who made a fortune in caramels at the turn of the 20th century and took a huge risk on mass-produced milk chocolate. He built a model company town around his factory, a hundred miles west of Philadelphia—providing workers and their families with housing and amenities including an amusement park. In 1918, after his wife’s death, Hershey transferred ownership of the chocolate company into a charitable trust, which supports the school he and his wife had founded for “the orphan boys of America.” The company went public in 1927.
Today, the Hershey Trust holds roughly 80% of the voting power and about 28% of the equity in the company, now a Fortune 500 corporation. Profits from candy sales fund the Milton Hershey School, a tuition-free residential school for children from low-income backgrounds, based in the town of Hershey—population 14,000—where the lampposts resemble Hershey’s Kisses. The trust is set up to exist in perpetuity, but its control of the company is not mandated; successive generations of trustees have chosen to maintain it.
For decades, this arrangement delivered real benefits. The trust supported the school and strengthened the wider community, funding healthcare, entertainment, and infrastructure. The company’s profits grew, and its unionized workforce remained local even as many American manufacturers shuttered plants.
Over time, the trust acquired stakeholders and expectations its founders had not foreseen. Employees, townspeople, school alumni, and local businesses came to rely on—and identify with—the arrangement, forming attachments that were as much civic and emotional as economic.
Then, in 2002—worried that the school had become too dependent on a single company’s fortunes—the trust’s leadership voted to sell its controlling stake. The community saw not prudence but betrayal; many believed the small, insular group of trustees was serving its own interests, pursuing projects beyond the school’s core mission. A sale to Wrigley, a Chicago-based competitor, would almost certainly have meant relocation or downsizing. Hershey risked becoming yet another deindustrialized Rust Belt town.
That’s when civil society spoke up—and the state stepped in. The grassroots campaign to “Derail the Sale” drew the attention of both major gubernatorial candidates, one of whom was Pennsylvania’s attorney general, who has oversight of charities in the state. His office petitioned a local court, which agreed to halt the sale on the grounds that it would harm the public interest. The trust’s leadership backed down. Several trustees resigned. Victory in Hershey was declared.
Pennsylvania revised its oversight rules to require attorney general review of any future sale of the trust’s controlling stake. Yet public trust in the Hershey Trust was never fully restored. The decades that followed brought accusations of self-dealing, repeated lawsuits, and further state intervention.
Hershey’s governance crisis showed that when a charitable organization controls a major corporation, its legitimacy hinges on trust from multiple publics—not only its direct beneficiaries. When that trust breaks down, the question of who holds power moves out of the boardroom and into the political arena.
OpenAI now stands in a similar position, though on an entirely different scale. The parallels are instructive. In Hershey, the charity’s leaders tried to give up control; at OpenAI, as we will see, they sought to assert it. Yet in both cases, leaders claimed to be acting for the public good—only to find their authority challenged by other stakeholders and, ultimately, mediated by the state.
How OpenAI Came to Resemble Hershey—With Far Higher Stakes
OpenAI was founded in 2015 as a nonprofit research lab with a mission to ensure that artificial intelligence benefits “all of humanity.” As the cost of developing advanced systems soared, the nonprofit’s leadership concluded it needed to raise capital in a way no traditional charity could. So in 2019, it created a for-profit arm—OpenAI LP—that remained under the control of the nonprofit. Microsoft soon became its largest investor. By 2023, after the release of ChatGPT, OpenAI’s for-profit arm had become one of the world’s most valuable startups—still governed, unusually, by a small nonprofit board with no financial stake in the outcome.
It was this hybrid structure that set the stage for what came next.
In November 2023, OpenAI’s board abruptly removed CEO Sam Altman, citing a “lack of candor”—a move that exposed tensions over whether rapid commercialization was compromising safety. Nearly the entire workforce threatened to resign, and Microsoft signaled it would hire them en masse. Under intense pressure, the board reversed course. Several directors resigned. Altman returned.
Closed-door negotiations followed among OpenAI’s new leadership, investors, and the attorneys general of California, where the nonprofit operates, and Delaware, where both the nonprofit and the for-profit are incorporated. Behind the legal formalities lay a power struggle: OpenAI hinted at relocating if California’s charitable oversight proved too heavy-handed, while officials in both states sought to preserve their authority and economic interests. The governance structure announced in October 2025 is the result of that uneasy compromise.
The for-profit company is now a Public Benefit Corporation (PBC), a corporate form that must weigh social purpose alongside profit. That, in itself, isn’t unusual—Ben & Jerry’s and Warby Parker are both PBCs. What is unusual is ownership and control: the OpenAI Foundation holds equity valued at roughly 26 percent of the company and, more importantly, special “Class N” governance rights that allow it to hire and fire the board and to approve or block major structural changes—giving the nonprofit effective control. As with Hershey, that control persists only as long as the charity’s leadership chooses to maintain it.
OpenAI now sits at the forefront of a new wave of mission-driven corporate experiments. Patagonia divided ownership and control between a private trust and a social-welfare organization devoted to environmental advocacy. Anthropic—OpenAI’s competitor—created a Long-Term Benefit Trust to shape board composition so that AI safety is prioritized. Both aim to hard-wire public purpose into private enterprise, but neither goes as far as OpenAI in giving a single charitable organization direct control.
Structurally, OpenAI most closely resembles the “industrial foundations” of Northern Europe—such as Denmark’s Novo Nordisk Foundation (the world’s wealthiest philanthropy, thanks to Ozempic and Wegovy). In the U.S., that model was largely curtailed by the 1969 Tax Reform Act, which imposed steep penalties on private foundations that held controlling stakes in businesses. Hershey—organized as a charitable trust rather than a private foundation—was grandfathered in, becoming the most prominent American example of an industrial foundation. OpenAI’s new structure revives elements of that older model, with significant implications for corporate governance and democratic oversight in our AI era.
How Charitable Control Without External Checks Threatens Public Trust
The greatest vulnerability facing OpenAI may be the same one that broke trust in Hershey: the perception that a small group in charge is serving its own interests rather than the public interest.
In Hershey’s case, the charity’s leadership was seen as selling the company to pursue projects beyond the school’s core mission; in OpenAI’s, the concern is that leadership might prioritize the company’s commercial success over the nonprofit’s public purpose, particularly its safety mission.
On paper, the OpenAI Foundation and the Public Benefit Corporation share the same goal: “to ensure that artificial general intelligence benefits all of humanity.” In practice, they answer to different constituencies. The nonprofit is accountable to the public; the for-profit must also answer to shareholders. The two entities maintain separate boards in name, but the same small group of people serves on both—concentrating more control than even Hershey ever did.
That tight alignment may keep the organization unified, but it also binds the same directors to competing obligations: advancing the company’s commercial success while upholding the nonprofit’s mission. With no independent check, it’s unclear whether decisions serve the public interest or merely business growth.
The mission itself compounds the risk: “Benefiting all of humanity” is broad enough to justify almost any decision, inviting the appearance of conflicts of interest.
The OpenAI Foundation’s initial $25 billion plan illustrates the tension. Its “AI resilience” initiative focuses on cybersecurity and the risks posed by advanced systems—including those created by OpenAI’s own models. These are legitimate public concerns, yet they also align closely with the company’s business interests. When the nonprofit funds efforts to mitigate the risks of its own technology, it inevitably raises questions: Is it serving the public good, or making OpenAI’s products more commercially viable? Even when both aims coincide, the appearance of conflict is hard to avoid.
The deeper risk may not be technological, but civic: the erosion of public trust that follows when AI governance appears unaccountable. In California, where the nonprofit operates under the attorney general’s oversight, that trust will depend on whether civil society and state regulators prove equal to the task.
How External Checks on Charitable Control Protect Public Trust
The same structure that creates the appearance of conflicts of interest also provides levers for public accountability.
In a charity-controlled corporation, the business itself follows ordinary corporate law, while the charitable organization is governed by state charity law—which requires its leaders to act in service of a public purpose. That split in legal obligations matters because it creates an external check: if the charity drifts from its mission, the state can step in. Traditional corporations—and even Public Benefit Corporations—have no comparable safeguard; their social commitments are largely discretionary, enforced by shareholders and market pressure, not the state.
In Hershey, it was civic pressure that moved the state to act. Intervention came only after sustained public agitation—workers, residents, alumni, journalists, and former trustees asking questions and demanding answers.
OpenAI now faces similar, but more organized, scrutiny. EyesOnOpenAI, for example, is a coalition of more than sixty California-based foundations, labor unions, and nonprofits working to ensure that AI serves the public good. It monitors OpenAI’s governance and spending, advocates for transparency, and coordinates across civil society. This coalition can’t compel action, but it can rally stakeholders and make inaction politically costly.
The attorney general, an elected official in California, holds real power to intervene—by investigating potential misuse of charitable assets, by demanding corrective action or changes in governance, and by going to court to enforce a nonprofit’s public purpose. California courts can back such actions, and the legislature can strengthen the state’s hand further. The federal government remains, as ever, a wild card.
Any major restructuring or mission change at OpenAI now triggers oversight in Sacramento. That level of state involvement is rare in American corporate governance—because charity-controlled corporations are so uncommon in the U.S.—and virtually unheard of in Silicon Valley.
As OpenAI lays groundwork for a potential trillion-dollar IPO, California’s attorney general stands as a key check on the organization’s direction—occupying one of the most consequential oversight roles in the country, if not the world.
“We will be keeping a close eye on OpenAI,” Attorney General Rob Bonta announced on October 28.
The public—and California’s voters—will be watching, too.
-Peter Kurie
Peter Kurie is a Los Angeles–based cultural anthropologist and author of In Chocolate We Trust (University of Pennsylvania Press, 2018). A former user experience researcher at Intel, he now helps foundations and technology firms navigate the intersection of innovation and social impact.