Every Promise Sam Altman Broke — With Receipts
In early March 2026, a Reddit thread on r/OpenAI compiled something that had been building for years: a structured, evidence-linked timeline of statements made by Sam Altman and OpenAI that were later walked back, contradicted, or quietly abandoned. The thread, titled "every promise Sam Altman broke with receipts," spread across Hacker News, X, and AI-focused newsletters within 48 hours.
This is not a court ruling. It is not a regulatory finding. It is a community-driven compilation of public statements, policy changes, and later reversals, organized into a format that readers can inspect link by link. That format is precisely why it resonated. Vague distrust is easy to dismiss. A dated list with sources is harder to wave away.
Why This Thread Hit a Nerve
The timing matters. By March 2026, OpenAI had completed its transition to a for-profit public benefit corporation. The company was valued at over $300 billion. GPT-5 and its variants were shipping on rapid cycles. Enterprise contracts with governments and defense agencies were public knowledge. None of this was secret. But the gap between the company's early rhetoric and its current reality had grown wide enough that someone finally put the two side by side.
The Reddit post did not invent new accusations. It organized known facts into a timeline that made the pattern visible. That pattern — stated mission, followed by reversal, followed by reframing — is what drove engagement. Readers did not need to agree with every item on the list to recognize the shape of the trajectory.
The Nonprofit Origin Story
OpenAI was founded in 2015 as a nonprofit research lab. The founding charter emphasized open research, safety, and a commitment to benefiting humanity rather than shareholders. Altman repeatedly described the structure as intentional — a safeguard against the concentration of AI power.
That structure was dismantled piece by piece. The capped-profit subsidiary was created in 2019. The cap was raised. The nonprofit board lost operational control during the November 2023 leadership crisis. By 2025, the transition to a full for-profit entity was underway. By early 2026, it was complete.
The receipts thread catalogs each step with links to original statements. The pattern is not one dramatic reversal but a series of incremental moves, each one presented as necessary and pragmatic, each one moving further from the original commitment.
The Equity Contradiction
One of the most discussed items on the list involves Altman's equity stake. In his May 2023 testimony before the U.S. Senate, Altman stated he did not own equity in OpenAI. This was technically accurate in a narrow sense — he held no direct shares.
Later reporting revealed indirect financial interests through investment vehicles and funds with exposure to OpenAI's valuation. The Reddit thread documented this gap between the literal statement and the economic reality. One commenter pushed back, arguing the original post overstated the claim. The post was edited to reflect that correction.
That correction is important. The community was refining the evidence in public, adding precision rather than just accumulating outrage. A thread that self-corrects is more credible than one that does not.
The Regulation Paradox
Altman's position on AI regulation has been, to put it charitably, adaptive. In 2023, he was the most prominent tech CEO calling for regulation. He told Congress that AI companies should be licensed. He endorsed the idea of a federal oversight body.
By 2025 and into 2026, OpenAI's public posture had shifted. Company communications increasingly warned about overregulation. Lobbying efforts targeted state-level AI safety bills. The European AI Act implementation was described as burdensome. The thread tracks this shift with specific examples, showing how the same executive moved from "please regulate us" to "please don't regulate us too much" over roughly two years.
This is not unique to OpenAI. Many tech companies follow the same arc: embrace regulation rhetorically during the growth phase, resist it during the monetization phase. But the receipts format makes the transition harder to obscure.
Safety Team Departures
The thread dedicates significant space to the departures of key safety and alignment researchers. Between 2023 and early 2026, multiple high-profile members of OpenAI's safety teams left the company. Some left publicly and critically. Others left quietly. The Superalignment team, announced with much fanfare in 2023, was effectively dissolved after the departure of its co-leads.
The receipts here include original announcements about compute allocations for alignment research, including the publicized commitment of 20 percent of secured compute, followed by reporting that those allocations were not fully delivered. The gap between the public commitment and the internal reality is documented through a combination of blog posts, interviews, and leaked internal communications.
A fair counterpoint: companies lose employees all the time, and departures do not automatically prove broken promises. But the pattern — safety-focused staff leaving, citing resource constraints and shifting priorities — is consistent enough to be notable.
Military and Dual-Use Applications
OpenAI's usage policies were updated in January 2024 to remove the blanket prohibition on military applications. The company framed this as a clarification rather than a reversal. The Reddit thread treats this framing skeptically, noting that the original prohibition was presented as a moral stance, not a technical limitation.
The thread links to OpenAI's partnership announcements with defense-adjacent organizations and contrasts them with earlier statements about not developing weapons or systems designed to harm people. Whether you view this as a pragmatic evolution or a broken commitment depends on your threshold for what counts as a promise.
The Community Correction Mechanism
What makes this thread structurally interesting is not the anger — anger at tech companies is abundant and often low-quality. It is the correction mechanism. Multiple commenters challenged specific claims in the original post. Some argued that certain items were matters of interpretation rather than clear reversals. Others added missing context that softened or complicated the narrative.
The original poster engaged with these criticisms and edited the post accordingly. By the time the thread went viral on Hacker News, it had already been refined through a round of community fact-checking. That process does not guarantee accuracy, but it produces something more rigorous than a typical complaint thread.
What This Means for OpenAI
OpenAI is not going to change course because of a Reddit thread. The company has $300 billion in valuation momentum, major enterprise customers, and a product pipeline that the market rewards. But the thread crystallizes something that individual complaints do not: a documented pattern of rhetorical commitments that did not survive contact with commercial reality.
For developers and enterprise buyers, the practical question is not whether Altman is trustworthy in some abstract sense. It is whether OpenAI's current commitments — on pricing, on API stability, on safety practices — should be treated as durable or as temporary positions subject to revision.
The receipts format answers that question implicitly. If the pattern of past commitments is one of incremental retreat, the rational expectation is that current commitments carry similar risk. That is not cynicism. It is pattern recognition.
The Bigger Lesson
The thread's popularity reflects a broader shift in how the AI community evaluates companies. Benchmarks and capabilities matter, but trust is becoming a first-class variable. When developers choose between models from competing providers, the reliability of the provider's commitments — on safety, on pricing, on open access — increasingly factors into the decision.
Sam Altman is not the only executive whose past statements could be compiled this way. But he is the one whose company most explicitly built its brand on trust, openness, and public benefit. When the receipts contradict the brand, the gap is louder.
The Reddit thread is a living document. It continues to be updated with new entries and community corrections. Whether it ages well or gets forgotten depends on what OpenAI does next. But the format — public claims, linked evidence, community revision — is likely here to stay. Trust verification through receipts is becoming a standard practice in AI discourse, and no company is exempt from it.