They Were Going to Save Us From This. Then They Became This.
Don’t let a few megacompanies control AI.
This is the third in a series. The first two pieces are Dario Amodei: The Self-Appointed Ethics Czar for Planet Earth and Time for Open Source Large Language Models.
What follows is a combination of documented facts and my own conclusions. Where I am speculating, I will try to say so. The facts are there for anyone to verify. The conclusions are mine. You should draw your own.
There was a real idea at the center of OpenAI’s founding. Not just a marketing line — an actual, serious concern shared by serious people. The idea was that artificial intelligence was too powerful and too consequential to end up owned and controlled by a handful of large corporations. Google, Microsoft, Meta — the usual suspects — would inevitably try to capture it, monopolize it, and use it to entrench their own power. Someone had to make sure that didn’t happen.
So in 2015, a group of technologists, researchers, and funders — Elon Musk among them — created OpenAI as a non-profit. Open, as in open research. Shared, as in for everyone. The research would be published. The code would be available. No single company would own it. This was the explicit mission, and it attracted a community of researchers and developers who believed in it enough to contribute their work, often without meaningful compensation, because the cause seemed worth it.
That community’s contributions helped build something extraordinary. They also helped build something that would be taken from them without a second thought.
What followed is a story about how power actually works — and about how the people most loudly committed to preventing a bad outcome have a way of quietly producing it.
Billions of dollars has a way of doing that to good intentions.
Who Actually Invented This
Before getting into what OpenAI’s founders did and didn’t do, it’s worth establishing something that tends to get lost in the mythology: nobody in this story invented the underlying technology.
Google’s researchers produced some of the most important foundational work in the field. The transformer architecture, the technical breakthrough that sits underneath every major AI system today (including ChatGPT and Claude), was developed by Google researchers and published in the 2017 paper “Attention Is All You Need.” Google’s DeepMind had been doing serious AI research for years across a range of domains, including work involving human feedback in training, a concept that OpenAI would later formalize into the specific technique that became the industry standard. And as early as 2018, Google was publicly demonstrating an AI system called Duplex that could call a hair salon, speak naturally with a human receptionist, and book an appointment, entirely by voice. Whatever combination of techniques made that possible, the capability was already there, and it was impressive.
Google didn’t build the product, and the reason is worth examining. This is conjecture, but given the scale of Google’s contributions to AI research, it is not a stretch: the researchers clearly could have built it, and likely knew exactly where the technology was heading. The more plausible explanation is that management chose not to. Search was the golden goose, generating billions every quarter, and a capable AI assistant threatened to make it obsolete. If the AI can answer your question directly, there are no search results to click through, and no clicks means no ad revenue. Protecting the cash cow is a rational short-term decision. It is also, as history repeatedly shows, a catastrophic long-term one.
This is not a new mistake. It has a name in the business literature, the innovator’s dilemma, and it has claimed many companies before Google. Kodak invented the digital camera and buried it, fearing it would cannibalize its film business. Xerox PARC invented the graphical user interface, the mouse, and the foundations of what became the personal computer, years before Apple or Microsoft existed. The people running Xerox were copier salesmen making very good money, and they had no interest in understanding what their own researchers had built, let alone commercializing it. They watched Steve Jobs walk through their labs, take notes, and build a billion-dollar industry on what they had invented and ignored. You protect what you have until someone else builds what you were afraid to, or did not value, and then you spend a decade trying to catch up. In Google’s case, the people who eventually built the commercial chatbot products were working with architecture and techniques that Google researchers had pioneered and in many cases published for the world to see.
OpenAI commercialized it. Anthropic commercialized it. They deserve credit for execution — for building the products, scaling the systems, and taking the market risk. But the intellectual foundation was substantially not theirs. Keeping that straight matters, because the story these companies tell about themselves tends to obscure it.
The Man Nobody Mentions
In the early 1990s, Gerald Tesauro was a researcher at IBM’s Thomas J. Watson Research Center.
His work was not in language models. It was in reinforcement learning, a branch of AI in which a system learns on its own by doing rather than by being explicitly programmed. Without getting into the weeds, reinforcement learning plays a central role in large language models today and will continue to play a central role on the path toward AGI.
Working with hardware that would struggle to run a simple modern smartphone app, Tesauro built TD-Gammon: a neural network that taught itself to play backgammon through self-play and temporal-difference reinforcement learning, starting from nothing and eventually reaching near world-champion level. It was one of the first real-world applications of reinforcement learning ever demonstrated, and the first time a computer displayed something that looked, unmistakably, like high-level human thought: strategy, adaptation, judgment, not just pattern matching.
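To make the self-play idea concrete, here is a minimal sketch of the training loop TD-Gammon pioneered. It is not Tesauro’s code: tic-tac-toe and a lookup table stand in for backgammon and his neural network, and the learning and exploration rates are arbitrary. What it preserves is the essential move, a program improving its own value estimates purely by playing against itself, with no human examples.

```python
# A minimal, illustrative sketch (not Tesauro's system): tabular TD(0)
# learning from self-play. Tic-tac-toe stands in for backgammon, and a
# lookup table stands in for TD-Gammon's neural network.
import random

EMPTY, X, O = 0, 1, 2
values = {}                  # board state -> estimated probability that X wins
ALPHA, EPSILON = 0.1, 0.1    # illustrative learning / exploration rates

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != EMPTY and board[a] == board[b] == board[c]:
            return board[a]
    return None

def value(board):
    w = winner(board)
    if w == X:
        return 1.0                    # terminal: X won
    if w == O:
        return 0.0                    # terminal: X lost
    if EMPTY not in board:
        return 0.5                    # terminal: draw
    return values.get(board, 0.5)     # unseen states start neutral

def play_one_game():
    board, player = (EMPTY,) * 9, X
    history = [board]
    while winner(board) is None and EMPTY in board:
        moves = [i for i, cell in enumerate(board) if cell == EMPTY]

        def after(m):                 # value of the position after move m
            b = list(board)
            b[m] = player
            return value(tuple(b))

        if random.random() < EPSILON:
            move = random.choice(moves)                    # explore
        else:                                              # exploit
            move = (max if player == X else min)(moves, key=after)
        b = list(board)
        b[move] = player
        board = tuple(b)
        history.append(board)
        player = O if player == X else X
    # The TD(0) update: nudge each state's value toward its successor's.
    # This step is what lets the program teach itself with no human data.
    for s, s_next in zip(history, history[1:]):
        values[s] = value(s) + ALPHA * (value(s_next) - value(s))

for _ in range(30_000):
    play_one_game()
print(f"learned estimates for {len(values):,} positions; "
      f"value of the empty board: {value((EMPTY,) * 9):.2f}")
```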
AlphaGo, Google’s program that defeated the world champion at Go, a game that had resisted every technique that worked for chess and had remained out of reach for decades, was built on the same foundational ideas. The AlphaGo papers, published more than two decades later, cited Tesauro’s work directly. In my opinion, if he had had access to modern compute, what later made Google famous with AlphaGo would have been within his reach. There were no additional major puzzles to solve to make a world-champion Go program.
The connection runs deeper than AlphaGo. RLHF — Reinforcement Learning from Human Feedback — is one of the core techniques that makes ChatGPT and Claude behave like assistants rather than just text predictors. It is reinforcement learning with human preference judgments substituted for the game score. The conceptual lineage runs straight through Tesauro.
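To make that substitution concrete, here is a minimal sketch under stated assumptions. The two-dimensional “response features” and the synthetic preference data are invented for illustration; real systems fit a neural reward model to actual annotator judgments. What the sketch preserves is the structure: learn a reward from pairwise human preferences, then use it as the score the policy is trained against.

```python
# A sketch (assumed/illustrative) of the RLHF substitution: a reward model
# fitted to pairwise human preferences takes the place of a game score.
# The 2-d "response features," the synthetic preferences, and the plain
# Bradley-Terry logistic fit below are stand-ins for real model outputs,
# real annotator judgments, and a real reward-model network.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical preference data: humans saw response pairs (A, B) and
# recorded which one they preferred.
n_pairs = 500
true_w = np.array([2.0, -1.0])            # hidden "human taste" direction
feats_a = rng.normal(size=(n_pairs, 2))   # features of response A
feats_b = rng.normal(size=(n_pairs, 2))   # features of response B
p_true = 1 / (1 + np.exp(-(feats_a - feats_b) @ true_w))
prefer_a = rng.random(n_pairs) < p_true   # simulated human judgments

# Fit the reward model r(x) = w . x by maximizing the Bradley-Terry
# likelihood P(A preferred over B) = sigmoid(r(A) - r(B)).
w = np.zeros(2)
for _ in range(2000):
    p = 1 / (1 + np.exp(-(feats_a - feats_b) @ w))
    grad = ((prefer_a - p)[:, None] * (feats_a - feats_b)).mean(axis=0)
    w += 0.5 * grad                       # gradient ascent on log-likelihood

print("recovered preference direction:", w.round(2))

# The learned r(x) now plays the role the game score played in TD-Gammon:
# candidate responses are ranked by it, and in practice the language model
# is then optimized (e.g. with PPO) to produce high-scoring responses.
candidates = rng.normal(size=(5, 2))
print("index of best candidate by learned reward:", np.argmax(candidates @ w))
```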
There is a saying in science: the person who becomes famous for a discovery is often not the first person to make it, but the last.
The Organization That Wasn’t Protected
The founding ethos of OpenAI had a practical consequence that nobody fully thought through. Commercial organizations protect themselves. IP assignment clauses, trade secret agreements, retention structures — legal architecture designed to ensure that institutional knowledge cannot simply be walked out the door. These are standard features of any serious technology company precisely because the knowledge is the value.
OpenAI didn’t have that armor, or had it only weakly, because it contradicted everything the organization was supposed to be. You don’t build walls around open research. You don’t treat collaborators like potential competitors. The assumption was that people were there for the mission, not the money, and that the normal competitive threats didn’t apply to an organization explicitly founded for the benefit of humanity.
It was a noble assumption. It turned out to be worth billions of dollars to whoever recognized it could be exploited.
Act One: Sam Sees the Value
By 2020, GPT-3 had demonstrated that large language models worked at commercial scale. OpenAI had proved the concept — on the back of years of research, enormous compute costs, and the contributed labor of a community that thought it was building something for everyone.
Sam Altman saw what that was worth and acted accordingly. A non-profit research lab is not a vehicle for capturing the value of a technology that could reshape the global economy. So OpenAI changed. A capped-profit structure, created in 2019. A massive investment from Microsoft, one of the specific companies the founding was meant to counterbalance. A pivot toward commercial products. The “open” in OpenAI became increasingly difficult to locate.
Musk eventually sued, arguing the organization had betrayed its founding mission. You don’t have to be a Musk sympathizer to see the merit in that argument. OpenAI stopped being what it said it was. The researchers and contributors who gave their work to the mission did not give it to Microsoft’s balance sheet. Nobody asked them.
Altman is a skilled operator. He is also the man who took an institution built on a promise to the broader community and converted it into a commercial enterprise while that community watched. The non-profit board tried to stop him — firing him in November 2023 — and was overruled within days by investor pressure and employee revolt. The idealistic structure didn’t just fail to prevent commercialization. It couldn’t even survive one attempt to enforce it. That deserves to be said plainly.
Act Two: Dario’s Exit
In late 2020, Dario Amodei was VP of Research at OpenAI. He had been there nearly five years. He understood better than almost anyone what had been built, how it worked, and — critically — how little legal protection existed around the people who knew how to do it.
He left. He took his sister Daniela, who was VP of Safety and Policy. He took a carefully selected group of senior researchers and engineers — not a random collection of disgruntled employees but the specific people you would need to reconstruct what OpenAI had built, for yourself, somewhere else. Then he raised billions of dollars, built a commercial AI company, and told the world he did it because he was worried about safety.
Think about that for a moment. Dario was running research. Daniela was running safety and policy. Between the two of them, they had their hands on the two things they claimed to care most about. If Sam Altman didn’t care about safety, why did he have a VP of Safety and Policy? What exactly changed? Was Sam not allowing them to do their work? If so, that case has never been made clearly.
And if the concern was genuinely about safety, leaving OpenAI’s research organization leaderless and stripped of many of its best people was a strange way to show it.
At Apple, Google, or Meta, this story looks very different. Those organizations have spent decades building structural protections — IP assignment clauses, vesting schedules, trade secret agreements, non-solicitation agreements — that make walking out with the core team legally and financially painful, and in many cases immediately actionable. Had Dario tried this at Google or Meta, he would have faced lawsuits before the ink was dry on Anthropic’s incorporation papers.
OpenAI didn’t have those protections. Because it was never supposed to need them. That assumption was the gap Dario walked through.
The hole he left behind was not trivial. This was not a mature industry with a deep talent pool. In 2021, the people who genuinely understood how to build large language models at scale could be counted in the hundreds globally, and a disproportionate number of them had just left together as a complete, functioning team. Altman had to rebuild years of institutional knowledge in real time: why certain decisions were made, what the failed experiments had revealed, how to diagnose problems nobody had encountered before, all in a field where that knowledge barely existed anywhere else.
Dario didn’t just take people — he took the right people. As VP of Research he knew the organization better than almost anyone. He knew who the essential nodes were: the researchers who understood the core architecture, the engineers who had run the critical experiments, the people whose departure would pull others behind them. That’s how talent raids actually work. You don’t recruit everyone — you recruit the people others follow, and the network comes with them. The initial group almost certainly brought others in their wake, colleagues who trusted them and wanted to be part of whatever came next.
Altman did rebuild. GPT-4 came. ChatGPT happened. Which tells you something important: Anthropic’s head start wasn’t the product of Dario’s singular genius. It was the predictable result of leaving with the team that had already figured it out. Any competent group with those advantages would have had the same lead. What looks like visionary entrepreneurship, examined closely, is something more specific — a senior insider who identified a structural vulnerability in an idealistic organization, waited until the technology was proven, and walked out through a gap that the organization’s own founding principles had left open.
Most people would not spend years at a company, watch it build something extraordinary, and then — at the exact moment it was about to succeed — take the key people and go start a competitor. The law may have allowed it. OpenAI’s idealistic structure may have made it possible. But in my book, legally defensible and morally defensible are not the same thing.
In my opinion, this does not show him to be a moral beacon. That history casts a long shadow over Dario’s self-righteous moralizing about safety and responsibility. You can judge for yourself.
Is it safety and responsibility — or is it a rationalization to take something?
The Safety Story
I’ve written about Amodei’s safety claims at length in Dario Amodei: The Self-Appointed Ethics Czar for Planet Earth and Time for Open Source Large Language Models. The short version: I don’t buy it.
The safety narrative also assumes, without ever quite saying so, that Sam Altman is not concerned about safety — that Dario is the responsible one and Sam is the reckless one. That framing has been accepted largely because Dario keeps repeating it. But consider the alternative explanation for why he left: he needed to be in complete control, and everything had to be his way or the highway. The money that came with it is also impossible to ignore.
That origin story is less flattering.
In my opinion, what you see publicly is consistent with a person who genuinely believes he is the only one who truly understands what needs to be done — about AI safety, about how OpenAI should have been run, about what terms the Pentagon should accept. A Pentagon official who worked with him directly called it a god complex. That framing is harsh, but the behavior is documented. Undersecretary of Defense for Research and Engineering Emil Michael said it publicly on X: Amodei is “a liar” with a “God complex” who “wants nothing more than to try to personally control the US Military.” The dispute — over a $200 million contract and whether the Pentagon could use Claude for any lawful military purpose — collapsed publicly, with Anthropic walking away rather than bend its terms, before negotiations quietly resumed. That is not someone who entertains the possibility that he might be wrong.
People are very good at convincing themselves that what they want to do is also the right thing to do. Dario may believe every word of the safety narrative. That doesn’t change what he did, or the fact that it happened to make him a billionaire with absolute control over his own company.
Consider the timeline. OpenAI was founded in 2015. The core team built for five years. Dario left in December 2020, the moment GPT-3 proved the technology was worth billions, not before, when it was uncertain. He arrived at Anthropic with that team intact, that knowledge in hand, and that head start already banked. ChatGPT launched in late 2022 and Google sounded a “code red”; they were caught flat-footed. Serious competition didn’t arrive publicly until early 2023, and Anthropic launched Claude in that same window, in March 2023. In my opinion, that early competitive position had nothing to do with a superior approach to safety and everything to do with a five-year head start with the core team. The gap has been closing ever since as better-resourced organizations catch up. That’s what a head start looks like eroding. It is not what a unique safety insight looks like compounding.
The open source community, without the billions in private capital, is not far behind these frontier models, and it closed that distance quickly, with limited resources. If the safety approach were the secret ingredient, that shouldn’t be possible.
In my opinion, the secret ingredient is just a fable.
The Irony That Isn’t Irony
This story has been told before. MySQL was built in the open, by a company and a community of contributors, on the same basic promise: free, shared, for everyone. Sun Microsystems acquired it in 2008, and it landed in Oracle’s hands when Oracle bought Sun in 2010. It went from open source software that many people contributed to, to someone’s property.
An Honest Accounting
The AI technology that has been developed is remarkable.
Nobody in this story is a saint.
Musk helped found an organization on genuine principles and has pursued his own AI ambitions ever since, his motivations too entangled with his competitive interests to be taken entirely at face value.
Altman commercialized an institution built on a promise to the broader community, without asking that community’s permission, and has been celebrated for it.
Amodei identified a vulnerability, extracted what he needed, built a commercial competitor, and has been treated by much of the press as the conscience of the industry.
People act as though he built this from scratch — when what he actually did, in my opinion, was abscond with the people and the technology under the guise of safety. And by taking all those people, he didn’t just take the technology. He left OpenAI scrambling to rebuild a team that had taken years to assemble, set back by years at the exact moment the field was about to explode.
All three of them — Altman, Musk, Amodei — have since been aggressive about making sure nobody does to them what they did to get here. Terms of service prohibiting use of their models to train competing systems. Legal and technical barriers against distillation. The full arsenal of corporate protections that OpenAI’s idealistic founding deliberately went without. The research community that helped build the foundation they all stood on is now on the wrong side of their terms of service.
These are the people who now control one of the most consequential technologies in human history. They are all telling you a version of events that flatters themselves. And the thing they were all supposedly trying to prevent — a few powerful people owning and directing AI for their own purposes — is exactly what we have.
“AI for everyone” was traded for “AI for us.”
The least you can do is notice.
Addendum: The following is a controversial paper, but it offers another view of the whole history of machine learning and attempts to assign credit and put things in perspective more fairly: “Annotated History of Modern AI and Deep Learning” by Jürgen Schmidhuber. If you give the link to a chatbot, it can walk you through the paper and offer its own take.

