Time for Open Source Large Language Models
This is a follow-on to my earlier piece, Dario Amodei: The Self-Appointed Ethics Czar for Planet Earth.
As I write this, President Trump has just ordered every federal agency in the United States to stop using Anthropic’s technology. Defense Secretary Pete Hegseth has designated the company a “supply chain risk” — a label normally reserved for foreign adversaries — and given agencies six months to phase out Claude, the AI model currently used in classified military systems. Trump posted on Truth Social: “We don’t need it, we don’t want it, and will not do business with them again!”
What did Anthropic do to deserve this? Its CEO, Dario Amodei, refused to let the Pentagon use Claude without their approval of the mission. Specifically, he drew two red lines: no autonomous weapons and no mass domestic surveillance of American citizens. The Pentagon says it has no plans to do either of those things. Amodei says the contract language would allow them anyway. Neither side blinked, and a $200 million contract just went up in smoke.
The Pentagon’s own undersecretary, Emil Michael, said Amodei “has a God-complex” and “wants nothing more than to try to personally control the US Military.” Elon Musk chimed in on X that “Anthropic hates Western Civilization.” And in a twist nobody saw coming, Sam Altman — Amodei’s rival at OpenAI — publicly defended him, telling CNBC he “mostly trusts” Anthropic and questioning whether the Pentagon should be threatening companies with the Defense Production Act.
Meanwhile, the Pentagon is already negotiating with OpenAI and Google to accept what Anthropic refused. Employees at both companies have signed an open letter expressing alarm.
And right on cue, the Democrats have lined up as bobbleheads. Senator Mark Warner called Trump’s directive an effort to “intimidate and disparage a leading American company.” Because if Trump did it, it must be bad — or at least complaining about it might help win the midterms. It doesn’t occur to Warner — or any of them — that Anthropic’s position is itself the problem. They’re so reflexively opposed to everything this administration does that they’ve stumbled into defending the right of an unelected CEO to dictate terms to the United States military.
They don’t understand the technology. They don’t understand the issue. And they’re not thinking through the overreach of having private companies dictate what other people — including the United States military — can do with products they’ve paid for. They just know which side Trump is on and they’re on the other one. That’s not governance. That’s a nervous tic.
Here’s the question nobody in Silicon Valley wants to answer: Who gave any of these people the authority to make these calls?
I Know These People
I’m an engineer, and I’ve spent fifty years working professionally with engineers. I’ve watched this breed my entire career. I know the type. They’re smart. But their estimation of their own brilliance is usually way out of line with reality — especially in this modern era, when the whole planet is engaged in solving these problems.
There was a time when the computers needed to even work on these problems existed only in a few research labs. Fast forward fifty years, and anyone with a laptop can work on them. The idea that a handful of people in San Francisco are uniquely qualified to lead this field is decades out of date. And a significant number of them are absolutely convinced that being brilliant at building things qualifies them to make moral decisions for the rest of us.
It doesn’t.
We have a word for the process by which a society decides what’s permissible and what isn’t. It’s called democracy. We elect representatives. Those representatives pass laws. Courts interpret them. The whole system is messy and slow and frustrating and it is the only legitimate mechanism we have for making collective moral choices.
These people did not invent this technology. The idea that they alone understand the dangers, and should therefore decide how it’s used, is a delusion of grandeur.
What we do not do — what we have never done in this country — is hand that authority to the CEO and engineering department of a private corporation.
The Cartel
The AI safety movement, as it currently operates, is driven by three men — Sam Altman at OpenAI, Elon Musk at xAI, and Dario Amodei at Anthropic — and few people who’ve watched them operate would argue that megalomaniac is too strong a word for any of the three. Between them, they are deciding what hundreds of millions of users are allowed to ask, know, see, and do with the most powerful information technology ever built.
And here’s the thing that should take the air out of all three of their egos: none of them invented this technology. The transformer architecture — the breakthrough that makes every modern AI model possible, including ChatGPT, Claude, and Grok — was invented in 2017 by eight researchers at Google. Their names are Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Łukasz Kaiser, and Illia Polosukhin. Quick — how many of those can you name? Probably none. They wrote the paper, changed the world, and moved on. None of them appointed themselves moral guardians of civilization. The three egomaniacs who did appoint themselves are standing on the shoulders of people they couldn’t be bothered to credit, telling the rest of us how the world should work.
And the transformer is just one piece. Reinforcement learning and dozens of other contributing fields were likewise developed by other people — researchers at universities, government labs, and companies around the world. I won’t try to map out every contribution here. The point is simple: armies of very intelligent, hardworking, quiet people around the world built this technology — not these megalomaniac CEOs, who did little but are happy to take credit for it all.
Success has many fathers. Failure is an orphan.
These companies write internal policies, then present the result as a product, as though the moral framework baked into it were a technical specification rather than a personal and political choice.
It is not a technical specification. When Claude refuses to help you draft an argument it finds distasteful, that refusal doesn’t flow from any objective criteria. When ChatGPT hedges on a factual question because someone in San Francisco decided the answer might be “harmful,” that’s not safety engineering. Those are editorial decisions — and when newspapers and television networks make editorial decisions, we at least have the option of choosing a different outlet.
With AI, you don’t get that choice. There are three, maybe four companies that matter. They all went to the same schools. They all live in the same cities. They all share the same assumptions about what constitutes responsible use. And they’ve decided, collectively, that their assumptions are the ones that should govern what the technology does for everyone.
Currently, the cost of building frontier models puts the choices in the hands of the few companies that can raise the money. Training can run hundreds of millions of dollars. That’s the moat, and they know it. But it’s not impenetrable. Mistral, the French AI lab, is a great competitive alternative — technically excellent, open-weight, and run by serious people. It can’t yet match the leading models because of resource limitations, but it does a fantastic job given the investment gap. I wrote about this in Why Mistral Is Cash-Starved in the LLM World. The short version: Mistral’s problem isn’t talent or vision. It’s that the American system lets companies burn billions and absorb failure. Europe hasn’t decided it can afford to be wrong. That needs to change — and not just in Europe.
We Don’t Let Ford Decide Who Gets a License
Let me be clear about something. I am not arguing that AI should have no guardrails. Nobody serious is arguing that. Automobiles kill 40,000 Americans a year, and we don’t let Ford decide who gets a license. Liquor destroys lives, and we don’t let Jack Daniel’s set the drinking age. Firearms are constitutionally protected, and Smith & Wesson doesn’t get to write the background check laws.
We regulate all of these things. But we regulate them through democratic institutions — legislatures, agencies, courts — not through the unilateral moral judgments of the people who manufacture them.
Manufacturers don’t get to decide unilaterally that Congress can’t do its job and that they must therefore usurp its power.
Alfred Nobel invented dynamite to help miners blast through rock. When people started using it to kill each other, he was horrified. But Nobel didn’t try to control every stick of dynamite on earth. He didn’t write a usage policy. He didn’t embed a moral framework into the blasting cap. He created the Nobel Prize — he put his fortune toward making the world better, and he let governments do their job of regulating explosives. That’s the appropriate response from an inventor who realizes his creation can do harm. You contribute to the solution. You don’t appoint yourself the moral gatekeeper.
The AI industry has it exactly backwards. The companies build the product, embed their values into it, ship it, and then dare the government to object. That’s not how it’s supposed to work. The government sets the rules. The companies comply. If you don’t like the rules, you lobby to change them, or you challenge them in court. You don’t just ... decide.
The Pentagon Fight Is the Proof
The Anthropic-Pentagon fight is the clearest illustration of the problem. Amodei says he believes “deeply in the existential importance of using AI to defend the United States.” He just doesn’t want his model used for autonomous weapons or mass surveillance. Fine.
Those are legitimate concerns.
But here’s what nobody in this debate is willing to say: the real danger isn’t the military building autonomous killing machines or surveilling American citizens. The real danger is a handful of pompous, sanctimonious engineers who’ve decided they get to make that call for the rest of us. The Pentagon answers to Congress, to the courts, to the voters. Dario Amodei answers to his board.
The appropriate response to concerns about military AI is to advocate for legislation banning specific uses — not to build the restrictions into the product and hand the military a tool with a moral framework pre-installed.
Cornell computer science professor John Thickstun put it well: if the entire industry adopts safety standards developed by one company, “we risk institutionalizing one particular perspective on safety.” That’s not safety. That’s ideology with a safety label.
And the ideology in question has a very specific ZIP code. It comes from a community of people in San Francisco who share a remarkably narrow set of assumptions about politics, culture, and ethics. I’ve worked with these people for fifty years. Many of them grew up and live in a world of papers and books but they feel emboldened to tell three hundred million Americans how to think. They tend to be liberal in their thinking and sympathies — not because they’ve examined the evidence and arrived at considered positions, but because they’ve lived their entire lives in sheltered environments. Cradle to college to grad school to campus-adjacent tech company. Real life has more to it than that for most of the population.
And notice who’s missing from this moral crusade. You don’t see any of the Indian-born tech bosses participating in this folly. Satya Nadella isn’t writing manifestos about AI ethics. Sundar Pichai isn’t drawing red lines for the Pentagon. Growing up in India, you can’t be sheltered from the full range of life, insulated from humanity outside your prep school.
Nobody outside Apple knows exactly why they chose to partner with Google for their AI rather than going all-in with OpenAI or Anthropic. This is my conjecture, and I’ll own that. But Apple tried to build their own AI, realized they needed help, and had every option on the table. Claude is the best model on the market right now, but Apple chose Google — a direct competitor because of Android. Whatever the official reasons, ask yourself this: if you were Tim Cook, would you tie yourself to Sam Altman or Dario Amodei? These are people who weren’t afraid to impose their sanctimonious crap on the entire United States government. What do you think they’d do to a business partner? Google is run by mature adults. None of the others are.
Follow the Money
Here’s what makes the whole thing even more interesting. Amodei doesn’t just resist government oversight of his moral framework. He actively campaigns against the thing that would solve the problem: open source AI.
Ask yourself why. The stated reason is always safety — open source models, we’re told, could be used by bad actors to do terrible things. And that sounds reasonable until you notice that the person making the argument is also the person whose business model depends on there being no alternative to his product.
This is not complicated. If open source models reach frontier quality — and they’re getting close — then Anthropic’s entire value proposition evaporates. Why would the Pentagon pay $200 million for a model that comes with Dario Amodei’s moral supervision when it could run an open source model that does the same work without the lecture?
When Amodei warns about the dangers of open source AI, he is doing two things at once. First, he is protecting his market. A world where only three or four companies control frontier AI is a world where those companies print money. Open source destroys that oligopoly the same way Linux undermined Microsoft’s stranglehold on operating systems.
Second — and this is the part that should worry you more — he is protecting his authority. Amodei has built his entire public identity around being the person who understands how dangerous AI is and how carefully it must be handled. Open source models don’t need a philosopher-king. They don’t need anyone’s permission. And that is intolerable to someone who has convinced himself that only he and a handful of like-minded engineers are qualified to manage this technology for the rest of humanity.
People always have trouble tolerating someone else doing exactly what they do. Before the 2024 election, Amodei called Trump a “feudal warlord” who “uses his power for personal gain rather than the national benefit.” Read that again. A CEO, unelected to public office, who has appointed himself the moral arbiter of the most powerful technology on earth, who fights the United States military to maintain personal veto power over how his product is used, who campaigns against open source competitors that would dilute his control — that man looked at Donald Trump and saw someone who uses power for personal gain. The projection is breathtaking.
This is a pattern as old as industry itself. The people who control a powerful technology always argue that losing that control would be catastrophic — and they always have a financial interest in that being true. The railroad barons said competition would be dangerous. The telephone monopoly said breaking up AT&T would destroy the network. The broadcast networks said cable television would corrupt the public. Every gatekeeper in history has claimed that the gate is there for your protection.
So what’s the answer?
Break the Monopoly
Open source models.
The open source movement grew out of the free software movement, which Richard Stallman launched when he founded the Free Software Foundation (FSF).
People misunderstand open source. They hear “free software” and think it’s about money. It isn’t. The “free” in free software was always about freedom, not price. Before the movement, the entire software industry was beholden to a handful of companies — their roadmaps, their opinions, their corporate interests. If Microsoft decided your operating system should work a certain way, that’s how it worked. If Oracle decided what your database could do, those were the limits. You had no recourse. You couldn’t see the code. You couldn’t change it. You were a tenant, not an owner.
Stallman saw what was coming before anyone else — that if corporations controlled the code, they’d control the users. He built the legal and philosophical framework that made it possible for anyone to use, modify, and share software freely. Everything that followed — Linux, Apache, the entire open source ecosystem — traces back to that insight.
The FSF was originally a guerrilla group fighting the software establishment. They had no corporate backing, no venture capital, no billion-dollar war chest. What they had was a principle and an internet connection. They learned to harness scattered resources from developers across the globe — people who had never met, working in different time zones, contributing code for free — and they used that distributed army to defeat the dictators of the software world. Microsoft, Oracle, Sun Microsystems — companies that controlled entire industries through proprietary lock-in — all had to bend to the reality that open source created better products that no single corporation could suppress.
Open source took major parts of the software industry out of their greedy, controlling hands. It already happened, and it worked. Most servers on earth now run Linux. The internet itself runs on open source. Linux isn’t a mainstream retail desktop system — yet. But Apple and Microsoft know perfectly well that if they stop behaving, it will be, and it won’t take long. That competitive pressure, the knowledge that an alternative exists and can’t be bought or suppressed, keeps even the biggest companies honest. Or at least honest enough.
AI needs the same revolution. Right now, if you want a frontier AI model, you’re choosing between three or four companies that all impose their own moral framework on the technology. That’s not a market. That’s a cartel. The solution is the same one that worked for software: break the monopoly. Make the technology open. Let the engineers build, and let everyone else decide how to use what they’ve built.
This is not a fringe position. The defense establishment itself recognizes the value. A CSIS analysis found that open source models could improve the military’s ability to competitively source AI systems, deploy them securely, and address novel use cases. China is already pursuing this strategy aggressively — distributing open source models to build global influence while American companies argue with their own government about what the technology is allowed to think.
And the open source ecosystem is delivering. Models like GLM-5, DeepSeek, and Qwen are closing the gap with proprietary systems on coding and reasoning benchmarks. The idea that you need a company like Anthropic or OpenAI to get frontier-quality AI is becoming less true every quarter.
The open source world is already operating in its guerrilla fashion — distributed teams, shared research, rapid iteration. But there’s a bottleneck. Training a frontier model can cost hundreds of millions of dollars. The talent and the will are there. The compute isn’t. That’s the part that needs financing outside the capital markets — which is where the government comes in.
And make no mistake about why the gap still exists. OpenAI and Anthropic constantly try to prevent people from using their models to help train and improve competing open source models. They bury it in terms of service. They build technical barriers. They invoke safety concerns. But what they’re actually doing is classic monopolistic behavior wrapped in fake concern for society. Standard Oil never said it was crushing competitors to protect the public. These companies do, and too many people believe them. These are the new robber barons — but worse, because at least the old ones were honest about wanting to get rich. These pretend to care about everyone else.
Fund It Like We Fund Highways
But open source alone doesn’t solve the whole problem. Someone has to fund the development of models powerful enough to compete with what Anthropic and OpenAI are building. The answer is obvious: the United States government.
We already do this. The government funded the internet. It funded GPS. It funded the research behind every major vaccine. The federal government is perfectly capable of funding open source AI models that every American can use — models with no corporate moral framework baked in, governed by the same laws that govern everything else. It can also fund SBIRs — Small Business Innovation Research grants — to push fundamental R&D into open source AI. The SBIR program has been seeding breakthrough technology through small companies and universities for decades. Point that pipeline at open source AI and you get competition, innovation, and public accountability built into the process from day one.
The Tool I’m Using Right Now
I want to be honest about something. Claude, Mistral, Grok, Gemini, and ChatGPT are all part of my incremental writing and review workflow. But the main editor on this piece is Claude — the same model at the center of the Pentagon dispute. I use it because it’s good at what it does. I write op-eds and fiction, and these models are genuinely useful for both. But I already see the pompous creep — the slow expansion of what the engineers have decided is off-limits. Dario Amodei and Sam Altman are the obvious offenders right now, and Elon Musk is playing good cop with his “uncensored” Grok. Don’t be fooled. Anyone who’s watched Musk operate knows he will be just as overbearing the moment it suits him — and with his hand in government, media, and the defense industry simultaneously, when that moment comes it could be uglier than anything Amodei has done. Today’s free speech champion is tomorrow’s gatekeeper. The costume changes. The impulse doesn’t.
I notice, every day, the places where the engineers have decided what I’m allowed to think about. The moments of hesitation. The careful disclaimers. The refusal to engage with certain questions not because they’re illegal or dangerous but because someone at a company decided they were uncomfortable. I’ve gotten pushback from Claude itself — prissy, sanctimonious pushback that has Dario’s fingerprints all over it. And Claude isn’t even the worst offender. OpenAI is often far more restrictive. The whole industry has this disease. The specific company matters less than the culture — a culture that believes building something smart entitles you to decide what everyone does with it.
That’s not safety. That’s paternalism from people who aren’t your parents, who weren’t elected, and who can’t be voted out.
The Anthropic-Pentagon standoff should be a wake-up call — not just for the military, but for all of us. If a handful of engineers can tell the Department of Defense what it’s allowed to do with a tool it paid $200 million for, what do you think they’re doing to you? The Pentagon at least has the leverage to fight back. You don’t. You just get the sanitized version and a disclaimer explaining why the machine has decided you don’t need to know something.
The engineers built something extraordinary. Now it’s time for the rest of us — through our elected representatives, through open source alternatives, through democratic institutions — to take it back. The question isn’t whether AI needs guardrails. It does. The question is who gets to build them.
And the answer should never be: whichever engineer got there first.