Congress, It’s Time to Stop Big AI From Writing Its Own Rules
The Rule No One Voted For
Imagine every book and article published from this day forward carried a disclaimer on the first page: “You may read this. But you are strictly prohibited from using anything you learn here to make yourself a better writer, or more knowledgeable on this subject, in any way that competes with the author.”
No court would uphold it. No legislature ever passed it. Copyright law has always drawn a deliberate line: it protects original expression — not the underlying ideas, facts, methods, or knowledge you walk away with. That line exists for a reason. It is how every technological and intellectual revolution in history has worked. You study what came before, you improve on it, and the world gets better.
That disclaimer doesn’t exist in publishing — yet. But it exists right now in AI, buried in the Terms of Service of the two most powerful labs in the world.
OpenAI and Anthropic both have explicit clauses in their API agreements: You may pay us to use our models, but you are forbidden from using the outputs to train or improve any model that competes with ours. They call it “competitive distillation” and treat it like theft. When a dispute arose with Chinese labs in early 2026, Anthropic wrapped it in a national-security flag and ran it to Congress. Theater and misdirection. The ToS clause existed before that dispute and will exist long after it. It was never about national security. It is about protecting the moat. Meanwhile the Chinese government is actively enabling distillation for its own companies. American labs are playing whack-a-mole trying to stop it — and losing. The only people these ToS restrictions actually bind are legitimate American competitors.
The Law Already Drew This Line
Here’s what should make every American taxpayer and innovator angry: we already have a legal framework for protecting intellectual property. It is called copyright and patent law. Copyright protects specific expression — not ideas, methods, or knowledge. Patent law protects specific inventions for a fixed term, then releases them to the public. These are the general legal protections Congress designed after long debate, deliberately balancing investment incentives against the public’s right to learn, compete, and build. When you release a product into the world where others can use it and learn from it, that is the deal Congress made on your behalf. ToS is an attempt to get more than Congress ever intended to give.
The AI labs don’t like where that line was drawn. So they’re using ToS to move it.
Distillation is a standard machine-learning technique. A smaller “student” model learns patterns and reasoning from a larger “teacher” model’s outputs. Multiple legal analyses have concluded this is unlikely to constitute copyright infringement under current U.S. law. AI outputs generally lack human authorship — a point reinforced this month when the Supreme Court declined to hear Thaler v. Perlmutter, leaving in place the ruling that AI-generated outputs generally lack copyright protection. Copyright doesn’t protect ideas, methods, or systems. Patents don’t cover model behavior.
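For readers unfamiliar with the mechanics: distillation is ordinary math, not copying. A minimal sketch of the core idea, in pure Python, is below — the logits and temperature are illustrative numbers, and real systems use batched tensors, but the principle is the same: the student is trained to match the teacher's softened output distribution, not to reproduce any protected expression.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities; higher temperature flattens them."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's soft targets to the student's predictions.

    Minimizing this loss makes the student's output distribution
    mimic the teacher's, which is the essence of distillation.
    """
    p = softmax(teacher_logits, temperature)  # teacher's "soft" targets
    q = softmax(student_logits, temperature)  # student's current predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical logits over three classes for a single example.
teacher = [4.0, 1.0, 0.5]
student = [2.0, 1.5, 1.0]

loss = distillation_loss(teacher, student)
# The loss is zero only when the student's distribution matches the teacher's.
```

The salient legal point is visible in the code itself: what the student absorbs is a probability distribution — statistical behavior, not expression.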
So the labs fall back on the one tool left: the contract you click “I Agree” to. They’ve invented a private rule that no statute or court has ever imposed — you can use our service, but not to get better at competing with us.
Apply that logic consistently and see where it leads. Every textbook publisher could ban students from using what they learned to compete with the author. Every journal could prohibit researchers from building on published findings. Every lecture could come with a clause forbidding the audience from getting smarter in a way that threatens the speaker. The book and article disclaimer from the opening of this piece isn’t a reductio ad absurdum. It is the logical conclusion of the legal theory these companies are asserting.
This is not how a free market works. You can buy a competitor’s car, study it, and build a better one. You can read a novel and write your own.
Only in AI have companies tried to contract that freedom away.
Fair Use for Me, Not for Thee
The hypocrisy is staggering. These same companies spent years arguing that training their models on millions of copyrighted books, articles, and code snippets was “transformative fair use.” They won (or are still winning) that argument in court. Yet the moment someone wants to do something functionally similar — learn from their model’s outputs — they cry foul and hide behind a contract.
The result? A handful of well-funded labs get to build artificial moats around capabilities that cost them hundreds of millions or billions to develop. Everyone else is told to start from scratch or pay the toll. That is not competition. That is oligopoly dressed up as innovation policy. What they are really asking for is old AT&T status — a government-protected moat, with themselves as the permanent gatekeeper. AT&T needed a regulator to hand it that position. These labs are trying to get there through a clickwrap agreement.
And Congress is letting it happen.
Pulling Up the Ladder
The pattern is not hard to see. Having secured their position, the labs are using private contract law to pull up the ladder.
Just last month, on March 20, 2026, the White House released its National Policy Framework for Artificial Intelligence and handed lawmakers a blueprint. It talks about intellectual property, frontier models, and national security. It even nods to the distillation attacks by foreign labs. But it leaves the core domestic question untouched: Should private companies be allowed to use boilerplate contracts to block the very kind of knowledge transfer that has driven every previous technological revolution?
The answer is no.
Three Things Congress Should Do Now
Congress needs to step in and set clear national rules. Here are three concrete things lawmakers should do:
1. Clarify fair use for model outputs. Declare by statute that the systematic (but non-fraudulent) use of lawfully obtained API outputs for distillation or fine-tuning is a transformative fair use, just as training on publicly available books and web data has been held to be. Ban ToS clauses that attempt to override this.
2. Draw a bright line between legitimate protection and anti-competitive overreach. Allow labs to ban outright fraud (fake accounts, rate-limit evasion, circumvention of geo-restrictions). But prohibit them from using contracts to forbid good-faith competitive learning from outputs they willingly sold access to.
3. Create a narrow, time-limited safe harbor for “model behavior.” If the labs want real IP-style protection for the emergent reasoning patterns in their models, let them ask Congress for it — with strict limits on duration and scope, just like patents. Don’t let them seize it through clickwrap agreements.
The labs will scream that without these restrictions, no one will invest the insane sums needed for the next frontier model. They made the opposite argument when they trained on copyrighted works — that restricting access would kill innovation. They were right then. They are wrong now. If the investment case genuinely requires monopoly control over knowledge transfer, Congress can debate public funding, tiered access models, or time-limited exclusivity — the same tools it has used in pharma and semiconductors. What it should not do is let private companies seize that protection through a clickwrap agreement nobody voted on.
And while Congress is at it — the fair use question around training on copyrighted works deserves resolution too. Authors claiming that training on their work violates copyright are asserting rights the law was never designed to give them. Congress should clarify that as well.
We are at a rare moment. The White House National Policy Framework for AI has handed lawmakers a blueprint. The Supreme Court’s denial of cert in Thaler v. Perlmutter this month confirmed that AI outputs lack human authorship and cannot be copyrighted — leaving ToS as the only tool the labs have. Lawmakers can either rubber-stamp the status quo and let a few companies privately legislate the future of AI — or they can do what Congress is supposed to do: write clear, public rules that balance real investment incentives with genuine competition.
The choice is simple. Do we want an AI future written by corporate lawyers in clickwrap agreements? Or one written by the American people, through their elected representatives?
I’ve written before about how OpenAI was founded explicitly to prevent a handful of megacompanies from controlling AI — and then became exactly what it set out to stop (They Were Going to Save Us From This. Then They Became This.). The distillation ban is the enforcement arm of everything documented in that piece.
First you build the moat. Then you use private contract law to make sure no one can cross it. Blocking the advance of progress through monopolistic methods is not in the public interest.
It’s time for Congress to step in.

