AMI: A Monument to French Exceptionalism
This is a follow-on to two earlier pieces: LeCun’s AMI: What is the Proposition? and What Evolution Tells Us About the Path to AGI.
In 1930, France began construction on the Maginot Line — a technically sophisticated, enormously expensive chain of fortifications along the German border. It was the product of genuine engineering brilliance and absolute strategic confidence. France had studied the last war, identified the threat, and built the perfect defense against it. The Germans went around it through Belgium. France fell in forty days.
The instinct that built the Maginot Line is still alive. It just raised $1.03 billion.
Europe Missed the Wave
The large language model revolution was built in San Francisco, London, and a handful of other American and British labs. Not Paris. Not Berlin. Not anywhere in continental Europe. OpenAI, Anthropic, Google DeepMind — the companies defining the trajectory of AI are American. The infrastructure running them is American. The capital funding them is overwhelmingly American.
Europe watched this happen. And Europe’s response, as usual, has been to regulate what it cannot build.
The one genuine exception is Mistral. A French company, founded by researchers who left Google DeepMind and Meta, building competitive large language models that punch far above their weight given their funding constraints. As I documented in Why Mistral Is Cash-Starved in the LLM World, Mistral is not underfunded by European standards. It is cash-starved by frontier AI standards. Europe has the money. It simply hasn’t decided that backing a European LLM lab at the scale needed is a risk worth taking.
Putting this money into Mistral would make a lot of sense. It could plausibly become the dominant AI platform for all of Europe — actual product, actual users, actual revenue, actually in the fight. Europe had that option. It started late, but it has the home-field advantage there.
Doing something realistic that has a good chance to succeed is not good enough.
It chose AMI instead.
The Statement
AMI is a statement about the present.
The statement is this: the entire LLM wave was wrong. Silicon Valley built the wrong thing. France — with its French Turing laureate, its Paris headquarters, its presidential endorsement — will build the right thing.
This is a more coherent explanation of AMI than any technical argument. It explains why the money went to a three-month-old company with no product, no demo, and a part-time chairman who does not appear to be leaving New York. It explains why Macron (a caricature and the very embodiment of French exceptionalism) endorsed it personally. It explains why Bpifrance, Dassault, and the Mulliez family wrote checks. It explains why the framing is explicitly “a credible frontier AI company that is neither Chinese nor American” — LeCun’s own words.
Hidden in this bravado are low confidence and a fear of failure. France would rather fail gloriously than lose incrementally. If AMI fails, it fails on its own terms, reaching for something nobody else had the vision to attempt. If Mistral falls behind OpenAI, it is just another company that lost. France would rather be Icarus than runner-up.
Is the Proposition Realistic?
We examined this in detail in LeCun’s AMI: What is the Proposition?. LeCun’s actual track record — the Bell Labs team work built on prior art, the post-1998 field that moved without him, thirteen years at FAIR — does not suggest someone capable of the astounding feat of rethinking AI from scratch and besting the entire global research community. The credentials are real. The forecast they imply is not.
The theoretical foundation of AMI’s bet is also examined in the companion piece What Evolution Tells Us About the Path to AGI. The “stochastic parrots” argument, associated with Bender et al., turns out on examination to be a restatement of the same assumptions that failed in the symbolic AI era. The brain is also a pattern-matching system. Evolution built general intelligence without symbols, causal models, or explicit world representations. The deficiencies in LLMs keep shrinking through engineering, exactly the way nature refines. The argument that the current architecture has a ceiling requires believing something evolution already disproved.
Skin in the Game
Before examining whether AMI’s bet makes sense, there is a prior question worth asking: does Yann LeCun actually believe it?
The most honest answer to that question is not found in his interviews or his X posts. It is found in what he is doing with his life.
LeCun holds the Jacob T. Schwartz Chaired Professorship in Computer Science at NYU’s Courant Institute of Mathematical Sciences. This is not an honorary title or an emeritus position. It is an active faculty role — with graduate students who depend on him, a research group, and teaching responsibilities. He is not winding down his academic career to throw everything into AMI. He is keeping it fully intact. The students are still there. The office is still there. New York is still home. As far as we can tell, none of that is changing.
LeCun is sixty-five years old. Is he going to spend the next fifteen years working around the clock in a startup? Startups don't grow by themselves, and he is meant to be the spiritual leader.
Zuckerberg left Harvard at 19. Gates left Harvard at 20. Page and Brin abandoned their Stanford PhD programs. Jobs dropped out of Reed College. Sam Altman dropped out of Stanford. Every one of them did it against rational advice, against the safe path sitting right in front of them. Their parents almost certainly told them to finish school first. They were driven beyond reason. That is what genuine conviction looks like — not a calculated bet, but an obsession that burns every available bridge.
LeCun has burned nothing. The tenure is intact. The New York life is intact. The someone-else-runs-it structure is intact. If AMI succeeds, he claims the scientific vision. If it fails, the excuses are ready: LeBrun ran the company, the research needed more time, the investors were impatient.
The people writing the billion-dollar checks should sit with that.
A Startup Is Not a Research Lab
The second problem is structural. AMI is asking a startup to do what startups cannot do: fundamental research on an open-ended timeline.
Research is like entering a tunnel of unknown length that may not even have an exit. It is dark and you can walk for ages and find nothing. And if you do find the exit, you may emerge into a forest with a fork in the road leading in ten different directions. That is not a flaw — it is the nature of the thing.
A startup has a clock. The burn rate is real. At some point the investors ask when the tunnel ends. “We don’t know — that’s the nature of the tunnel” is not a satisfying answer to people who wrote nine-figure checks. Japan’s Fifth Generation Computer project made the same bet in the 1980s — massive government funding, bold proclamations, fundamental rethinking from first principles. It died quietly a decade later having produced almost nothing. The hype was atmospheric. The results were not.
The Maginot Line
We wish LeCun well. His early contributions are real. But a $3.5 billion valuation for a small team with no product, no demo, a part-time chairman who does not appear to be leaving New York, a visual-only architecture with no language capability, and a thesis his own employer sidelined after thirteen years — that is not a scientific venture.
It is a monument to French certainty that what AI was missing was the French.

