When one of the godfathers of deep learning tells a packed audience that Silicon Valley has lost the plot, people tend to listen.
Yann LeCun, who spent over a decade as Meta's Chief AI Scientist, took the stage at aiPulse last week with a message that the large language models everyone is obsessing over are a technological dead end for achieving true machine intelligence.
Perhaps the greater heresy: The revolution that will replace them is going to happen in Europe.
"Silicon Valley is completely hypnotized by generative models," LeCun declared, on stage. "So you have to do this kind of work outside Silicon Valley."

The 63-year-old Turing Award winner has always been something of a contrarian, but his latest move goes beyond academic skepticism. LeCun has left his VP role at Meta to launch his own venture, a project he calls AMI — Advanced Machine Intelligence — focused on building AI systems that work fundamentally differently from ChatGPT and its ilk.
The departure, LeCun explained, wasn't acrimonious. Mark Zuckerberg liked the research direction. But both men realized the potential applications stretched far beyond Meta's core interests in social media and virtual reality.
"The number of applications is so wide that it was better to do it as an independent entity," LeCun said. "And also a global entity having research organizations everywhere around the world, particularly in Europe, where there is a lot of talent which currently does not realize its full potential."
The Snowball Problem
LeCun's critique of current AI centers on what he sees as a fundamental limitation: language models are trained on text, but most human knowledge isn't captured in words. Your cat, he pointed out, understands the physical world better than any robot we've built, despite never reading a manual.
"We think language is essential to intelligence, but in fact, it's not," he argued. "Animals are pretty smart. They're much smarter than the best robots we have today."
The solution, in LeCun's view, lies in "world models." These are AI systems that can predict how the physical world will change in response to actions. Instead of generating text token by token, these systems build internal representations of reality that allow them to plan and reason about consequences.
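In the abstract, a world model of this kind can be thought of as a learned transition function that predicts the next state of the environment given the current state and a candidate action, paired with a planner that searches over imagined action sequences. The sketch below is purely illustrative: the function names, the stand-in dynamics, and the simple random-shooting planner are assumptions for the sake of example, not LeCun's actual architecture.

```python
import numpy as np

def predict_next_state(state, action):
    """Learned transition model: given the current state and a candidate
    action, predict the resulting state. A real system would use a trained
    neural network; this placeholder uses toy linear dynamics."""
    return state + 0.1 * action

def cost(state, goal):
    """Score a predicted state by its distance from the goal."""
    return np.linalg.norm(state - goal)

def plan(state, goal, horizon=5, candidates=256, rng=np.random.default_rng(0)):
    """Random-shooting planner: imagine many action sequences with the
    world model, score their predicted outcomes, and return the first
    action of the best sequence."""
    best_action, best_cost = None, np.inf
    for _ in range(candidates):
        s = state.copy()
        actions = rng.uniform(-1.0, 1.0, size=(horizon, state.shape[0]))
        for a in actions:                  # roll out imagined consequences
            s = predict_next_state(s, a)
        c = cost(s, goal)
        if c < best_cost:
            best_cost, best_action = c, actions[0]
    return best_action

# Choose an action by reasoning about consequences rather than emitting
# the next token: start at the origin, aim for the goal state.
state, goal = np.zeros(3), np.ones(3)
print(plan(state, goal))
```

The contrast with an LLM sits in that inner loop: the system imagines futures and scores them against a goal before acting, instead of sampling the next token and hoping for the best.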
Pim de Witte, the young Dutch entrepreneur who shared the stage with LeCun, offered a vivid analogy to explain the difference. LLMs, he said, are like snowballs rolling downhill: they just keep accumulating without any perception of what's ahead.
"As it comes to the bottom of the mountain, it doesn't know it's about to crash into something, because its entire world is just itself," de Witte explained. "Real intelligence is more like Olaf" — the sentient snowman from Frozen — "who would know if there's a stone to dodge."
De Witte is co-founder and CEO of General Intuition, a startup built on an extraordinary asset: a massive dataset of video game interactions collected through his previous company, Medal. When OpenAI offered $500 million for the data, de Witte declined; he understood its value for training the next generation of AI. The company recently raised $134 million to teach agents using video game clips.
Europe's Moment?
Both speakers made a provocative case for European leadership in this emerging field. The talent for world model research, de Witte argued, is concentrated in Europe, thanks partly to LeCun's own influence through Meta's FAIR labs and DeepMind's early investments in the space.
"It's been much easier to find great people here in Europe than in the US," de Witte said. "In the US, there are still a lot of people who are very LLM-pilled."

LeCun added a geopolitical dimension. American AI leaders are increasingly closing off their research. OpenAI abandoned openness years ago, Anthropic never embraced it, and even Meta is "rethinking its strategy." Meanwhile, China is going all-in on open source, producing the best freely available LLMs.
"You need models that are easily modifiable but not preconditioned to conform to the political views of the Chinese government," LeCun warned. "Open source is the best way to make fast progress, and it's the best way to attract the best scientists."
Perhaps most surprising was LeCun's claim about computational requirements. Training world models, he said, requires far less firepower than training frontier LLMs: a few thousand GPUs rather than the massive clusters that have become table stakes in the generative AI arms race. More data, smaller models, smarter results.
"You get the best of both worlds," LeCun said. "And in the end, they have some level of common sense. It's a complete no-brainer."