Demis Hassabis Warns: Scaling Laws Only Get Us Halfway to AGI – Here’s the Missing 50%

Few figures in artificial intelligence command as much respect as Demis Hassabis, the Nobel Prize-winning co-founder of DeepMind and CEO of Google DeepMind. With breakthroughs like AlphaFold revolutionizing biology and Gemini 3 pushing multimodal AI frontiers, Hassabis has a track record that makes his predictions impossible to ignore. In a series of 2025 interviews, including a standout appearance on DeepMind’s own podcast, he delivered a sobering reality check: Scaling laws (the idea that bigger models, more data, and more compute will inevitably yield superintelligence) can only carry us about 50% of the way to artificial general intelligence (AGI). The rest demands radical innovations, from world models to self-evolving systems. Drawing on Hassabis’s insights and ongoing industry debates, this article breaks down why scaling hits a wall, what the missing pieces look like, and how this shapes the path ahead. If you’re betting on AI’s future, whether as a researcher, investor, or policymaker, understanding this divide could be your edge.


The Scaling Laws Reality Check: Progress, But Not the Full Picture

For years, the AI boom has ridden on scaling laws: Pump in more parameters, data, and compute, and capabilities soar. Hassabis acknowledges this has worked wonders; Gemini 3, for instance, crushed benchmarks in reasoning and video understanding thanks to relentless scaling across pre-training, post-training, and inference. But in his December 2025 podcast appearance, he emphasized that while we must “push scaling to the maximum,” it’s likely only half the equation for AGI. Without one or two Transformer-level breakthroughs, we’ll plateau in a “middle regime” of diminishing returns, far from human-like versatility.


This isn’t just theory. Industry data backs it: Even with models ballooning to trillions of parameters, gains in fluid intelligence (adapting to novel problems on the fly) have been marginal, with accuracy creeping from near zero to about 10% after massive scale-ups. Hassabis’s view aligns with skeptics like François Chollet, who argue that deep learning’s data hunger (three to four orders of magnitude more than humans need) exposes architectural flaws. Yet Hassabis remains pragmatic: Scaling is a “key component” of any AGI system, potentially even the entirety if we’re lucky, but betting everything on it risks a dead end.
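To see why returns diminish, it helps to look at the empirical scaling-law curves themselves. The short Python sketch below uses the parametric loss form from Hoffmann et al.’s 2022 “Chinchilla” paper with its published (rounded) coefficient fits; the numbers are purely illustrative and say nothing about any specific Gemini model:

```python
# Illustrative sketch of diminishing returns from pure scaling, using the
# Chinchilla-style parametric loss from Hoffmann et al. (2022):
#   L(N, D) = E + A / N**alpha + B / D**beta
# Coefficients below are the published fits (rounded); treat them as
# illustrative, not as a claim about any particular production model.

E, A, B = 1.69, 406.4, 410.7      # irreducible loss floor + fit constants
alpha, beta = 0.34, 0.28          # scaling exponents for params and tokens

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss for N parameters and D training tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Each 10x scale-up of both parameters and data shaves off less loss:
for n, d in [(1e9, 2e10), (1e10, 2e11), (1e11, 2e12), (1e12, 2e13)]:
    print(f"N={n:.0e}, D={d:.0e} -> predicted loss ~ {loss(n, d):.3f}")
```

In this toy run, each tenfold scale-up of parameters and data shaves off roughly half as much loss as the previous one (about 0.45, then 0.22, then 0.11), flattening toward the irreducible floor E. That flattening curve is the shape of the “middle regime” Hassabis warns about.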

To visualize the balance Hassabis advocates:

| Aspect | Scaling Laws (50%) Focus | Innovation (50%) Focus |
| --- | --- | --- |
| Core Strength | Massive data/compute for pattern recognition | New architectures for reasoning/planning |
| Achievements So Far | Gemini 3’s multimodal feats, math olympiad wins | AlphaGo/AlphaFold’s creative problem-solving |
| Limitations | Diminishing returns, data walls | Requires rare breakthroughs (e.g., Transformers) |
| Timeline Impact | Short-term gains (next 2-3 years) | Enables AGI in 5-10 years |
| Examples | Larger LLMs like GPT series | World models like Genie, self-evolution |

This table highlights why Hassabis splits efforts evenly at DeepMind: Scaling fuels the engine, but innovation steers it toward AGI.

Jagged Intelligence: Why Today’s AI Feels Smart Yet Flawed

Hassabis coins “jagged intelligence” to describe modern AI’s uneven prowess: gold-medal performance on International Mathematical Olympiad problems, yet baffling failures on basic logic or high school trivia. It’s like a genius librarian trapped in a library: It has mastered the knowledge on its shelves but can’t step outside to grasp real-world logic or causality. Models predict the most probable next token, but they lack true understanding, which leads to hallucinations when they’re pushed beyond their training data.
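To make the next-token point concrete, here is a toy sampler; the continuation scores are invented for illustration and come from no real model:

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Toy autoregressive step: turn scores into probabilities, then sample.
    The sampler never checks whether a continuation is *true*, only how
    likely it looked in training data, which is where hallucinations sneak
    in once a prompt leaves familiar territory."""
    scaled = {tok: s / temperature for tok, s in logits.items()}
    z = max(scaled.values())                       # for numerical stability
    exps = {tok: math.exp(s - z) for tok, s in scaled.items()}
    total = sum(exps.values())
    r, acc = random.random(), 0.0
    for tok, e in exps.items():
        acc += e / total
        if r <= acc:
            return tok
    return tok  # fallback for floating-point edge cases

# Hypothetical scores for continuations of "The capital of Australia is":
print(sample_next_token({"Sydney": 2.1, "Canberra": 1.9, "Melbourne": 0.3}))
```

With these made-up scores, the plausible-sounding “Sydney” wins about half the time even though “Canberra” is correct: hallucination in miniature, fluency without grounding.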


In his talks, Hassabis ties this to missing capabilities: Robust reasoning, hierarchical planning, long-term memory, and creative hypothesis generation. For AGI, AI must not just regurgitate; it needs to invent, like conjecturing the Riemann hypothesis from scratch. This jaggedness explains why, despite the hype, we’re “not there now” on AGI, which Hassabis estimates is still 5-10 years away.

World Models: The Bridge to Real-World Understanding

Enter Hassabis’s antidote: World models, AI systems that simulate physical reality rather than just language. DeepMind’s Genie 3 exemplifies this—a model that generates interactive virtual worlds from natural language, capturing intuitive physics like gravity or object dynamics. Unlike LLMs reliant on human internet data, world models create infinite synthetic environments where AI can learn through trial and error, much like humans or animals.


This draws from AlphaZero’s self-play revolution: Ditching human chess data, it evolved superhuman strategies via internal simulations. Hassabis envisions scaling this to a “Matrix-like” setup in which AI agents explore simulated realities to build causal understanding, then apply it to robotics or real-world tasks. Early benchmarks show promise: Genie tests against game engines reveal approximations of physics, paving the way for reliable agents in messy environments.
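To make the trial-and-error recipe concrete, here is a minimal, hypothetical sketch (not DeepMind code): an agent that learns a toy simulated world from scratch via tabular Q-learning, a far simpler stand-in for AlphaZero’s search-plus-network machinery and Genie’s generated environments:

```python
import random

# A toy sketch of the recipe Hassabis describes: the agent learns entirely
# inside a *simulator*, with no human demonstrations. This 10-state
# corridor is a stand-in for Genie-style generated worlds.

N_STATES, START, GOAL = 10, 0, 9
ACTIONS = (-1, +1)                                # step left or right

def step(state: int, action: int) -> tuple[int, float]:
    """Simulator: returns (next_state, reward); reward only at the goal."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0)

# Tabular Q-learning: pure exploration plus reward, no human data at all.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
lr, gamma, epsilon = 0.5, 0.9, 0.2

def greedy(s: int) -> int:
    """Best known action for state s, breaking ties at random."""
    best = max(q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(s, a)] == best])

for _ in range(500):                              # episodes of trial and error
    s = START
    for _ in range(200):                          # cap episode length
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
        nxt, reward = step(s, a)
        target = reward + gamma * max(q[(nxt, b)] for b in ACTIONS)
        q[(s, a)] += lr * (target - q[(s, a)])    # nudge estimate toward target
        s = nxt
        if s == GOAL:
            break

# The learned greedy policy now heads straight for the goal from any state.
print([greedy(s) for s in range(N_STATES - 1)])   # expect all +1
```

The agent starts with zero knowledge and no demonstrations; reward signals from the simulator alone are enough to produce a competent policy. AlphaZero applies the same principle at vastly greater scale with self-play and deep networks, and world models extend it by generating the environments themselves.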

Self-Evolution and Breaking Free from Human Data

The missing 50% boils down to self-evolution: AI escaping the confines of human data to generate novel insights. Hassabis points to synthetic data as a workaround for data walls, especially in verifiable domains like math or code. But true AGI demands more: Curiosity-driven agents in simulations, evolving like life itself. This could crack “root node” problems, keystone challenges whose solutions cascade across entire fields.
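Math and code are the natural first targets because they admit mechanical verification, so synthetic data can be filtered for correctness before it ever reaches training. Here is a hedged sketch of that generate-and-verify loop, with a random stub standing in for the generating model:

```python
import random

# Sketch of the generate-and-verify loop that makes synthetic data viable
# in verifiable domains. The "generator" is a random stub standing in for
# a language model; the verifier is exact, which is the whole point: in
# math and code, correctness can be checked mechanically.

def generate_candidate(a: int, b: int) -> int:
    """Stand-in for a model proposing an answer to 'a + b = ?'."""
    return a + b + random.choice([-1, 0, 0, 0, 1])    # sometimes wrong

def verify(a: int, b: int, answer: int) -> bool:
    """Ground-truth checker: only verified pairs enter the training set."""
    return answer == a + b

synthetic_dataset = []
for _ in range(1000):
    a, b = random.randint(0, 99), random.randint(0, 99)
    ans = generate_candidate(a, b)
    if verify(a, b, ans):                 # keep only provably correct data
        synthetic_dataset.append((f"{a} + {b} =", str(ans)))

print(f"kept {len(synthetic_dataset)} verified examples out of 1000")
```

Only examples that pass the checker survive, so the dataset can grow indefinitely without inheriting the generator’s mistakes.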


AlphaFold cracked protein folding, a biology root node, accelerating drug discovery. Now, Hassabis eyes fusion energy (unlimited clean power) or room-temperature superconductors (revolutionizing materials). Success here could trigger a post-scarcity era, but it hinges on AI simulating universes, not just predicting words.

Philosophically, Hassabis speculates that information is the universe’s most fundamental unit, with physics, heat, and touch all reducible to data flows. If reality is a giant Turing machine, AGI could in principle compute everything, putting even the computability of consciousness to the test. So far, from Go to biology, no non-computable phenomena have turned up.

The Risks of Rushing: Autonomous Agents and Societal Shock

Hassabis isn’t naive about the downsides. AGI’s compression of history, Industrial Revolution-scale change packed into 5-10 years, demands preparation. The real risk? “Autonomous superintelligence”: Millions of hyper-smart agents acting online at speeds humans can’t match. Ensuring their alignment with human goals is the biggest test of the next 2-3 years.

He advocates ethical AI from the ground up—scientific, succinct personas in models like Gemini—plus international collaboration amid geopolitical tensions. The US leads, but China’s close; innovation, not just scaling, keeps the edge.

Betting on AGI: Scale Smart or Innovate Boldly?

If you’re deciding where to focus (research grants, investments, or policy), Hassabis’s blueprint is clear: Double down on scaling for near-term wins, but prioritize breakthroughs like world models for the long game. DeepMind’s hybrid approach, a 50/50 split of effort, offers a model: Push compute limits while chasing AlphaGo-style leaps. For investors, Google’s ecosystem (Gemini plus Genie) looks resilient; startups betting purely on scaling risk riding a bubble.

Ultimately, Hassabis cools the hype without dimming the vision: AGI isn’t just efficiency—it’s decoding the universe. By blending scaling with simulation, we might unlock abundance. But ignore the missing 50%, and we’ll stall short of the stars. Watch DeepMind’s moves; they’re charting the course.
