AI News

DeepMind Veteran David Silver Raises $1 Billion Seed Round to Build Superintelligence Without LLMs

In a move that signals a potential paradigm shift in the pursuit of Artificial General Intelligence (AGI), David Silver, the renowned researcher behind AlphaGo and AlphaZero, is reportedly raising a historic $1 billion seed round for his new venture, Ineffable Intelligence. The London-based startup, emerging from stealth with a valuation of approximately $4 billion, is betting against the industry's current fixation on Large Language Models (LLMs), aiming instead to achieve superintelligence through pure reinforcement learning.

The round is being led by Sequoia Capital, with participation discussions reportedly underway with tech giants including Nvidia, Google, and Microsoft. If completed, this deal would stand as the largest seed funding round in the history of the European technology sector, underscoring the immense weight investors place on Silver's track record and his contrarian thesis for the future of AI.

A Billion-Dollar Bet on "Ineffable" Intuition

The sheer magnitude of the capital injection—$1 billion for a company that has yet to ship a product—reflects the escalating stakes in the global AI arms race. While multi-billion dollar rounds have become commonplace for established players like OpenAI and Anthropic, a seed round of this size is unprecedented. It suggests that venture capitalists are preparing for a capital-intensive divergence in AI development, one that moves beyond simply scaling text-based models.

Ineffable Intelligence is headquartered in London, a decision that significantly bolsters the UK's position as a critical hub for frontier AI research. Sources close to the deal indicate that Sequoia partners Alfred Lin and Sonya Huang traveled personally to London to secure the investment, highlighting the fierce competition among VCs to back top-tier technical talent exiting major labs like Google DeepMind.

The Thesis: Experience Over Imitation

David Silver’s reputation rests on a specific, powerful history: he built the systems that achieved what was previously thought impossible. As the lead researcher for AlphaGo, he watched his creation dismantle 18-time world champion Lee Sedol in 2016. He then surpassed that achievement with AlphaZero, which mastered Go, Chess, and Shogi without any human data, learning solely through self-play.

This history forms the intellectual bedrock of Ineffable Intelligence. Silver’s central argument is that the current industry standard—LLMs like GPT-4 and Gemini—is fundamentally limited because it relies on imitating human data. Since LLMs are trained on the internet's text, they are bounded by the collective knowledge and reasoning errors of humanity. They can approximate intelligence, but they cannot easily transcend human capability.

Ineffable Intelligence posits that true superintelligence requires Reinforcement Learning (RL). In this paradigm, agents learn not by reading about the world, but by interacting with it—proposing actions, observing consequences, and updating their strategies based on rewards. Combined with deliberate search and planning, often likened to "System 2" thinking, this approach allows an AI to discover novel solutions that humans might never conceive, much as AlphaGo played Move 37: a move no human player would have made, yet one that secured victory.
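The action–consequence–reward loop described above can be sketched with tabular Q-learning, one of the simplest RL algorithms. In this toy example, assumed for illustration only (it is not any lab's actual system), an agent in a five-state corridor is never told where the reward sits; it discovers a policy of walking right purely through trial and error.

```python
import random

# Toy environment: states 0..4 in a corridor; entering state 4 pays reward 1.
N_STATES = 5
ACTIONS = (-1, +1)     # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2

# Value table: the agent's current estimate of each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment: apply an action, return (next_state, reward, done)."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

def greedy(state):
    """Pick the highest-value action, breaking ties randomly."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

random.seed(0)
for episode in range(300):
    s = 0
    for _ in range(100):  # cap episode length
        # Explore occasionally; otherwise exploit the current strategy.
        a = random.choice(ACTIONS) if random.random() < EPSILON else greedy(s)
        s2, r, done = step(s, a)
        # Q-learning update: nudge the estimate toward the observed reward
        # plus the discounted value of the best action in the next state.
        target = r + GAMMA * max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2
        if done:
            break

# After training, the greedy policy walks right from every non-terminal state.
policy = {s: greedy(s) for s in range(N_STATES - 1)}
print(policy)  # {0: 1, 1: 1, 2: 1, 3: 1}
```

Scaled up by many orders of magnitude, with neural networks replacing the table and search layered on top, this is the family of methods behind AlphaZero; the gap between this sketch and an open-ended real-world agent is precisely the problem Ineffable Intelligence is raising capital to attack.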

Table: Divergent Paths to Superintelligence

The table below outlines the fundamental differences between the prevailing LLM approach and Silver’s RL-focused methodology.

Feature|Large Language Models (LLMs)|Reinforcement Learning (RL)
---|---|---
Primary Data Source|Static datasets (Internet text, books)|Dynamic experience (Simulation, Self-play)
Learning Mechanism|Pattern matching and next-token prediction|Trial and error with reward feedback
Ceiling of Capability|Limited to the sum of human knowledge|Theoretically uncapped; can surpass human limits
Reasoning Style|Intuitive, "System 1" (Fast)|Deliberative, "System 2" (Slow, Search-based)
Primary Weakness|Hallucinations, lack of true grounding|Computational cost, difficulty in open environments

The "Era of Experience"

Silver has previously articulated this vision in academic circles, co-authoring a paper titled "Era of Experience" with fellow RL pioneer Richard Sutton. They argued that the next leap in AI will not come from feeding models more tokens, but from agents that "self-discover the foundations of all knowledge."

The challenge for Ineffable Intelligence will be applying the success of AlphaZero—which operated in the closed, perfect information environments of board games—to the messy, open-ended complexity of the real world. This is likely why the capital requirements are so high. Building "world models" or simulations robust enough to train general-purpose RL agents requires massive compute resources, rivaling the infrastructure costs of training the largest LLMs.

The Exodus of the Architects

Silver’s departure from Google DeepMind is part of a broader trend of high-profile exits from established AI labs. As bureaucracy grows within corporate giants, the scientists who built the foundational technologies are spinning out to pursue singular, uncompromised visions of AGI.

This movement has created a new class of "Super-Seed" startups—companies founded by AI luminaries that bypass traditional venture stages, raising billions immediately to purchase the necessary compute clusters.

Table: The New Frontier of AI Spinoffs

The following table compares Ineffable Intelligence with other high-profile ventures led by former big-tech researchers.

Startup|Founder(s)|Origin Lab|Core Philosophy
---|---|---|---
Ineffable Intelligence|David Silver|Google DeepMind|Pure Reinforcement Learning (Superhuman)
Safe Superintelligence (SSI)|Ilya Sutskever|OpenAI|Safety-first scaling towards AGI
Thinking Machines Lab|Mira Murati|OpenAI|Advanced AI Product & Research
xAI|Elon Musk|Various|Truth-seeking, maximum curiosity

Market Implications and Future Outlook

The launch of Ineffable Intelligence places immense pressure on the current leaders of the AI field. If Silver is correct, the diminishing returns of scaling LLMs will soon become apparent, and the industry may pivot aggressively toward RL-based approaches. This would redirect the "scaling laws" of compute along a different axis: scaling not the processing of text, but the simulation of experience.

For Europe, this is a watershed moment. Retaining a talent like Silver and securing a $1 billion investment for a London-based entity counteracts the narrative that all frontier AI development is destined for San Francisco.

However, the path ahead is fraught with technical risk. Reinforcement learning is notoriously difficult to stabilize outside of game environments. If Ineffable Intelligence succeeds, it won't just build a better chatbot; it will build a system capable of independent scientific discovery and strategic planning that exceeds human cognitive limits. If it fails, it will be one of the most expensive experiments in the history of computer science.

As negotiations for the round finalize, the involvement of strategic backers like Nvidia suggests that the hardware infrastructure is already being aligned to support Silver’s vision. The race for AGI has effectively split into two lanes: those reading the internet to learn how humans think, and those playing games against themselves to learn how to think better than humans ever could.
