AI News

DeepMind Chief Sets AGI Clock to "5-10 Years" While Sounding Alarm on Security Risks

At the India AI Impact Summit in New Delhi this week, Google DeepMind CEO Demis Hassabis delivered a defining keynote that balanced technological optimism with urgent geopolitical caution. Speaking to a global assembly of policymakers, industry leaders, and researchers, Hassabis offered one of his most concrete timelines yet for the arrival of artificial general intelligence (AGI), predicting its emergence within the next "5 to 10 years." However, this forecast was accompanied by a stark warning: the window to establish robust international safety frameworks is closing as AI systems transition from passive tools to autonomous agents.

The summit, occurring amidst a period of rapid acceleration in generative AI capabilities, served as the backdrop for Hassabis to delineate the "threshold moment" the industry now faces. While celebrating the potential for a new "golden era of scientific discovery," he emphasized that the dual-use nature of advanced AI—capable of both immense benefit and significant harm—necessitates a level of global cooperation that currently lags behind technological progress.

The AGI Horizon: Moving Beyond "Jagged Intelligence"

Hassabis's address provided a rare, candid assessment of the limitations of current state-of-the-art models. Despite the hype surrounding recent generative AI breakthroughs, he characterized today's systems as "jagged intelligences." This term describes models that demonstrate superhuman brilliance in specific domains—such as coding or creative writing—while simultaneously failing at elementary reasoning tasks that a human child could navigate with ease.

"We are starting to see what these systems can do, but they remain brittle," Hassabis noted during his session. He pointed out that while a model might win a gold medal at the International Math Olympiad, it can stumble on simple arithmetic if the question is phrased unconventionally. This inconsistency, he argued, is the primary barrier separating current narrow AI from true AGI.

To illustrate the leap required to reach AGI, Hassabis proposed an ambitious thought experiment involving scientific innovation. He suggested that a true AGI should be able to undergo training with a knowledge cutoff of 1911 and independently derive the theory of general relativity, replicating Albert Einstein's 1915 breakthrough. "It is much harder to come up with the right question and the right hypothesis than it is to solve the conjecture," he explained. Current systems, which excel at interpolation within existing data, still lack the "world models" and long-term planning capabilities necessary for such original conceptual leaps.

However, the DeepMind co-founder remains confident that these gaps are being bridged rapidly. He cited the emergence of "agentic" systems—AI that can take autonomous actions to achieve high-level goals—as the next major phase. This transition from chatbot-style interactions to agent-based workflows is expected to accelerate over the coming year, driving the industry toward the 5-10 year AGI target.

The Dual-Use Dilemma: Biosecurity and Cyber Risk

As the timeline to AGI compresses, the potential for misuse grows. Hassabis devoted a significant portion of his address to the "dual-use" nature of frontier AI systems. The same capabilities that allow AI to accelerate drug discovery or optimize energy grids can be repurposed by "bad actors"—ranging from rogue individuals to hostile nation-states—to inflict harm.

He highlighted two specific areas of immediate concern: biosecurity and cyber risk.

In the realm of biosecurity, the concern is that AI could lower the barrier to entry for creating harmful biological agents. While AI tools like AlphaFold have revolutionized biology by predicting protein structures, similar technologies could theoretically be used to design toxins or pathogens if not properly guardrailed.

Cybersecurity presents an even more immediate threat. "The current systems are getting pretty good at cyber," Hassabis warned, stressing that the industry must ensure that "defenses are stronger than the offenses." As AI agents become capable of writing and executing complex code, the risk of automated cyberattacks scaling beyond the capacity of human defense teams becomes a tangible reality. This necessitates a proactive approach in which AI is used to patch vulnerabilities faster than it can be used to exploit them.

Comparative Analysis: Current AI vs. Projected AGI

The following table outlines the critical distinctions drawn by Hassabis between the AI models deployed today and the AGI systems anticipated within the next decade.

| Metric                  | Current "Jagged" Intelligence                              | Future Artificial General Intelligence (AGI)                  |
|-------------------------|------------------------------------------------------------|---------------------------------------------------------------|
| Performance Consistency | High variance; brilliant at some tasks, failing at basics  | Uniformly high competence across all cognitive domains        |
| Learning Methodology    | Static training sets; "frozen" after deployment            | Continuous online learning; updates from real-time experience |
| Reasoning Capability    | Pattern matching and statistical prediction                | Causal reasoning, hypothesis generation, and world modeling   |
| Autonomy Level          | Passive tool requiring human prompting                     | Agentic; capable of long-term planning and independent action |
| Primary Risk Factor     | Hallucination and bias                                     | Misalignment, loss of control, and dual-use proliferation     |

A Call for Global Cooperation

The borderless nature of digital intelligence makes it a unique regulatory challenge. Hassabis argued that no single country can effectively contain the risks of AI, as data and models flow instantaneously across jurisdictions. He called for an international framework similar to those established for nuclear energy or climate change, though he acknowledged the difficulty of achieving this in the current fragmented geopolitical climate.

"It is becoming an incredibly important convening point for international dialogue," Hassabis said of the summit, praising India's role in facilitating these critical conversations. He explicitly positioned India as a future "powerhouse for AI," citing the country's depth of engineering talent and its rapid adoption of digital infrastructure.

However, the path to global cooperation is fraught with tension. Different nations are currently prioritizing different aspects of AI policy—some focusing on innovation dominance, others on strict safety compliance. Hassabis's message was clear: without a minimum set of global standards, particularly regarding the deployment of autonomous agents and biosecurity safeguards, the world risks a "race to the bottom" on safety.

The Era of Scientific Discovery

Despite the heavy focus on risks, the core of Hassabis's message remained rooted in the transformative potential of AI for science. He described the coming decade as a "new Renaissance," where AI tools will unlock mysteries in physics, biology, and materials science that have stumped researchers for decades.

This optimism is backed by DeepMind's own track record. From mastering the game of Go to solving the protein folding problem, the company has consistently demonstrated that AI can crack complex, long-standing problems. The transition to AGI, in Hassabis's view, is not just about building smarter chatbots, but about creating the ultimate tool for knowledge expansion. "I have always believed AI would be one of the most important and beneficial technologies ever invented," he reflected, noting that his career-long pursuit has been driven by the desire to accelerate scientific discovery.

Creati.ai Analysis: Preparing for the Agentic Shift

From the perspective of Creati.ai, Hassabis's comments at the India AI Impact Summit signal a critical shift in the industry's narrative. We are moving away from the initial awe of generative text and images toward the serious, messy work of building reliable, autonomous agents.

For enterprises and developers, the "jagged" nature of current models is a known friction point. The promise of AGI implies a future where AI reliability is no longer a roll of the dice, but a guarantee. However, the timeline of 5 to 10 years suggests that businesses must remain agile—investing in current tools while preparing their infrastructure for a radical jump in capability.

The emphasis on security also indicates that the next wave of AI products will likely face stricter scrutiny regarding their "dual-use" potential. We anticipate a surge in demand for AI security platforms—tools specifically designed to monitor, audit, and firewall agentic AI systems. As the industry digests Hassabis's warning, the focus will likely turn to "defense-first" AI development, ensuring that the systems we build today do not become the vulnerabilities of tomorrow.
