
Global Experts Warn of "Deep Uncertainty" in AI Trajectory Ahead of India Summit

A global coalition of over 100 leading artificial intelligence experts has released the Second International AI Safety Report, issuing a stark warning about the unpredictable evolution of general-purpose AI systems. Published just days before the high-stakes India AI Impact Summit in New Delhi, the report highlights a critical disconnect between the rapid advancement of AI capabilities and the "insufficient" safeguards currently in place to manage them.

Produced under the chairmanship of Turing Award-winning scientist Yoshua Bengio, the report serves as a scientific consensus document intended to guide policymakers at the upcoming summit. While the report acknowledges AI's immense potential to drive economic growth and scientific discovery, its findings paint a complex picture of a technology advancing at a breakneck pace, often outpacing humanity's ability to understand or control it.

The Paradox of "Jagged" Intelligence

One of the report's most significant findings is the phenomenon of "jagged" performance in state-of-the-art AI models. While these systems have achieved "gold-medal performance" on International Mathematical Olympiad questions and exceeded PhD-level expertise on specific science benchmarks, they continue to fail spectacularly at tasks that would be trivial for a human.

This inconsistency creates a dangerous illusion of competence. Users may over-trust systems in critical scenarios—such as medical diagnosis or legal analysis—based on their performance in other high-level domains. The report notes that this unpredictability is compounded by the emergence of agentic systems, which can act autonomously to complete multi-step tasks.

"How and why general-purpose AI models acquire new capabilities and behave in certain ways is often difficult to predict, even for developers," the report states.

The experts warn that as these agentic systems become more integrated into the economy, the erosion of direct human oversight could allow "dangerous capabilities" to go undetected until after deployment.

Escalating Risks: From Biological Threats to Systemic Inequality

The 2026 report significantly expands on the risk categories identified in the inaugural 2025 edition. It presents new empirical evidence suggesting that the barriers to entry for malicious actors are falling.

Key areas of concern include:

  • Biological & Cyber Risks: AI systems now match or exceed expert performance in tasks relevant to biological weapons development, such as troubleshooting virology lab protocols. Similarly, these tools are being used to lower the skill threshold for launching sophisticated cyberattacks.
  • Deepfakes & Non-Consensual Imagery: The proliferation of AI-generated content is fueling a rise in fraud and non-consensual intimate imagery (NCII), which disproportionately targets women and girls. The report cites a study finding that 19 out of 20 popular "nudify" apps specialize in this form of abuse.
  • Systemic Labor Disruption: Beyond immediate safety hazards, the report identifies broader structural risks. The integration of AI into labor markets threatens to exacerbate inequality, with the potential for massive displacement in sectors previously considered safe from automation.

Table: Critical Risk Categories Identified in the 2026 Report

Risk Category | Primary Concern | Current Status
Malicious Use | Lowering barriers for cyberattacks and bioweapons | High urgency; active exploitation observed
Systemic Risks | Labor displacement and widening global inequality | Long-term threat; requires policy intervention
Technical Failures | Loss of control over autonomous agentic systems | Deeply uncertain; safeguards are "fallible"
Misinformation | Scale of AI-generated influence operations | Rapidly growing; impacts democratic processes

The Global Divide: A Tale of Two Worlds

As the world prepares for the India AI Impact Summit, the report casts a spotlight on the uneven distribution of AI's benefits. While adoption has been "swift," with at least 700 million people using leading AI systems weekly, this usage is heavily concentrated in the Global North.

In contrast, adoption rates across much of Africa, Asia, and Latin America remain below 10%. This "digital divide" poses a severe risk: if advanced AI becomes the primary engine of future economic growth, nations without access to the technology—or the infrastructure to support it—could be left permanently behind.

This disparity aligns with the core themes of the upcoming summit in New Delhi. Branded under the "Sutras" of People, Planet, and Progress, the summit aims to shift the global conversation from theoretical safety debates to practical, inclusive outcomes that benefit the Global South.

A Fracture in Global Consensus?

In a notable geopolitical development, the United States declined to sign the final version of the report, despite providing feedback during the drafting process. This marks a departure from the previous year's unanimity. While the move is described by some observers as "largely symbolic," it underscores the growing tension between rapid innovation and international regulatory frameworks.

The US stance contrasts with the position of other major powers, including the European Union and China, who have backed the report's findings. This divergence may set the stage for contentious debates at the New Delhi summit, as nations struggle to balance the "race for AI supremacy" with the need for coordinated global governance.

Looking Ahead to New Delhi

The release of this report sets the agenda for the India AI Impact Summit, scheduled for February 16–20, 2026. Indian officials, including Minister Ashwini Vaishnaw, have emphasized that the gathering will focus on "responsible openness" and "fair access" to compute resources.

For the gathered policymakers, the challenge will be to translate the report's scientific warnings into actionable policy. As the document concludes, current risk management techniques are "improving but insufficient." The world is now looking to New Delhi to bridge the gap between identifying these existential risks and actually mitigating them.
