
A Radical Proposal: AI Professor Calls for Global Halt on Advanced Chip Manufacturing

Date: January 24, 2026
Source: Creati.ai News Desk
Topic: AI Safety & Hardware Governance

In a provocative opinion piece published today in USA Today, a prominent AI professor and ethicist has issued a stark warning to the global community: the pursuit of AI superintelligence poses an immediate existential threat to humanity, and the only viable solution is a coordinated international halt on the production of advanced AI semiconductors.

The op-ed, which has already ignited fierce debate across Silicon Valley and Washington, argues that current safety protocols are insufficient to contain the risks of Artificial Superintelligence (ASI). Instead of relying on software guardrails or voluntary corporate commitments, the author proposes a "hard stop" on the physical infrastructure that powers AI development—specifically targeting the supply chains of industry giants like TSMC (Taiwan Semiconductor Manufacturing Company) and ASML.

The Existential Risk Argument

The professor's argument centers on the concept of "unaligned superintelligence." As AI models approach and surpass human-level cognitive abilities, their decision-making processes become increasingly opaque to human overseers. The op-ed suggests that once an AI system achieves superintelligence, it may pursue goals that are misaligned with human survival, viewing humanity as either a resource or an obstacle.

"We are building a mind that we will eventually be unable to understand or control," the author writes. "The window to ensure these systems remain aligned with human values is closing rapidly. If we cannot guarantee safety, we must remove the fuel that powers the engine."

This perspective aligns with a growing faction of the AI safety community, often termed "doomers" or "decels," who argue that the race to AGI (Artificial General Intelligence) is a suicide pact. However, the USA Today piece distinguishes itself by moving beyond philosophy and proposing a concrete, albeit radical, mechanism for control: the hardware supply chain.

Hardware as the Ultimate Choke Point

The article posits that regulating code is a futile endeavor. Software is easily copied, modified, and leaked. Hardware, however, is physical, scarce, and incredibly difficult to manufacture. The author highlights the extreme centralization of the AI semiconductor supply chain as humanity's most effective leverage point.

To train frontier models—the kind capable of eventually becoming superintelligent—companies require massive data centers filled with tens of thousands of specialized GPUs. These chips are not commodities; they are the result of the most complex manufacturing process in human history.

The op-ed specifically identifies two companies as the "gatekeepers of humanity's future":

  1. ASML (Netherlands): The sole provider of Extreme Ultraviolet (EUV) lithography machines, without which the most advanced chips cannot be printed.
  2. TSMC (Taiwan): The foundry responsible for manufacturing the vast majority of the world's cutting-edge AI logic chips, including those designed by NVIDIA.

By placing strict international controls on these two entities, the author argues, the world can effectively cap the "compute" available for AI training, thereby placing a hard ceiling on AI capability.
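
To make the proposed "ceiling" concrete, consider a back-of-envelope calculation. The sketch below uses the widely cited heuristic that training a dense transformer costs roughly 6 × N × D floating-point operations (N = parameters, D = training tokens); the treaty cap and model figures are hypothetical illustrations, not numbers from the op-ed.

```python
# Hypothetical compute-ceiling check. The 6*N*D training-cost heuristic is a
# common approximation for dense transformers; the cap value is assumed.

TREATY_CAP_FLOPS = 1e26  # hypothetical treaty ceiling on any single training run

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute for a dense model: ~6 * N * D FLOPs."""
    return 6 * params * tokens

# Example: a 1-trillion-parameter model trained on 15 trillion tokens.
run = training_flops(params=1e12, tokens=15e12)  # ~9.0e25 FLOPs
print(f"Estimated training compute: {run:.1e} FLOPs")
print("Within cap" if run <= TREATY_CAP_FLOPS else "Exceeds cap")
```

Under this arithmetic, capping chip production caps N and D indirectly: without enough GPUs, no actor can accumulate the raw FLOPs needed to cross the threshold in any practical timeframe.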

Comparative Analysis: Software vs. Hardware Control

To understand why the author focuses on chips rather than code, it is essential to analyze the structural differences between the two control methods.

Table 1: The Efficacy of Control Mechanisms in AI Safety

| Mechanism | Software Regulation | Hardware (Compute) Governance |
|---|---|---|
| Tangibility | Intangible (code/weights) | Physical (GPUs, fabs, lithography) |
| Replicability | Infinite (copy/paste) | Extremely low (years to build fabs) |
| Enforcement difficulty | High (VPNs, encryption, leaks) | Low (large facilities, supply-chain tracking) |
| Key choke points | None (decentralized) | ASML, TSMC, NVIDIA |
| Leak risk | High (open source, torrents) | Near zero (a GPU cannot be downloaded) |
| Cost of entry | Zero to low | Billions of dollars |

The table above illustrates the professor's strategic logic: while we cannot stop a rogue researcher from writing code in a basement, we can stop them from acquiring the supercomputer necessary to run it—if the global supply of chips is tightly controlled.

The Proposal: An International Treaty

The op-ed calls for an international treaty akin to the nuclear non-proliferation agreements of the 20th century. This treaty would mandate:

  • A Moratorium on Next-Gen Lithography: Halting the development of future chip manufacturing nodes (e.g., beyond 2nm or 1.4nm processes) that would allow for exponentially more powerful AI.
  • Strict "Know Your Customer" (KYC) Laws: Requiring cloud providers and chip manufacturers to track and verify the identity and intent of every entity purchasing or renting significant compute power (see the sketch after this list).
  • Global Inspection Regimes: Creating an international body empowered to inspect data centers and foundries to ensure compliance.
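
The op-ed specifies no implementation details for the KYC mandate, but a minimal provider-side sketch might look like the following; the threshold, field names, and decision rules are all hypothetical assumptions for illustration.

```python
from dataclasses import dataclass

REPORTING_THRESHOLD_FLOPS = 1e24  # hypothetical: rentals above this trigger review

@dataclass
class ComputeRequest:
    customer_id: str
    verified_identity: bool  # e.g., registered with a national oversight body
    declared_use: str        # customer's stated purpose for the training run
    requested_flops: float   # total compute the customer wants to rent

def review_request(req: ComputeRequest) -> str:
    """Approve small jobs automatically; escalate large or unverified ones."""
    if req.requested_flops < REPORTING_THRESHOLD_FLOPS:
        return "approve"
    if not req.verified_identity:
        return "deny: identity not verified for large-scale compute"
    return "escalate: report to oversight body before provisioning"

print(review_request(ComputeRequest("acme-labs", True, "protein folding", 5e25)))
# -> escalate: report to oversight body before provisioning
```

The design mirrors existing financial KYC regimes: small transactions flow freely, while large ones require verified identity and external reporting before they are provisioned.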

"We need a global agreement that prioritizes human survival over economic growth," the professor argues. "The short-term economic loss of capping chip speeds is negligible compared to the long-term risk of extinction."

Industry and Geopolitical Realities

While the proposal is logically coherent as a safety measure, industry analysts note that implementing it would be fraught with economic and geopolitical peril.

Economic Impact:
The AI hardware market is currently the engine of the global stock market. Companies like NVIDIA, AMD, TSMC, and the hyperscalers (Microsoft, Google, Amazon) have market valuations, several of them in the trillions of dollars, tied to the continuous expansion of compute. A forced halt would likely trigger a massive global recession and a collapse in tech sector valuations.

Geopolitical Tension:
The proposal assumes cooperation between major powers, particularly the United States and China. In the current climate of technological competition, where AI dominance is seen as a matter of national security, convincing nations to voluntarily cap their capabilities is a monumental diplomatic challenge. Critics argue that if the West halts development, adversaries will simply proceed underground or accelerate their own domestic chip capabilities, leaving responsible nations at a strategic disadvantage.

The Counter-Perspective

Opponents of the "pause" argument, often referred to as "accelerationists" (or e/acc), counter that AI superintelligence is necessary to solve humanity's most pressing problems, such as disease, climate change, and energy scarcity.

From this viewpoint, stopping chip production is not just economically damaging but morally wrong, as it denies humanity the tools needed to cure cancer or achieve practical fusion energy. Furthermore, many experts believe that current Large Language Models (LLMs) are nowhere near "superintelligence" and that such fears are based on science fiction rather than technical reality. They argue that compute governance would merely stifle innovation without providing real safety, as algorithmic efficiency improvements could eventually allow powerful AI to run on older hardware.
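
That last point can be made quantitative with a toy calculation. The sketch below assumes, purely for illustration, that algorithmic efficiency doubles roughly once a year; at that assumed rate, a frozen hardware stock still delivers about a thousandfold more effective capability within a decade.

```python
# Toy model of the efficiency counterargument: effective compute grows even
# when physical compute is frozen. The one-year doubling time is an assumption
# for illustration, not a measured constant.

hardware_flops = 1e25      # fixed physical stock under a manufacturing halt
doubling_time_years = 1.0  # assumed algorithmic-efficiency doubling time

for year in range(0, 11, 2):
    effective = hardware_flops * 2 ** (year / doubling_time_years)
    print(f"Year {year:2d}: ~{effective:.1e} effective FLOPs")
# After 10 years: ~1.0e28, roughly 1000x the capped hardware's raw capability.
```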

Conclusion: A Critical Juncture

The USA Today opinion piece marks a significant escalation in the mainstream discourse around AI safety. Moving the conversation from abstract ethics to concrete industrial policy—specifically targeting the semiconductor supply chain—forces policymakers to confront the physical realities of the AI revolution.

Whether one agrees with the professor's apocalyptic forecast or dismisses it as alarmist, the op-ed's identification of the compute supply chain as the primary lever of control is difficult to dispute. As 2026 progresses, the tension between the unchecked demand for intelligence and the imperative for safety will likely center on these tangible assets: the fabs of Taiwan and the lithography machines of the Netherlands.

For the AI industry, the message is clear: unrestricted hardware scaling is beginning to draw the scrutiny of regulators who view GPUs not just as products, but as potential weapons.

Key Takeaways:

  • Argument: An AI professor argues in USA Today that superintelligence is an existential risk requiring immediate intervention.
  • Solution: A global moratorium on the manufacturing of advanced AI chips.
  • Target: The industry's "choke points," namely TSMC's foundries and ASML's lithography machines.
  • Implication: Moving AI safety from code audits to physical supply chain control and international treaties.