
In a defining moment for the artificial intelligence industry, Meta Platforms has officially unveiled "Meta Compute," a new top-level initiative designed to overhaul and aggressively expand its AI infrastructure. Announced by CEO Mark Zuckerberg, the division represents a strategic pivot for the social media giant, transitioning its focus toward owning the physical "rails" of the next technological paradigm. With plans to deploy tens of gigawatts of compute capacity within the decade and investments projected to reach hundreds of billions of dollars, Meta is positioning itself to build the foundation for what Zuckerberg terms "personal superintelligence."
The launch of Meta Compute marks a significant restructuring of Meta’s internal operations. Historically, infrastructure at Meta served the needs of its application family—Facebook, Instagram, and WhatsApp. However, the exponential demands of training and running advanced AI models, such as the rumored Llama 4 "Behemoth," have necessitated a dedicated entity focused solely on compute scale.
Mark Zuckerberg announced the initiative via a post on Threads, stating, "How we engineer, invest, and partner to build this infrastructure will become a strategic advantage." The goal is not merely to keep pace with competitors like Google and Microsoft but to surpass them by securing energy and hardware independence.
The leadership structure of Meta Compute reflects this high-stakes ambition. The division will be co-led by Santosh Janardhan, Meta’s long-time Head of Global Infrastructure, and Daniel Gross, the former CEO of Safe Superintelligence who joined Meta in the summer of 2025. This dual-leadership model splits the focus between technical execution and long-term strategic capacity planning.
Meta Compute Leadership Structure
| Executive | Role | Primary Responsibilities |
|---|---|---|
| Santosh Janardhan | Co-Head, Meta Compute | Technical architecture, custom silicon (MTIA), software stack, and day-to-day data center fleet operations. |
| Daniel Gross | Co-Head, Meta Compute | Long-term capacity strategy, supplier partnerships, industry analysis, and business modeling. |
| Dina Powell McCormick | President & Vice Chairman | Sovereign and government partnerships, focusing on financing and regulatory alignment for global infrastructure deployment. |
The technical specifications outlined in the announcement are staggering. While today's largest conventional data centers draw on the order of tens to hundreds of megawatts, Meta Compute is targeting "tens of gigawatts" by 2030, with a long-term vision of reaching hundreds of gigawatts. To put this in perspective, a single gigawatt is roughly enough power to supply hundreds of thousands of homes, or a city the size of San Francisco.
This expansion requires a fundamental rethink of data center design. Meta is reportedly breaking ground on several massive new facilities, including projects codenamed "Prometheus" and "Hyperion." These "titan clusters" are designed to house millions of GPUs and Meta’s proprietary MTIA (Meta Training and Inference Accelerator) chips.
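To connect those figures, here is a rough back-of-envelope sketch in Python; the household and per-accelerator power draws are illustrative assumptions rather than numbers from Meta's announcement.

```python
# Illustrative assumptions (not Meta figures): an average US household draws
# ~1.2 kW on a continuous basis, and a modern AI accelerator consumes ~1.5 kW
# all-in once cooling and networking overhead are included.

GIGAWATT_W = 1_000_000_000   # one gigawatt, in watts

household_draw_w = 1_200     # assumed average continuous household load
accelerator_draw_w = 1_500   # assumed all-in draw per GPU/MTIA-class chip

homes_per_gw = GIGAWATT_W / household_draw_w
accelerators_per_gw = GIGAWATT_W / accelerator_draw_w

print(f"Homes supplied by 1 GW:        ~{homes_per_gw:,.0f}")
print(f"Accelerators powered by 1 GW:  ~{accelerators_per_gw:,.0f}")
# ~830,000 homes and ~670,000 accelerators per gigawatt: a cluster housing
# "millions of GPUs" therefore implies multiple gigawatts of dedicated capacity.
```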
The move to custom silicon is central to Meta Compute’s strategy. By reducing reliance on third-party hardware providers like NVIDIA, Meta aims to control its supply chain and optimize performance per watt—a critical metric when operating at the gigawatt scale.
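A minimal cost sketch shows why performance per watt dominates the economics at this scale; the electricity price and the 10% efficiency figure below are assumptions for illustration, not Meta data.

```python
# Illustrative assumptions (not Meta figures): 1 GW of continuous IT load,
# an industrial electricity price of $0.06/kWh, and a 10% efficiency gain
# from better performance per watt.

power_kw = 1_000_000          # 1 GW expressed in kilowatts
price_per_kwh = 0.06          # assumed industrial rate, in dollars
hours_per_year = 24 * 365

annual_cost = power_kw * hours_per_year * price_per_kwh
savings_10_pct = annual_cost * 0.10

print(f"Annual energy bill for 1 GW:  ${annual_cost / 1e6:,.0f}M")
print(f"Value of a 10% perf/W gain:   ${savings_10_pct / 1e6:,.0f}M per year")
# Roughly half a billion dollars per gigawatt-year, so even single-digit
# efficiency gains from custom silicon are worth hundreds of millions annually.
```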
Perhaps the most critical challenge for Meta Compute is energy. The power grid in its current state cannot support the localized density required for gigawatt-scale AI clusters. Consequently, Meta is aggressively pursuing independent energy solutions.
Industry reports indicate that Meta has secured preliminary agreements with nuclear energy providers, including Vistra, TerraPower, and Oklo. These partnerships aim to deploy Small Modular Reactors (SMRs) directly adjacent to data center sites, creating "behind-the-meter" power generation that bypasses the public grid’s bottlenecks.
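How many reactors a gigawatt-class campus would need depends heavily on reactor size. The sketch below uses assumed capacity classes purely for illustration and does not describe the actual designs from Vistra, TerraPower, or Oklo.

```python
import math

# Illustrative reactor capacity classes; real SMR designs vary widely, so these
# numbers are assumptions for sizing only, not vendor specifications.

CLUSTER_DEMAND_MW = 1_000    # one gigawatt of sustained demand

assumed_smr_capacities_mw = {
    "microreactor class (~15 MW)": 15,
    "small SMR class (~80 MW)": 80,
    "large SMR class (~300 MW)": 300,
}

for label, capacity_mw in assumed_smr_capacities_mw.items():
    units = math.ceil(CLUSTER_DEMAND_MW / capacity_mw)
    print(f"{label}: ~{units} units per GW of demand")
# Anywhere from a handful to several dozen reactors per gigawatt, which is why
# co-locating generation next to the data center beats waiting on grid upgrades.
```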
Key Infrastructure Targets
| Metric | Current Status (Est.) | 2030 Target | Long-Term Goal |
|---|---|---|---|
| Compute Capacity | Multi-Megawatt Clusters | Tens of Gigawatts | Hundreds of Gigawatts |
| Primary Energy Source | Grid Mix (Renewables/Fossil) | Grid + On-site Nuclear/SMRs | Sovereign Energy Independence |
| Hardware Focus | Primarily NVIDIA H100/Blackwell | Hybrid NVIDIA + Custom MTIA | Dominance of Custom Silicon |
| Investment Scale | ~$35-40 Billion/Year (CapEx) | >$72 Billion/Year | Total >$600 Billion by 2035 |
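As a rough consistency check on those figures, an assumed straight-line spending ramp between the table's yearly values (an illustration, not company guidance) already lands above the long-term total:

```python
# Assumed linear CapEx ramp between the table's figures: ~$40B in 2025 rising
# to ~$72B in 2030, then continuing to climb at the same annual rate through 2035.

capex_2025, capex_2030 = 40.0, 72.0                  # $ billions, from the table
annual_step = (capex_2030 - capex_2025) / (2030 - 2025)

cumulative = sum(
    capex_2025 + annual_step * (year - 2025) for year in range(2025, 2036)
)

print(f"Cumulative 2025-2035 CapEx under this ramp: ~${cumulative:,.0f}B")
# ~$790B, comfortably above the ">$600 Billion by 2035" total even without the
# far steeper spending a hundreds-of-gigawatts vision would imply.
```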
From the perspective of Creati.ai, Meta’s move signifies a shift in how AI value is captured. For the past decade, value accrued to software platforms and aggregators. In the AGI (Artificial General Intelligence) era, value is shifting to the infrastructure layer—the physical assets required to generate intelligence.
By creating Meta Compute, Zuckerberg is signaling that he views compute power not as a commodity to be rented from cloud providers like AWS or Azure, but as a sovereign asset. This "sovereign compute" approach allows Meta to control its own hardware supply chain, energy sourcing, and scaling timeline rather than competing for rented capacity on another provider's terms.
The sheer magnitude of this investment, projected to exceed $600 billion by 2035, has unsettled some investors. Meta's stock has seen volatility following the announcement, reflecting fears that capital expenditures (CapEx) will erode margins in the short term without immediate revenue generation.
Unlike Microsoft and Google, which can immediately offset infrastructure costs by renting capacity to enterprise cloud customers, Meta consumes its compute internally. This places immense pressure on its core advertising business to fund the build-out until AI-driven revenue streams (such as business agents or advanced creative tools) mature.
However, the appointment of Dina Powell McCormick suggests a potential secondary revenue stream: sovereign AI. By partnering with governments who wish to build their own national AI models but lack the infrastructure, Meta could potentially lease its "Meta Compute" capacity, effectively becoming a specialized cloud provider for nations rather than corporations.
Meta Compute is more than a reorganization; it is a declaration of intent. As the AI arms race intensifies, the bottleneck is shifting from data and algorithms to power and silicon. By committing hundreds of billions to solve this physical constraint, Meta is betting the company on the belief that the future belongs to those who own the generator, not just the lightbulb. For the broader AI ecosystem, this guarantees that the pace of model scaling will not slow down—in fact, with gigawatt clusters on the horizon, it is just getting started.