
Meta's Superintelligence Labs Unveils 'Avocado' and 'Mango': A Pivot to Autonomous World Models

DAVOS — In a decisive move to reclaim its position at the frontier of artificial intelligence, Meta Platforms has broken its silence on the first major outputs from its secretive Superintelligence Labs (MSL). Speaking at the World Economic Forum in Davos this week, CTO Andrew Bosworth revealed that the company has achieved significant internal breakthroughs with two distinct AI models: Project Avocado, a next-generation text model optimized for high-level reasoning, and Project Mango, a visual intelligence system built on a "world model" architecture.

This announcement marks a critical turning point for Meta. Following the mixed reception of Llama 4 in mid-2025, CEO Mark Zuckerberg executed a sweeping reorganization, establishing the Superintelligence Labs under the leadership of Alexandr Wang. The debut of Avocado and Mango signals that Meta’s aggressive talent acquisition and infrastructure investment are finally bearing fruit, with a public release targeted for Q1 2026.

The Superintelligence Mandate: A Strategic Pivot

The formation of Meta Superintelligence Labs represented a fundamental shift in the company’s AI philosophy. Moving away from the purely product-focused integration of earlier Llama iterations, MSL was tasked with a singular goal: achieving autonomy and deep reasoning.

The roadmap unveiled at Davos suggests that Meta is no longer satisfied with merely powering chatbots on Instagram and WhatsApp. Instead, the company is pivoting toward "agentic" systems—AI that can plan, reason, and execute complex tasks over long horizons.

"The industry has hit a wall with incremental scaling," Bosworth noted during his address. "With Avocado and Mango, we aren't just predicting the next token; we are modeling the underlying logic of the physical and digital worlds."

Project Avocado: Mastering Code and Logic

Project Avocado represents Meta’s direct answer to the growing demand for AI capabilities in software development and complex logical deduction. Unlike its predecessors, which were general-purpose omni-models, Avocado has been trained specifically to close the "reasoning gap" that plagued previous open-source models.

Beyond Pattern Matching

Internal reports suggest that Avocado utilizes a novel architecture that prioritizes "Chain of Thought" (CoT) processing at the pre-training level, rather than just during inference. This allows the model to:

  • Self-Correct Code: Identify and patch vulnerabilities in code without human intervention (a simplified version of this loop is sketched after this list).
  • Multi-Step Planning: Decompose complex logical queries into executable sub-tasks with higher fidelity than GPT-5-class models.
  • Contextual Persistence: Maintain coherent logic streams over significantly longer context windows, essential for enterprise-grade applications.
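To make the self-correction claim concrete, the sketch below shows the kind of generate-test-repair loop that agentic coding systems generally use: run the candidate code's tests, then feed the failure log back to the model for a patch. Meta has not published an API for Avocado, so the `model.generate_patch` call and everything around it is a hypothetical illustration, not a description of the actual system.

```python
import subprocess
import tempfile

def run_tests(source: str) -> tuple[bool, str]:
    """Write the candidate code to a temporary file and run its doctests."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    proc = subprocess.run(
        ["python", "-m", "doctest", path], capture_output=True, text=True
    )
    return proc.returncode == 0, proc.stdout + proc.stderr

def self_correct(model, source: str, max_rounds: int = 3) -> str:
    """Regenerate the code until its tests pass or the round budget is spent."""
    for _ in range(max_rounds):
        ok, log = run_tests(source)
        if ok:
            return source
        # Hypothetical call: Avocado's real interface has not been disclosed.
        source = model.generate_patch(code=source, feedback=log)
    return source
```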

By focusing heavily on coding and logic, Meta aims to capture the developer market that has increasingly consolidated around closed-source proprietary models.

Project Mango: The Physics of Visual Intelligence

While Avocado handles the abstract, Project Mango handles the concrete. Described as a "World Model" rather than a simple image generator, Mango is designed to understand the physics, causality, and temporal continuity of the physical world.

The 'World Model' Advantage

Current generative video models often struggle with "hallucinations" where objects morph unrealistically or defy gravity. Project Mango attempts to solve this by learning the laws of physics alongside pixel generation.

  • Temporal Consistency: Objects in Mango-generated videos maintain their shape, mass, and velocity over time.
  • Interactivity: Early demos hint that users will be able to "interact" with generated scenes, changing variables (like lighting or object placement) while the model recalculates the physical outcome in real time.
  • Multimodal Native: Mango is not just text-to-video; it accepts video input to analyze and predict future frames, effectively acting as a simulator for real-world scenarios (a toy illustration of this frame prediction follows this list).
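As a rough intuition for what "learning the laws of physics alongside pixel generation" means, the toy sketch below swaps the learned latent simulator for explicit Newtonian kinematics: from an object's current state it predicts future frames that keep velocity and position consistent and never push the object through the floor. All names are invented for illustration and say nothing about Mango's actual architecture.

```python
from dataclasses import dataclass

GRAVITY = -9.81    # m/s^2
FRAME_DT = 1 / 30  # seconds per video frame

@dataclass
class ObjectState:
    height: float    # metres above the ground
    velocity: float  # vertical velocity in m/s

def step(state: ObjectState, dt: float = FRAME_DT) -> ObjectState:
    """Advance one frame with basic kinematics in place of a learned simulator."""
    v = state.velocity + GRAVITY * dt
    h = state.height + v * dt
    if h <= 0.0:  # the object rests on the ground instead of passing through it
        return ObjectState(height=0.0, velocity=0.0)
    return ObjectState(height=h, velocity=v)

def rollout(state: ObjectState, frames: int) -> list[ObjectState]:
    """Predict a sequence of future frames from a single initial observation."""
    trajectory = [state]
    for _ in range(frames):
        state = step(state)
        trajectory.append(state)
    return trajectory

# A ball dropped from 2 m accelerates downward and comes to rest on the floor.
print(rollout(ObjectState(height=2.0, velocity=0.0), frames=30)[-1])
```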

Comparative Analysis: The New Landscape

The introduction of these models places Meta in direct confrontation with the industry's current leaders. The following comparison highlights how Avocado and Mango differentiate themselves from the existing ecosystem.

Table 1: Competitive Landscape Analysis (Projected Specs)

Model / Project       | Primary Focus          | Key Differentiator               | Target Architecture
Meta Project Avocado  | Logic & Coding         | Deep reasoning & self-correction | CoT-Integrated Transformer
Meta Project Mango    | Visual Simulation      | Physics-based "World Model"      | Latent Diffusion + Physics Engine
OpenAI o-Series       | General Reasoning      | Broad knowledge base             | Reinforcement Learning
Google Gemini Ultra   | Multimodal Integration | Native multimodal processing     | Mixture-of-Experts (MoE)

Strategic Implications for the AI Industry

The unveiling of Avocado and Mango is more than a product announcement; it is a validation of Meta’s controversial "year of intensity." The decision to bring in external leadership like Alexandr Wang and the massive capital expenditure on H200 clusters appear to have corrected course after the Llama 4 stumble.

The Open Source Question

A critical question remains unanswered: Will Meta open-source Avocado and Mango?
Historically, Meta has championed open weights. However, the advanced capabilities of these models—particularly Mango’s potential for realistic simulation and Avocado’s cyber-offensive capabilities—may force a change in strategy. Bosworth hinted at a "tiered release," potentially reserving the most capable versions of these models for enterprise partners or releasing them under stricter safety licenses.

Road to Release: Q1 2026 and Beyond

As we approach the planned Q1 2026 release, the industry is bracing for a new wave of competition. Meta’s pivot to "World Models" and "Reasoning Agents" suggests that the next battleground for AI is not just about who can generate the best text or image, but who can build the most accurate simulation of reality.

Development Timeline & Milestones

Phase   | Milestone         | Status            | Key Deliverables
Phase 1 | Internal Training | Completed         | Core model architecture validated; 100k+ GPU cluster utilization
Phase 2 | Red Teaming       | In Progress       | Safety alignment; adversarial testing for coding vulnerabilities
Phase 3 | Partner Beta      | Q1 2026 (Planned) | API access for select enterprise partners; integration into Ray-Ban Meta smart glasses
Phase 4 | Public Release    | H1 2026           | Open-weight release (TBD) or general API availability

For developers and enterprises, the message from Davos is clear: Meta is back in the race, and this time, they are building for a world where AI doesn't just chat—it acts.
