
The artificial intelligence sector is reeling this week following a series of high-profile resignations from its three most dominant players—Anthropic, OpenAI, and xAI. In a span of just 72 hours, the industry has lost a lead safety researcher, a key strategist, and a co-founder, each departing with messages that range from cryptic warnings about global catastrophe to concrete concerns over commercialization.
These exits occur against a backdrop of intensifying competition, with Anthropic preparing to release its massive Claude Opus 4.6 model and OpenAI pivoting toward advertising-supported revenue. The timing is particularly critical as global tech leaders prepare to converge in New Delhi next week for the inaugural AI Impact Summit 2026.
The most resonant of this week's departures is that of Mrinank Sharma, the now-former lead of Anthropic’s Safeguards Research Team. Sharma, an Oxford-educated machine learning expert who joined the company in 2023, announced his resignation on Monday with a public letter that has since gone viral among industry observers.
Unlike the standard corporate pleasantries that usually accompany such announcements, Sharma’s farewell was a stark, almost philosophical warning. He explicitly stated that the "world is in peril," clarifying that his concern extends beyond AI risks to a "whole series of interconnected crises unfolding in this very moment."
Sharma’s tenure at Anthropic was defined by his work on critical safety initiatives, including defenses against AI-assisted bioterrorism and research into AI sycophancy—the tendency of models to agree with users regardless of truth. However, his resignation letter hinted at deep internal conflicts regarding the company's direction.
"I have repeatedly seen how hard it is to truly let our values govern our actions," Sharma wrote, suggesting that external pressures were forcing compromises on the company's core safety mission. "We constantly face pressures to set aside what matters most."
The researcher indicated he would be returning to the UK to "become invisible for a period of time," expressing an intention to study poetry rather than immediately joining a competitor. This withdrawal from the field highlights a growing sentiment of burnout and moral conflict among the scientists tasked with restraining the very systems they are helping to build.
While Sharma’s exit was framed in existential terms, the resignation of Zoe Hitzig from OpenAI on Tuesday pointed to more tangible shifts in business strategy. Hitzig, a researcher focused on product and safety strategy, left the company citing "deep reservations" about OpenAI’s evolving business model.
Sources close to the matter indicate that Hitzig’s departure was precipitated by internal discussions regarding the introduction of advertising into ChatGPT's interface. As OpenAI continues to seek revenue streams to support its massive compute costs, the shift toward an ad-supported model has raised ethical questions about user manipulation and the integrity of AI-generated responses.
Hitzig’s exit is part of a broader "brain drain" at OpenAI, which has seen its founding team dwindle significantly over the last two years. Her departure underscores the friction between the organization's non-profit roots and its increasingly aggressive for-profit trajectory.
Completing the trifecta of departures is Tony Wu, a co-founder of xAI, Elon Musk’s artificial intelligence venture. Wu announced his resignation late Monday, stating simply that it was "time to move on." While his message was less critical than Sharma’s or Hitzig’s, it arrives during a chaotic restructuring period for the company.
xAI was recently acquired by SpaceX, another Musk-controlled entity, in a move described as a way to "generate computing power" using space-based assets. This consolidation has reportedly unsettled the original leadership team. Wu joins other co-founders like Igor Babuschkin who have recently stepped away, leaving xAI with only a fraction of its original technical leadership.
The following table outlines the significant exits impacting the major AI labs this week:
| Name | Company | Role | Reason for Exit |
| --- | --- | --- | --- |
| Mrinank Sharma | Anthropic | Lead, Safeguards Research Team | Cited "world is in peril" and internal pressure to compromise values |
| Zoe Hitzig | OpenAI | Researcher, Product & Safety Strategy | Concerns over proposed advertising strategy in ChatGPT |
| Tony Wu | xAI | Co-founder | Personal decision amid company restructuring and SpaceX acquisition |
The context for these departures cannot be ignored. Anthropic is reportedly days away from rolling out Claude Opus 4.6, a model expected to post significant gains on current benchmarks. The pressure to finalize and release this model likely contributed to the "pressures to set aside what matters most" that Sharma referenced.
Industry insiders speculate that the race to achieve dominance with the next generation of models is compressing safety timelines. As companies vie for valuations hitting the $350 billion mark, the voice of the safety researcher is increasingly competing with the roar of commercial necessity.
These resignations set a tense stage for the AI Impact Summit 2026, scheduled to begin on February 16 in New Delhi. The summit will host the industry's titans, including Anthropic CEO Dario Amodei, OpenAI’s Sam Altman, and Google’s Sundar Pichai.
The summit was intended to be a showcase of technological progress and international cooperation. However, with the fresh exit of a top safety lead warning of global peril, the agenda is likely to shift. Leaders will now face pointed questions about whether the "governance" they preach on stage is being practiced in their labs.
For Creati.ai, the question remains: If the individuals paid to ensure our safety are leaving because they feel unheard, who is left to guard the guardrails? As the industry pushes forward, the silence of these departing experts may speak louder than any keynote address next week.