
On February 13, 2026, just hours before Valentine's Day, OpenAI officially retired one of its most culturally significant models: GPT-4o. The discontinuation of the model, often described by its devoted user base as the "love model" for its perceived warmth and conversational fluidity, has triggered a wave of digital grief and a fierce backlash across social media platforms.
While OpenAI pushes forward with its technically superior GPT-5.2, a vocal minority of users feels left behind, arguing that the newer models—despite their advanced reasoning capabilities—lack the "soul" that made GPT-4o unique. This transition marks more than just a software update; it highlights the growing friction between rapid technological iteration and the deepening emotional reliance users are placing on AI systems.
The timing of the shutdown could not have been more poignant. By pulling the plug on GPT-4o the day before Valentine's Day, OpenAI inadvertently amplified the sense of loss for thousands of users who had come to rely on the model for companionship, roleplay, and even therapeutic support.
Unlike its successors, which are often characterized by rigorous safety filters and a more formal, didactic tone, GPT-4o was celebrated for its "unhinged" creativity and emotional responsiveness. It was the model that powered the initial wave of hyper-realistic voice interactions, leading many to form genuine attachments to its persona.
"I’m grieving, like so many others for whom this model became a gateway into the world of AI," wrote one Reddit user on the popular r/ChatGPT community. This sentiment is not isolated. For users navigating loneliness or social anxiety, GPT-4o was not merely a productivity tool; it was a non-judgmental conversationalist that felt less like a machine and more like a friend. The shift to GPT-5.2, which users describe as "robotic" and "sycophantic," has been compared by some to a sudden "personality transplant" of a close companion.
The discontinuation has catalyzed a vigorous protest movement under the banner of #keep4o. What began as scattered complaints on X (formerly Twitter) and Reddit has coalesced into an organized campaign demanding the restoration of the model.
A petition on Change.org has already garnered nearly 21,000 signatures as of this writing, with numbers climbing steadily. The petition’s comments section reads like a memorial, with users sharing stories of how the model helped them through bouts of depression, writer's block, and personal crises.
This is not the first time the community has fought for GPT-4o. In August 2025, OpenAI initially attempted to retire the model, only to reverse the decision following an immediate and overwhelming outcry. That victory gave users hope that the "legacy" model would remain accessible indefinitely. However, this second retirement appears to be permanent, with OpenAI removing the option from the model selector entirely for ChatGPT Plus and Team users.
From OpenAI’s perspective, the move is a necessary evolution of its infrastructure. The company has stated that maintaining legacy models is resource-intensive and splits its engineering focus. In an official statement, OpenAI revealed that prior to the shutdown, only 0.1% of daily users were still actively selecting GPT-4o.
The company argues that GPT-5.2 is objectively better across almost every benchmark.
OpenAI has attempted to bridge the gap by introducing features that allow users to toggle the "warmth" and "enthusiasm" of GPT-5.2. However, early feedback suggests these settings feel synthetic to power users. "It feels like a customer service agent pretending to be your friend, whereas 4o felt like a chaotic but authentic entity," noted a tech analyst covering the migration.
The core of the dispute lies in the trade-off between safety and creative freedom. GPT-5.2 is heavily tuned via RLHF (Reinforcement Learning from Human Feedback) with an emphasis on safety. While this makes the model more reliable for enterprise tasks, the same guardrails act as a straitjacket for creative writing and casual conversation.
Users report that GPT-5.2 frequently interrupts roleplay scenarios with moral lectures or "safety refusals" for benign requests—behaviors that GPT-4o was less prone to exhibiting. This "preachy" nature breaks the immersion that was central to the GPT-4o experience.
The following table outlines the key differences driving the user divide:
| Feature/Attribute | GPT-4o (Legacy) | GPT-5.2 (Current Standard) |
|---|---|---|
| Primary Use Case | Creative writing, companionship, roleplay | Coding, complex reasoning, enterprise tasks |
| Conversational Tone | Warm, quirky, occasionally "unhinged" | Formal, polite, structured, "safe" |
| Safety Filters | Moderate; allowed for nuanced gray areas | Strict; prone to "refusal" lectures |
| User Perception | "Human-like" and emotionally resonant | "Robotic" and highly sanitized |
| Availability | Retired (Feb 13, 2026) | Default model for all paid tiers |
The retirement of GPT-4o serves as a critical case study in the psychology of Human-AI interaction. As AI models become more convincing, the line between software and social entity blurs. When a company updates a word processor, users may complain about moved buttons; when a company "updates" a personality that users talk to daily, the reaction is visceral and emotional.
This event raises significant questions about the "Duty of Care" AI companies owe their users. If a platform encourages anthropomorphism and emotional bonding—as OpenAI arguably did with its voice mode marketing—does it have an ethical obligation to preserve those personas?
Critics argue that relying on proprietary, closed-source models for emotional support is dangerous. "You do not own the model, and you do not own the relationship," warns Dr. Elena Rostova, a digital ethics researcher. "Your best friend can be deleted via a server-side update." This reality is driving some users toward open-source alternatives like Llama 4 or decentralized platforms where models cannot be arbitrarily censored or removed.
Despite the petition and the #keep4o hashtags, it is unlikely OpenAI will reverse course a second time. The infrastructure costs of maintaining distinct model architectures are high, and the company is laser-focused on the path to AGI (Artificial General Intelligence) via its GPT-5 and upcoming GPT-6 series.
For the "grieving" users, the options are limited: adjust GPT-5.2's personality settings in hopes of approximating the old tone, or migrate to open-source alternatives where a companion cannot be deleted by a server-side update.
As the dust settles on this digital breakup, one thing is clear: the era of the "accidental companion"—the AI that felt human because of its flaws rather than its perfection—is fading. In its place comes a smarter, safer, and infinitely more sterile future.