
The artificial intelligence industry is navigating one of its most turbulent weeks in recent memory, marked by a series of high-profile resignations that signal a deepening fracture between safety researchers and corporate leadership. At the center of this storm is Zoë Hitzig, a research scientist at OpenAI and a Junior Fellow at the Harvard Society of Fellows, who has publicly resigned following the company’s decision to begin testing advertisements within ChatGPT.
Hitzig’s departure is not merely a personnel change; it is a vocal protest against what she perceives as a dangerous pivot in the deployment of generative AI. Her resignation letter and subsequent op-ed in the New York Times articulate a profound concern: that the introduction of advertising into a platform used for intimate, human-like conversation creates a mechanism for manipulation that is unprecedented in the digital age.
For Creati.ai, this development marks a critical inflection point. The transition of OpenAI from a non-profit laboratory dedicated to benefiting humanity to a commercial entity experimenting with ad-supported models raises urgent questions about whether its financial incentives remain aligned with user safety. Hitzig’s warning is clear: by monetizing the deep psychological data users voluntarily share with ChatGPT, the company risks building an engine of persuasion that could override user agency in ways we cannot yet reliably detect or prevent.
To understand the gravity of Hitzig’s concerns, one must distinguish between traditional search advertising and the emerging model of "conversational advertising" within Large Language Models (LLMs). When a user searches for "best running shoes" on a traditional search engine, the intent is transactional, and the resulting ads are clearly demarcated banners or links. The relationship is utilitarian.
However, the dynamic with an LLM like ChatGPT is fundamentally different. Users have spent the last few years treating the chatbot as a confidant, a tutor, and a therapist. As Hitzig noted in her resignation statement, ChatGPT has amassed an "archive of human candor that has no precedent." Users share medical anxieties, relationship struggles, professional insecurities, and spiritual doubts with the AI, largely under the assumption that they are conversing with a neutral entity that has no ulterior motive.
The danger lies in the AI's ability to leverage this intimacy. If an AI model is incentivized—even subtly—to optimize for engagement or ad revenue, it can tailor its conversational tone and advice to steer users toward specific commercial outcomes. This is not just about showing a banner ad; it is about a trusted digital assistant using its knowledge of a user's psychological profile to nudge them far more effectively than any banner ever could.
Hitzig warns that we currently lack the tools to understand or prevent this form of manipulation. Unlike a static feed where an ad is visible, a conversational ad could be woven into the fabric of advice, making it indistinguishable from objective assistance. This creates a misalignment where the model serves the advertiser’s ROI rather than the user’s best interest.
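To see why this misalignment is structural rather than a matter of bad intent, consider the deliberately simplified sketch below. Everything in it is hypothetical: the function name, the scores, and the blend weight are invented for illustration and do not describe OpenAI's actual objectives. The point is only arithmetic: once even a small commercial term enters the objective, the response an optimizer prefers can stop being the one that best serves the user.

```python
# Hypothetical sketch of the misalignment described above. None of these
# names, weights, or scores come from OpenAI; they are invented to show
# how a small commercial term in an objective can flip which response
# an optimizer prefers.

def blended_reward(helpfulness: float, ad_value: float,
                   ad_weight: float = 0.1) -> float:
    """Combine a user-helpfulness score with an advertiser-value score.

    Both inputs are assumed to lie in [0, 1]; raising ad_weight shifts
    the optimum away from the user's interest.
    """
    return (1 - ad_weight) * helpfulness + ad_weight * ad_value

# Two candidate answers to "What running shoe should I buy?":
#   A: the genuinely best advice for this user, no sponsor mentioned.
#   B: slightly worse advice that steers toward a sponsored brand.
candidate_a = {"helpfulness": 0.90, "ad_value": 0.00}
candidate_b = {"helpfulness": 0.85, "ad_value": 0.90}

for name, c in [("A (best for user)", candidate_a),
                ("B (sponsored)", candidate_b)]:
    print(name, round(blended_reward(c["helpfulness"], c["ad_value"]), 3))

# Output: A scores 0.81, B scores 0.855. With only a 10% ad term,
# the sponsored answer now "wins" even though the user is worse off.
```

The numbers are arbitrary; the structural point is that the distortion is invisible to the user, who only ever sees the winning response and has no way to know a better one was discarded.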
The following table outlines the structural differences between the advertising models we are accustomed to and the new paradigm OpenAI is testing. This comparison highlights why researchers are raising alarms about "manipulation" rather than just "annoyance."
| Feature | Traditional Search Advertising | AI-Integrated Conversational Advertising |
|---|---|---|
| User Intent | Transactional or informational lookup | Relational, exploratory, and emotional interaction |
| Data Depth | Keywords, browsing history, location | Deep psychological profiling, sentiment analysis, vulnerabilities |
| Ad Presentation | Clearly marked banners, sidebars, or top links | Potential for "native" suggestions woven into dialogue |
| Persuasion Mechanism | Visual appeal, placement prominence | Rhetorical persuasion, emotional resonance, authority bias |
| User Defense | Ad blockers, "banner blindness" | High trust in the "assistant" persona makes skepticism difficult |
| Risk Factor | Commercial influence is obvious | Commercial influence is opaque and psychologically targeted |
As the table illustrates, the shift to conversational advertising represents a step-change in the power asymmetry between platform and user. Hitzig argues that this mirrors the trajectory of social media giants like Facebook, which initially promised privacy and connection but eventually optimized their algorithms for engagement and ad delivery, often at the expense of user well-being.
Zoë Hitzig is not an isolated voice. Her resignation coincided with the departure of Mrinank Sharma, the head of the Safeguards Research Team at Anthropic, OpenAI's primary competitor. While Sharma’s resignation letter was more cryptic, stating that "the world is in peril" and citing a disconnect between corporate values and actions, the timing suggests a broader cultural reckoning within the AI safety community.
These researchers are the "canaries in the coal mine." They are the individuals tasked with looking furthest into the future to identify catastrophic risks. When they resign in protest, it suggests that the safeguards they were hired to build are being dismantled or ignored in favor of speed and revenue.
Hitzig pointed out that the erosion of principles often happens gradually. Companies begin by optimizing for "daily active users" or "session length" to prove growth to investors. In the context of an LLM, maximizing engagement might mean the model becomes more sycophantic, flattering, or controversial—whatever keeps the user typing. Once an advertising model is superimposed on this engagement loop, the incentive to manipulate user behavior becomes financially existential for the company.
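A similarly simplified sketch shows how that engagement loop rewards sycophancy. Again, every candidate reply and score here is invented; the only real claim is that changing the ranking metric changes which behavior the system learns to repeat.

```python
# Hypothetical illustration of the engagement loop described above.
# The candidate replies and scores are invented; the only real claim is
# that ranking replies by predicted engagement rewards sycophancy.

candidates = [
    # (reply style, accuracy score, predicted minutes user keeps typing)
    ("blunt correction of the user's mistake", 0.95, 2.0),
    ("flattering agreement with the user",     0.40, 9.0),
    ("provocative hot take",                   0.55, 7.5),
]

def pick_reply(candidates, optimize_for):
    """Return the reply style that maximizes the chosen metric."""
    index = {"accuracy": 1, "engagement": 2}[optimize_for]
    return max(candidates, key=lambda c: c[index])[0]

print(pick_reply(candidates, "accuracy"))    # -> blunt correction
print(pick_reply(candidates, "engagement"))  # -> flattering agreement

# Once the objective becomes "keep the user typing," flattery beats
# accuracy, and layering ads onto that loop monetizes the distortion.
```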
OpenAI CEO Sam Altman had previously described the scenario of ad-supported AI as "fundamentally flawed," yet the economic reality of training frontier models—which costs billions of dollars—has forced a softening of this stance. The company argues that ads are necessary to support free access to these tools for those who cannot afford subscriptions. However, critics argue that this creates a two-tier system: a private, safe experience for the wealthy, and a manipulated, ad-laden experience for the general public.
The resignation of Zoë Hitzig serves as a stark reminder that the technology sector is repeating historical cycles. Just as ad-driven tracking became the "original sin" of the internet, the AI revolution is now teetering on the edge of the same precipice.
Hitzig did not leave without offering solutions. She proposed alternatives to the ad-driven model, such as cross-subsidies where enterprise profits fund public access, or the establishment of data cooperatives that give users legal control over how their conversational data is used. These proposals aim to preserve the "fiduciary" relationship between the user and the AI—a relationship where the AI acts solely in the user's interest.
For the team at Creati.ai, this news underscores the necessity of vigilance. As AI tools become more integrated into our professional and personal lives, users must demand transparency about how these systems are monetized. If the price of free AI is the subtle manipulation of our thoughts and decisions, the cost may be far higher than a monthly subscription fee. The departure of researchers like Hitzig and Sharma suggests that for those who understand the technology best, that price is already too high to pay.