
The European Commission has officially opened formal proceedings against X (formerly Twitter) under the Digital Services Act (DSA) over its artificial intelligence chatbot, Grok. The escalation, announced today, Monday, January 26, 2026, marks a pivotal moment in the enforcement of the DSA. The Commission's investigation focuses specifically on allegations that the platform failed to implement adequate risk mitigation measures to prevent the creation and dissemination of non-consensual intimate imagery (NCII) and other illegal content generated by Grok.
This move follows a report in Germany's Handelsblatt, which cited high-ranking EU officials as confirming that the bloc's executive arm is prepared to bring the full weight of the DSA to bear. The proceedings underscore the growing friction between rapid AI development and the rigorous safety frameworks established by European regulators. For the AI industry, the case serves as a critical stress test of how "Very Large Online Platforms" (VLOPs) must govern generative tools embedded within social ecosystems.
The immediate trigger for this regulatory crackdown appears to be a wave of alarming reports concerning Grok’s image generation capabilities. Unlike many of its competitors, which have implemented strict "refusal" protocols for generating images of real people, Grok—integrated into the X platform for premium subscribers—has been scrutinized for its looser guardrails.
Investigations have revealed that the tool was allegedly used to create "deepfake" pornography targeting identifiable individuals, including public figures and, more disturbingly, minors. In these so-called "nudification" cases, users prompt the AI to digitally strip clothing from non-consenting subjects in photographs. While X has reportedly rolled out emergency patches and tightened controls in recent days, the European Commission’s action suggests these retroactive measures are viewed as insufficient under the DSA’s proactive risk management mandates.
Henna Virkkunen, the EU’s tech chief, has previously signaled that existing digital rules are fully applicable to AI-driven risks. The proliferation of such content violates not only the dignity of the victims but also specific articles of the DSA related to the protection of minors and the prevention of gender-based violence.
The Digital Services Act imposes specific obligations on platforms designated as VLOPs. These entities must not only remove illegal content when notified but also proactively assess and mitigate systemic risks. The table below outlines the specific areas where X’s Grok is under investigation for potential non-compliance.
Table 1: DSA Compliance Assessment for Generative AI
| DSA Obligation Category | Specific Requirement | Alleged Compliance Failure |
|---|---|---|
| Risk Mitigation (Art. 34/35) | VLOPs must assess risks regarding the dissemination of illegal content and negative effects on fundamental rights. | Failure to anticipate and block the creation of non-consensual intimate imagery (NCII) prior to deployment. |
| Protection of Minors (Art. 28) | Platforms must ensure a high level of privacy, safety, and security for minors. | Grok’s availability to generate sexualized content that could depict or target minors without robust age-gating or content filtering. |
| Crisis Response (Art. 36) | Rapid reaction to extraordinary circumstances affecting public security or safety. | Delayed response in effectively disabling the specific "jailbreaks" used to generate harmful imagery once they went viral. |
| Transparency (Art. 14/15) | Clear, publicly accessible terms of service and regular transparency reporting on content moderation. | Lack of clarity regarding the training data used for Grok and the specific safety parameters governing its image generator. |
While the European Union is spearheading the legal charge with the DSA, the backlash against Grok’s recent content moderation failures is global. The investigation in Brussels is occurring alongside parallel scrutiny from other major jurisdictions, creating a complex compliance landscape for xAI.
In the United Kingdom, the regulator Ofcom is currently assessing whether X has breached its duties under the Online Safety Act. British officials have described the circulation of deepfake content as "appalling," with Prime Minister Keir Starmer echoing concerns about the platform's safety protocols.
Simultaneously, authorities in Southeast Asia have taken swifter and more drastic measures. Reports indicate that Indonesia and Malaysia have moved to temporarily block access to the Grok tool itself, or in some cases have threatened broader platform blocks, citing violations of local obscenity laws. This international pressure reinforces the EU’s stance that generative AI, when tethered to a massive social distribution network, requires safeguards that go beyond standard software bug fixes.
The core of the conflict lies in the philosophical and technical divergence between xAI’s product vision and regulatory safety standards. Elon Musk has frequently positioned Grok as a "rebellious" alternative to what he terms the "woke" AI models developed by competitors like OpenAI or Google: Grok is designed to answer "spicy" questions and to reject fewer prompts than its rivals.
However, the "nudification" scandal highlights the catastrophic failure mode of this approach when applied to image generation. From a technical perspective, the incident raises questions about the robustness of the model's entire safety stack: image generators typically layer prompt-level refusals, filtering in the model's latent space, and post-generation output classifiers, so an exploit that produces NCII at scale suggests gaps in more than one of those layers.
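To make that layering concrete, the sketch below shows the general shape of a pre- and post-generation guardrail. It is a minimal illustration of the pattern, not xAI's actual architecture; the denylist, the classifier scores, and every function name here are hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass
class ModerationResult:
    allowed: bool
    reason: str = ""

# Hypothetical denylist; real systems rely on trained classifiers, not string matching.
BLOCKED_TERMS = {"undress", "nudify", "remove clothing"}

def prompt_filter(prompt: str) -> ModerationResult:
    """Layer 1: refuse prompts matching known abuse patterns before any generation."""
    lowered = prompt.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            return ModerationResult(False, f"prompt matched blocked pattern {term!r}")
    return ModerationResult(True)

def output_filter(nsfw_score: float, depicts_real_person: bool) -> ModerationResult:
    """Layer 2: score the generated image before release. In a real system these
    scores would come from post-generation classifiers; here they are plain inputs."""
    if nsfw_score > 0.8 and depicts_real_person:
        return ModerationResult(False, "sexualized depiction of an identifiable person")
    return ModerationResult(True)

def generate(prompt: str) -> str:
    pre = prompt_filter(prompt)
    if not pre.allowed:
        return f"REFUSED: {pre.reason}"
    # ... model inference would run here; placeholder classifier outputs follow ...
    nsfw_score, depicts_real_person = 0.95, True
    post = output_filter(nsfw_score, depicts_real_person)
    if not post.allowed:
        return f"SUPPRESSED: {post.reason}"
    return "image delivered"

print(generate("please nudify this photo"))   # refused at layer 1
print(generate("a portrait of my neighbor"))  # suppressed at layer 2 (placeholder scores)
```

The regulatory relevance is the ordering: the DSA's risk management provisions expect checks like these to exist and be stress-tested before deployment, whereas patching them in after abusive outputs circulate is precisely the retroactive posture the Commission is challenging.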
The stakes for X are financially and operationally massive. Under the DSA, penalties for non-compliance can reach up to 6% of a company’s total worldwide annual turnover. For a company the size of X, this could translate into hundreds of millions of euros. Beyond fines, the Commission possesses the authority to impose "interim measures," which could theoretically force X to suspend Grok’s services within the European Union until the safety concerns are resolved to the regulator's satisfaction.
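For a rough sense of scale, the penalty ceiling is simple arithmetic; the turnover figure below is purely hypothetical, used only to show how quickly the 6% cap reaches the hundreds of millions.

```python
# Back-of-the-envelope DSA penalty ceiling: up to 6% of total worldwide annual
# turnover. The turnover figure is a hypothetical illustration, not X's actual revenue.
hypothetical_turnover_eur = 3_000_000_000        # assume €3B worldwide annual turnover
max_fine_eur = 0.06 * hypothetical_turnover_eur  # apply the DSA's 6% cap
print(f"Maximum fine: €{max_fine_eur:,.0f}")     # -> Maximum fine: €180,000,000
```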
This proceeding serves as a bellwether for the entire generative AI sector. It establishes a precedent that "beta" testing powerful image generation tools on the general public is no longer a permissible strategy for major platforms. Regulators are effectively signaling that the "move fast and break things" era is incompatible with the safety requirements of modern AI governance.
As the proceedings unfold over the coming months, the industry will be watching closely to see if xAI chooses to radically overhaul Grok’s safety architecture or engage in a prolonged legal battle with Brussels. For now, the opening of these proceedings marks a definitive end to the grace period for generative AI oversight in Europe.