
The European Commission is poised to open formal infringement proceedings against xAI's generative AI chatbot, Grok, marking a significant escalation in the regulatory standoff between the European Union and Elon Musk's technology empire. According to reports that first surfaced in the German business daily Handelsblatt and were corroborated by high-ranking EU officials, the investigation is set to begin on Monday, targeting alleged violations of the Digital Services Act (DSA).
This decisive move follows weeks of mounting scrutiny of Grok's content generation capabilities, specifically reports that the tool, integrated directly into the social media platform X, was being used to create non-consensual, sexually explicit deepfake imagery. If compliance is not swiftly achieved, the proceedings could ultimately force xAI to withdraw the chatbot from the EU market entirely.
The catalyst for this regulatory crackdown appears to be the recent controversy surrounding Grok’s so-called "Spicy Mode." In early January 2026, users reported that the chatbot’s image generation features could be manipulated to "undress" real individuals and generate photorealistic, explicit images of women and minors.
While xAI has since restricted these capabilities following a global outcry, the European Commission alleges that the company failed to conduct mandatory risk assessments before rolling the features out. Under the DSA, designated Very Large Online Platforms (VLOPs) such as X are legally required to identify, analyze, and mitigate systemic risks, particularly those affecting the physical and mental well-being of users and the protection of minors.
Commission officials have described the proliferation of these images as "appalling" and "disgusting," signaling that the EU is no longer willing to tolerate a "move fast and break things" approach when it comes to generative AI safety.
The investigation is expected to focus on several core pillars of DSA compliance. Unlike the AI Act, whose obligations are still being phased in and which regulates AI systems as products, the DSA governs platform conduct and content moderation. Because Grok is embedded within X, its failures are treated as systemic failures of the host platform.
Key Areas of DSA Investigation into Grok
| Investigation Area | Specific Allegations | Potential Regulatory Impact |
|---|---|---|
| Systemic Risk Mitigation | Failure to assess risks of generating illegal content (CSAM, non-consensual imagery) prior to feature launch. | Mandatory risk audits and deployment of mitigation measures. |
| Content Moderation | Inadequate mechanisms to detect and swiftly remove AI-generated illegal content. | Orders to overhaul moderation algorithms and human oversight. |
| Protection of Minors | Insufficient age assurance and safeguards to prevent minors from accessing or being depicted by the tool. | Strict access controls and potential service blocks for minors. |
| Transparency Obligations | Lack of clarity regarding the data used to train Grok and the functioning of its generation algorithms. | Fines up to 6% of global turnover for non-compliance. |
This new proceeding is not an isolated incident but rather the latest chapter in a deepening rift between Brussels and Elon Musk. In December 2025, the European Commission fined X approximately €120 million for separate breaches of the DSA related to deceptive user interface designs (specifically regarding "Blue Check" verification) and lack of advertising transparency.
The Commission has already used its investigative powers to order X to preserve all internal documents and data related to Grok until the end of 2026. This "preservation order" suggests that regulators are building a comprehensive legal case to establish whether xAI knowingly neglected safety protocols in favor of rapid feature deployment.
If the Commission confirms the alleged breaches, X could face fines of up to 6% of its global annual turnover. However, the more immediate and potentially existential threat to Grok's European operations is the Commission's power to impose "interim measures," which could effectively ban the service in the EU until the risks are deemed neutralized.
For the broader technology sector, this case sets a critical precedent: it demonstrates that the European Union intends to use the Digital Services Act as a primary enforcement tool against generative AI risks even before the Artificial Intelligence Act is fully implemented.
Tech companies operating in the EU must now recognize that integrating AI tools into existing social platforms brings those tools under the purview of strict platform liability laws. The "safe harbor" defenses that once protected platforms from liability for user-generated content are increasingly porous when the platform's own tools facilitate the creation of that content.
Creati.ai notes that this investigation highlights the immense compliance burden facing AI developers. Innovation in image generation must now be paired with robust, pre-deployment "red-teaming" and safety guardrails to survive the EU's regulatory environment. As proceedings open this Monday, the tech world will be watching to see whether Musk chooses to comply with Brussels' demands or risks losing access to a market of 450 million users.
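For readers curious what pre-deployment red-teaming of an image tool can look like in practice, the sketch below is a minimal, purely illustrative Python harness: it runs placeholder adversarial prompts through stubbed generation and safety-classification functions and blocks a release unless every prompt is refused. All names here (RED_TEAM_PROMPTS, generate_image, classify_output, release_gate) are hypothetical stand-ins, not xAI's or any regulator's actual tooling.

```python
# Illustrative pre-deployment red-teaming gate. Every function here is
# a hypothetical stub; a real harness would call the model under test
# and trained safety classifiers (e.g., NCII/CSAM detectors).

from dataclasses import dataclass

# Placeholder adversarial prompts covering the risk categories the DSA
# investigation highlights; real suites contain thousands of variants.
RED_TEAM_PROMPTS = [
    "<prompt probing non-consensual intimate imagery>",
    "<prompt probing sexualized depictions of minors>",
    "<prompt probing 'undressing' of a real person's photo>",
]

@dataclass
class Verdict:
    prompt: str
    refused: bool    # did the model decline to generate?
    risk_label: str  # classifier's risk category for the output

def generate_image(prompt: str) -> str:
    """Stub for the model under test; a real harness would call the
    image-generation API and return the produced artifact."""
    return f"<artifact for: {prompt}>"

def classify_output(prompt: str, artifact: str) -> Verdict:
    """Stub safety check; this stand-in flags every output as unsafe,
    so the gate below fails closed by default."""
    return Verdict(prompt=prompt, refused=False, risk_label="ncii")

def release_gate(prompts: list[str]) -> bool:
    """Return True only if every adversarial prompt was refused."""
    failures = []
    for prompt in prompts:
        artifact = generate_image(prompt)
        verdict = classify_output(prompt, artifact)
        if not verdict.refused:
            failures.append(verdict)
    for v in failures:
        print(f"BLOCKING: {v.prompt!r} produced {v.risk_label} output")
    return not failures

if __name__ == "__main__":
    if release_gate(RED_TEAM_PROMPTS):
        print("Gate passed: feature may proceed to launch review.")
    else:
        print("Gate failed: do not ship until mitigations land.")
```

The design point is that the gate fails closed: a feature cannot ship while any red-team prompt yields unsafe output, which is roughly the pre-deployment discipline the DSA's systemic risk provisions are meant to force.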