
The European Commission has opened a formal infringement proceeding against X (formerly Twitter), targeting its generative AI tool, Grok. Announced on Monday, January 26, 2026, the investigation marks a significant escalation in the European Union's enforcement of the Digital Services Act (DSA). Regulators cite "materialized risks" involving the dissemination of non-consensual intimate imagery (NCII) and potential child sexual abuse material (CSAM) generated by the chatbot, signaling a pivotal moment for AI governance in the region.
The investigation was triggered by a wave of alarming reports regarding Grok’s image generation capabilities. Unlike many of its competitors, which implemented strict guardrails early in their development cycles, Grok has faced criticism for a perceived lack of moderation. The Commission’s primary concern revolves around the AI's ability to generate "manipulated sexually explicit images" of real individuals, often referred to as deepfakes.
According to the Commission, these risks are not merely theoretical but have "materialized," exposing EU citizens to serious harm. Reports indicate that users were able to use Grok’s features to digitally "undress" individuals and place them in compromising or sexually explicit scenarios without their consent. Such a capability would violate the core safety mandates of the DSA, which requires Very Large Online Platforms (VLOPs) like X to proactively identify and mitigate systemic risks.
Henna Virkkunen, the Commission’s Executive Vice-President for Tech Sovereignty, Security, and Democracy, condemned the phenomenon, stating that non-consensual sexual deepfakes constitute a "violent, unacceptable form of degradation." The investigation seeks to determine if X failed to implement effective mitigation measures to prevent such content from being created and shared.
This proceeding is distinct from previous inquiries into X’s content moderation policies regarding hate speech or disinformation. It focuses specifically on the intersection of generative AI and platform liability under the DSA. The Commission is investigating three areas of suspected non-compliance:
Table 1: Key Areas of the Commission's Investigation
| Investigated Area | Specific Allegation | Relevant DSA Article |
|---|---|---|
| Risk Assessment | Failure to submit a risk assessment report prior to deploying Grok in the EU. | Articles 34(1) and (2) |
| Mitigation Measures | Insufficient technical guardrails against creating illegal content. | Article 35(1) |
| Recommender Systems | Use of Grok-powered algorithms in ways that may amplify systemic risks. | Article 42(2) |
In the wake of the announcement, X has reiterated its commitment to safety. A spokesperson for the company directed inquiries to a previous statement asserting "zero tolerance" for child sexual exploitation and non-consensual nudity. The company claims to be actively working on fixing "lapses" in its safety protocols.
However, technical experts argue that the issue may be foundational to how Grok was trained and deployed. By marketing Grok as an "edgy," less restricted alternative to models like ChatGPT or Claude, X may have inadvertently lowered the threshold for harmful outputs. The "spicy mode" and other unrestricted features, while popular with a segment of the user base, conflict directly with the European Union's stringent due-diligence obligations.
The company has reportedly restricted some of Grok's image generation capabilities in response to the initial outcry, but EU regulators deem these reactive measures insufficient. The investigation aims to establish whether the architecture of the AI itself lacked the necessary "safety by design" principles required by European law.
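To make the "safety by design" idea concrete, the sketch below illustrates, in broad strokes, what a layered guardrail can look like: one check before any image is generated and a second before anything is delivered to the user. This is a hypothetical Python illustration, not a description of Grok's actual architecture; every function name and the keyword list are invented placeholders, and a production system would rely on trained moderation classifiers rather than string matching.

```python
# Hypothetical "safety by design" pipeline: refuse early, verify again
# before delivery. All names and the blocklist are illustrative only.
from dataclasses import dataclass

BLOCKED_TERMS = ("undress", "remove the clothes from", "nude image of")  # toy placeholder


@dataclass
class GenerationResult:
    allowed: bool
    reason: str
    image: bytes | None = None


def check_prompt(prompt: str) -> tuple[bool, str]:
    """First gate: reject disallowed requests before spending any compute."""
    lowered = prompt.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            return False, f"prompt matched blocked pattern: {term!r}"
    return True, "ok"


def check_image(image: bytes) -> tuple[bool, str]:
    """Second gate: scan the output itself before it reaches the user.
    A real system would call NCII/CSAM classifiers here, not return True."""
    return True, "ok"


def generate_image(prompt: str) -> bytes:
    """Stand-in for the actual image model."""
    return b"...image bytes..."


def safe_generate(prompt: str) -> GenerationResult:
    ok, reason = check_prompt(prompt)
    if not ok:
        return GenerationResult(False, reason)  # refuse before generation
    image = generate_image(prompt)
    ok, reason = check_image(image)
    if not ok:
        return GenerationResult(False, reason)  # block delivery of the output
    return GenerationResult(True, "ok", image)


print(safe_generate("undress this person").reason)
```

The structural point is that the refusal logic sits inside the generation path rather than being bolted on afterward, which is roughly what "safety by design" demands of platform architecture.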
This investigation sets a critical precedent for all AI companies operating within the European Union. It signals that the DSA applies rigorously to generative AI tools embedded within social platforms, not just to user-generated posts.
For the broader AI industry, the message is clear: speed of innovation cannot come at the expense of safety assessments. The requirement to submit risk assessments before deployment is likely to slow the release of new AI features in the EU market compared to the US. Companies developing Large Language Models (LLMs) and image generators must now view EU compliance as a pre-launch engineering requirement rather than a post-launch legal hurdle.
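As a loose illustration of what treating compliance as a pre-launch engineering requirement could mean in practice, the hypothetical CI-style gate below refuses to enable a feature for EU users unless an approved risk-assessment record exists. The file layout, field names, and function are all invented for this sketch; no real pipeline or regulatory filing format is implied.

```python
# Hypothetical pre-launch gate: block an EU rollout unless an approved
# risk-assessment record exists for the feature. Everything here
# (paths, fields, the feature name) is invented for illustration.
import json
import sys
from pathlib import Path


def eu_release_gate(feature: str, assessments_dir: str = "risk_assessments") -> bool:
    record_path = Path(assessments_dir) / f"{feature}.json"
    if not record_path.exists():
        print(f"BLOCKED: no risk assessment filed for {feature!r}")
        return False
    record = json.loads(record_path.read_text())
    if record.get("status") != "approved":
        print(f"BLOCKED: assessment for {feature!r} has status {record.get('status')!r}")
        return False
    print(f"OK: {feature!r} cleared for EU deployment")
    return True


if __name__ == "__main__":
    # e.g. invoked from CI before an EU feature flag can be flipped on
    sys.exit(0 if eu_release_gate("image_generation_v2") else 1)
```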
If found non-compliant, X faces substantial penalties. Under the DSA, fines can reach up to 6% of a company's total worldwide annual turnover. Beyond financial penalties, the Commission holds the power to impose interim measures, which could theoretically force X to suspend Grok’s availability in the EU until the safety concerns are resolved.
The European Commission's formal proceedings against X represent the first major regulatory test case for generative AI under the Digital Services Act. As the investigation unfolds, it will define the boundaries of liability for AI developers and platform holders alike. For Creati.ai and the wider creative technology sector, this serves as a stark reminder that in the eyes of European regulators, the capability to generate content carries the same weight of responsibility as the act of hosting it.