The European Union (EU) has not yet agreed on how to apply the Digital Services Act (DSA) to ChatGPT, even though the chatbot has surpassed 120 million monthly users in Europe. An official decision is only expected by mid-2026.
ChatGPT has already fallen within the scope of the AI Act since August 2025, obliging OpenAI to assess and mitigate risks or face potential fines of up to 15 million euros (≈ 16.1 million USD).
Because its user base exceeds the DSA threshold of 45 million monthly users in the EU, ChatGPT also qualifies as a very large online platform or search engine (VLOP/VLOSE) under the DSA—meaning OpenAI could face fines of up to 6% of its global annual turnover for violations.
The central issue is the scope of designation: will the EU classify ChatGPT narrowly as a search engine (VLOSE), or broadly as a full online platform (VLOP)? The broader the designation, the heavier OpenAI's compliance and risk-reporting obligations.
The systemic risks ChatGPT would have to assess and report include impacts on elections, public health, and fundamental rights, as well as the design of its recommender systems.
OpenAI has acknowledged that around 1.2 million users per week engage with ChatGPT in contexts of "suicidal ideation," and that "in some rare cases, the model may not respond appropriately."
Legal experts such as Mathias Vermeulen (AWO Agency) warn that OpenAI will have to "comprehensively upgrade" its risk-control processes, because the DSA does not accept the "voluntary" approach the company has relied on so far.
If given the broader platform designation, OpenAI might also have to implement a "notice-and-action" mechanism for flagged content, similar to what major social networks operate.
The DSA and the AI Act also risk overlapping: the AI Act classifies systems by risk level (high, limited, minimal), while the DSA requires assessments of "systemic" risks (to elections, public health, and fundamental rights).
Some researchers, such as João Pedro Quintais, warn that misalignment between the two laws could create "regulatory gaps," potentially allowing AI companies like OpenAI to benefit from "safe harbor" provisions.
📌 The EU faces a major challenge: how to regulate ChatGPT when current laws have not kept pace with the speed of generative AI development. With 120 million monthly users and potential fines of 6% of global revenue at stake, OpenAI will become the first test case for Europe's AI governance capabilities—but an answer is not expected before mid-2026 at the earliest.
