- The European Commission plans to postpone the application of high-risk AI regulations by at least one year to stay competitive with the US and China; the move comes amid a breakthrough year for AI (2025) and Donald Trump's return to office.
- The AI Act (approved in August 2024) is under significant pressure from the US government, technology corporations, and lobbying groups, leading to accusations that the EU is “weak against external pressure.”
- High-risk AI uses such as hiring, credit scoring, and exam grading will not have to meet the new obligations until after August 2026.
- Official reason: the technical standards needed for compliance have been delayed twice and will not be completed before 2026; the original timeline is now viewed as "too ambitious from the start."
- Several other obligations are also being eased: more businesses are exempted, and the deadline for watermarking AI-generated images is extended.
- Digital industry lobbyists describe the original timeline as "unfeasible"; Germany and France openly support the one-year postponement, while Sweden, Poland, the Czech Republic, and Denmark had previously called for a delay.
- Germany emphasizes the need to "take the foot off the brake" to boost innovation; many countries have not yet set up their national supervisory bodies and also want more time.
- Scholars and civil society groups warn that the delay damages the EU's credibility, erodes the protection of fundamental rights, and creates further legal uncertainty.
- Meanwhile, bans on AI practices deemed to pose unacceptable risk, such as social scoring and predictive policing, have been in effect since February 2025; general-purpose AI models like GPT have also been subject to their own set of rules since August 2025.
- The postponement proposal must be approved by the European Parliament and member states; the original implementation deadline was August 2026.
📌 Summary: The EU is unexpectedly stepping back from its strategy of leading on AI regulation, postponing the AI Act's high-risk provisions by at least one year due to delayed technical standards and strong pressure from US big tech. Many member states want more time to set up national supervisory bodies, while digital rights groups warn of damage to the EU's legal credibility. Although some AI bans have been in effect since February 2025, the further postponement risks prolonging uncertainty and opening a regulatory gap in the European AI ecosystem.

