Listen: Is the EU backtracking on AI regulation?

EUobserver · 11 May 2026
Two years after adopting its AI Act, the EU is already preparing to simplify it. Does this mean Europeans are scaling back their regulatory ambitions on AI?
Production: By Europod, in co-production with Sphera Network. EUobserver is proud to have an editorial partnership with Europod to co-publish the podcast series “Briefed”, hosted by Léa Marchal. The podcast is available on all major platforms. Find the full transcript below:

Two years after adopting its AI Act, the EU is already preparing to simplify it. Does this mean Europeans are scaling back their regulatory ambitions on AI?

On Thursday (7 May), after a long meeting that lasted late into the night, negotiators from the European Parliament, the Council of the EU, and the European Commission reached an agreement on how to revise the AI Act, the text that governs the use and development of artificial intelligence in the EU. I discussed it in the 23 February episode of Briefed.

So why does a text adopted in 2024, which is not even fully in force yet, already need to be revised? Because the first version was too heavy and, according to some stakeholders, almost impossible to implement. Here is what was decided to improve it.

First, lawmakers postponed certain obligations concerning high-risk AI systems, a category that includes tools used in critical infrastructure, education, or border management. All systems in this high-risk category now have until December 2027 to comply. An even longer deadline has been granted for high-risk AI systems embedded in already regulated products, such as toys.

Next, the obligation to label AI-generated content as such has also been delayed by a few months.

Finally, another major change was made after intense debate: machines that use AI are no longer covered by the regulation. They are now exempt.

Of course, these changes still need to be formally approved, but that should be a formality.

Should this revision be seen as a retreat? Yes, partly, but not entirely. Some AI tools are now exempt, and that is the direct result of pressure from the industry.
The deadlines granted for the various obligations, however, mainly reflect another reality: some of the guidelines, and even some of the standards, needed to implement the AI Act were simply not ready.

It is also important to note that the text has been strengthened in other areas. The future version of the AI Act bans so-called “nudify” content: AI systems that generate realistic depictions of intimate body parts or sexually explicit images without consent. A striking example of this trend: Italian prime minister Giorgia Meloni was herself depicted in underwear on social media. More broadly, hundreds of underage girls, even children, have been similarly undressed online, notably through Grok, the AI system of X. So with AI Act 2.0, generative AI systems like Grok or ChatGPT will have to ensure this functionality is not available to users. Otherwise, they will face fines.

So ultimately, does the EU still believe in leading by example and regulating the digital space, particularly AI? Yes: first, because it is not abandoning its regulation, and second, because it is addressing some of its gaps. The EU is not completely changing course; it still wants to set an example and influence AI models globally.

But at the same time, the EU must also be pragmatic. Its main AI competitors, the United States and China, are not regulating at all. As a result, companies may be tempted to develop their tools elsewhere, where they do not have to comply with a heavy regulatory framework before launching their services. This reflects the same dilemma as with the European Green Deal, which I discussed last week in Briefed: balancing the defense of EU values on one hand and limiting regulation on the other.
