On 7 May 2026, the European Parliament and the Council reached a political agreement on the Digital Omnibus on AI (“AI Omnibus”), aiming to amend the AI Act. The package responds to delayed standards, unclear governance and heavier-than-expected compliance costs.
According to press releases by the Council and the European Parliament, the co-legislators reached agreement on postponing the rules for high-risk AI systems and the watermarking requirements, on a ban on so-called nudifier apps, and on other aspects of the AI Act. The provisional agreement still needs to be formally enacted into law.
Postponing AI Act requirements
One of the central expectations of the AI Omnibus since its publication has been the postponement of the application of various provisions. The political agreement now confirms the following key application dates:
- 2 December 2026 – New deadline for complying with the AI Act's watermarking requirements under Article 50(2), postponing the original deadline of 2 August 2026 by four months. Likewise, the new rules on prohibited AI practices relating to so-called nudifier apps will apply from 2 December 2026.
- 2 August 2027 – Postponed deadline for Member States to establish national AI regulatory sandboxes;
- 2 December 2027 – New application date for high-risk obligations for stand-alone AI systems, i.e. AI systems qualifying as high-risk under Article 6(2) in conjunction with Annex III. These include, for example, AI systems used in education and vocational training, employment and HR.
- 2 August 2028 – New application date for high-risk obligations for embedded AI systems, i.e. AI systems that qualify as high-risk under Article 6(1) in conjunction with Annex I, such as AI-enabled toys under the Toy Safety Directive or AI-enabled medical devices under the Medical Devices Regulation. In parallel, the Commission is expected to specify machinery-related safeguards by delegated act by the same date. For overlaps with other sectoral product regimes, delegated acts are expected earlier, by August 2027.
Prohibited practices – nudifier apps
The political agreement confirms the ban on AI systems used to generate non-consensual intimate material (NCIM) and child sexual abuse material (CSAM). According to the press release by the Parliament, the new ban applies from 2 December 2026 and targets AI systems capable of generating or manipulating NCIM and CSAM. Read against leaked documents of the intended final text of the AI Omnibus, the ban applies where such generation is the system’s intended purpose or where it is reasonably foreseeable and the system lacks safeguards to prevent that outcome.
Companies that develop or use AI systems that could be in scope should assess their systems promptly to avoid infringing the bans on prohibited AI practices. Infringements of the AI Act’s rules on prohibited AI practices can trigger substantial fines of up to EUR 35 000 000 or 7% of annual worldwide turnover, whichever is higher.
Reducing the impact of the AI Act
One of the main goals of the AI Omnibus was to reduce the bureaucratic burden the AI Act places on companies. The press releases indicate certain measures. In particular, AI in machinery products will only need to comply with sector-specific safety rules instead of both the sector-specific rules and the AI Act. Further, the definition of “safety component” will be narrowed, potentially reducing the number of AI systems classified as high-risk.
The press releases further confirm that the AI Act’s interaction with sectoral product legislation will be softened. The political deal appears to provide the clearest sectoral carve-out for machinery products. The compromise reportedly foresees additional safeguards to be introduced under the Machinery Regulation by delegated act, suggesting that the shift is not meant to amount to a complete deregulation of industrial AI. The Council press release further refers to sectors such as medical devices, toys, lifts and watercraft, and describes a mechanism to resolve cases where sectoral law contains similar AI-specific requirements. Read against the leaked Annex I compromise, this means that the application of AI Act obligations to product AI may be limited through implementing acts. According to current reporting, the Commission is expected to specify the machinery-related safeguards by delegated act by August 2028, while delegated acts for overlaps with other sectoral product regimes are expected earlier, by August 2027.

Further, the SME exemptions from certain rules are extended to small mid-cap enterprises (SMCs) to support their growth. Read against previously leaked documents, this likely includes simplified technical documentation forms, proportionate quality and risk-management obligations, and lighter procedural burdens for smaller operators.
Further, enforcement for certain general-purpose AI systems will be streamlined and centralised within the EU’s AI Office. According to the press materials, this concerns AI systems based on general-purpose AI models where the model and the AI system are developed by the same provider. Read together with the leaked compromise text, the rule is particularly relevant for vertically integrated AI providers and groups of undertakings that develop the underlying model and commercialise or deploy downstream AI systems based on that model. Ordinary deployers using third-party AI tools should generally remain outside this centralised AI Office track.
Industry groups have criticised the compromise as falling short of meaningful simplification. In particular, CCIA Europe and the Business Software Alliance argue that the package leaves substantial legal uncertainty in place and that, apart from the grace period for high-risk obligations and the machinery carve-out, it does little to reduce the practical compliance burden under the AI Act. The TÜV Association likewise criticised the machinery carve-out, arguing that the “sector exit” for machinery could create regulatory fragmentation, legal uncertainty and, ultimately, more rather than less bureaucracy for companies, while also slowing the development of industrial AI standards in Europe.
Bias detection with sensitive data
The press releases confirm that the processing of personal data for bias detection will be expressly allowed. This is an important step, because there is a natural tension between AI innovation, which requires the processing of data (including personal data), and data protection law, which aims to limit the processing of personal data as much as possible.
Leaked versions of the new provision, however, suggest that its effect should not be overestimated. The provision sets strict conditions that may limit its practical impact: providers of high-risk AI systems may only “exceptionally” process such data, and only if bias correction cannot be achieved by other means (including the use of synthetic data). Further, the data must be subject to specific security measures.
AI literacy – from obligation to encouragement
The press releases remain silent on the matter of AI literacy. A previously leaked compromise text indicates, though, that there was already agreement to replace the AI Act’s obligation for companies to ensure their employees’ AI literacy with a softer model under which the Commission and Member States “encourage” providers and deployers to support the development of AI literacy.
This is good news for companies, because the previous one-size-fits-all duty was widely criticised as impractical and disproportionately burdensome, particularly for smaller enterprises. Still, companies should consider providing targeted training where needed to avoid detriments from careless use of AI.
Assessment of the Omnibus, Entry into Force and Next Steps
Overall, the AI Omnibus brings only temporary relief. As criticised by industry and civil society organisations, the package delivers only limited simplification and leaves significant legal uncertainty in place. While the postponements are welcome to companies, many of the uncertainties stemming from the AI Act’s often vague wording remain unresolved. Companies should use the extra time to assess the steps necessary to be ready when the requirements finally apply.
The political agreement is provisional and still needs to go through the formal adoption steps – expected by July 2026 – before it becomes binding law. In the meantime, the key compliance planning points are now clearer: (i) fixed application dates for high-risk obligations, (ii) a nearer-term date for the watermarking rules and the new prohibited-practice controls, and (iii) a clear policy direction to reduce overlap with sectoral product regimes while narrowing the high-risk perimeter for non-safety AI functions.