The EU is quietly dismantling the rules it spent 5 years building
The Digital Omnibus pushes AI Act compliance to 2028 and permanently exempts already-deployed systems. All under the banner of simplification.

Five years of negotiations. Hundreds of thousands of hours of parliamentary debate. And at the end of it, a text the Commission was selling to the world as the model for regulating AI.
Since November 2025, much of that work has been getting undone. Not head-on. Through a series of postponements, exemptions, and redefinitions bundled into what is now called the Digital Omnibus.
What is the Digital Omnibus?
In November 2025, the European Commission published two texts: one amending the AI Act, the other tweaking the GDPR. Together they form the "digital package," rebranded as the Digital Omnibus. The official rationale? Administrative simplification. Cutting red tape so European companies can stay competitive against US and Chinese giants.
That "competitiveness" framing isn't groundless. AI is moving fast, and the technical standards needed for compliance weren't ready. But the way you simplify matters as much as the simplification itself.
What actually changes in the AI Act
The AI Act sorts AI systems by risk level. So-called high-risk systems (credit scoring, CV screening, biometric recognition, housing access decisions) were set to face strict obligations starting August 2026: conformity assessments, public risk reports, and registration in a mandatory registry.
The Digital Omnibus pushes those obligations to December 2027 for most of them, and to August 2028 for systems embedded in regulated products like medical devices or vehicles. The watermarking requirement for AI-generated content is delayed to February 2027 for tools already on the market.
The most consequential piece, though, is what happens to systems already deployed. Under the Commission's proposal, tools in production before the new deadlines would never fall under the rules originally planned. This isn't a grace period. It's a permanent carve-out. Anything currently running inside European companies could stay outside the regulatory frame indefinitely.
What changes in the GDPR
The GDPR side of the Digital Omnibus has gotten less coverage, but it might matter just as much. The proposal narrows what counts as personal data and broadens "legitimate interest" as a legal basis for processing. It's like quietly shifting the boundaries of a property: nobody notices day to day, but the usable surface changes.
In practice, companies could lean more easily on legitimate interest to train AI models, without having to ask users for explicit consent. This kind of change rarely makes headlines, but it shifts the underlying balance.
Who actually wrote this text?
In January 2026, the Corporate Europe Observatory published a side-by-side comparison between tech industry lobbying positions and the Commission's final text. The result: of eight major changes in the Digital Omnibus, seven directly match what tech actors had been pushing for.
That's not statistical noise. The lobbying numbers in Brussels set the context: the tech sector now spends 151 million euros per year on EU lobbying, up from 113 million in 2023, a 33% jump in two years. There are now 890 full-time tech lobbyists in Brussels, more than the number of MEPs. In the first half of 2025, major platforms held 146 meetings with senior Commission officials, more than one per business day.
Amnesty International described the whole process as "the largest rollback of fundamental digital rights in EU history, carried out behind closed doors with procedures designed to avoid democratic scrutiny."
Parliament pushed back, partially
On March 26, 2026, the European Parliament adopted its negotiating position by 569 votes to 45. MEPs reintroduced some safeguards the Commission had stripped out and set tighter deadlines. They also proposed a ban on non-consensual sexual deepfakes, absent from the original draft.
But the deadline extensions stayed in the text. And the question of already-deployed systems wasn't resolved.
The final trilogue between Commission, Parliament, and member states opens this Tuesday, April 28. That session is meant to lock in the final text. Whatever each of the three institutions agrees to give up in the coming hours will shape how far the postponements and exemptions actually reach.
What this means for you
If you use credit services, go through automated hiring platforms, or face algorithmic decisions in housing or social services, the protections the AI Act was supposed to deliver in 2026 won't arrive before December 2027 at the earliest, and August 2028 for some systems.
And the tools already running in those areas today? They might never be subject to those rules at all.
To check whether a system that affects you falls under the AI Act's "high-risk" categories (Annex III), the official registry will be accessible through the European AI Office portal once it goes live. It isn't there yet. But it's worth knowing what the list looks like: financial scoring, recruitment, educational assessment, crowd management, access to public services.
The question isn't whether these systems should be regulated. Even the Commission agrees on that. The question is at what pace, and how far the rules will reach the systems already in place.
Sources: EDRi, Amnesty International, Corporate Europe Observatory, OneTrust, European Parliament (Digital Omnibus legislative tracker), IAPP