Co-funded by the EU logo

AI Act timeline in motion – what does the Digital Omnibus mean in practice?

The EU AI Act timeline appears set to change as part of the Digital Omnibus / AI Omnibus package. Although the amendment is not yet legally in force, it may bring additional time and simplifications, particularly for obligations concerning high-risk AI systems.

Teemu Moilanen, Elisa Laatikainen 12.5.2026 | Picture: Adobe Stock Photos

The EU AI Act is being implemented in phases, and the next significant application phase was scheduled to begin on 2 August 2026. At that point, obligations concerning certain high-risk AI systems were to become broadly applicable, including requirements relating to risk management, documentation, quality management and conformity assessment.

In the health technology sector, the timeline has been partly different, as AI-enabled medical devices and in vitro diagnostic medical devices often fall within the AI Act’s high-risk category linked to product safety legislation. The original application date for this category was 2 August 2027.

However, the discussion around the implementation of the EU AI Act has changed rapidly during spring 2026. Based on the information currently available, the timeline is now very likely to change. This is due to the so-called Digital Omnibus / AI Omnibus package, which aims to ease and simplify obligations particularly for high-risk AI systems. The discussion is part of a broader EU-level effort to strengthen European competitiveness, reduce regulatory overlaps and accelerate the uptake of AI in businesses.

As of 12 May 2026, the current situation is that negotiators from the European Parliament and the Council have reached a provisional political agreement on the Digital Omnibus / AI Omnibus package: the key development took place on 7 May 2026, when the co-legislators’ negotiators concluded a provisional trilogue agreement on the package simplifying the AI Act. This indicates that the original AI Act timeline is very likely to change, but the change is not yet legally in force. The amendment still requires formal adoption by the European Parliament and the Council, as well as publication in the Official Journal of the EU.

The discussion has focused in particular on:
• postponing obligations for high-risk AI systems
• reducing overlaps between sector-specific regulation and the AI Act
• simplifying certain documentation and compliance requirements
• clarifying watermarking and deepfake rules

In particular, for high-risk systems, the current deadline of 2 August 2026 appears likely to be postponed by more than a year, until 2 December 2027. For systems linked to product safety legislation, such as many AI-enabled medical devices and in vitro diagnostic medical devices falling under the MDR and IVDR frameworks, the application date would be postponed until 2 August 2028. In addition, the agreement proposes narrowing the concept of a “safety component”: an AI function would not automatically fall within the scope of high-risk obligations merely because it assists the user or optimises the product’s performance, provided that its failure or malfunction does not create a health or safety risk. At this stage, however, the agreement is not yet adopted legislation.

Therefore, the legally important point remains:
The original AI Act timeline remains in force until any legislative amendment has been formally adopted and officially published by the EU.

What does this mean for companies?

From a business perspective, the situation may feel contradictory. On the one hand, the market expects regulatory simplification and additional time. On the other hand, preparations should not be paused, because the final content and timeline have not yet been confirmed.

In practice, for many organisations the most sensible approach at this stage is to:
• continue AI governance and compliance preparations
• prioritise the most business-critical use cases
• actively monitor the progress of the legislative process
• avoid heavy, oversized compliance investments before the final legislative text is available

More broadly, this development also reflects a shift in the priorities of EU AI policy. The discussion increasingly emphasises competitiveness, investment capacity, AI sovereignty and the ability of European companies to scale AI solutions in global competition.

The debate seen during spring 2026 shows that the AI Act is no longer viewed solely as a regulatory project — it has also become a central part of Europe’s industrial and competitiveness strategy.

Summary

Situation on 12 May 2026:
• Is the original AI Act timeline for high-risk obligations from 2 August 2026 still legally in force? – Yes
• Does it look likely to change? – Probably yes
• Is the political direction clear? – Yes, towards postponement and simplification
• Should companies stop compliance preparations? – No
• Is it reasonable to assume there will be more time? – Yes, fairly safely
• When will there be final clarity? – The EU aims to process the matter before 2 August 2026.

Sources 
AI Act. European Commission. Available: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
AI Act: deal on simplification measures, ban on “nudifier” apps. European Parliament 2026. Available: https://www.europarl.europa.eu/news/en/press-room/20260427IPR42011/ai-act-deal-on-simplification-measures-ban-on-nudifier-apps
EU countries, lawmakers clinch provisional deal on watered-down AI rules. Reuters 7.5.2026. Available: https://www.reuters.com/world/eu-countries-lawmakers-strike-provisional-deal-watered-down-ai-rules-2026-05-07/


Contact

Elisa Laatikainen
Project Manager
Haaga-Helia University of Applied Sciences
elisa.laatikainen(at)haaga-helia.fi