EU pauses rules on high-risk AI – but companies still have work to do
The European Commission is proposing amendments to the AI Regulation. The most significant of the proposed changes is that the start of the high-risk rules will be pushed back until harmonised standards and support tools are in place. This aligns with the position of the Confederation of Swedish Enterprise, which emphasises that the rules must be practically workable before they enter into force.
The purpose of the EU’s AI Regulation, which sets out harmonised rules for artificial intelligence, is to promote trustworthy AI in Europe. The AI Act is being introduced in stages, and rules on prohibited AI practices and general-purpose AI models already apply today. What remains – and what is most business-critical – are the provisions for systems classified as high-risk under Annexes I and III (see fact box).
These include requirements for risk management, documentation, data governance, logging, robustness and human oversight. Under the current timeline, the requirements in Annex III are set to apply from 2 August 2026, and those in Annex I from 2 August 2027 – but the high-risk rules will now be postponed.
There are several reasons for this.
First, the harmonised standards are delayed. The high-risk rules in the AI Regulation are intended to be operationalised through standards, and without them, companies lack a practical and legally sound way to demonstrate compliance.
Second, the Commission wants to avoid a scenario in which the rules start to apply before standards, common specifications and supervisory structures are ready. That would lead to high costs, risks of fragmented national interpretations and major difficulties for companies in meeting the requirements on time and remaining compliant. Postponing implementation is therefore a way to avoid a regulatory framework that is, in practice, unworkable.
Coordination is essential
However, the pause only resolves the immediate question of when the rules will apply – not how the regulatory framework should function in practice alongside other EU legislation. Many companies therefore need to work more strategically to coordinate the AI Regulation with, for example, the General Data Protection Regulation (GDPR), the Digital Services Act (DSA), the Cyber Resilience Act (CRA), and the upcoming Swedish cybersecurity law implementing the NIS2 Directive.
The Commission’s AI Office will play a central role in developing common templates, guidance and practical tools to harmonise implementation and to reduce administrative burdens and regulatory fragmentation across Member States. But we are not there yet.
What can companies do now?
First, AI systems should be classified early, since whether a system falls under Annex III or Annex I determines the entire planning process – from investments to contracts.
Second, companies should start building up their internal governance structures: roles, documentation, logging and risk processes that can be established irrespective of the detailed standards.
Third, it is wise to coordinate regulatory workstreams, ensuring that GDPR, DSA, CRA, NIS2 and the AI Regulation are handled within a common compliance framework rather than as separate projects.
Companies need to stay on track
The pause offers valuable breathing room – but it is not a pause button for the business sector. The timeline remains fluid. Negotiations in Brussels may take longer, standardisation work is not complete, and overlaps with other EU rules remain. Companies must therefore prepare for multiple scenarios, including the possibility that high-risk requirements could enter into force earlier than the proposed dates.
Those companies that already map their AI systems, strengthen their internal governance and coordinate their compliance processes will be best prepared when the clock starts again – whether that happens in 2027, 2028 or earlier.
Text: Adam Ack
Proposal to postpone high-risk rules in the AI Regulation
Annex III: High-risk areas
- Examples: Recruitment, credit assessment, education, biometrics
- Proposed latest start date: 2 December 2027 (previously 2 August 2026)
- Earlier start possible: Yes, if standards and support tools are deemed ready
- Transition period if early start: 6 months
Annex I: High-risk under product safety legislation
- Examples: Medical devices, machinery, personal protective equipment
- Proposed latest start date: 2 August 2028 (previously 2 August 2027)
- Earlier start possible: Yes, by decision of the European Commission
- Transition period if early start: 12 months
What does this mean for companies?
- The timeline will be driven by the completion of standards, but with fixed latest start dates.
- Companies will receive support for compliance, but must be prepared for the rules to apply earlier.
- If the start is brought forward, the adaptation window will be relatively short: 6–12 months.

