
August 2026 Is 5 Months Away: Your Step-by-Step Compliance Roadmap for High-Risk AI Systems

Team KI-Akte
March 6, 2026 · 9 min read

On August 2, 2026, every obligation for Annex III high-risk AI systems under the EU AI Act becomes enforceable. That is not a soft target. It is the date after which market surveillance authorities — in Germany, the Bundesnetzagentur (BNetzA) — can inspect, fine, and order the withdrawal of non-compliant AI systems.

If your organization uses AI for hiring, credit scoring, insurance pricing, student assessment, critical infrastructure monitoring, or any other use case within the eight Annex III categories, this deadline applies to you. The penalties are severe: up to EUR 35 million or 7 percent of global annual turnover, whichever is higher, at the top tier (Article 99).

Yet a recent survey found that 72 percent of German companies do not know how to implement the EU AI Regulation. This article provides a concrete, month-by-month compliance roadmap for the five months remaining.

What Happens on August 2, 2026?

The EU AI Act (Regulation 2024/1689) follows a phased enforcement timeline. Prohibited AI practices and Article 4 AI literacy requirements have been in effect since February 2, 2025. General-purpose AI model rules applied from August 2, 2025. The August 2026 deadline covers the broadest and most impactful category: high-risk AI systems listed in Annex III.

Annex III defines eight categories of high-risk use cases:

  1. Biometrics — Remote identification, emotion recognition, biometric categorization
  2. Critical infrastructure — Safety components in energy, water, gas, heating, digital infrastructure, road traffic
  3. Education and training — Admission decisions, assessment of learning outcomes, proctoring
  4. Employment and HR — CV screening, job advertising targeting, interview evaluation, performance monitoring, promotion decisions
  5. Essential private services — Credit scoring, creditworthiness assessment, life and health insurance risk pricing, emergency services dispatch
  6. Law enforcement — Risk assessment of natural persons, polygraphs, evidence evaluation
  7. Migration and border control — Risk assessment of irregular migration, visa and asylum application processing
  8. Administration of justice and democratic processes — AI assisting judicial authorities in researching and interpreting facts and law

For each of these, providers must complete conformity assessments, prepare Annex IV technical documentation, implement Article 9 risk management systems, affix CE markings, and register in the EU database under Article 49. Deployers must conduct Fundamental Rights Impact Assessments where required (Article 27), ensure human oversight, and maintain logs for at least six months.

The penalty tiers

Violation                               Maximum penalty
Prohibited AI practices                 EUR 35M or 7% of global turnover
High-risk AI system obligations         EUR 15M or 3% of global turnover
Incorrect information to authorities    EUR 7.5M or 1% of global turnover
SME and startup reduction               Proportionate lower amounts apply

Will the Digital Omnibus Push the Deadline Back?

Some enterprises are banking on the European Commission's Digital Omnibus proposal, published on November 19, 2025, which would extend compliance deadlines for Annex III systems by up to 16 months. The European Parliament's rapporteurs proposed fixed alternative dates in their February 5, 2026 draft report: December 2, 2027 for Annex III and August 2, 2028 for Annex I systems.

This sounds reassuring, but here is the reality: the Omnibus is still in legislative procedure. The Council's Cyprus Presidency published alternative text on January 23, 2026. The feedback period closes in March 2026. Neither trilogue negotiations nor a final vote have occurred. There is no guarantee the Omnibus will be adopted before August 2, 2026, and even if it is, the compliance work is identical — you just might have more time to finish it.

The only responsible strategy is to work toward the current deadline. If the Omnibus grants additional time, treat it as a buffer for refinement, not an excuse to start later.

The 5-Month Compliance Sprint

Month 1: Inventory and Classify (March 2026)

Everything starts with knowing what AI systems your organization actually uses. This sounds simple, but most enterprises dramatically undercount their AI footprint. Shadow AI — tools embedded in SaaS products, vendor APIs with AI features, department-level ChatGPT subscriptions — is pervasive.

Conduct a comprehensive inventory across every business unit, department, and vendor relationship. For each AI system, document:

  • What it does and what decisions it influences
  • Who provides it (internal development or third-party vendor)
  • What data it processes (personal data, special categories, financial data)
  • Which Annex III category it falls under, if any
  • Who is responsible for it within your organization

A centralized AI use case register is the foundation. Without it, every subsequent compliance step is guesswork.
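One practical way to keep such a register consistent and machine-readable is a typed record per system. The sketch below is illustrative only — the field names and the example vendor are assumptions, not anything mandated by the Act — but it mirrors the documentation points listed above:

```python
from dataclasses import dataclass
from enum import Enum

class AnnexIIICategory(Enum):
    NONE = 0                      # not an Annex III use case
    BIOMETRICS = 1
    CRITICAL_INFRASTRUCTURE = 2
    EDUCATION = 3
    EMPLOYMENT_HR = 4
    ESSENTIAL_SERVICES = 5
    LAW_ENFORCEMENT = 6
    MIGRATION = 7
    JUSTICE = 8

@dataclass
class AIUseCase:
    name: str
    purpose: str                  # what it does and what decisions it influences
    provider: str                 # internal team or third-party vendor
    data_categories: list[str]    # e.g. personal data, special categories
    annex_iii: AnnexIIICategory   # classification, if any
    owner: str                    # responsible person within the organization

register: list[AIUseCase] = []
register.append(AIUseCase(
    name="CV screening tool",
    purpose="Ranks incoming applications for recruiters",
    provider="vendor: ExampleVendor GmbH (hypothetical)",
    data_categories=["personal data"],
    annex_iii=AnnexIIICategory.EMPLOYMENT_HR,
    owner="Head of HR Operations",
))
```

Whether this lives in code, a spreadsheet, or a governance tool matters less than having one authoritative record per system with a named owner.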

Month 2: Apply the High-Risk Test (April 2026)

Not every AI system that touches an Annex III category is automatically high-risk. Article 6(3) provides a narrow exemption: a system is not high-risk if it "does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons." Four conditions can qualify a system for this exemption — performing narrow procedural tasks, improving previously completed human activity, detecting patterns without influencing assessment, or performing preparatory tasks.

However, there is a critical trap: any AI system that performs profiling of natural persons is always classified as high-risk, regardless of any exception. Profiling, as defined in GDPR Article 4(4), covers any automated processing that evaluates personal aspects — work performance, economic situation, health, preferences, behavior, or location. This catches far more systems than most organizations realize. HR analytics tools, customer segmentation engines, behavioral targeting, and predictive models based on personal attributes all constitute profiling.

For each system in your inventory, document the classification rationale. If you conclude a system is not high-risk under Article 6(3), you must prepare a written assessment before placing it on the market — and provide it to authorities on request. Getting this wrong carries its own penalty: EUR 7.5 million or 1 percent of turnover for supplying incorrect information.
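The decision logic above can be sketched as a short screening function. This is a deliberate simplification for illustration — real classification requires legal review — but it captures the key ordering: the profiling override applies before any Article 6(3) exemption is considered.

```python
def is_high_risk(annex_iii_match: bool,
                 performs_profiling: bool,
                 exemption_conditions: dict[str, bool]) -> bool:
    """Simplified Article 6(3) screen -- an illustrative sketch, not legal advice.

    exemption_conditions keys correspond to the four Article 6(3) grounds:
      narrow_procedural_task, improves_completed_human_activity,
      detects_patterns_without_influencing, preparatory_task
    """
    if not annex_iii_match:
        return False    # outside Annex III: not high-risk on this basis
    if performs_profiling:
        return True     # profiling of natural persons: always high-risk
    # Any one of the four conditions can ground an exemption -- but the
    # written assessment must still be prepared and retained.
    return not any(exemption_conditions.values())
```

Note how a CV screener that profiles candidates remains high-risk even if it arguably only performs a "preparatory task" — the profiling override cannot be argued away.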

Month 3: Build the Compliance Foundation (May 2026)

For every confirmed high-risk system, you now need to implement three core requirements:

Risk management system (Article 9): A continuous, iterative process to identify, analyze, evaluate, and mitigate risks. This is not a one-time assessment — it must run throughout the entire lifecycle of the AI system, from development through deployment and decommissioning.

Quality management system (Article 17): Documented policies and procedures covering design and development practices, testing and validation, data management, post-market monitoring, record-keeping, and resource management. For many enterprises, this means extending existing ISO 9001 or similar quality frameworks to specifically address AI.

Technical documentation (Annex IV): A comprehensive document covering the system's intended purpose, design specifications, training data description, testing and validation results, monitoring and governance measures, and instructions for use. This document must be detailed enough for an authority to assess compliance without needing access to the system itself.

Month 4: Assess and Document (June 2026)

Conformity assessment (Article 43): Most Annex III systems can follow the internal control procedure in Annex VI — essentially a thorough self-assessment documented to regulatory standards. However, some biometric systems require third-party assessment by a notified body.

Fundamental Rights Impact Assessment (Article 27): If your organization is a public body, provides public services, or deploys high-risk AI for employment decisions or access to essential services like credit or insurance, you must conduct a FRIA before deployment. This is separate from a DPIA under GDPR and covers a broader set of rights: non-discrimination, dignity, freedom of expression, effective remedy, workers' rights, and consumer protection. The FRIA must be notified to the market surveillance authority.

Month 5: Register and Go Live (July 2026)

CE marking: Once conformity assessment is complete, affix the CE marking to your AI system or its packaging to declare compliance.

EU database registration (Article 49): Register each high-risk AI system in the EU database maintained by the European Commission. This registration is public for most systems and must be kept current throughout the system's lifecycle.

Deployer obligations (Article 26): Ensure human oversight is assigned and trained, input data is relevant and representative, usage is monitored continuously, and logs are retained for at least six months.

Incident reporting: Establish a process to report serious incidents to the market surveillance authority without delay once you become aware of them.

Germany, Austria, Switzerland: National Implementation

Germany: The KI-MIG (Gesetz zur Durchführung der KI-Verordnung, the national act implementing the AI Regulation) was approved by the Federal Cabinet on February 11, 2026 and is proceeding through the Bundestag and Bundesrat. The Bundesnetzagentur (BNetzA) is designated as the central market surveillance authority, with a KoKIVO coordination center for uniform interpretation. Different supervisory channels apply depending on the use case — HR, financial services, medical devices, and telecoms each have sector-specific oversight.

Austria: The Austrian government is developing its implementation legislation in parallel, with the RTR (Rundfunk und Telekom Regulierungs-GmbH) expected to take a central role. Austrian enterprises with German market exposure should monitor both jurisdictions.

Switzerland: While not an EU member, the EU AI Act has extraterritorial scope. Swiss companies whose AI output is used within the EU or who place AI systems on the EU market must comply. The Swiss Federal Council is evaluating alignment measures, but compliance with the EU framework is already a practical necessity for most Swiss enterprises active in the DACH market.

The Hidden Blocker: You Cannot Comply Without a Complete AI Inventory

Every step in this roadmap depends on one prerequisite: knowing which AI systems your organization operates. Without a comprehensive, current, centralized inventory, you cannot classify risk levels, you cannot prioritize conformity assessments, and you cannot demonstrate compliance to a regulator.

A structured AI use case register — one that tracks each system's purpose, risk classification, responsible owner, review status, and documentation — is not just a nice-to-have. It is the foundational requirement that makes every other compliance action possible. Start there, and the rest follows.