EU AI Act August 2026 Deadline: What Companies Must Do Before August 2
The EU AI Act’s main enforcement deadline is August 2, 2026. By that date, deployers of Annex III high-risk AI systems must ensure conformity assessments are complete, prepare technical documentation, register systems in the EU AI database, and, where applicable, conduct a Fundamental Rights Impact Assessment (FRIA). Companies that fail to classify their AI systems and complete the required steps face fines of up to €15 million or 3% of global annual turnover under the AI Act (Regulation (EU) 2024/1689). As of April 2026, approximately 3.5 months remain.
Which Companies Must Act Before August 2, 2026?
The August 2026 deadline applies to any organisation — regardless of size or nationality — that deploys or puts into service an AI system in the EU market.
The full obligations fall on deployers of Annex III high-risk AI systems. These include organisations that:
- Use AI for recruitment, CV screening, or employee monitoring (Annex III, Category 4) — see our AI recruitment screening compliance guide
- Deploy biometric identification or facial recognition systems (Annex III, Category 1) — see our AI facial recognition compliance guide
- Use AI in critical infrastructure: energy, water, transport (Annex III, Category 2)
- Apply AI for credit scoring, insurance risk assessment, or loan eligibility (Annex III, Category 5) — see our AI Act for financial services guide
- Use AI in education for student assessment or access decisions (Annex III, Category 3)
- Operate AI for law enforcement or border management (Annex III, Categories 6–7)
- Deploy AI-assisted tools in the administration of justice (Annex III, Category 8)
General-purpose AI (GPAI) model providers had an earlier deadline of August 2, 2025 for GPAI-specific obligations. If that milestone was missed, corrective action is overdue.
Most SMEs using off-the-shelf AI tools — Microsoft Copilot, standard chatbots, common SaaS products with embedded AI — are not deployers of high-risk systems. Their primary obligation is the Article 50 transparency disclosure when users interact with AI that could be mistaken for a human. The August 2026 deadline does not trigger heavy compliance burdens for this group.
6 Steps Companies Must Complete Before August 2, 2026
This is the compliance checklist for Annex III high-risk AI system deployers.
1. Build an AI System Inventory
Document every AI system your organisation uses, develops, or deploys. For each system record: vendor name, system purpose, processing logic (where known), personal data involved, and the population of people affected. This inventory is the foundation for every subsequent step — without it, classification is impossible.
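For organisations that want the inventory in machine-readable form from the start, a minimal sketch of a record structure is shown below. The field names and the example entry are illustrative assumptions, not anything prescribed by the Regulation.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in the organisation's AI system inventory (illustrative fields)."""
    name: str                 # internal name of the system
    vendor: str               # supplier, or "in-house"
    purpose: str              # what the system is used for
    processing_logic: str     # high-level description, where known
    personal_data: list[str] = field(default_factory=list)      # categories of personal data
    affected_persons: list[str] = field(default_factory=list)   # e.g. applicants, employees

# Hypothetical example entry
cv_screener = AISystemRecord(
    name="CV pre-screening tool",
    vendor="ExampleVendor GmbH",  # hypothetical vendor
    purpose="Rank incoming job applications",
    processing_logic="ML ranking model over parsed CV fields",
    personal_data=["employment history", "education"],
    affected_persons=["job applicants"],
)
```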
2. Classify Each System by Risk Level
Under Regulation (EU) 2024/1689, AI systems fall into four categories:
- Prohibited (banned since February 2, 2025): emotion recognition in workplaces and education, social scoring, and real-time remote biometric identification in publicly accessible spaces (subject to narrow law-enforcement exceptions)
- High-risk (Annex III): full compliance obligations by August 2, 2026
- Limited-risk: transparency disclosure to users required under Article 50
- Minimal-risk: no specific AI Act requirements
Check each system against the eight Annex III categories. When in doubt, seek legal advice — misclassifying a high-risk system as minimal-risk is itself a compliance failure with the same penalty exposure.
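As a first-pass screening aid, and emphatically not a substitute for legal analysis, the four risk tiers and a crude Annex III keyword check can be sketched as follows. The keyword list is an assumption for illustration; any near-miss belongs with counsel.

```python
from enum import Enum

class RiskLevel(Enum):
    PROHIBITED = "prohibited"  # banned since 2 February 2025
    HIGH = "high"              # Annex III: full obligations by 2 August 2026
    LIMITED = "limited"        # Article 50 transparency duties
    MINIMAL = "minimal"        # no specific AI Act requirements

# Reference list of the eight Annex III categories described in this article.
ANNEX_III_CATEGORIES = {
    1: "biometric identification",
    2: "critical infrastructure",
    3: "education and vocational training",
    4: "employment and worker management",
    5: "access to essential services (credit, insurance, benefits)",
    6: "law enforcement",
    7: "migration and border control",
    8: "administration of justice",
}

def screen(use_case: str) -> RiskLevel:
    """Crude keyword screen: a match means 'review with counsel', not a final classification."""
    high_risk_keywords = ("recruitment", "cv screening", "biometric", "credit scoring",
                          "insurance risk", "exam grading", "border", "policing")
    if any(k in use_case.lower() for k in high_risk_keywords):
        return RiskLevel.HIGH
    # No keyword hit; still verify Article 50 transparency duties manually.
    return RiskLevel.MINIMAL

print(screen("CV screening for graduate hiring"))  # RiskLevel.HIGH
```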
3. Complete the Conformity Assessment
For Annex III systems, a conformity assessment (Article 43) is mandatory before the system is placed on the market or put into service, and in any case before the August 2026 enforcement date. The assessment is formally the provider's obligation, but deployers must verify it has been carried out, and an organisation that substantially modifies a high-risk system or deploys it under its own name assumes provider obligations itself. For most high-risk categories, an internal conformity assessment is permitted: a structured self-evaluation that documents compliance against the Chapter III requirements. Certain biometric identification systems require a third-party assessment by a notified body.
The assessment must cover: risk management system, data governance, technical documentation, logging, transparency, human oversight, and accuracy metrics.
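A simple way to track assessment progress internally is a checklist keyed to those requirement areas. The sketch below assumes an in-house evidence register; nothing about its structure is mandated by the Act.

```python
# Chapter III requirement areas the assessment must document (per the list above).
REQUIREMENT_AREAS = [
    "risk management system",
    "data governance",
    "technical documentation",
    "logging",
    "transparency",
    "human oversight",
    "accuracy metrics",
]

def missing_evidence(evidence: dict[str, str]) -> list[str]:
    """Return requirement areas with no documented evidence yet."""
    return [area for area in REQUIREMENT_AREAS if not evidence.get(area)]

# Hypothetical register: two areas documented, five still open.
evidence = {"risk management system": "RM-policy-v3.pdf", "logging": "audit-log-spec.md"}
print(missing_evidence(evidence))
```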
4. Prepare Technical Documentation
Article 11 requires comprehensive technical documentation for high-risk AI systems. This includes: a general system description, design specifications, training methodology and data sources, testing results, performance metrics, and instructions for use. Documentation must be maintained and updated throughout the system’s operational lifecycle — it is a living document, not a one-off filing.
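Because the documentation must stay current, it helps to track a review date for each Article 11 element. The sketch below assumes an internal 180-day review cycle; both the cycle and the dates are illustrative.

```python
from datetime import date

# Article 11 documentation elements (per the list above) with last-review tracking.
technical_docs = {
    "general system description": date(2026, 3, 1),
    "design specifications": date(2026, 3, 1),
    "training methodology and data sources": date(2025, 11, 15),
    "testing results": date(2025, 11, 15),
    "performance metrics": date(2026, 1, 10),
    "instructions for use": date(2026, 1, 10),
}

def stale(docs: dict, today: date, max_age_days: int = 180) -> list[str]:
    """Flag elements not reviewed within the internal review cycle (assumed 180 days)."""
    return [name for name, reviewed in docs.items()
            if (today - reviewed).days > max_age_days]

print(stale(technical_docs, date(2026, 7, 1)))  # flags the November 2025 entries
```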
5. Conduct a Fundamental Rights Impact Assessment (FRIA) Where Required
Article 27 requires a Fundamental Rights Impact Assessment for:
- Public bodies deploying any Annex III high-risk system
- Private operators deploying high-risk AI for credit scoring or insurance risk classification
The FRIA evaluates potential impacts on fundamental rights — privacy, non-discrimination, and fair access to services — and must be available to the relevant market surveillance authority on request. For HR and recruitment AI, consider whether a FRIA is needed alongside the standard conformity assessment.
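The Article 27 trigger conditions reduce to a short rule. The function below encodes them as a sketch; the use-case strings are assumptions for illustration, and edge cases (such as private entities providing public services) still need case-by-case analysis.

```python
def fria_required(is_public_body: bool, deploys_annex_iii: bool, use_case: str = "") -> bool:
    """Sketch of the Article 27 trigger: public bodies deploying any Annex III system,
    or private operators using high-risk AI for credit scoring / insurance risk."""
    if not deploys_annex_iii:
        return False
    if is_public_body:
        return True
    return use_case in ("credit scoring", "insurance risk classification")

print(fria_required(is_public_body=False, deploys_annex_iii=True, use_case="credit scoring"))  # True
print(fria_required(is_public_body=False, deploys_annex_iii=True, use_case="cv screening"))    # False
```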
6. Register in the EU AI Database
Article 49 requires registration of high-risk AI systems in the EU public database established under Article 71 before they are placed on the market or put into service. Public sector deployers must additionally register their own use of the system. For private deployers of most Annex III systems, registration is required at the provider level; confirm with your vendor that this step has been completed. Registration requires: system name and description, intended purpose, deployment geography, responsible person contact details, and reference to the conformity assessment.
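The listed registration fields can be collected ahead of time. A sketch of a pre-registration record follows; the field names are assumed for illustration, and the actual database interface is operated by the European Commission.

```python
# Hypothetical internal record collecting the fields listed above before registration.
registration_record = {
    "system_name": "CV pre-screening tool",
    "description": "ML ranking of incoming job applications",
    "intended_purpose": "Shortlisting candidates for human review",
    "deployment_geography": ["DE", "AT"],
    "responsible_contact": "compliance@example.com",  # hypothetical contact
    "conformity_assessment_ref": "CA-2026-001",       # internal reference, illustrative
}

# Basic completeness check before the record goes to whoever files the registration.
missing = [key for key, value in registration_record.items() if not value]
assert not missing, f"Incomplete registration record: {missing}"
```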
What Counts as a High-Risk AI System Under Annex III?
Annex III of Regulation (EU) 2024/1689 lists eight categories of high-risk AI systems. Here is a summary with examples relevant to German businesses:
| Category | Description | German Business Examples |
|---|---|---|
| 1. Biometric identification | Real-time or post-hoc identification of natural persons | Facial recognition at office entry, biometric time-attendance systems |
| 2. Critical infrastructure | AI managing utilities, transport, or digital infrastructure | Smart grid management, rail traffic control |
| 3. Education | AI for student access, assessment, or evaluation | Exam grading algorithms, university admissions AI |
| 4. Employment | AI for recruitment, performance evaluation, task allocation | CV screening tools, employee monitoring software |
| 5. Essential services | AI for access to credit, insurance, or public benefits | Credit scoring models, loan eligibility engines |
| 6. Law enforcement | AI for criminal risk profiling or evidence evaluation | Predictive policing (currently uncommon in Germany) |
| 7. Migration and border | AI for visa assessment or irregular migration detection | Border control AI systems |
| 8. Justice | AI supporting judicial or dispute resolution processes | AI contract analysis tools used in legal proceedings |
For German businesses, the most commonly relevant categories are 1 (biometrics), 4 (employment), and 5 (financial services). See our AI employee monitoring compliance guide for the intersection of employment AI with German works council law.
Germany-Specific Considerations
Supervisory Authority Not Yet Formally Designated
As of April 2026, Germany has not yet formally designated its national AI Act market surveillance authority (Marktüberwachungsbehörde). The Federal Government has indicated that the Bundesnetzagentur (Federal Network Agency) will assume horizontal oversight, with sectoral authorities such as BaFin and the BSI retaining competence in their domains. The gap is not a grace period: the Regulation is directly applicable, so the full obligations attach from August 2, 2026 regardless of when the authority is formally designated, and companies should prepare their documentation accordingly.
Works Council Rights Under BetrVG Section 87
German employers deploying AI that monitors employees must comply with Section 87(1)(6) of the Works Constitution Act (BetrVG). Works councils hold co-determination rights over technical systems capable of monitoring worker behaviour or performance. AI employee monitoring tools — whether for productivity tracking, anomaly detection, or attendance management — require a works agreement (Betriebsvereinbarung) before deployment.
This obligation runs parallel to and independently of AI Act compliance. A works council can halt deployment of an AI system regardless of whether a conformity assessment is in place. See our AI employee monitoring guide for detail on BetrVG-compliant implementation.
DSK Guidance on GDPR-Compliant AI Development (June 2025)
The German Data Protection Conference (Datenschutzkonferenz, DSK) published guidance in June 2025 on GDPR-compliant AI system development and deployment. Key points for German businesses:
- A lawful basis under GDPR must be established before personal data is used for AI training or inference
- Data minimisation applies — AI systems should process no more personal data than the task strictly requires
- Automated individual decisions under Article 22 GDPR require human review to be available
- Data Protection Impact Assessments (DPIAs) are required for high-risk AI systems processing personal data
Practical implication: A GDPR DPIA and an AI Act conformity assessment are distinct documents with overlapping content. Running them together reduces duplication and cost.
BaFin for Financial Services Firms
German financial institutions face dual oversight: the AI Act’s Article 26 deployer obligations, and BaFin’s existing requirements on model governance, algorithmic fairness, and senior management accountability. For AI-driven credit scoring, loan origination, or insurance risk models, BaFin’s supervisory expectations effectively exceed the minimum AI Act standard. Our AI Act for financial services guide covers sector-specific compliance obligations in detail.
Penalties for Non-Compliance
The AI Act’s penalty structure under Article 99 is among the most severe in EU regulatory history:
| Violation | Maximum Penalty |
|---|---|
| High-risk system non-compliance (Article 26) | €15 million or 3% of global annual turnover (whichever is higher) |
| Prohibited AI systems (Article 5) | €35 million or 7% of global annual turnover (whichever is higher) |
| Providing false information to authorities | €7.5 million or 1% of global annual turnover (whichever is higher) |
For SMEs and micro-enterprises, Article 99(6) reverses the rule: the fine is capped at whichever of the two figures, the fixed amount or the turnover percentage, is lower.
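The "whichever is higher" rule, and its SME reversal, is simple arithmetic. The sketch below shows that for a company with €2 billion turnover the 3% figure (€60 million) exceeds the €15 million cap, while an SME with €50 million turnover faces only the lower turnover-based figure.

```python
def max_fine(turnover_eur: float, cap_eur: float = 15_000_000, pct: float = 0.03,
             is_sme: bool = False) -> float:
    """Article 99 fine ceiling: higher of cap and pct * turnover; lower of the two for SMEs."""
    turnover_based = pct * turnover_eur
    return min(cap_eur, turnover_based) if is_sme else max(cap_eur, turnover_based)

print(f"{max_fine(2_000_000_000):,.0f}")            # 60,000,000 -> the 3% figure governs
print(f"{max_fine(50_000_000, is_sme=True):,.0f}")  # 1,500,000  -> SME: lower figure governs
```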
Enforcement begins at the national level through designated market surveillance authorities. Even before formal enforcement actions, regulatory inquiries, audit requests, and public disclosure requirements create compliance costs from August 2026 onward.
How Compound Law Helps
Compound Law advises businesses, startups, and founders in Germany and the DACH region on EU AI Act compliance. For the August 2026 deadline, we assist with:
- AI system inventory and classification — mapping your AI use against all eight Annex III categories
- Conformity assessment support — guiding internal assessments under Article 43
- Technical documentation — drafting Article 11-compliant system documentation
- FRIA preparation — Fundamental Rights Impact Assessments for public bodies and financial services operators
- Works council coordination — BetrVG-compliant AI works agreements
- Vendor due diligence — reviewing AI procurement contracts for AI Act compliance provisions
The August 2, 2026 deadline is firm. Starting the compliance process now — even with a basic AI inventory — puts your organisation significantly ahead of those waiting until summer.
Frequently Asked Questions
When exactly is the EU AI Act August 2026 deadline?
The deadline is August 2, 2026. This is when the full obligations under Chapter III of Regulation (EU) 2024/1689 become enforceable for deployers and providers of Annex III high-risk AI systems. Prohibitions on banned AI systems have applied since February 2, 2025. GPAI provider obligations have applied since August 2, 2025.
Which companies are affected by the August 2, 2026 deadline?
Any company that deploys or puts into service a high-risk AI system listed in Annex III of Regulation (EU) 2024/1689 within the EU market. This includes German and international companies using AI for HR decisions, biometric identification, credit scoring, critical infrastructure management, or education and training assessment.
What is an Annex III high-risk AI system under the EU AI Act?
Annex III lists eight categories of AI systems treated as high-risk because of their potential impact on fundamental rights, health, or safety. They include AI used for biometric identification, recruitment, employee monitoring, credit scoring, and critical infrastructure. The full list appears in Article 6(2) read together with Annex III of the Regulation.
What is a conformity assessment and do I need one?
A conformity assessment (Article 43) is a structured evaluation confirming that a high-risk AI system meets all requirements in Chapter III, covering risk management, data governance, technical documentation, logging, transparency, human oversight, and accuracy. If you deploy an Annex III high-risk system, one must be completed before August 2026: formally by the provider, though you must verify it, and a deployer that substantially modifies a system or deploys it under its own name takes on the provider's assessment duty. For most categories, an internal self-assessment is permitted. Some biometric systems require a third-party assessment by a notified body.
What is a FRIA and who needs to conduct one?
A Fundamental Rights Impact Assessment (FRIA) under Article 27 is required for public sector bodies deploying any high-risk system, and for private operators deploying AI for credit scoring or insurance risk classification. It evaluates the system’s potential impact on fundamental rights — including privacy, non-discrimination, and fair access to services — and must be documented and available for regulatory review.
What are the penalties for missing the August 2026 deadline?
Non-compliance with high-risk AI system obligations under Article 26 can result in fines of up to €15 million or 3% of global annual turnover, whichever is higher. Deploying a prohibited AI system under Article 5 carries fines of up to €35 million or 7% of global annual turnover.
Does the EU AI Act apply in Germany even without a national implementing law?
Yes. The EU AI Act (Regulation (EU) 2024/1689) is an EU Regulation — it is directly applicable in all member states without national implementing legislation. Germany does not need to enact a separate Ausführungsgesetz for the core obligations to apply. National law will designate supervisory authorities and establish procedural enforcement mechanisms, but the substantive obligations and penalty levels are set at EU level and apply from August 2, 2026.
What should German startups do first before August 2026?
Start with an AI system inventory. Map every AI tool your company uses — including embedded AI features in SaaS products. For each tool, determine whether it falls into an Annex III category. Most startups will find that only a small subset of their AI use is genuinely high-risk. Focus compliance effort on those systems first. If you use AI for hiring, credit decisions, or biometric identification, seek legal advice without delay.