EU AI Act Compliance Checklist for German Tech Companies
The EU AI Act is law, and the first compliance deadlines have already passed: the Act's prohibitions and AI literacy obligations have applied since February 2, 2025, and its general-purpose AI obligations since August 2, 2025. If you use, deploy, or develop AI systems and operate in Germany, you need to understand where you stand.
This checklist covers the key compliance areas for German tech companies: what applies to you based on your AI risk classification, what the 2025 deadlines already required, and what the August 2026 high-risk obligations will demand.
Step 1: Understand the Risk Classification Framework
Not everything you do with AI is regulated equally. The EU AI Act uses a tiered risk framework.
Prohibited AI systems — banned entirely. Includes social scoring, biometric categorization based on sensitive characteristics, subliminal manipulation, and real-time remote biometric identification in public spaces (with narrow exceptions). If you operate any of these, stop immediately.
High-risk AI systems — subject to strict requirements. This includes AI systems used in hiring and HR decisions, credit scoring, access to essential private and public services, and the other use cases listed in Annex III of the Regulation, as well as AI that functions as a safety component of products covered by EU harmonisation legislation (Annex I). If your AI outputs affect individual rights or access to services, you’re likely here.
Limited-risk AI systems — subject to transparency obligations only. Chatbots and other AI that interacts with users must disclose that users are interacting with AI.
Minimal-risk AI systems — largely unregulated. Most AI features (recommendation engines, spam filters, basic automation) fall here.
Your first compliance action: Map your AI systems against this framework. You cannot comply until you know where you sit.
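For a tech team starting this mapping, it helps to keep the result in a machine-readable inventory rather than a slide deck. The sketch below is illustrative only: the system names and the `RiskTier` enum are hypothetical, and the classification of each system still requires case-by-case legal analysis.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # banned outright
    HIGH = "high"              # Annex III use cases / safety components
    LIMITED = "limited"        # transparency duties only
    MINIMAL = "minimal"        # largely unregulated

# Hypothetical inventory: every AI feature in the product and back office,
# mapped to its (legally reviewed) risk tier.
inventory = {
    "cv-screening-assistant": RiskTier.HIGH,  # hiring decisions
    "support-chatbot": RiskTier.LIMITED,      # must disclose it is AI
    "spam-filter": RiskTier.MINIMAL,
}

def tiers_in_use(inv):
    """The set of risk tiers the company actually has to comply with."""
    return set(inv.values())

def flag_prohibited(inv):
    """Systems that must be discontinued immediately."""
    return [name for name, tier in inv.items()
            if tier is RiskTier.PROHIBITED]

print(sorted(t.value for t in tiers_in_use(inventory)))
print(flag_prohibited(inventory))
```

Keeping the inventory in code (or a config file) means later steps, such as the 2026 high-risk preparations, can start from a single authoritative list.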
Step 2: The 2025 Requirements — Are You Compliant?
Three sets of obligations are already in force, across two deadlines:
1. Prohibited AI practices (since February 2, 2025): the ban applies in full. If your systems included any of the prohibited categories, they should already have been discontinued.
2. AI literacy obligations (since February 2, 2025): providers and deployers of AI systems must ensure a sufficient level of AI literacy among their staff and others operating AI systems on their behalf. This isn’t just formal training; it means your team needs to understand the AI systems they work with.
3. General-purpose AI models (since August 2, 2025): providers of GPAI models became subject to transparency and cooperation obligations. If you develop foundation models or GPAI models (not just use them), this applies to you directly.
Checklist — 2025 deadlines:
- Prohibited AI systems identified and discontinued
- AI literacy programs implemented for relevant staff
- GPAI model documentation in place (if applicable)
- Transparency measures for chatbots and AI-interaction interfaces planned (the Act’s disclosure obligations for these apply from August 2, 2026, so build them into your roadmap now)
Step 3: August 2026 High-Risk Requirements — Prepare Now
The full set of high-risk obligations applies from August 2, 2026 for the Annex III use cases (high-risk AI that is a safety component of a product regulated under Annex I has until August 2, 2027). If your systems are high-risk under the Act, you have roughly twelve months to build compliance infrastructure.
Risk management system — you need a documented, ongoing risk management system for each high-risk AI system. This isn’t a one-time assessment; it’s a continuous process covering the entire lifecycle of the system.
Data governance — training, validation, and testing datasets must meet specific quality criteria. Practices for data preparation, examination, and management must be documented.
Technical documentation — comprehensive documentation covering system purpose, design, development process, validation results, and performance metrics. This documentation must be sufficient to allow a conformity assessment.
Transparency and instructions for use — high-risk AI systems must come with clear instructions for intended users covering capabilities, limitations, human oversight mechanisms, and technical measures.
Human oversight — high-risk systems must be designed to allow human oversight by natural persons during use. The ability to intervene, override, and monitor must be built into the system.
Accuracy, robustness, and cybersecurity — systems must achieve an appropriate level of accuracy, perform consistently despite errors, faults, and inconsistencies, and be resilient against attempts to exploit their vulnerabilities, including adversarial attacks.
Registration — high-risk AI systems must be registered in the EU database prior to being placed on the market.
Conformity assessment — most high-risk AI systems require a conformity assessment before deployment. Some require third-party assessment; others can be self-assessed.
Checklist — August 2026 preparation:
- High-risk AI systems identified and catalogued
- Risk management system framework developed
- Data governance documentation started
- Technical documentation process established
- Human oversight mechanisms designed
- Conformity assessment pathway identified
- EU database registration timeline set
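One way to make the preparation checklist above operational is a per-system compliance record that tracks each requirement area. This is a hedged sketch, not official terminology or a prescribed format: the field names simply mirror the obligations listed above, and the example system name is hypothetical.

```python
from dataclasses import dataclass, asdict

@dataclass
class HighRiskComplianceRecord:
    """Readiness of one high-risk AI system against the Aug 2026 duties."""
    system_name: str
    risk_management_documented: bool = False     # ongoing, lifecycle-wide
    data_governance_documented: bool = False     # training/validation/test data
    technical_documentation_ready: bool = False  # sufficient for conformity assessment
    instructions_for_use_drafted: bool = False   # capabilities, limits, oversight
    human_oversight_designed: bool = False       # intervene / override / monitor
    conformity_pathway_identified: bool = False  # self-assessment vs. third party
    registered_in_eu_database: bool = False      # before placing on the market

    def open_items(self):
        """Requirement areas still unfinished for this system."""
        return [k for k, v in asdict(self).items()
                if isinstance(v, bool) and not v]

record = HighRiskComplianceRecord("cv-screening-assistant",
                                  risk_management_documented=True)
print(record.open_items())
```

A record like this per high-risk system makes it easy to report remaining gaps to management as the August 2026 date approaches.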
Step 4: The Works Council Angle — §87 BetrVG
German employment law adds a layer that pure EU AI Act compliance doesn’t cover: if you have a works council (Betriebsrat), deploying AI systems that affect employees triggers co-determination rights under §87 BetrVG.
Monitoring employees, AI-based performance assessment, and changing work processes via automated systems all trigger co-determination before implementation: under §87(1) No. 6 BetrVG in particular, technical systems that are even capable of monitoring behaviour or performance require the works council’s agreement, not just consultation. This applies regardless of where your AI system sits in the EU AI Act risk framework.
If you’re deploying AI internally — in HR, performance management, productivity monitoring, or workflow automation — you need to run a parallel track with your works council alongside EU AI Act compliance.
Step 5: GDPR Overlap — The Dual Framework Problem
Many AI Act compliance questions also implicate GDPR. Automated decision-making (Art. 22 GDPR) already carries restrictions and transparency requirements. High-risk AI systems that process personal data will need to satisfy both frameworks simultaneously.
Key overlap areas:
- Data minimization requirements vs. AI training data needs
- Rights to explanation for automated decisions
- Data protection impact assessments (DPIAs) — DPIAs for high-risk AI and EU AI Act risk assessments will often cover overlapping ground and should be coordinated
- Cross-border data transfers where your AI systems rely on processing by providers outside the EU/EEA
A compliance approach that treats EU AI Act and GDPR as separate workstreams will create gaps. They need to be handled together.
Step 6: What SaaS Companies Specifically Need to Check
If you provide B2B SaaS and your product includes AI features, you’re in the supply chain. Your obligations depend on whether you are a provider (you developed the model or system), deployer (you put a third-party AI system to work), or importer/distributor.
Provider obligations are heaviest. If you built the AI and offer it to customers, you bear full compliance responsibility.
Deployer obligations are real but lighter. If you use a third-party AI system in your product, you still have obligations: transparency to end users, monitoring, and — if the system is high-risk — ensuring the provider has met their own requirements and maintaining usage logs.
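For the deployer log-keeping duty, even a minimal structured record of each high-risk system invocation is a sensible starting point. The sketch below makes obvious assumptions: the field names and schema are illustrative, not prescribed by the Act, and in practice you would align them with the logs the provider’s system generates automatically.

```python
import io
import json
import time

def log_ai_usage(system_name: str, operator_role: str, outcome: str,
                 human_reviewed: bool, sink) -> dict:
    """Append one structured usage record for a high-risk AI system.

    Illustrative fields only; the actual schema should follow the
    provider's documentation and your legal team's guidance.
    """
    entry = {
        "timestamp": time.time(),
        "system": system_name,
        "operator_role": operator_role,    # who put the output to use
        "outcome": outcome,                # what the system produced/decided
        "human_reviewed": human_reviewed,  # was human oversight exercised?
    }
    sink.write(json.dumps(entry) + "\n")
    return entry

# Example: log one invocation to an in-memory sink (a file or log
# pipeline in production).
sink = io.StringIO()
entry = log_ai_usage("cv-screening-assistant", "hr-recruiter",
                     "candidate shortlisted", True, sink)
```

Writing one JSON line per invocation keeps the log trivially machine-readable if a supervisory authority or the provider later asks for it.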
Contractual clarity matters. Your AI vendor contracts should now explicitly address EU AI Act obligations, technical documentation handover, and allocation of compliance responsibility. Many off-the-shelf vendor agreements don’t cover this yet.
Getting Legal Support for EU AI Act Compliance
EU AI Act compliance isn’t just a technical problem — it’s a legal one. Classification decisions, documentation standards, conformity assessment pathways, and GDPR coordination all require legal analysis tailored to your specific systems and use cases.
Compound Law advises German tech companies on EU AI Act compliance alongside GDPR, works council requirements, and employment law. If you’re not certain where your AI systems sit under the framework, schedule a consultation to get a clear picture before August 2026.