
EU AI Act Compliance Checklist for German Tech Companies

Short answer

If your company builds, deploys, or buys AI in Germany, you need a documented risk classification now, compliance with the EU AI Act duties already in force (prohibitions and AI literacy since February 2, 2025; general-purpose AI obligations since August 2, 2025), and a practical roadmap for high-risk controls before August 2, 2026.

  • Classify each AI use case before you decide what obligations apply.
  • Check transparency, GPAI, and AI literacy duties that are already live.
  • Coordinate EU AI Act work with GDPR and works council review.

The EU AI Act is law, and the first compliance deadlines have already passed: the bans on prohibited practices and the AI literacy duties apply since February 2, 2025, and general-purpose AI obligations since August 2, 2025. If you use, deploy, or develop AI systems and operate in Germany, you need to understand where you stand.

This checklist covers the key compliance areas for German tech companies: what applies to you based on your AI risk classification, what the obligations already in force actually require, and what the August 2026 high-risk obligations will demand.


Step 1: Understand the Risk Classification Framework

Not everything you do with AI is regulated equally. The EU AI Act uses a tiered risk framework.

Prohibited AI systems — banned entirely. Includes social scoring, biometric categorization based on sensitive characteristics, subliminal manipulation, and real-time remote biometric identification in public spaces (with narrow exceptions). If you operate any of these, stop immediately.

High-risk AI systems — subject to strict requirements. This includes AI systems used in hiring and HR decisions, credit scoring, essential private and public services, safety components of products, and a range of other applications defined in Annex III of the Regulation. If your AI outputs affect individual rights or access to services, you’re likely here.

Limited-risk AI systems — subject to transparency obligations only. AI chatbots and other AI that interacts with users must disclose that users are interacting with AI.

Minimal-risk AI systems — largely unregulated. Most AI features (recommendation engines, spam filters, basic automation) fall here.

Your first compliance action: Map your AI systems against this framework. You cannot comply until you know where you sit.
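The mapping exercise can start as a simple inventory. As a sketch of one way to keep it queryable — the system names and tier assignments below are illustrative assumptions, not legal classifications, which always require case-by-case legal review:

```python
# Illustrative AI-system inventory mapped to EU AI Act risk tiers.
# Tier assignments here are placeholders; the real classification of
# each system is a legal judgment, not a lookup.
from dataclasses import dataclass

TIERS = ("prohibited", "high", "limited", "minimal")  # strictest first

@dataclass
class AISystem:
    name: str
    purpose: str
    tier: str  # assigned after legal review

    def __post_init__(self):
        if self.tier not in TIERS:
            raise ValueError(f"unknown risk tier: {self.tier}")

inventory = [
    AISystem("cv-screening", "ranks job applicants", "high"),       # Annex III: employment
    AISystem("support-bot", "customer chat assistant", "limited"),  # must disclose it is AI
    AISystem("spam-filter", "filters inbound email", "minimal"),
]

# Surface the systems that need a compliance workstream first.
for system in sorted(inventory, key=lambda s: TIERS.index(s.tier)):
    print(f"{system.tier:>10}  {system.name}: {system.purpose}")
```

Even a spreadsheet version of this table is enough; the point is that every AI feature has a recorded tier and a recorded reason before you decide which obligations to build for.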


Step 2: Obligations Already in Force — Are You Compliant?

Three sets of obligations already apply:

1. Prohibited AI practices — fully banned since February 2, 2025. If your systems included any of the banned categories, they should have been discontinued.

2. AI literacy obligations — since February 2, 2025, all providers and deployers of AI systems must ensure sufficient AI literacy among their staff and those who operate their systems. This isn’t just formal training; it means your team needs to understand the AI systems they work with.

3. General-purpose AI models — since August 2, 2025, providers of GPAI models are subject to transparency and cooperation obligations. If you develop foundation models or GPAI models (not just use them), this applies to you directly.

Checklist — obligations already in force:

  • Prohibited AI systems identified and discontinued
  • AI literacy programs implemented for relevant staff
  • GPAI model documentation in place (if applicable)
  • Chatbot and AI-interaction disclosures prepared (these Article 50 transparency duties apply fully from August 2, 2026)

Step 3: August 2026 High-Risk Requirements — Prepare Now

The full high-risk AI obligations for Annex III use cases apply from August 2, 2026; high-risk systems that are safety components of Annex I products follow on August 2, 2027. If your systems are high-risk under the Act, you have roughly a year to build compliance infrastructure.

Risk management system — you need a documented, ongoing risk management system for each high-risk AI system. This isn’t a one-time assessment; it’s a continuous process covering the entire lifecycle of the system.

Data governance — training, validation, and testing datasets must meet specific quality criteria. Practices for data preparation, examination, and management must be documented.

Technical documentation — comprehensive documentation covering system purpose, design, development process, validation results, and performance metrics. This documentation must be sufficient to allow a conformity assessment.

Transparency and instructions for use — high-risk AI systems must come with clear instructions for intended users covering capabilities, limitations, human oversight mechanisms, and technical measures.

Human oversight — high-risk systems must be designed to allow human oversight by natural persons during use. The ability to intervene, override, and monitor must be built into the system.

Accuracy, robustness, and cybersecurity — systems must meet appropriate levels of accuracy, they must be robust against errors and inconsistencies, and they must be secure against adversarial attacks.

Registration — high-risk AI systems must be registered in the EU database prior to being placed on the market.

Conformity assessment — most high-risk AI systems require a conformity assessment before deployment. Some require third-party assessment; others can be self-assessed.

Checklist — August 2026 preparation:

  • High-risk AI systems identified and catalogued
  • Risk management system framework developed
  • Data governance documentation started
  • Technical documentation process established
  • Human oversight mechanisms designed
  • Conformity assessment pathway identified
  • EU database registration timeline set
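Some teams encode these milestones in their project tooling so overdue items surface automatically. A minimal sketch, using the two checklist deadlines from this guide and illustrative item names:

```python
# Minimal deadline tracker for the checklists above. The two milestone
# dates mirror this guide's checklists; item names are illustrative.
from datetime import date

DEADLINES = {
    date(2025, 8, 2): ["prohibited AI discontinued",
                       "AI literacy program running",
                       "GPAI model documentation in place"],
    date(2026, 8, 2): ["risk management system live",
                       "EU database registration done",
                       "conformity assessment completed"],
}

def overdue(today: date) -> list[str]:
    """Return every checklist item whose milestone has already passed."""
    return [item for deadline, items in DEADLINES.items()
            if deadline <= today for item in items]

print(overdue(date(2026, 1, 1)))  # the 2025 items are already due here
```

This is obviously no substitute for a compliance program, but wiring the dates into whatever tracker you already use prevents the August 2026 items from being discovered in July 2026.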

Step 4: The Works Council Angle — §87 BetrVG

German employment law adds a layer that pure EU AI Act compliance doesn’t cover: if you have a works council (Betriebsrat), deploying AI systems that affect employees triggers co-determination rights under §87 BetrVG.

Monitoring employees, AI-based performance assessment, or changing work processes via automated systems all trigger co-determination — not mere consultation — before implementation. This applies regardless of where your AI system sits in the EU AI Act risk framework.

If you’re deploying AI internally — in HR, performance management, productivity monitoring, or workflow automation — you need to run a parallel track with your works council alongside EU AI Act compliance.


Step 5: GDPR Overlap — The Dual Framework Problem

Many AI Act compliance questions also implicate GDPR. Automated decision-making (Art. 22 GDPR) already carries restrictions and transparency requirements. High-risk AI systems that process personal data will need to satisfy both frameworks simultaneously.

Key overlap areas:

  • Data minimization requirements vs. AI training data needs
  • Rights to explanation for automated decisions
  • Data protection impact assessments (DPIAs) — many high-risk AI DPIAs and EU AI Act risk assessments will need to be coordinated
  • Cross-border data flows if your AI systems process data using providers in non-EU countries

A compliance approach that treats EU AI Act and GDPR as separate workstreams will create gaps. They need to be handled together.


Step 6: What SaaS Companies Specifically Need to Check

If you provide B2B SaaS and your product includes AI features, you’re in the supply chain. Your obligations depend on whether you are a provider (you developed the model or system), deployer (you put a third-party AI system to work), or importer/distributor.

Provider obligations are heaviest. If you built the AI and offer it to customers, you bear full compliance responsibility. This applies equally to SaaS teams offering AI code generation tools to developers — provider obligations extend to any AI system placed on the market.

Deployer obligations are real but lighter. If you use a third-party AI system in your product, you still have obligations: transparency to end users, monitoring, and — if the system is high-risk — ensuring the provider has met their own requirements and maintaining usage logs.

Contractual clarity matters. Your AI vendor contracts should now explicitly address EU AI Act obligations, technical documentation handover, and allocation of compliance responsibility. Many off-the-shelf vendor agreements don’t cover this yet.


EU AI Act compliance isn’t just a technical problem — it’s a legal one. Classification decisions, documentation standards, conformity assessment pathways, and GDPR coordination all require legal analysis tailored to your specific systems and use cases.

Compound Law advises German tech companies on EU AI Act compliance alongside GDPR, works council requirements, and employment law. If you’re in a professional services business, our sector guide on EU AI Act compliance for professional services firms covers the specific obligations that apply. If you’re not certain where your AI systems sit under the framework, schedule a consultation to get a clear picture before August 2026.


Frequently asked questions

When did the first EU AI Act obligations start to apply?

The first obligations started to apply on February 2, 2025: bans on prohibited AI practices and AI literacy duties. General-purpose AI model obligations followed on August 2, 2025, while broader high-risk system requirements apply from August 2, 2026.

Do German SaaS companies need to care about the EU AI Act if they use third-party models?

Yes. Even if you do not build the underlying model, you can still have deployer obligations around transparency, monitoring, vendor coordination, and downstream compliance.

Is EU AI Act compliance separate from GDPR in Germany?

No. Most practical implementations overlap with GDPR and, for employee-facing systems, can also trigger works council co-determination under German labor law.
