EU AI Act and Recruitment AI in Germany: Compliance Guide
Short answer
AI recruitment tools in Germany — CV screening, candidate ranking, automated interview analysis — are high-risk AI systems under EU AI Act Annex III. Employers deploying such systems must implement human oversight, technical documentation, and transparency measures by August 2, 2026.
- EU AI Act Annex III classifies AI used in hiring, candidate screening, and employment decisions as high-risk.
- Both AI providers and deploying employers face binding obligations from August 2, 2026.
- Human oversight of AI-assisted hiring decisions is mandatory — pure automation violates both the AI Act and GDPR Article 22.
- Works councils (Betriebsrat) have mandatory co-determination rights under BetrVG Section 87 when AI hiring tools are introduced.
From CV screening algorithms to automated interview analysis platforms, recruitment AI used by German employers falls within the high-risk category of EU AI Act Annex III, with binding compliance obligations from August 2, 2026. This guide explains what the AI Act requires, how it interacts with GDPR and German works council law, and what HR teams should do now.
What the EU AI Act Classifies as Recruitment AI
EU AI Act Annex III, Section 4 lists the following as high-risk AI systems in the employment domain:
- AI used to screen or filter job applications and CVs
- AI used to evaluate candidates in interviews or tests during the recruitment process
- AI used to make or materially influence decisions on promotion, termination, task allocation, or performance monitoring
The classification is deliberately broad. A ranking algorithm embedded in an applicant tracking system is high-risk. A video interview platform that scores emotional affect is high-risk. A tool that generates a “culture fit” score from written applications is high-risk. The determining factor is not the technology — it is whether the system influences consequential employment decisions.
What is not high-risk: Administrative tools that do not touch individual employment decisions — scheduling coordination, meeting room booking, general productivity AI — face only lighter general-purpose obligations and are not covered by Annex III.
Who Is Affected — Providers and Deployers
The AI Act distinguishes two categories with separate obligations:
Providers (companies that develop and market recruitment AI) must conduct a conformity assessment, prepare a technical file, implement a quality management system, register the system in the EU database for high-risk AI, and affix CE marking before placing the product on the EU market.
Deployers (German employers who use recruitment AI in their operations) must:
- Use only CE-marked systems that comply with the AI Act
- Implement human oversight as specified by the provider
- Monitor for unexpected risks during actual use
- Conduct a fundamental rights impact assessment before deployment where Article 27 requires one (chiefly public bodies and private entities providing public services)
- Not operate the system in ways that exceed the intended purpose documented by the provider
As a German employer, your primary AI Act exposure is as a deployer. If you build custom AI features into your HR processes using third-party models or APIs, you may additionally qualify as a provider for those custom components.
High-Risk Obligations in Practice
Technical documentation and conformity assessment
For providers, the technical file must document the system’s intended purpose, architecture and development methodology, training data and testing approach, performance metrics across demographic subgroups, and risk management measures. This supports the conformity assessment — typically a self-assessment for most Annex III systems, or third-party review where warranted.
As a deployer, ask your vendors for a summary of their technical documentation and CE marking status before contract renewal.
Risk management and bias testing
Article 9 requires a continuous risk management system throughout the AI system’s lifecycle. For recruitment AI, this means regular bias testing across gender, age, ethnic background, and disability status. A system that performs well on aggregate metrics but systematically disadvantages protected groups fails this requirement.
Providers must publish their bias testing methodology and results as part of transparency obligations. Request this documentation before deployment and review it annually.
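Bias testing of this kind ultimately reduces to comparing outcomes across groups. The sketch below is purely illustrative: the group labels, the "advanced to next stage" outcome, and the ratio-based comparison are assumptions for demonstration, not metrics the AI Act prescribes. It computes each group's selection rate and its ratio to the best-performing group:

```python
# Illustrative only: a minimal disparity check on screening outcomes.
# Group labels and the ratio-based comparison are assumptions, not
# anything the AI Act itself prescribes.
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group, advanced) pairs; advanced is True/False."""
    totals = defaultdict(int)
    advanced = defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        if ok:
            advanced[group] += 1
    return {g: advanced[g] / totals[g] for g in totals}

def impact_ratios(rates):
    """Ratio of each group's selection rate to the highest-rate group."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical screening data: 100 applicants per group.
outcomes = [("A", True)] * 40 + [("A", False)] * 60 \
         + [("B", True)] * 20 + [("B", False)] * 80
rates = selection_rates(outcomes)
ratios = impact_ratios(rates)
print(rates)   # {'A': 0.4, 'B': 0.2}
print(ratios)  # {'A': 1.0, 'B': 0.5}
```

A system whose aggregate accuracy looks fine can still show a low impact ratio for one group; in this hypothetical, group B advances at half the rate of group A, which is exactly the kind of disparity an Article 9 risk management process should surface and investigate.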
Human oversight requirements
Article 14 requires that high-risk AI systems allow competent persons to effectively oversee the system and intervene when necessary. For recruitment AI, this means:
- HR staff must be trained to understand the system’s limitations and failure modes
- There must be a genuine human review step — not an interface for rubber-stamping AI output
- HR reviewers must have the practical ability to override, disregard, or correct the AI’s recommendation
This obligation directly reinforces GDPR Article 22, which prohibits purely automated decisions with significant effects on applicants. The AI Act operationalises the human oversight requirement that GDPR already mandates. See our AI recruitment compliance guide for Germany for the full GDPR analysis.
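One practical way to make the review step auditable is to record every AI recommendation alongside the human reviewer's final decision, and to require a documented rationale whenever the two differ. This is a sketch under assumed field names, not a format the AI Act prescribes:

```python
# Illustrative only: logging AI-assisted screening decisions so that
# genuine human review is demonstrable. Field names are assumptions.
from dataclasses import dataclass

@dataclass
class ScreeningDecision:
    candidate_id: str
    ai_recommendation: str   # e.g. "advance" or "reject"
    reviewer: str            # the trained HR staff member who reviewed
    final_decision: str      # may differ from the AI recommendation
    rationale: str           # required whenever the reviewer overrides

    @property
    def overridden(self) -> bool:
        return self.final_decision != self.ai_recommendation

    def validate(self) -> None:
        if not self.reviewer:
            raise ValueError("no human reviewer recorded")
        if self.overridden and not self.rationale:
            raise ValueError("override requires a documented rationale")

d = ScreeningDecision("c-101", "reject", "hr.mueller", "advance",
                      "CV gap explained by parental leave")
d.validate()
print(d.overridden)  # True
```

Over time, the logged override rate is itself a useful oversight signal: a review process in which no recommendation is ever overridden is hard to distinguish from rubber-stamping.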
Transparency to candidates
Under Article 26, deployers must inform candidates when a high-risk AI system is used to assess their application, explain the system’s purpose at a level candidates can understand, and communicate their rights — including the GDPR Article 22(3) right to request human review.
This information belongs in the applicant privacy notice before the recruitment process begins, not as an afterthought.
GDPR and the AI Act — A Combined Obligation
In German recruitment, two regulatory frameworks apply simultaneously and neither displaces the other:
GDPR Article 22 prohibits purely automated individual decisions with significant effects unless an exception applies (contract necessity, legal authorization, or explicit consent). Even where an exception applies, Article 22(3) requires safeguards including the right to human review and the ability to contest the decision.
BDSG Section 26 is the German legal basis for processing applicant data. It permits processing necessary to decide on establishing an employment relationship. Consent is rarely appropriate for applicant data given the employer-applicant power imbalance.
Meeting the AI Act’s human oversight obligations does not automatically satisfy GDPR Article 22 — both must be addressed. A compliance checklist that covers only one framework leaves significant legal exposure.
Works Council Rights Under BetrVG Section 87
For German employers with five or more employees, the Betriebsrat (works council) has mandatory co-determination rights under BetrVG Section 87(1)(6) over technical systems capable of monitoring worker behavior or performance.
German labor courts have consistently held that AI-driven hiring and HR systems fall within this scope, even where they process data on applicants who are not yet employees. The rationale: such systems affect workforce composition, are operated by HR staff who are themselves employees, and typically remain in use for ongoing monitoring once an applicant joins.
In practice:
- Inform the works council before deploying any AI recruitment tool
- Provide the technical documentation and vendor’s intended-use description
- Negotiate a Betriebsvereinbarung (works agreement) covering data processed, retention periods, who sees outputs, the human review process, and candidate redress
- If no agreement is reached, the dispute goes before the Einigungsstelle (conciliation board); until an agreement or conciliation award is in place, the employer cannot lawfully use the tool. The Betriebsrat cannot veto individual hiring decisions, but it can block the specific system.
Where the AI Act requires a fundamental rights impact assessment (Article 27), that process aligns well with works council engagement. Running both in parallel is efficient and builds the documentation trail both frameworks require.
What to Do Before August 2026
August 2, 2026 is the compliance date for high-risk Annex III systems: anything placed on the market or put into use from that date must comply on day one. Systems already on the market before then are grandfathered under Article 111 until they undergo a significant design change, so continuously updated recruitment tools should not count on grandfathering in practice.
Recommended steps for German employers:
- Audit your HR tech stack. Identify every tool with AI-enabled screening, ranking, scoring, or decision-support in hiring or employment decisions.
- Classify each tool. Does it fall under AI Act Annex III? Engage legal counsel if uncertain — misclassification in either direction creates risk.
- Request vendor compliance roadmaps. Ask for AI Act documentation now. Build non-compliance into contract renewal decisions.
- Conduct a DPIA. A GDPR Article 35 Data Protection Impact Assessment is almost certainly required for systematic automated applicant profiling.
- Engage the works council early. Early dialogue prevents delays at go-live and produces better Betriebsvereinbarung outcomes.
- Update candidate privacy notices. Add AI Act transparency information now — it is already best practice and some supervisory authorities expect it before the 2026 deadline.
Compound Law advises German employers on AI Act deployment obligations, DPIA scoping, and works council engagement for HR technology. For individual guidance, speak with our team.
This guide provides general legal information. It does not constitute legal advice. Specific compliance questions — including works council negotiations, vendor contract reviews, and DPIA scoping — require individual legal counsel.