Cybersecurity: What German Companies Need to Know
Cybersecurity AI is increasingly common in German businesses. The EU AI Act establishes clear requirements depending on how these systems are used and what decisions they influence.
Risk Classification
Most cybersecurity applications fall into the minimal-risk category, but classification is not automatic. The key question: does your AI make or significantly influence decisions that affect people’s rights, safety, or access to services?
Most operational uses face lighter requirements. When AI touches consequential decisions about individuals, requirements escalate to high-risk compliance.
Transparency Requirements
Regardless of risk classification, if people interact directly with your AI without realizing it is an AI system, you must disclose that fact. Article 50 of the AI Act makes this non-negotiable.
AI-generated content that could be mistaken for human-created work must be marked as such.
German Considerations
Works council rights under §87 BetrVG apply when AI systems affect employees. Data protection under GDPR layers onto AI Act requirements. Industry-specific regulations may add further obligations.
What This Means Practically
Map your cybersecurity AI systems. Classify their risk level based on how they’re used and what decisions they influence. Implement appropriate transparency. Document your compliance approach.
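The mapping and classification steps above can be sketched as a simple first-pass inventory check. This is an illustrative aid, not a legal test: the schema, field names, and thresholds below are assumptions chosen to mirror the questions in this article, and any real classification needs legal review.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """One entry in a cybersecurity AI inventory (hypothetical schema)."""
    name: str
    affects_individual_decisions: bool  # e.g. can block an employee's access
    interacts_with_people: bool         # e.g. a security chatbot
    affects_employees: bool             # may trigger works council rights

def classify(system: AISystem) -> dict:
    """Rough first-pass flags mirroring the questions in the text."""
    return {
        # Consequential decisions about individuals escalate requirements
        "risk_level": "high" if system.affects_individual_decisions else "minimal",
        # Direct interaction triggers Art. 50 disclosure duties
        "transparency_duty": system.interacts_with_people,
        # Employee impact points to §87 BetrVG coordination
        "works_council": system.affects_employees,
    }

scanner = AISystem("phishing-detector",
                   affects_individual_decisions=False,
                   interacts_with_people=False,
                   affects_employees=True)
print(classify(scanner))
# → {'risk_level': 'minimal', 'transparency_duty': False, 'works_council': True}
```

A spreadsheet works just as well; the point is to record, per system, the same three answers and document why you gave them.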
The August 2025 transparency deadline and August 2026 high-risk deadline are approaching.
How Compound Law Helps
- AI inventory and risk classification
- Compliance framework appropriate to your risk level
- Transparency implementation
- Works council coordination where applicable
- GDPR integration
- Ongoing compliance monitoring
Frequently Asked Questions
Is cybersecurity AI typically high-risk? Typically no; most cybersecurity AI qualifies as minimal risk. Systems making consequential decisions about individuals face stricter requirements.
Do we need works council approval? If the AI affects employees or their work conditions, likely yes under §87 BetrVG.
When do requirements take effect? Transparency requirements: August 2025. Full high-risk compliance: August 2026.