EU AI Act for Telecoms: What Telecommunications Companies Need to Know
The EU AI Act imposes binding compliance obligations on telecommunications companies that deploy AI systems in Germany and across the EU. Telecoms operating AI for fraud detection, customer credit scoring, network infrastructure management, or automated customer service face high-risk classification requirements under Annex III of the AI Act — meaning full conformity assessments, mandatory human oversight, and registration in the EU AI database before deployment. Companies that fail to meet these obligations face fines of up to €35 million or 7% of global annual turnover for the most serious violations.
This guide explains which telecom AI use cases are affected, what the compliance requirements mean in practice, and how to build a compliant AI governance framework for your organization.
AI in Telecoms: Key Use Cases Under the EU AI Act
Telecommunications is one of the most AI-intensive sectors in the European economy. Telcos, MVNOs, and enterprise telecoms buyers use AI across a wide range of operations — and the EU AI Act treats these use cases very differently depending on what decisions the AI makes and who is affected.
Fraud Detection and Network Anomaly Systems
AI-powered fraud detection systems that automatically block accounts, suspend SIM cards, or restrict services based on behavioral analysis may qualify as high-risk where they make enforceable decisions affecting individuals’ access to telecommunications services. Under Annex III, AI systems used in credit and access decisions for essential services are explicitly high-risk. Telecom fraud detection that cuts off service is functionally equivalent to an access decision.
If your fraud detection AI can suspend or terminate a customer account without mandatory human review, you likely need full high-risk compliance. This means documented risk management, bias testing, explainability mechanisms, and logging of every automated decision.
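To make the human-oversight requirement concrete, a minimal human-in-the-loop gate might route every service-affecting action to a reviewer instead of executing it automatically. This is an illustrative sketch only — the class and field names are hypothetical, not terms from the AI Act:

```python
from dataclasses import dataclass, field

@dataclass
class FraudVerdict:
    account_id: str
    risk_score: float      # model output in [0, 1] (hypothetical scale)
    proposed_action: str   # e.g. "suspend", "terminate", "flag", "none"

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def submit(self, verdict: FraudVerdict) -> str:
        # Service-affecting actions are never executed automatically:
        # they are queued for a human reviewer, and only the reviewer's
        # decision is enforced. Non-affecting outcomes may proceed.
        if verdict.proposed_action in ("suspend", "terminate"):
            self.pending.append(verdict)
            return "queued_for_human_review"
        return "auto_ok"

queue = ReviewQueue()
status = queue.submit(FraudVerdict("acc-123", 0.97, "suspend"))
```

The design point is that the gate sits between the model and the enforcement system, so no score — however high — can suspend a customer without a human in the loop.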
Customer Scoring and Creditworthiness
AI used to assess customer creditworthiness for postpaid contracts, handset financing, or enterprise service agreements is classified as high-risk under Annex III, Point 5(b) of the EU AI Act. This applies whether you use a third-party credit bureau system or a proprietary internal scoring model. The key criterion is whether the AI is making or materially influencing a consequential decision about an individual.
Requirements for this use case include: data quality documentation, bias analysis across demographic groups, explainability to customers, human review mechanisms, and logging of decisions for audit purposes. Customers have a right to a meaningful explanation of any automated decision that affects them — this intersects directly with GDPR Article 22 rights.
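By way of illustration, a basic disparity screen compares approval rates across demographic groups. The groups, data shape, and 0.8 threshold below are hypothetical — a common screening heuristic, not a statutory test under the AI Act:

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group_label, approved: bool) tuples."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparity_ratio(rates):
    """Ratio of the lowest to the highest group approval rate.
    Ratios below ~0.8 are often taken as a trigger for deeper analysis."""
    return min(rates.values()) / max(rates.values())

sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
rates = approval_rates(sample)
ratio = disparity_ratio(rates)  # A: 2/3, B: 1/3 → ratio 0.5
```

A result like this would not itself prove bias, but it is the kind of documented, repeatable check the data-governance requirement expects.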
Automated Customer Service (Chatbots and IVR)
AI-driven chatbots and interactive voice response (IVR) systems are not classified as high-risk but are subject to mandatory transparency obligations under Article 50 of the EU AI Act. Any system that interacts with customers while appearing to be human — or that generates text, voice, or video content — must disclose its AI nature clearly and in a timely manner.
For telecoms, this applies to:
- Customer support chatbots on web and mobile apps
- AI voice agents in call centres
- Automated complaint handling systems
- IVR systems using AI-generated speech
- Messaging bots for billing queries or technical support
The disclosure must be made at the start of the interaction and must be understandable to a typical consumer. Failing to disclose is a direct regulatory violation, even if the system itself is low-risk.
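In practice, the disclosure rule can be enforced in code by making it structurally impossible to start a session without the notice. A minimal sketch, with hypothetical wrapper and message names:

```python
AI_DISCLOSURE = (
    "Notice: You are chatting with an automated AI assistant, "
    "not a human agent. You can request a human agent at any time."
)

class ChatSession:
    """Wraps a bot so the mandatory AI disclosure is always sent first."""

    def __init__(self, bot_reply):
        self.bot_reply = bot_reply  # callable: user_text -> bot_text
        # The disclosure is the first transcript entry of every session.
        self.transcript = [("system", AI_DISCLOSURE)]

    def send(self, user_text: str) -> str:
        self.transcript.append(("user", user_text))
        reply = self.bot_reply(user_text)
        self.transcript.append(("bot", reply))
        return reply

session = ChatSession(lambda msg: "Your billing query has been received.")
session.send("What is my bill this month?")
```

Baking the notice into the session constructor, rather than relying on each channel team to remember it, also gives you a transcript record that the disclosure was actually made.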
Predictive Maintenance
AI used for predictive maintenance of physical infrastructure (antenna systems, cable networks, data centres, exchanges) is generally not high-risk under the current Annex III framework, provided it does not control the infrastructure in a way that affects public safety. Predictive maintenance tools support human engineers rather than replacing human judgment in safety-critical decisions.
Document these systems in your AI inventory, but they do not require conformity assessments under the current framework.
Network Optimization and Traffic Management
AI that manages network traffic, allocates bandwidth, or performs quality-of-service prioritization operates in a nuanced area. Where the AI manages critical infrastructure — systems whose failure or disruption would have significant consequences for public safety or essential services — it may be classified as high-risk under Annex III, Point 2 (critical infrastructure management).
For most commercial network optimization tools, the risk level is lower. The key question is: does this AI make autonomous decisions that could cause service outages affecting essential services? If yes, treat it as potentially high-risk and seek legal advice on classification. The intersection with KRITIS (critical infrastructure protection) obligations under German law is also relevant here.
High-Risk AI Systems in Telecoms: What Qualifies Under Annex III
The EU AI Act’s Annex III lists eight categories of high-risk AI systems. Telecoms companies are most directly affected by three:
Category 2 — Critical Infrastructure Management: AI used as a safety component in the management and operation of critical digital infrastructure, including core network components, routing systems, or telecommunications systems that underpin essential services. This is relevant for carriers operating backbone networks or systems designated under KRITIS regulation.
Category 5 — Access to Essential Services: AI systems used to evaluate creditworthiness and determine access to essential services. Postpaid contracts, data plans, and connected services are increasingly treated as essential — meaning AI scoring systems for these fall under high-risk classification.
Category 6 — Law Enforcement (indirect): While not typical for commercial telecoms, AI used in lawful interception coordination or network-level regulatory compliance may intersect with this category. Legal advice is recommended for any AI touching law enforcement-adjacent functions.
What High-Risk Classification Requires
If your AI system is high-risk, you must comply with Chapter III of the EU AI Act before deploying it. The requirements are:
- Risk management system — documented, ongoing assessment of risks the AI poses throughout its lifecycle
- Data governance — data quality standards, bias analysis, and documentation of training data sources and composition
- Technical documentation — full technical file per Annex IV, including system architecture and performance specifications
- Record-keeping — automatic logging of system inputs, outputs, and decision parameters for post-hoc audit
- Transparency — information provided to deployers and affected natural persons about how the system works and what it decides
- Human oversight — effective mechanisms for humans to review, override, correct, or stop the AI system
- Accuracy and robustness — tested performance benchmarks, cybersecurity measures, and ongoing monitoring
- EU registration — entry in the EU AI database before deployment (mandatory from August 2026)
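The record-keeping requirement, for instance, implies structured, append-only records of each automated decision. A sketch of what one such record might capture — the field names are illustrative, not prescribed by the Act:

```python
import json
from datetime import datetime, timezone

def log_decision(system_id, inputs, output, model_version, reviewer=None):
    """Build one audit record per automated decision.
    In production this would go to append-only, tamper-evident storage."""
    record = {
        "system_id": system_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": inputs,                # the data the model saw
        "output": output,               # the decision it produced
        "model_version": model_version, # needed to reproduce the decision
        "human_reviewer": reviewer,     # None if fully automated
    }
    return json.dumps(record)

entry = log_decision(
    "credit-scoring-v2",
    {"contract_type": "postpaid", "application_ref": "req-881"},
    {"decision": "approved", "score": 0.82},
    "2025-06-rc1",
)
```

Capturing the model version alongside inputs and outputs is what makes a post-hoc audit possible: without it, a decision logged today cannot be reproduced after the next model update.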
For telecoms, the allocation of these obligations matters: the bulk of them (conformity assessment, technical documentation, registration) fall on the provider of the AI system, while a telco acting as deployer carries its own duties under Article 26 — human oversight, control over input data, monitoring, and log retention. But a telco that develops its own AI, substantially modifies a purchased system, or markets one under its own name is treated as a provider and inherits the full set of obligations.
Transparency Obligations for Telecoms AI
Beyond high-risk classification, all telecoms companies deploying AI face specific transparency obligations under Article 50:
Chatbot and virtual agent disclosure: You must inform users at the start of any interaction that they are communicating with an AI system — unless this is obvious from context. A voice agent that sounds human requires explicit disclosure.
AI-generated content: If you use AI to generate personalised billing summaries, targeted offers, or customer communications, these must be marked as AI-generated where there is a meaningful risk of consumer confusion.
Deepfakes and synthetic voice: Using AI to clone or synthesise human voice in customer communications (for example, a synthetic voice representing your brand) triggers watermarking obligations under Article 50(2) for providers. This is particularly relevant for telcos developing branded AI assistant voices.
Right to explanation: Where an AI system makes an automated decision affecting an individual — such as account suspension, service downgrade, or rejection of a contract application — the individual has a right to a meaningful explanation under both the AI Act and GDPR Article 22. Your customer service processes need to accommodate these explanation requests.
Timeline and Deadlines for Telecoms Companies
The EU AI Act entered into force on 1 August 2024. Key compliance deadlines for telecoms:
| Deadline | Requirement |
|---|---|
| 2 February 2025 | Prohibited AI systems must be discontinued |
| 2 August 2025 | General-purpose AI model (GPAI) obligations apply |
| 2 August 2026 | High-risk AI system obligations fully apply (Annex III) |
| 2 August 2026 | EU AI database registration mandatory for new high-risk deployments |
| 2 August 2027 | High-risk rules apply to AI embedded in regulated products (Annex I); GPAI models already on the market must comply |
Telecoms companies deploying new high-risk AI systems after August 2026 must have full compliance documentation in place before go-live. High-risk systems already placed on the market or put into service before that date only fall under the Act if their design is significantly changed afterwards (with a longer transition period for systems used by public authorities).
BEREC (Body of European Regulators for Electronic Communications) and ENISA are developing sector-specific guidance that will supplement the general AI Act requirements for telecoms. The Bundesnetzagentur (BNetzA) will play a role in coordinating national AI Act enforcement with existing telecom regulation.
Compliance Checklist for Telecoms Companies
Use this checklist to assess your current AI Act readiness:
Step 1: AI Inventory
- Map all AI systems used across your organization
- Identify which systems interact with or make decisions about customers
- Identify which systems manage network infrastructure or operations
- Document which systems use third-party AI providers or foundation models
Step 2: Risk Classification
- Classify each AI system against Annex III (high-risk) criteria
- Identify systems subject to transparency obligations under Article 50
- Flag any general-purpose AI models (GPAIs) embedded in your products
- Review KRITIS classification status for infrastructure AI
Step 3: High-Risk Compliance (where applicable)
- Appoint an AI Act compliance responsible person
- Implement risk management documentation for each high-risk system
- Commission data quality and bias assessments for training datasets
- Draft technical documentation per Annex IV
- Implement logging and record-keeping for automated decisions
- Design human oversight processes for each high-risk system
Step 4: Transparency
- Update chatbot and IVR interfaces to include mandatory AI disclosure
- Train customer service teams on AI disclosure and explanation requirements
- Draft template explanations for automated decisions affecting customers
- Update privacy notices and customer terms to reflect AI use
Step 5: Registration and Governance
- Register high-risk AI systems in the EU AI database (from August 2026)
- Establish an internal AI governance policy
- Integrate AI Act compliance with existing GDPR compliance processes
- Schedule regular internal audits of AI system performance and bias
How Compound Law Helps Telecoms Companies
Compound Law works with telecommunications companies across the DACH region on EU AI Act compliance, combining expertise in telecom regulation, data protection law, and AI governance.
We help telecoms companies:
- Classify AI systems against Annex III and identify which systems require full high-risk compliance procedures
- Build compliance documentation including risk management frameworks, technical documentation files per Annex IV, and data governance policies
- Integrate AI Act compliance with GDPR and TKG obligations, ensuring a coherent regulatory approach
- Design human oversight processes that satisfy AI Act requirements without creating operational bottlenecks
- Prepare for Bundesnetzagentur (BNetzA) oversight and coordinate between national AI Act enforcement and telecom-specific regulation
- Train compliance and technical teams on practical AI governance requirements
For more on AI compliance topics relevant to telecoms, see our guides on AI customer service compliance and our AI Act compliance overview.
Contact Compound Law for a free initial consultation on EU AI Act compliance for your telecommunications business.
Frequently Asked Questions
Is AI in telecoms high-risk under the EU AI Act?
Not all telecom AI is high-risk. AI used for customer credit scoring, access decisions affecting essential services, and critical infrastructure management is classified as high-risk under Annex III of the EU AI Act. Network optimization, predictive maintenance, and most operational tools are generally not high-risk. The classification depends on whether the AI makes or materially influences consequential decisions about individuals or critical infrastructure.
What does the EU AI Act mean for telecom chatbots?
Telecom chatbots and virtual assistants are subject to transparency obligations under Article 50 of the EU AI Act. Companies must clearly disclose to users at the start of every interaction that they are communicating with an AI system. Chatbots are not classified as high-risk under the current Annex III, but the disclosure requirement is mandatory and enforceable from August 2026.
Do telcos need to register AI systems under the EU AI Act?
Yes — high-risk AI systems must be registered in the EU AI database before deployment. This obligation applies from 2 August 2026. High-risk systems already placed on the market or put into service before that date only fall under the Act if their design is significantly changed afterwards. The registration must include technical documentation, the intended purpose, risk classification, and conformity assessment results.
What is the deadline for AI Act compliance for telecoms?
The main high-risk AI system obligations under Chapter III of the EU AI Act apply from 2 August 2026. Telecoms companies deploying new high-risk AI systems must be fully compliant before go-live. High-risk systems already in service before that date are only covered if their design is significantly changed afterwards. Prohibited AI practices were required to stop by 2 February 2025.
How does the EU AI Act interact with German telecom regulation (TKG)?
The EU AI Act operates alongside the Telekommunikationsgesetz (TKG) and does not replace sector-specific requirements. Telecoms companies must comply with both regimes. The Bundesnetzagentur is the relevant national regulatory authority for telecom matters, while AI Act enforcement is handled by national AI authorities. BEREC is developing supplementary sector guidance.
What are the fines for non-compliance with the EU AI Act?
Fines for violations of high-risk system obligations (most relevant for telecoms) can reach €15 million or 3% of global annual turnover, whichever is higher. Violations of prohibited AI practices carry fines of up to €35 million or 7% of global turnover. Providing incorrect information to supervisory authorities carries fines of up to €7.5 million or 1% of global turnover.
This guide provides general legal information and does not constitute legal advice. Specific compliance decisions require individual legal counsel based on your organization’s AI systems and circumstances. Contact Compound Law for tailored advice.