AI APIs for Law Firms: BRAO Compliance Guide Germany
Short answer
German law firms can use AI APIs including ChatGPT, Claude, and Gemini, but §43a BRAO's Verschwiegenheitspflicht applies — client data sent to an external API is a transfer to a third-party processor, requiring a DPA, subprocessor review, and a documented confidentiality architecture before going live.
- §43a BRAO's duty of confidentiality applies whenever client data reaches an external AI API.
- §43e BRAO requires lawyers to maintain professional supervision over any AI-assisted work product.
- A DPA under Art. 28 GDPR is mandatory but not sufficient — a BRAO-specific confidentiality analysis must accompany it.
That permission comes with structure. The moment client data enters an external model — whether via ChatGPT (OpenAI API), Claude (Anthropic API), or Gemini — the transfer constitutes outsourcing to a third-party processor under both professional conduct rules and Art. 28 GDPR, which means a DPA, subprocessor review, and a BRAO-compliant confidentiality architecture are required before going live. Firms that treat AI APIs as off-the-shelf software without this structure face serious professional liability exposure.
This guide explains what BRAO, BORA, and GDPR require for AI API deployments in law firms — and what a compliant setup actually looks like in practice.
Can German Law Firms Use AI APIs?
Yes — with the right structure. There is no statutory provision in the Bundesrechtsanwaltsordnung that prohibits German lawyers from using AI APIs. The Bundesrechtsanwaltskammer (BRAK) confirmed this in its December 2024 guidance: AI tools are permissible, provided firms address professional secrecy, supervision, data protection, and transparency obligations in a coherent deployment model.
The critical point is that “using AI APIs” is not a single legal question. The answer depends entirely on:
- what data enters the API (public research vs. active client matter files)
- how the provider relationship is structured contractually
- where data is processed and retained
- what human oversight is in place
- how the firm documents the above
A law firm that uses an AI API to help draft template clauses in a sandboxed environment faces a very different legal analysis from one that pipes raw matter files, correspondence, and personal data into an API without classification or controls.
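The classification step above can be made operational with a simple gate in the firm's tooling. The sketch below is illustrative only — the tier names and the policy of which tiers may reach an API are hypothetical; every firm defines its own scheme as part of its confidentiality analysis.

```python
from enum import Enum

class DataTier(Enum):
    """Illustrative classification tiers; each firm defines its own."""
    PUBLIC = 1    # e.g. published case law, statutes, public research
    INTERNAL = 2  # e.g. anonymised template clauses, internal know-how
    MANDATE = 3   # active matter files, client-identifying data

# Hypothetical policy: which tiers may be sent to an external AI API at all
API_ALLOWED_TIERS = {DataTier.PUBLIC, DataTier.INTERNAL}

def may_send_to_api(tier: DataTier) -> bool:
    """Gate that blocks mandate data from leaving the firm unreviewed."""
    return tier in API_ALLOWED_TIERS
```

A gate like this does not replace the legal analysis — it enforces a decision the firm has already documented, so that the template-clause workflow and the raw-matter-file workflow cannot silently collapse into one.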
§43a BRAO: Verschwiegenheitspflicht and AI APIs
§43a BRAO codifies the lawyer’s Verschwiegenheitspflicht — the professional duty of confidentiality covering everything the lawyer learns in the course of their mandate. This duty does not end at the firm’s server boundary. It extends to every third party that processes mandate-related information on behalf of the lawyer.
When client data is sent to an external AI API, the operator processes that information as a service provider. Under §43a(2) BRAO, the lawyer must ensure that third parties involved in mandate-related activities are bound to equivalent confidentiality obligations.
What this means in practice
A BRAO-compliant AI API deployment requires:
- a written contract with the provider that includes an explicit confidentiality commitment covering client data
- a DPA under Art. 28 GDPR specifying what data may be processed, for what purpose, and with which subprocessors
- purpose limitation: the provider must not use client data for model training or product improvement without explicit consent
- access controls that prevent unauthorised internal or provider-side access to matter data
- deletion logic that is documented and technically enforced
The BRAK’s 2024 guidance specifically calls out foreign service providers as a focus area — firms using US-headquartered providers need to pay particular attention to cross-border transfer mechanisms and the contractual protection of confidentiality alongside GDPR safeguards.
What does not satisfy §43a BRAO
Signing a DPA alone is not sufficient under §43a BRAO. The DPA covers data protection law. It does not automatically satisfy the professional confidentiality obligations under BRAO unless the contract also contains explicit provisions limiting provider access to mandate data, prohibiting secondary use, and committing the provider to equivalent confidentiality standards.
§43e BRAO: Supervision of AI-Assisted Legal Work
§43e BRAO, introduced as part of the modernisation of the professional rules, requires lawyers to exercise appropriate professional supervision over AI-assisted work. A lawyer who delegates drafting, research, or analysis to an AI system remains fully professionally responsible for the result.
The practical implications are significant:
- AI-generated documents (contracts, pleadings, legal opinions) require qualified lawyer review before release
- legal conclusions produced by an AI must be verified by the responsible lawyer
- AI-generated citations or legal references must be checked for accuracy before they are relied upon
- external communications to clients, courts, or counterparties may not be sent without approval by the responsible professional
§43e BRAO does not prohibit AI-assisted workflows — it requires meaningful human oversight. A workflow in which lawyers use an AI API to accelerate research, produce first drafts, or compare clause alternatives, subject to consistent professional review, is structurally sound. A workflow in which AI outputs flow directly to clients or courts without review is not.
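The supervision requirement can be enforced technically as well as organisationally. The following is a minimal sketch of a review gate, assuming a hypothetical internal wrapper class — the point is that release of AI output is structurally impossible without a named responsible reviewer.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AiDraft:
    """Hypothetical wrapper for an AI-generated document awaiting review."""
    content: str
    reviewed_by: Optional[str] = None  # responsible lawyer, once approved

    def approve(self, lawyer: str) -> None:
        """Record the qualified lawyer who reviewed the output."""
        self.reviewed_by = lawyer

    def release(self) -> str:
        """Supervision gate: nothing leaves the firm without a named reviewer."""
        if self.reviewed_by is None:
            raise PermissionError("AI output not yet reviewed by responsible lawyer")
        return self.content
```

Workflow systems built this way make the compliant path the default path: a draft that skips review simply cannot be sent.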
AI literacy under the EU AI Act
Article 4 of the EU AI Act, which has applied since 2 February 2025, requires providers and deployers to ensure that staff working with AI systems have sufficient AI literacy. For law firms, this means training lawyers and paralegals on:
- the capabilities and limitations of the specific AI tools in use
- hallucination risk and how to identify unreliable outputs
- the professional rules that apply to AI-assisted outputs
- the firm’s internal approval and escalation procedures
This is not a one-time briefing — it requires a documented and maintained training approach.
BORA §2: Independence and Professional Integrity
BORA §2 protects the lawyer’s professional independence. AI tool selection can create subtle conflicts if vendors position themselves in ways that influence legal decision-making, create dependency, or condition tool access on commercial relationships.
Law firms selecting AI APIs should ensure that:
- tool selection follows an objective vendor review, not marketing relationships
- AI recommendations on legal strategy are clearly flagged as tool output, not legal judgment
- no vendor lock-in prevents the firm from switching providers if the contractual or compliance position changes
- the selected tools do not create undisclosed conflicts with client interests
In practice, BORA §2 concerns are most relevant for law firms building AI-powered client-facing products or integrating AI into mandate management workflows at scale.
Attorney-Client Privilege and AI APIs
Legal professional privilege in Germany protects mandate-related communications and strategies from disclosure. When client data is processed by an external AI provider, privilege does not automatically extend to cover that provider’s infrastructure, staff, or records.
This creates a specific risk: if a provider’s systems are subject to law enforcement requests, data breach exposure, or regulatory inspection in another jurisdiction, privilege may not protect the underlying client information from being accessed or disclosed.
Law firms should assess:
- whether the chosen provider can be subjected to data access demands outside Germany or the EU
- whether mandate data at rest in provider infrastructure is adequately protected
- how the firm would respond to a provider-side incident affecting client confidentiality
For cross-border matters and sensitive mandates (litigation, M&A, regulatory defence, criminal matters), this analysis is especially important. Firms handling such mandates should document why the chosen API deployment is compatible with their privilege and confidentiality obligations for those specific matter types.
GDPR Requirements for Law Firms Using AI APIs
AI API usage in a legal context almost always involves personal data — client names, financial data, employee information, counterparty details, or case facts. This triggers the full GDPR framework alongside BRAO.
Key requirements:
Data Processing Agreement (Art. 28 GDPR)
A DPA is mandatory when a law firm uses an AI provider to process personal data as a processor. The DPA must specify: data categories, processing purpose, subprocessors, security measures, deletion timelines, and support access restrictions.
Legal Basis
Law firms typically rely on Art. 6(1)(b) GDPR (contract performance) or Art. 6(1)(c) (legal obligation) for processing client personal data. Using that same data in an AI API workflow may require analysis of whether the basis extends to the AI processing step — particularly if the provider uses data for purposes beyond direct service delivery.
Special Categories of Data
Legal matters frequently involve Art. 9 GDPR special category data — health information, criminal proceedings, trade union membership, or financial distress. Processing these through an AI API requires explicit legal basis and heightened security measures.
Data Protection Impact Assessment (Art. 35 GDPR)
A DPIA is required when processing is likely to create a high risk to individuals’ rights. For law firms, this threshold is frequently met when:
- large volumes of sensitive client data are processed systematically
- the AI system analyses or profiles individuals
- criminal, health, or financial data is involved
The DPIA must precede the processing — not follow it.
Cross-Border Transfers
Where AI providers process data outside the EEA, law firms must ensure an adequate transfer mechanism: adequacy decision, Standard Contractual Clauses (SCCs), or Binding Corporate Rules. This is a live issue for US-headquartered providers. The contractual basis must be documented and verifiable.
Which AI APIs Are BRAO-Compatible? An Evaluation Framework
There is no official list of BRAO-approved AI APIs. Compliance depends on the specific deployment, not the provider alone. The following framework helps law firms assess any AI API:
| Criterion | What to check |
|---|---|
| DPA | Available, current, covers relevant data categories |
| Data region | EU or German processing available and selected |
| Subprocessors | Disclosed with specificity; changes notified |
| Training use | Client data not used for model training |
| Retention | Processing limited; deletion schedules documented |
| Security | SOC 2, ISO 27001 or equivalent; breach notification |
| Support access | Human support access to content restricted and logged |
| Confidentiality clause | Explicit contractual commitment, not just DPA terms |
OpenAI API, Anthropic API, and Azure OpenAI all offer DPAs and are used by German law firms. But compliance is not a provider-level conclusion — it depends on the specific architecture, region selection, data categories involved, and surrounding contractual and operational controls.
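The criteria in the table can be captured as a structured vendor record so that assessments are repeatable and documentable. The criterion names below are illustrative shorthand for the table rows, not an official taxonomy.

```python
# Hypothetical assessment record mirroring the evaluation criteria above
REQUIRED_CRITERIA = [
    "dpa",                        # DPA available and current
    "eu_data_region",             # EU/German processing selected
    "subprocessors_disclosed",    # list disclosed, changes notified
    "no_training_on_client_data", # training use excluded
    "retention_documented",       # deletion schedules documented
    "security_certified",         # SOC 2 / ISO 27001 or equivalent
    "support_access_restricted",  # human support access restricted and logged
    "confidentiality_clause",     # explicit contractual commitment
]

def assess_vendor(answers: dict) -> list:
    """Return the criteria a candidate provider fails to meet."""
    return [c for c in REQUIRED_CRITERIA if not answers.get(c, False)]
```

An empty result does not mean "compliant" — it means the documented minimum is met and the deployment-specific BRAO analysis can begin.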
Subprocessor Risk for Law Firms
All major AI providers use subprocessors — cloud infrastructure, monitoring tools, security services. For law firms, each subprocessor in the processing chain is a potential confidentiality and GDPR exposure point.
Before going live, a law firm should:
- obtain the subprocessor list from the provider and assess whether any subprocessor creates a conflict with mandate confidentiality
- verify that Art. 28 GDPR obligations flow down to subprocessors contractually
- monitor subprocessor changes — providers are required to notify customers of new subprocessors in advance; law firms should have a change-monitoring process
- assess jurisdiction: where subprocessors are headquartered and operate can affect privilege, data access risk, and transfer compliance
For sensitive practice areas, the subprocessor analysis may justify selecting a provider with a shorter, EU-anchored subprocessor chain over one with global infrastructure.
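The change-monitoring process described above can be automated in its simplest form: keep the last-reviewed subprocessor list on file and diff it against the provider's current published list. The sketch below assumes the firm obtains both lists; the names are invented examples.

```python
def subprocessor_changes(approved: list, current: list) -> dict:
    """Compare the last-reviewed subprocessor list against the provider's
    current one. Any addition should trigger a fresh confidentiality and
    jurisdiction review before continued use."""
    approved_set, current_set = set(approved), set(current)
    return {
        "added": sorted(current_set - approved_set),
        "removed": sorted(approved_set - current_set),
    }
```

Run on a schedule (or on provider change notifications), a non-empty `added` list becomes the trigger for the firm's review workflow rather than an afterthought.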
Internal Governance: What Your Law Firm Needs
A BRAO-compliant AI API deployment is not just about external contracts. Internal governance is equally important:
AI Acceptable Use Policy
Every law firm using AI APIs should have a written AI acceptable use policy covering:
- which AI tools are approved and for which data categories
- what is never permitted (e.g., uploading unredacted client files, sharing matter strategies)
- prompt rules and output handling
- logging and record-keeping requirements
- escalation and incident response procedures
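The logging requirement in such a policy can be implemented as a thin wrapper around whatever API client the firm uses. The sketch below assumes a hypothetical client object with a `complete()` method; note that only a hash of the prompt is logged, so the audit trail itself never duplicates confidential content.

```python
import hashlib
import logging
from datetime import datetime, timezone

log = logging.getLogger("ai_usage")

def logged_api_call(client, prompt: str, user: str, matter_id: str) -> str:
    """Route every AI call through one audited chokepoint.

    Logs who called, for which matter, and a prompt hash — never the
    prompt text itself. `client` is any object exposing complete(prompt).
    """
    prompt_hash = hashlib.sha256(prompt.encode("utf-8")).hexdigest()[:16]
    log.info(
        "user=%s matter=%s prompt_sha256=%s at=%s",
        user, matter_id, prompt_hash,
        datetime.now(timezone.utc).isoformat(),
    )
    return client.complete(prompt)
```

Making this wrapper the only sanctioned entry point to the API gives the firm the record-keeping the policy demands without relying on individual discipline.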
Works Agreement
If the firm employs staff, introducing AI tools that monitor, evaluate, or affect working conditions may require a Betriebsvereinbarung under the Betriebsverfassungsgesetz. Legal and HR teams should assess this before rollout.
DPIA
As noted above, a DPIA is required for high-risk processing scenarios. Even where not strictly mandatory, a lightweight privacy impact review before rollout is good practice and reduces subsequent exposure.
Staff Training
Combine BRAO supervision requirements with AI Act Art. 4 AI literacy obligations: train all users on the tools in use, the professional rules that apply, and the firm’s internal approval process for AI-assisted outputs.
Checklist: BRAO-Compliant AI API Deployment
Use this checklist before going live with any AI API in a law firm:
- Define the use case — research, drafting, summarisation, or client-facing? Which data categories?
- Classify the data — establish what may never enter an API, what requires redaction, and what can be used more freely
- Review the vendor — DPA, subprocessors, data region, training use, retention, support access
- Conduct a BRAO confidentiality analysis — map §43a BRAO obligations to the specific deployment model
- Assess GDPR requirements — legal basis, transfer mechanics, DPIA if applicable
- Negotiate the contract — ensure confidentiality commitments go beyond the DPA template
- Set internal roles and approval flows — who may use the API, who reviews AI outputs, who authorises release
- Draft an AI acceptable use policy — approved uses, prohibited uses, escalation paths
- Document AI literacy measures under Art. 4 EU AI Act
- Run a limited pilot before full rollout
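The redaction step from the checklist can likewise be supported by tooling. The patterns below are deliberately minimal and purely illustrative — real redaction needs a reviewed, matter-specific pattern set (names, addresses, case numbers) and should never be the only safeguard.

```python
import re

# Illustrative patterns only; a production list must be reviewed per matter type
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "iban":  re.compile(r"\bDE\d{20}\b"),  # German IBAN: DE + 20 digits
}

def redact(text: str) -> str:
    """Replace obviously identifying tokens before text may approach an API."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text
```

Automated redaction reduces, but does not eliminate, identifiability — which is why the checklist pairs it with data classification and a BRAO confidentiality analysis rather than treating it as sufficient on its own.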
When to Bring in External Counsel
External support adds particular value when:
- the firm plans to process active matter files or personal data systematically through an AI API
- the project involves cross-border processing or US-headquartered providers
- the firm wants to build a client-facing AI product or automated external communications
- sensitive practice areas are involved (criminal defence, regulatory, employment, M&A)
- the firm’s works council must be engaged before rollout
At that point, the question moves beyond compliance hygiene to professional responsibility design — and a documented legal basis for the specific deployment becomes essential.
Related Reading
- AI Legal Research Tools: Compliance Guide
- AI Act for Legal Services
- OpenAI API for German Companies
- Anthropic API for German Companies
- Azure OpenAI for German Companies
FAQ
Does §43a BRAO prohibit German law firms from using AI APIs?
No. §43a BRAO does not prohibit AI API usage. It requires that client data shared with external providers is contractually protected to the same standard as mandate information generally — through a DPA, explicit confidentiality commitment, purpose limitation, and access controls. A blanket prohibition does not exist; a compliance obligation does.
What does §43e BRAO require for AI-assisted legal work?
§43e BRAO requires that the responsible lawyer supervises AI-assisted work and remains professionally accountable for every legally relevant output. AI can assist drafting, research, or analysis, but a qualified lawyer must review the result before it is released internally or externally.
Is a DPA enough for BRAO compliance when using AI APIs?
No. A DPA covers GDPR obligations but does not automatically satisfy the professional confidentiality requirements under §43a BRAO. Law firms need the DPA plus an explicit contractual confidentiality commitment, purpose limitation on mandate data, and a documented operational model — not just a signed template agreement.
Do German law firms need a DPIA before deploying AI APIs?
A DPIA under Art. 35 GDPR is required when processing is likely to result in a high risk to individuals. For law firms processing sensitive client data (health, criminal, financial) systematically through an AI API, the threshold is likely met. Even where not strictly required, a privacy impact review before rollout is strongly advisable.
Can German law firms use ChatGPT or Claude under BRAO?
Yes, in principle. The decisive factors are: Does the provider offer a DPA with EU data residency? Are subprocessors disclosed and EU-anchored? Is data excluded from model training? Can the firm structure the deployment to satisfy §43a BRAO’s confidentiality requirements? The provider is not the compliance answer — the deployment model is.
What is the BRAK’s position on AI tools for law firms?
The Bundesrechtsanwaltskammer issued guidance in December 2024 confirming that AI tools are permissible but require careful assessment against §43e BRAO, professional secrecy, GDPR, and transparency obligations. The BRAK does not prohibit AI API usage but emphasises that firms must examine provider selection, contract structure, confidentiality protection, and mandate-data governance before going live.
Next Step
Compound Law advises law firms and legal departments on BRAO-compliant AI deployments — from vendor review and DPA negotiation to internal AI policies, GDPR assessments, and EU AI Act readiness. If your firm is planning an AI API rollout, a structured pre-deployment review is the right starting point.
This article provides general legal information only and does not constitute legal advice. For guidance on your specific situation, please consult a qualified lawyer.