AI as a lawyer: What is permitted? The guidelines from BRAK & DAV provide clarity


BRAK 12/2024 & DAV 07/2025: Two guides for the productive use of AI in law firms

Both publications send the same basic message: AI should be used – but in a well-organized manner. The BRAK guidelines from December 2024 clearly emphasize professional diligence, especially the independent final review of legal work. The DAV guidelines from July 2025 build on this, but remove practical hurdles and focus on a pragmatic, risk-based approach to cloud services and security requirements. Taken together, both texts offer a dependable framework for law firms that want to integrate legal AI into their everyday work.

At their core, both guides clearly recognize that AI—especially large language models—can significantly accelerate and qualitatively support the work in law firms: in research, structuring, drafting, and repetitive workflows. At the same time, hallucinations, distortions, and contextual errors remain real risks. That is why AI must not replace the work of lawyers, but rather support it; in the end, there is always a human quality and plausibility check. This is the main point emphasized by the BRAK text: the final check is standard, “human-in-the-loop” is the rule, not the exception.

Both also agree on the protection of confidentiality. External AI or cloud providers can be integrated – but only with strict access restrictions, clear purpose limitation, and contractual safeguards. Law firms should minimize data, pseudonymize it where possible, and carefully select whom they entrust with what. GDPR compliance is not seen as an insurmountable barrier, but as a design task: a clear legal basis, transparent processes, and comprehensible technical and organizational measures.

The DAV carries this pragmatism into practice. It explicitly clarifies that there is no blanket obligation to use particularly complex encryption if doing so would make use disproportionately difficult. The decisive factors are the risk assessment in each individual case and an appropriate, not excessive, level of protection. Equally noteworthy is its stance on client requests: if a client expressly asks that an AI result be used without further review by a lawyer, this is permissible in the DAV's view. This is not a rejection of due diligence, but an emphasis on autonomy and freedom of contract – albeit with the implicit expectation that law firms document and manage such cases responsibly. The BRAK guidelines are more cautious here and uphold final review as the norm.

Another common framework is the EU AI Act. Two dates are particularly relevant for law firms: from February 2, 2025, AI literacy obligations apply – law firms need trained employees, clear internal rules, and documented processes. From August 2, 2026, transparency requirements apply to the publication of AI-generated content intended to inform the public. At the same time, both publications emphasize that typical law firm applications do not generally constitute high-risk AI; nevertheless, firms should check whether specific constellations fall into that category after all.

What does this mean in concrete terms for everyday practice? Law firms benefit most when they do not use AI “on the side” but standardize it: with an easily understandable AI policy, clear roles and rights, guidelines for data and prompts, defined approval levels, and reliable logging. Confidentiality becomes the default – EU/EEA hosting is preferred, access follows the need-to-know principle, data processing agreements are in place, and data is only as extensive as necessary. Data protection remains manageable if approached pragmatically: pseudonymize where it makes sense; otherwise, choose providers carefully, use transparent procedures, and keep an eye on retention periods. And when it comes to risk classification, a sober approach helps: most legal AI workflows are not high-risk, but they do deserve clear responsibilities and documented controls.

Read here how PyleHound can support you as a secure AI for lawyers.

Conclusion: The BRAK guidelines provide a stable framework—professional ethics, final review, confidentiality—while the DAV guidelines make it practical by reducing excessive demands and formulating a risk-based approach suitable for everyday use. Taken together, the two perspectives yield a consistent picture: AI has a place in law firms as long as it is used as a tool within responsible, well-organized processes.

Links to the statements: BRAK 12/2024 & DAV 07/2025