ICT risks when using AI: New BaFin guidance
Algorithms in a regulatory straitjacket: The German Financial Supervisory Authority (BaFin) has issued new guidelines setting out guardrails for the use of artificial intelligence in financial companies.
The financial industry is on the cusp of one of the most profound changes in its history. The integration of artificial intelligence (AI) into core banking processes promises efficiency gains, more accurate risk models, and hyper-personalized customer interactions.
But with technological power comes vulnerability. The German Federal Financial Supervisory Authority (BaFin) has sent a clear signal with its “Guidance on ICT risks when using AI in financial companies”: the era of unregulated experimentation is over. AI is no longer an abstract topic for the future, but a concrete ICT asset that is also subject to the strict requirements of the Digital Operational Resilience Act (DORA).
This article summarizes key aspects of the guidance published on December 18, 2025. We begin by illustrating modern, AI-supported lending in order to explain the operational risks involved. We then analyze BaFin's requirements throughout the entire AI lifecycle, from governance and development to decommissioning. Finally, we outline the contribution independent auditing can make to providing reliable evidence of prudent AI risk management and operational resilience.
To fully understand BaFin's regulatory requirements, it is essential to clarify what these rules actually apply to.
Traditional lending processes were based on causal relationships and linear rules according to the formula: “If income > X and debt < Y, then loan = yes.” This deterministic worldview is being fundamentally challenged by AI.
Modern systems, often based on machine learning (ML), do not primarily search for causalities, but rather for complex, non-linear correlations in huge data sets. They transform the question “Can the customer pay?” into a statistical probability forecast that weighs a multitude of variables (features) against each other in a fraction of a second.
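The shift from a deterministic rule to a statistical forecast can be illustrated with a minimal sketch. The feature names and weights below are hypothetical and stand in for whatever a real ML scoring model would learn; the point is that the output is a probability, not a yes/no rule.

```python
import math

# Hypothetical feature weights for an illustrative default-risk model.
# A real system would learn these from data; they are assumptions here.
WEIGHTS = {"income_norm": -1.8, "debt_ratio": 2.4, "late_payments": 0.9}
BIAS = -0.5

def default_probability(features: dict[str, float]) -> float:
    """Map weighted features to a default probability via the logistic function."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

applicant = {"income_norm": 0.7, "debt_ratio": 0.3, "late_payments": 1.0}
p = default_probability(applicant)  # a probability in (0, 1), not a hard rule
```

In contrast to the "if income > X and debt < Y" rule, nothing in this output is binary: the business process must decide where to draw the cutoff, which is itself a risk-management choice.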
The life cycle of an AI-based lending decision can be technically divided into five critical phases. Each of these phases involves specific ICT risks, which are addressed in the BaFin guidance.
Phase 1: Intelligent data capture and extraction (input management)
The process begins at the point of sale – whether in an app or at the bank counter.
Phase 2: Feature engineering and data enrichment
The raw data is now transformed into processable signals.
Phase 3: Risk prediction (inference)
This is the heart of the system – the “black box.”
Phase 4: Decision-making and explainability (XAI)
A score alone does not meet regulatory requirements.
Phase 5: Continuous monitoring and fraud detection
The AI lifecycle does not end after the initial credit decision. Rather, a permanent operational and monitoring phase begins which, in terms of ICT risks, is one of the most critical phases from a regulatory perspective.
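Phase 4 demands that a score be explainable. For simple linear scorers, a basic form of XAI is to decompose the score into per-feature contributions. The following sketch uses the same hypothetical weights as above and is an illustration only, not the method any particular institution uses.

```python
# Minimal explainability sketch for a linear credit score (assumed weights):
# each feature's contribution is its weight times its value, so a decision
# can be traced back to individual inputs (a simple form of XAI).
WEIGHTS = {"income_norm": -1.8, "debt_ratio": 2.4, "late_payments": 0.9}

def explain(features: dict[str, float]) -> list[tuple[str, float]]:
    """Return features sorted by the magnitude of their score contribution."""
    contributions = [(name, WEIGHTS[name] * features[name]) for name in features]
    return sorted(contributions, key=lambda c: abs(c[1]), reverse=True)

reasons = explain({"income_norm": 0.7, "debt_ratio": 0.3, "late_payments": 1.0})
# reasons[0][0] names the most influential feature for this applicant
```

For non-linear models the same question requires dedicated techniques (e.g., SHAP-style attributions), but the regulatory expectation is identical: the institution must be able to say which inputs drove the decision.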
This massive automation and dependence on complex, often nontransparent algorithms create new vulnerabilities. What happens if the model has been trained on discriminatory historical data (bias)? What if attackers deceive the model by making minimal changes to the input data (adversarial attacks)? What if the model suddenly delivers incorrect forecasts due to changes in the macroeconomic environment (e.g., pandemic, inflation) (model drift)? This is precisely where BaFin comes in with its guidance. It does not view the AI system as a magic crystal ball, but as a critical ICT asset that must be managed, secured, and monitored.
The guidance is based, among other things, on discussions with financial companies and does not represent a binding interpretation of DORA by BaFin. Overall, the supervisory authority makes it clear that the handling of AI risks is under observation. Financial companies deviating from the guidance expose themselves not only to ICT risks but also to compliance risks. In the following, we analyze the document chapter by chapter in order to illustrate the depth of the requirements.
Legal nature
BaFin makes it clear that this is “non-binding guidance.” In financial supervision practice, however, this regularly means a reversal of the burden of proof. Anyone who ignores the guidance must provide detailed evidence in the event of an audit that their alternative measures offer at least an equivalent level of protection. The guidance is primarily aimed at CRR institutions (credit institutions) and Solvency II insurers that are required to apply the full ICT risk management framework under Articles 5-15 of DORA.
Definition of an AI system
BaFin is not reinventing the wheel, but refers to the definitions in the EU AI Regulation (AI Act). However, the classification as a "machine-assisted system" is crucial for IT supervision. This definition firmly anchors AI in the concept of "network and information systems" according to DORA. This means that all general DORA requirements automatically also apply to AI systems, supplemented by AI-specific risks such as those arising from their stochastic behavior.
Governance and strategy
If AI applications support critical or important functions, the guidance recommends that an AI strategy be formulated. This can be a standalone strategy or integrated into the IT/DORA strategy. The strategy must clarify why AI is being used (efficiency, risk minimization), what risks are acceptable (risk appetite), and what the resource planning looks like. A strategy that calls for AI innovation but does not provide budgets for cloud infrastructure or specialized personnel is inconsistent.
BaFin emphasizes the management's ultimate responsibility (Art. 5 (2) DORA). Board members cannot claim ignorance. DORA explicitly requires that members of the management body acquire sufficient ICT knowledge. In the context of AI, this means that while a bank board member does not need to be able to write code, they must understand what “model drift” is, why “hallucinations” in LLMs pose a risk, and where the limits of automation lie.
Integration into the risk management framework
The guidance demands that AI risks not be viewed in isolation. Financial companies must conduct a complete inventory of their AI systems. This includes "shadow AI" and AI components in purchased standard software (e.g., HR tools, ticketing systems).
Furthermore, BaFin stipulates that risk treatment measures must be specific, i.e., they must address a specific risk. If a risk has been identified in the area of “adversarial attack,” the measure must be technical in nature (e.g., adversarial training) and not merely organizational.
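To make the "technical, not merely organizational" requirement concrete: an adversarial attack on a scoring model exploits the fact that small, targeted input changes can shift the output. The sketch below perturbs the inputs of the hypothetical linear scorer from earlier in the gradient-sign direction (the idea behind FGSM-style attacks); the countermeasure, adversarial training, would feed exactly such perturbed examples back into training. All numbers are illustrative assumptions.

```python
import math

# Hypothetical linear default-risk scorer (weights are illustrative only).
WEIGHTS = [-1.8, 2.4, 0.9]
BIAS = -0.5

def score(x: list[float]) -> float:
    """Logistic default probability for a feature vector."""
    z = BIAS + sum(w * v for w, v in zip(WEIGHTS, x))
    return 1.0 / (1.0 + math.exp(-z))

def adversarial_example(x: list[float], eps: float) -> list[float]:
    """FGSM-style perturbation: nudge each input against the weight sign
    to push the predicted default probability down."""
    return [v - eps * math.copysign(1.0, w) for v, w in zip(x, WEIGHTS)]

x = [0.7, 0.3, 1.0]
x_adv = adversarial_example(x, eps=0.2)
# A small, inconspicuous change to the inputs lowers the risk score.
```

An organizational control (e.g., a four-eyes principle) would not detect such manipulation; only technical measures such as input validation and adversarial training address the actual attack surface.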
Here, BaFin delves into the technical implementation and applies principles of software development to the discipline of data science.
Software development
BaFin considers the training of a model to be comparable to the compilation of software.
The testing paradigm
Testing stochastic systems (which are based on probabilities) is fundamentally different from testing deterministic software.
An AI model is not a “fire and forget” system. It ages from the moment it is put into operation.
Monitoring and drift
Reality is constantly changing, but the trained model remains static. This discrepancy is called model drift.
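A common, simple way to monitor for such drift is the Population Stability Index (PSI), which compares the distribution of an input feature at training time with its distribution in production. The bin shares below are hypothetical; 0.25 is a widely used, not regulatory, threshold for significant drift.

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index between two binned distributions
    (shares per bin); PSI > 0.25 is a common 'significant drift' threshold."""
    return sum(
        (a - e) * math.log(a / e)
        for e, a in zip(expected, actual)
        if e > 0 and a > 0
    )

# Hypothetical income-bin shares at training time vs. in production:
train_bins = [0.25, 0.25, 0.25, 0.25]
live_bins = [0.05, 0.15, 0.30, 0.50]

drift = psi(train_bins, live_bins)
if drift > 0.25:
    print("model drift alert: retraining review required")
```

In a production setup, such a check would run continuously per feature and per output score, feeding alerts into the ICT incident process that DORA already requires.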
Cloud specifics and exit strategies
Since modern AI (especially GenAI) requires scalable computing power, there is often no alternative to the cloud. With regard to vendor lock-in, BaFin warns against dependence on proprietary AI services (e.g., use of AutoML features from a hyperscaler).
Financial companies must develop strategies for maintaining AI operations in the event the cloud provider fails or terminates the contract. This includes the technical capability to port data and models. With proprietary models (such as GPT-4), model export is generally impossible. In this case, functional alternatives (e.g., fallback to an open-source model) must be planned. In addition, BaFin emphasizes that the supervisory notice on outsourcing to cloud providers must be observed.
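The "functional alternative" requirement translates, architecturally, into a provider abstraction with a fallback path. The sketch below is a minimal illustration of that pattern; the provider functions and the simulated outage are hypothetical placeholders, not real APIs.

```python
# Sketch of a provider abstraction with an exit/fallback path: if the
# proprietary cloud model is unavailable, route requests to a self-hosted
# open-source model. All function names here are hypothetical.

def call_proprietary_model(prompt: str) -> str:
    """Placeholder for a proprietary cloud AI service."""
    raise ConnectionError("cloud provider unavailable")  # simulated outage

def call_open_source_fallback(prompt: str) -> str:
    """Placeholder for a self-hosted open-source model."""
    return f"[fallback model] {prompt}"

def generate(prompt: str) -> str:
    """Try the primary provider first; fall back if it fails."""
    try:
        return call_proprietary_model(prompt)
    except ConnectionError:
        return call_open_source_fallback(prompt)

answer = generate("Summarize the credit file.")
```

The value of this pattern for an exit strategy is that the switch is a routing decision, not a re-engineering project, which is precisely what a DORA-compliant contingency plan needs to demonstrate.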
AI systems are attractive targets for cybercriminals, partly because they work with sensitive and valuable data and are also involved in decision-making.
Specific attack vectors
BaFin therefore calls for protective measures against AI-specific attacks.
Data security
This is where DORA and GDPR intersect. The integrity and confidentiality of data flows must be guaranteed. Data must not only be encrypted “at rest” and “in transit.” BaFin also refers to protection “in use,” which suggests the use of confidential computing (encryption in memory/processor), especially in cloud environments.
This article has made it clear that the use of AI at companies supervised by BaFin is under close regulatory scrutiny. The requirements are complex, technically profound, and organizationally far-reaching. An error in implementation or documentation can not only result in regulatory sanctions, but also jeopardize the operational resilience of your company.
We are an interdisciplinary team of cyber and IT control experts who are familiar with regulatory requirements. We audit whether your use of AI systems meets the requirements of the BaFin guidance. The results of such an audit are presented in a comprehensive, easy-to-understand report that covers all aspects of the guidance throughout the AI lifecycle.
The audit is conducted in accordance with IDW PS 860 (“IT Audit Outside the Scope of the Annual Audit”). This standard of the Institute of Public Auditors in Germany (IDW) is specifically designed to subject IT-supported systems, processes, or applications to an objective assessment separate from the annual audit. It provides the ideal methodological framework for AI audits, as it can be flexibly applied to specific criteria catalogs (such as DORA and the BaFin guidance).
Of course, we are also happy to support our clients outside of a formal audit in accordance with IDW PS 860. Possible approaches range from compact initial and maturity assessments to topic-specific reviews (e.g., governance, risk management, data quality, model control, or control concepts) to technical evaluations of individual AI applications. These formats are particularly suitable for internal classification of the current implementation status, targeted preparation for regulatory audits, or step-by-step further development of your AI governance throughout the life cycle. Please feel free to contact us for more information.
Daniel Boms
Director
Certified Information Systems Auditor (CISA)
Dr. Christoph Wronka, LL.M. (London)
Director, Head of Anti-Financial Crime Audit & Advisory
Certified Anti-Money Laundering Specialist (CAMS), Certified Internal Auditor (CIA)
Kilian Trautmann
Manager
Certified Information Systems Auditor (CISA), Certified Information Security Manager (CISM)