The European Union's AI Act (Artificial Intelligence Act) officially came into force on August 1, 2024. The world's first comprehensive set of regulations for the use of artificial intelligence is intended to create a balance between innovation and risk protection. The EU member states must now transpose the AI Act into national law. The first rules of the AI Regulation apply from February 2, 2025.
The AI Act pursues a so-called risk-based approach and regulates in particular the risk classification of the various AI systems as well as the requirements for high-risk AI systems and GPAI systems (general purpose AI). Accordingly, the higher the risk of the application, the stricter the requirements.
There is also a transparency obligation: artificially generated or processed content must be clearly labeled as such. The regulation affects users and operators of AI systems as well as providers, importers, retailers and manufacturers of AI systems.
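The labeling duty described above can be pictured as attaching a machine-readable disclosure to every piece of generated content. The following is only an illustrative sketch, not a format prescribed by the AI Act; the field names and label structure are assumptions.

```python
# Illustrative sketch (not a format mandated by the AI Act): attaching a
# machine-readable disclosure label to AI-generated content so that it can be
# identified as artificially generated. Field names are assumptions.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIContentLabel:
    generator: str           # name of the AI system that produced the content
    generated_at: str        # ISO 8601 timestamp of generation
    ai_generated: bool = True

def label_content(text: str, generator: str) -> dict:
    """Wrap generated text together with its disclosure label."""
    label = AIContentLabel(
        generator=generator,
        generated_at=datetime.now(timezone.utc).isoformat(),
    )
    return {"content": text, "label": asdict(label)}

result = label_content("Quarterly summary ...", generator="example-llm")
print(json.dumps(result["label"], indent=2))
```

In practice, such a label would travel with the content (e.g. as metadata or a visible notice) so that users and authorities can recognize the content as AI-generated.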
Dr. Christian Engelhardt, LL.M.
Partner
Attorney-at-Law (Rechtsanwalt)
Boris Ortolf
Director
Certified Information Systems Security Professional (CISSP), Certified Cloud Security Professional (CCSP)
The legally compliant assignment of AI systems to one of the risk categories is relevant in the context of the AI Regulation and the AI Liability Directive. According to the AI Act, an AI system is a machine-based system designed to operate with varying levels of autonomy, which may exhibit adaptiveness after deployment and which, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations or decisions. AI systems entailing an unacceptable risk are prohibited because they pose a threat to people. These include real-time remote biometric identification systems as well as AI systems used for social scoring or manipulation techniques.
Furthermore, there are high-risk AI systems. These pose a high risk to the health, safety or fundamental rights of individuals, but are permitted. Their development and use are therefore subject to comprehensive documentation, monitoring and quality requirements. AI systems posing a limited risk to the end user are those intended for interaction with individuals; they are subject to a limited number of transparency obligations. AI systems considered to pose only a minimal risk are permitted without any further requirements.
High-risk AI systems fall into two categories: on the one hand, AI systems used in products subject to EU product safety regulations; on the other hand, AI systems used in sensitive areas such as health, transport, justice or policing, which must be registered in an EU database. For high-risk AI systems, the Act in particular requires a risk management system, record-keeping and transparency, the creation of technical documentation, human oversight, and an appropriate level of accuracy, robustness and cybersecurity.
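The risk-based approach described above can be sketched as a simple decision structure. The following is a deliberately simplified illustration, not the legal test: the category labels follow the article, while the matching rules and example keywords are assumptions made for the sketch.

```python
# Illustrative sketch of the AI Act's risk-based classification described in
# the article. The matching logic is a simplified assumption, NOT the legal
# test; a real assessment requires case-by-case legal analysis.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high-risk (documentation, monitoring, registration duties)"
    LIMITED = "limited risk (transparency obligations)"
    MINIMAL = "minimal risk (no further requirements)"

# Example keywords only; drawn from the practices and areas named in the text.
PROHIBITED_PRACTICES = {"social scoring", "real-time biometric identification",
                        "manipulation techniques"}
HIGH_RISK_AREAS = {"health", "transport", "justice", "police",
                   "product safety", "employment"}

def classify(practice: str, area: str, interacts_with_humans: bool) -> RiskTier:
    if practice in PROHIBITED_PRACTICES:
        return RiskTier.UNACCEPTABLE
    if area in HIGH_RISK_AREAS:
        return RiskTier.HIGH
    if interacts_with_humans:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("recommendation system", "employment", True))  # -> RiskTier.HIGH
```

The ordering of the checks mirrors the Act's logic: prohibited practices trump everything, the high-risk test comes next, and transparency duties for human-facing systems apply only below that threshold.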
A central aspect of this is AI governance which describes the entirety of measures that ensure the ethical and legal use of artificial intelligence. Transparency, fairness and compliance with data protection regulations are crucial in this respect.
For companies, this means that the functioning of their AI systems must be comprehensible in order to create trust among users and authorities. In addition, possible distortions in the algorithms must be actively prevented and security gaps avoided. This can only be achieved through regular employee training and clear guidelines. The right governance therefore plays a key role in ensuring that the benefits of AI can be exploited while risks remain controlled.
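One concrete governance check for the "distortions in the algorithms" mentioned above is to compare positive-outcome rates across groups (a demographic-parity check). The following minimal sketch uses made-up data; the metric choice and any acceptable threshold are assumptions a company would have to define itself.

```python
# Minimal sketch of one possible bias check for AI governance: comparing
# positive-outcome (selection) rates across groups. Data and thresholds are
# illustrative assumptions, not a complete fairness audit.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, accepted) tuples -> acceptance rate per group."""
    totals, accepted = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        accepted[group] += int(ok)
    return {g: accepted[g] / totals[g] for g in totals}

def parity_gap(decisions) -> float:
    """Largest difference in acceptance rates between any two groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(f"selection-rate gap: {parity_gap(sample):.2f}")  # 2/3 vs 1/3 -> 0.33
```

A large gap does not prove unlawful discrimination, but it is the kind of signal that should trigger human review under a governance framework.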
GPAI systems (General Purpose AI) are AI systems that can serve a general purpose. Providers of GPAI systems must fulfill special requirements, such as creating and keeping up to date technical information and documentation and complying with copyright law. In addition, GPAI systems remain subject to the general risk classification and the corresponding requirements.
The legal framework is complex, but compliance is feasible. Irrespective of the AI Regulation, the use of artificial intelligence already requires compliance with all applicable data protection rules whenever personal data is processed with AI.
The principles of the GDPR such as lawfulness, purpose limitation, transparency, data minimization and accuracy must be observed.
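The data-minimization principle mentioned above can be illustrated as a preprocessing step that strips direct identifiers before a record is passed to an AI system. This is only a sketch under assumptions: the field names are invented, and pseudonymization alone does not exhaust the GDPR's requirements.

```python
# Hedged sketch of a data-minimization / pseudonymization step before feeding
# records to an AI system. Field names are illustrative assumptions; salted
# hashing is one possible technique, not the only GDPR-compliant measure.
import hashlib

DIRECT_IDENTIFIERS = {"name", "email", "phone"}

def pseudonymize(record: dict, salt: str) -> dict:
    """Replace direct identifiers with a truncated salted hash; keep other fields."""
    out = {}
    for key, value in record.items():
        if key in DIRECT_IDENTIFIERS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            out[key] = digest[:12]  # stable pseudonym, not reversible without the salt
        else:
            out[key] = value
    return out

patient = {"name": "Jane Doe", "email": "jane@example.com", "diagnosis": "..."}
print(pseudonymize(patient, salt="per-project-secret"))
```

Because the hash is salted and deterministic, the same person maps to the same pseudonym within one project, which supports purpose limitation while avoiding the storage of clear-text identifiers in the AI pipeline.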
According to the AI Regulation, AI systems used in the context of employment and workers' management – for example for recruiting, task allocation or performance monitoring – are to be classified as high-risk AI systems. In this respect, users and operators of AI systems, as well as providers, importers, dealers and manufacturers of AI systems, are subject to special obligations.
The employer is entitled to decide whether AI will be introduced in the employment relationship at all. However, the works council's co-determination rights must be taken into account when determining the "how". In particular, Sec. 87 (1) No. 6 BetrVG (German Works Constitution Act) provides for a right of co-determination in the introduction and use of technical equipment that is objectively suitable for monitoring employees' behavior or performance.
If the works council has to assess the introduction and use of AI in order to perform its duties, it has the right to consult an expert. The employer must inform the works council about the planned use of AI as early as possible.
The use of AI systems – just like human decisions – may entail discrimination. This is because AI applications are fed by humans with training data that may itself be unrepresentative and can therefore lead to unequal treatment. But how can this be countered legally? The German General Equal Treatment Act ("AGG") offers protection in this context as well: employees may not be discriminated against, either directly or indirectly, on the basis of the discrimination criteria listed therein. Due to the technology-neutral wording of the AGG, this may also cover decisions made by AI applications. In the event of discrimination, the disadvantaged party is therefore entitled to compensation and damages under the AGG.
Employees are already using AI applications at work, such as DeepL for translating or Copilot / ChatGPT for composing texts. However, employees must perform their work personally, i.e., they are generally not allowed to delegate their work to third parties. It is therefore questionable whether the delegation of work tasks to an AI application constitutes the impermissible use of an auxiliary person or the permissible use of an auxiliary tool.
The decisive factor in this respect is likely to be that the work result is sufficiently checked by the employee for errors and is not passed off unchecked as their own work product. It is therefore advisable to draw up internal guidelines for dealing with AI applications and to train employees in this regard.
The AI Act's entry into force is to be welcomed and, in view of the risk of discrimination, was also necessary, as the importance of AI in labor law practice can be expected to increase considerably. The interfaces between AI and labor law have already been the subject of court decisions.
The AI Regulation's requirements affect all companies, and the importance of artificial intelligence can be expected to grow significantly in the coming years. Several developments point in this direction: most recently, the data protection organization noyb filed a complaint against OpenAI, claiming that the generation of inaccurate information about individuals violated Art. 5 GDPR and that the data subject's rights to rectification and erasure were not guaranteed.
The use of AI undoubtedly offers great opportunities for companies. Therefore, it is all the more worthwhile to implement it on a solid legal basis right from the start. We will be happy to advise you on all practical questions relating to data protection and compliance requirements.
Further information on the AI Act will be provided as part of the "Data Protection Law Update" series of events.
Our experts Dr. Christian Engelhardt and Boris Ortolf will be happy to help you.