At the beginning of February, the member states of the European Union approved the European Regulation laying down harmonized rules on Artificial Intelligence (AI Act), and the final text of the regulation is now available. The AI Act creates, for the first time, a regulatory framework for the use of artificial intelligence, which will be of considerable importance for both providers and users of AI solutions due to its broad scope of application.

Even though most of the provisions of the AI Act are not expected to apply until spring 2026, companies should familiarize themselves with these future regulations today. It should also be noted that the use of AI regularly raises difficult questions regarding data protection and confidentiality, which companies should examine to correctly assess their liability and business risks.

What does the AI Act regulate and when does it apply?

In particular, the AI Act will impose compliance and information obligations on providers and deployers of AI systems, the scope of which will depend on the risk category to which the respective AI system is assigned.

Certain AI applications (e.g. manipulative AI, social scoring) will be prohibited outright (Art. 5 of the AI Act). For so-called high-risk AI, the Regulation introduces in particular the obligation to implement a risk management system as well as far-reaching transparency and documentation obligations, including a risk impact assessment (Art. 8 to 29a of the AI Act). According to Annex III of the Regulation, high-risk systems include, for example, diagnostic devices and software in the medical field, but also recruitment and evaluation systems, credit rating systems, and systems for biometric identification or authentication where AI is used. However, AI systems that do not qualify as high-risk may also be subject to information and labeling obligations (Art. 52 of the AI Act). The term AI itself is defined very broadly and technology-neutrally in the Regulation, i.e. it is not limited to conventional applications of artificial intelligence such as machine learning and deep learning.

The AI Act still has to be formally adopted by the European Parliament and the Council of the EU, but this is considered certain. Most of the provisions will become applicable two years after the AI Act enters into force, i.e. probably in spring 2026. For certain AI systems that have already been or will be placed on the market, a transitional period applies until the end of 2030 (Art. 83 of the AI Act). Some provisions, such as the bans on certain AI practices, will become mandatory just six months after the AI Act enters into force.

What is the relationship between the AI Act and data protection law?

Even though the provisions of the AI Act (e.g. the transparency and documentation obligations, but also the fines under Art. 71 AI Act) are often reminiscent of the General Data Protection Regulation (GDPR), the AI Act is not a data protection law. The AI Act applies regardless of whether personal data is processed in the AI system. Conversely, the AI Act does not regulate the data protection requirements under which the processing of personal data in an AI system is permitted. However, the German data protection supervisory authorities have recently published various recommendations on the data protection requirements for AI systems.

AI systems raise various, sometimes difficult, data protection issues. In particular, when training AI with personal data, it may be questionable whether the necessary legal basis exists and whether the companies involved are already acting as joint controllers rather than merely processing the data as data processors within the meaning of Art. 28 GDPR. When using AI, transparency and non-discrimination must also be ensured, the fulfillment of data subjects' rights (e.g. to access or erasure) must be guaranteed, and the prohibition of automated individual decision-making (Art. 22 GDPR) must be observed. The requirement of data minimization also regularly poses particular difficulties: it must be carefully examined to what extent the training or fine-tuning of an AI system is possible using anonymized, synthetic, or at least pseudonymized data.

Recommendation for action

Companies should familiarize themselves with the future rules of the AI Act now. The coming restrictions can be decisive for investment decisions, and the documentation the AI Act requires of both providers and users – for example regarding the corpus and training data used – must typically be put in place at the start of an AI system's development.

From a data protection perspective, it is particularly important to select the right technical setup or the right provider for the use of AI solutions at an early stage, as the technical and contractual control options differ greatly between the various providers and AI products. In addition to data protection aspects, this also concerns issues of confidentiality.

To reduce their liability and business risks, companies should adopt binding rules on the use and training of AI by their employees and take quality assurance measures to guard against AI-specific risks.

Feel free to contact us if we can help you with this or if you have any questions.

Christian Putzar

Lawyer

Email: christian.putzar@planit.legal
Phone: +49 (0) 40 609 44 190