Back in April 2021, the EU Commission presented the first proposal for an AI Act to regulate artificial intelligence. After some changes to the original draft, the EU Parliament agreed on its position on 14 June 2023 and adopted proposed amendments to the Commission’s draft. The AI Act is currently in trilogue negotiations, the final phase of the legislative process, at the end of which it will be adopted in its final form. In light of the rapid proliferation of artificial intelligence across ever more areas, especially since the introduction of ChatGPT, regulation seems urgently needed to counteract possible dangers. If enacted, the AI Act would make Europe a pioneer in the regulation of artificial intelligence – no other jurisdiction currently has similarly comprehensive regulatory plans. This article outlines the rules that companies using or distributing artificial intelligence will need to prepare for under the current draft, and when the AI Act can be expected to enter into force.
Background on the regulation of artificial intelligence
There is probably no topic currently being discussed as much as the increasing use of artificial intelligence. Especially since the release of the AI-based chatbot ChatGPT at the end of November 2022, companies and users are discovering, almost daily, new areas of application that artificial intelligence can support. It looks as though artificial intelligence could be the next big technological step, one that will shape how we live and work together in the coming years and decades.
To date, there is no fundamental, comprehensive regulation of artificial intelligence applications. Companies that use artificial intelligence, or plan to do so in the near future, are therefore uncertain whether and under what conditions they may deploy it. With the AI Act, the European legislator now wants to create binding rules.
The course of the legislative process so far
The European legislator reacted early to the increasing use of artificial intelligence and presented a proposal for a regulation on artificial intelligence (officially known as the “Artificial Intelligence Act” or “AI Act”) in April 2021. As a European regulation, the harmonized law would be directly applicable in all member states without the need for national implementing acts. Owing to the numerous proposed changes, the first plenary session in the EU Parliament did not take place until October 2022. By the end of 2022, various committees and member states had submitted comments and proposed amendments, which were incorporated into a revised draft of the AI Act presented in December 2022. In particular, the proliferation of general-purpose AI capable of generating new content (such as ChatGPT and GPT-4), also known as “generative AI”, delayed the process. On 11 May 2023, members of the EU Parliament introduced further amendments to the draft, which the plenary session of the European Parliament approved on 14 June 2023. Trilogue negotiations between the Parliament, the Council and the EU Commission are currently under way, at the end of which the AI Act is to be adopted. Most recently, the EU Commission drafted a new compromise text on 19 November 2023. The final text is not expected to be adopted until 2024 at the earliest. Under the current draft, companies would have to implement and comply with the AI Act after a transition period of two years, i.e. from 2026 at the earliest.
Summary of the proposed regulation by the AI Act
The aim of the AI Act is to ensure safe and trustworthy AI applications. Its scope covers all providers and users in the EU (the so-called horizontal approach). The AI Act follows a risk-based approach, as it has from the beginning of the legislative process: the riskier the AI technology used, the more requirements must be met. Under the draft, AI applications will be classified into one of the following four risk groups:
- Prohibited AI applications: AI applications that pose an unacceptable risk will be banned. These are applications that violate fundamental rights, in particular because they manipulate human behavior, circumvent the free will of users, can cause psychological or physical harm, or classify people according to their social behavior or ethnic characteristics (e.g., “social scoring” systems).
- High-risk AI systems: AI systems that pose a high risk to the health and safety or fundamental rights of natural persons may only be operated subject to strict transparency and documentation obligations: known and foreseeable risks must be documented, quality criteria for training data must be met, and the systems’ activity must be logged during operation. In addition, human oversight of the systems must be ensured. The draft identifies eight areas of so-called high-risk AI systems:
- Biometric and biometric-based systems
- Management and operation of critical infrastructure
- Education and vocational training
- Employment, workers management and access to self-employment
- Access to and enjoyment of essential private services and public services and benefits
- Law enforcement
- Migration, asylum and border control management
- Administration of justice and democratic processes
- Limited-risk AI: Limited-risk systems must meet certain transparency requirements. In particular, users must be informed that they are interacting with artificial intelligence.
- Minimal-risk AI: Systems with minimal risk can be deployed without additional legal obligations, provided general legal requirements are met.
Changes to the original draft of the AI Act and current points of contention
On 14 June 2023, the Parliament agreed on a number of changes to the original draft of April 2021. In particular, the list of prohibited AI applications was expanded. Among other things, automated facial recognition by so-called biometric identification systems for real-time identification in public spaces (e.g. by the police), familiar from totalitarian states such as China, will be prohibited. AI systems for the automated recognition of emotions, e.g. during the interrogation of suspects, are also to be prohibited in the EU.
The original draft of the AI Act also included liability provisions; these sections have since been removed. The EU Commission is working in parallel on directives on product liability and AI liability, which will contain these provisions. Artificial intelligence is therefore likely to be regulated by a complex of laws, in which the AI Act will play the central role.
The handling of AI foundation models remains particularly controversial. Foundation models are the technologies underlying generative AI such as ChatGPT, GPT-4 or PaLM. Countries such as Germany, France and Italy reject strict regulation of foundation models and instead want to oblige providers only to self-regulate through codes of conduct, thus weakening the existing draft. Compromises will now have to be found in the further trilogue negotiations.
What should companies be aware of?
It is not foreseeable whether and to what extent the draft AI Act will be revised in the course of the further trilogue negotiations. As some details are still heavily debated (especially the regulation of foundation models), it is likely to take some time before the AI Act is passed. Companies that use AI systems should keep themselves regularly informed about the status of the legislative process and check in advance whether their systems can continue to be used under the new regulation and whether they can comply with the relevant obligations, in particular the transparency obligations.