Artificial intelligence (AI) has been on the rise for years. Self-driving cars, human-like digital assistants and other revolutionary applications are ready for the market or within reach. However, decisions – whether made by humans or machines – carry risks. Critics of AI fear discrimination, violations of personal rights, dangers to life and limb, and even apocalyptic scenarios. The EU now wants to regulate “high-risk” AI through a Regulation. Here is an overview of the draft and the state of the discussion.

Background

AI hardly played a role in the EU Commission’s Digital Agenda for the period up to 2020. The Digital Compass (the EU’s political vision for 2030), presented in March 2021, is quite different: the EU now strives for a pioneering role in “ethical AI”. The term shows that, besides opportunities, risks are seen as well. The “ethical” quality of AI is therefore to be ensured through regulation.

In April 2021, the EU Commission presented the draft of an EU regulation on artificial intelligence (AI law) for this purpose. Preliminary work in the form of ethical guidelines for trustworthy AI was presented by a group of experts in April 2019.

The efforts towards legal regulation come none too soon. Recently, more and more associations and organisations have issued recommendations on the use of artificial intelligence (among them the BDI, the ZVEI, the Forum Privatheit, Bitkom and the DAV). However, relying on self-regulation neither seems appropriate to the risks nor creates sufficient legal certainty for the developers and users of AI. The AI law takes a middle path between a largely free market (as in the US) and an authoritarian approach (as in China).

Broad Scope (Marketing and Use of AI)

The upcoming AI law concerns the marketing, commissioning and use of AI. The draft defines AI broadly: it covers software that implements machine learning techniques, knowledge-based systems, or statistical, search and optimisation methods.

The draft is a horizontal regulation whose application is not limited to certain sectors. All providers and users of AI will be covered. This applies even if they are located outside the EU but their AI systems impact individuals in the EU. The EU’s General Data Protection Regulation (GDPR) already has a similar scope of extraterritorial application. And this is not the only parallel to the GDPR: AI providers outside the EU might have to appoint a representative in the EU.

Prohibited AI Systems

The draft classifies AI systems by risk, with some use cases being prohibited altogether. These prohibitions are sometimes formulated very vaguely, for example:

  • Adversely affecting people: AI systems designed or used to manipulate human behaviour, opinions or decisions to the detriment of the persons concerned,
  • Exploiting weaknesses: AI systems that exploit information or predictions about individuals or groups in order to target their vulnerabilities,
  • General surveillance: AI systems used indiscriminately to monitor “all natural persons”, for example in the form of large-scale surveillance and tracking with personal data in digital or physical environments,
  • Social scoring: AI systems used to generally assess the trustworthiness of individuals based on their behaviour or personality traits, if this leads to systematic disadvantage in unrelated contexts or to disproportionate disadvantages overall.

For the purposes mentioned, AI may only be used by authorities in exceptional cases, where a law permits this in the interest of public safety. Whether the goal of public security can constitutionally justify the use of AI systems for these purposes is, however, very questionable. Furthermore, the draft is silent on controversial use cases such as autonomous lethal weapon systems.

High-Risk AI Systems

The draft mainly deals with regulations for “high-risk AI systems”. These are to include:

  • Safety components of vehicles (road, rail, water, air)
  • Safety components of regulated products (for which EU law requires conformity testing and CE certification).
  • Other High-Risk AI (with external conformity assessment): The draft also requires testing and CE certification for AI systems for remote biometric identification in publicly accessible spaces and for AI systems used as safety components of essential public infrastructure networks (such as water, gas and electricity).
  • Other High-Risk AI (with self-assessment): A self-assessment of conformity should be sufficient for AI systems used for
    • Prioritisation of emergency services such as fire brigade or medical assistance,
    • Determining access to education or training facilities,
    • Assessment in application procedures, promotions, dismissals, as well as the allocation of tasks and performance and behavioural control in the employment relationship,
    • Determination of creditworthiness,
    • Determining access to public services,
    • Prevention, investigation, detection or prosecution of criminal offences, or in connection with measures restricting personal freedom,
    • Prediction of crime or social unrest for the purpose of planning patrols and local surveillance,
    • Processing of asylum and visa applications and appeals and admission to the territory of the EU,
    • Supporting judges (except for auxiliary tasks).

Requirements for the Development of High-Risk AI

High-Risk AIs are to be subject to CE certification, meaning that they would have to be tested for conformity with the requirements of the AI Regulation before being placed on the market. These requirements are, in particular:

  • “Good” training data: Training data is to be of “high quality”, meaning in particular free of biases that can lead to prejudiced and discriminatory decisions. In this respect, Amazon’s failed attempt to filter job applications via AI is often cited. The system was fed with application documents and the recruiters’ decisions. Since the HR department had preferred to hire men, the AI learned this preference as well and consistently sorted out women. It picked up the applicants’ gender only indirectly, via terms such as “Women’s Chess Club” or “Women’s College”. After unsuccessful attempts to make the AI non-discriminatory despite the biased training data, Amazon abandoned the project (the first sketch after this list shows how such proxy features can reintroduce bias).
  • Non-discrimination: Preparing training data will therefore require considerable care. Testing, in turn, will require fresh datasets that have not already been used to train the AI. According to the regulation, there is a weighty public interest in non-discriminatory High-Risk AI. For tests that verify this, sensitive data on health, religion or sexual orientation, for example, may therefore also be used – if necessary (cf. Art. 9(2)(g) GDPR).
  • Traceability: A particular challenge is the requirement to design High-Risk AIs in such a way that their output is traceable and explainable. Neural networks, for example, which loosely mimic the functioning of the human brain, are not based on a symbolic representation of knowledge. It is therefore not trivial to convert their “thought processes” into a symbolic form and make them logically comprehensible. The draft AI regulation nevertheless requires this (the second sketch after this list shows one common approach).
  • Transparency: The AI should be transparent not only for experts but also for users. To this end, user documentation must be provided in concise and, if possible, non-technical language. It must describe the purposes and properties of the system, its design and logic, as well as the requirements for use and ongoing maintenance.
  • Human monitoring: High-Risk AIs should lend themselves to monitoring by humans – e.g. through suitable interfaces – in order to reduce possible risks.
  • Conformity assessment, CE certification, standardisation and testing procedures: The conformity of High-Risk AIs with the AI Regulation will become a prerequisite for marketing in the EU. Conformity can be demonstrated through CE certification. The EU will also adopt standards, compliance with which will lead to a presumption of conformity with the Regulation. For the comprehensive tests that will be necessary under the AI Regulation, the competent authorities are to develop “sandboxing schemes”, i.e. specifications for safe test environments. The conformity assessment for AI is based on an ex ante view, but nevertheless has similarities with the data protection impact assessment under the GDPR.
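
To make the training-data and non-discrimination points above more concrete, here is a minimal sketch in Python. It is purely illustrative and not taken from the draft: a classifier is trained on synthetic hiring data in which gender is deliberately excluded from the features, but a proxy term on the CV is not. A simple disparity check on held-out data – of the kind a conformity test might perform – then reveals the learned bias. All names, numbers and data are hypothetical assumptions.

```python
# Illustrative sketch: a proxy feature reintroduces a bias encoded in
# historical labels; a disparity check on held-out data detects it.
# All data is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)               # 0 = male, 1 = female (NOT a model feature)
skill = rng.normal(0, 1, n)                  # the legitimate signal
# Proxy feature, e.g. "Women's Chess Club" appearing on the CV:
proxy = ((gender == 1) & (rng.random(n) < 0.7)).astype(float)

# Historical labels encode the recruiters' bias: women were hired less often.
hired = (skill - 0.8 * gender + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([skill, proxy])          # gender itself is excluded
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, hired, gender, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)

# Disparity check on data never used for training (cf. the testing requirement):
rate_m = pred[g_te == 0].mean()
rate_f = pred[g_te == 1].mean()
print(f"selection rate, men:   {rate_m:.2f}")
print(f"selection rate, women: {rate_f:.2f}")
print(f"disparity:             {rate_m - rate_f:.2f}")  # a large gap signals learned bias
```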
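
The traceability requirement is harder to pin down. One widely used family of techniques is feature attribution: computing how strongly each input influenced the output, for example as the gradient of the network’s score with respect to its inputs. The following sketch is again purely illustrative – a tiny, randomly initialised network stands in for a trained model, and gradient attribution is one possible approach, not a method prescribed by the draft.

```python
# Illustrative sketch: gradient-based feature attribution for a tiny
# two-layer network. The weights are random stand-ins for a trained model.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)    # 3 inputs, 4 hidden units
w2, b2 = rng.normal(size=4), 0.0

def forward(x):
    h = np.tanh(W1 @ x + b1)                     # hidden layer
    return 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))  # scalar score in (0, 1)

def input_gradient(x):
    """d(score)/d(x): how strongly each input feature moved the decision."""
    h = np.tanh(W1 @ x + b1)
    s = forward(x)
    dh = w2 * s * (1.0 - s)                      # backprop through sigmoid and dot product
    return W1.T @ (dh * (1.0 - h ** 2))          # chain rule through tanh

x = np.array([0.5, -1.0, 2.0])                   # one hypothetical input
print("score:", forward(x))
print("attribution per feature:", input_gradient(x))
```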

General Requirements for All AI Systems

For AI systems that are not High-Risk AIs, only basic requirements apply. If a system is intended to interact with humans, it must be recognisable as an AI. Individuals must be explicitly informed when their data is processed by an AI to detect emotions or to categorise them. So-called deepfakes, i.e. images, videos or sound recordings generated with the help of AI that give the impression of showing real persons, must always be labelled as such.

Criticism and Outlook

The draft AI regulation is a bold attempt. It would be the first time in the world that AI is regulated horizontally to this extent. However, this also creates considerable hurdles for the development and use of AI. As might be expected, the draft has met with substantial criticism from various directions. Major points of criticism include:

  • White-washing: Critics see the labels “trustworthy” or “ethical” AI as a pure marketing narrative. Only humans, not AI, can be trustworthy per se (Thomas Metzinger in the taz on the preliminary work).
  • Barrier to innovation: Associations dealing with innovation, such as the Center for Data Innovation, see a danger that the AI Regulation could stifle the development of the early-stage AI industry in Europe.
  • Lack of quality: Researchers from Great Britain and the Netherlands attest to the draft’s grand ambitions but weak implementation (article as PDF). They argue that rules from different areas of law have been stitched together incoherently, which jeopardises the draft’s practicability. In addition, it contains too many ambiguities.
  • Protection gaps; lack of alignment with the GDPR: In a joint opinion (PDF), the European Data Protection Supervisor and the European Data Protection Board have welcomed the draft in principle. However, they see dangerous gaps in the protection it claims to offer, due to exemptions and prohibitions that are either too broad or not broad enough. They also complain about the lack of coordination of the planned CE certification with the requirements of the GDPR.

Against this background, further developments will be worth watching closely. The Commission’s draft will certainly undergo considerable changes in the legislative process through input from the Parliament and the Council.