The first-ever legal framework on AI: EU Commission proposes a new Regulation

DORDA Rechtsanwälte GmbH

Artificial intelligence (AI) will shape our digital future. Self-driving cars, smart cities, digital factories and more precise medical diagnoses – the opportunities of AI are endless. However, its use also triggers significant risks. Algorithms or deep learning tools that are not accurate can severely affect people. In certain cases, their use may even raise complex ethical questions, for example when it comes to decisions about life or death. The EU legislator recognized the opportunities, but also the risks, of AI at a very early stage. It thus aims to implement a legal framework for trustworthy and safe AI, while strengthening AI uptake, investment and innovation across the EU.

A new AI Regulation shall lay the foundation for achieving these goals. A few days ago, the provisional draft proposal of the planned AI Regulation was leaked and already provided some interesting insights. On 21 April 2021, the European Commission officially published its proposal, which deviates in some areas from the previously leaked version.

The key points of the proposal:

Aim and scope

The Regulation harmonizes the rules for AI systems, prohibits certain practices, sets out specific requirements for high-risk AI systems and obligations for operators, and introduces new transparency rules. The term "AI system" is defined broadly and in a future-proof manner. It shall cover all AI-based technologies that use one of the techniques listed in Annex I – whether stand-alone or incorporated into hardware or software:

  • Machine learning;
  • Logic- and knowledge-based approaches;
  • Statistical approaches, Bayesian estimation, search and optimization methods.

The Regulation further follows a risk-based approach (new compared to the leaked document), differentiating between uses of AI that create (i) an unacceptable, (ii) a high, or (iii) a low or minimal risk. Depending on the classification of the AI, different requirements apply to providers placing the specific system on the market, to its users located within the Union, and to third-country providers and users of AI systems that are used in the Union.

Banned AI applications

The proposal lists a number of prohibited applications. These include, for example, social scoring and applications that manipulate human behavior and circumvent users' free will (eg toys using voice assistance to encourage minors to engage in dangerous behavior).

Strict regime for high-risk AI applications

High-risk AI systems include, for example, automatic facial recognition in public spaces, credit scoring systems, robot-assisted surgery, biometric identification, AI used in transportation, and CV-sorting software for recruitment purposes.

The provision of high-risk AI systems is subject to the following obligations:

  • Implementation of a documented quality management system, including written policies, procedures and instructions, in order to ensure an accurate analysis;
  • Technical documentation of the high-risk AI system;
  • Logging of the activities generated by the AI in order to ensure traceability of results;
  • Conformity assessment and labelling with a CE marking prior to placing the system on the market;
  • Clear and adequate information to users;
  • Human oversight measures in order to minimize risks;
  • High level of security and quality of datasets feeding the system in order to minimize discriminatory outcomes.

In addition, providers of high-risk AI are subject to numerous other obligations: they must set up a quality management system, fulfill information obligations and document the AI system's mode of operation. Furthermore, the competent EU or national authorities need to be notified of the specific applications.

Soft regime for AI applications with limited risks

For AI systems which trigger only limited risks, minimum transparency obligations shall apply. This covers, eg, chatbots on e-commerce platforms or deep fakes manipulating content. With regard to such systems, providers merely need to disclose that end-users are interacting with a machine or that content has been artificially generated or manipulated. This shall enable customers to make an informed decision or refrain from using the tools.

AI with minimal risks out of scope

AI applications that pose only minimal risks to citizens' rights or safety, such as video games or spam filters, may be used freely. The draft Regulation explicitly excludes such systems.

Penalties

Compliance with the new obligations shall be ensured by high penalties. Violations of most of the requirements under the Regulation are subject to a fine of up to 4 % of global annual turnover or EUR 20 million (whichever is higher). It is evident that the sanction model is based on the GDPR approach, which has also been used for the Omnibus Directive.
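Purely to illustrate the arithmetic of this cap, the following sketch (in Python, using a hypothetical max_fine helper we introduce here for illustration only) computes the upper fine limit for most violations as the higher of EUR 20 million and 4 % of global annual turnover:

    def max_fine(global_annual_turnover_eur: float) -> float:
        """Illustrative sketch, not legal advice: upper fine limit for most
        violations is the higher of EUR 20 million or 4 % of global annual turnover."""
        return max(20_000_000.0, 0.04 * global_annual_turnover_eur)

    # Example: a company with EUR 1 billion global annual turnover
    # could face fines of up to EUR 40 million.
    print(max_fine(1_000_000_000))  # 40000000.0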

Supporting start-ups

On the upside, the EU also intends to deliberately boost innovation. It is thus planned to enable the testing and training of AI systems in regulatory sandboxes under the supervision of national authorities. Start-ups shall have priority access to the sandbox programs.

Similar initiatives are already in place for FinTech applications. In the long term, this shall also help increase the public’s trust in AI and robotics.

Conclusion and next steps

The EU Commission's proposal creates a certain minimum standard for the development, distribution and use of AI. However, many questions are still open, in particular the relationship of this regime with the GDPR and its provisions on profiling, the purpose limitation principle, the information obligations and the rights of data subjects. In addition, the Regulation does not cover all relevant aspects, such as liability. Further rounds of negotiations and additional regulations will therefore be required to fully cover the topic.

As a next step, the European Parliament and the Member States will need to adopt the proposal. Once this major step is taken, the Regulation will become directly applicable across the EU. We thus expect more negotiations, changes and twists before this happens. However, we hope that the Commission will continue to prioritize the issue so that the legal framework is in place before AI systems have their breakthrough. Up to now, technology has usually been a few years ahead of a proper legal framework. With regard to AI, policymakers are at least trying to be quick and ahead of time.
