In a milestone achievement, the European Parliament, Council and Commission have recently reached a provisional agreement on the European Union Artificial Intelligence Act (the “Act”), a comprehensive regulatory framework designed to bring legal certainty to the realm of AI regulation within the EU. Originating from the European Commission’s proposal of April 2021, the Act aims to propel the development of a unified market for safe and dependable AI applications, setting the stage for transformative advancements in the digital landscape.

Definition & Scope

Central to the Act is the expansive definition of an “AI System”, echoing the Organisation for Economic Co-operation and Development’s broad characterisation. The definition reads: “…a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments…”. Importantly, the Act’s scope extends beyond the EU’s borders, encompassing all actors in the AI supply chain, including deployers situated outside the EU, as long as their AI system affects individuals within EU Member States.

Classification & Structure

The Act classifies AI systems according to the risks posed by their use. There are four tiers of risk. The highest tier covers uses that pose an unacceptable threat to people’s security and fundamental rights, such as social scoring and the biometric categorisation of individuals according to race, gender, sexuality, or political or religious beliefs; these uses are banned outright. The second, and perhaps most significant, tier identifies “high-risk” AI systems, subjecting them to a host of ongoing obligations, including testing, risk mitigation, human oversight, data governance, cybersecurity, accuracy, robustness, detailed documentation and notification. The third and fourth tiers consist of limited-risk systems, on which minimal transparency requirements are imposed to strike a balance between innovation and regulatory oversight, and minimal- or no-risk AI systems, for which voluntary codes of conduct are encouraged.

High-risk AI systems must undergo a pre-market conformity assessment to certify their adherence to EU-approved technical standards. Whilst most high-risk AI systems can be self-assessed by their providers in this regard, there are instances where third-party conformity assessment by accredited bodies is required.

Annexes II & III list specific high-risk AI systems, and the EU Commission holds the power to continually update these lists. Annex II includes all AI systems that are incorporated as a safety component within products that are regulated under separate EU legislation, such as medical devices, toys, civil aviation and motor vehicles. Annex III lists AI systems that, overall, pose a substantial risk to people’s security, health or fundamental human rights, such as AI systems that determine access to and the enjoyment of essential private services, including those systems that evaluate creditworthiness and pricing of life and health insurance, AI systems that determine access to education and assessment of students, biometric identification and management of critical infrastructure.

In response to the meteoric rise of general-purpose AI models (“GPAI Models”) such as OpenAI’s ChatGPT and Google’s Bard, the Act also caters for the systemic risks that may result from such models. Whereas all developers of GPAI Models need to provide relevant information to downstream providers, developers of GPAI Models that carry significant systemic risks are subject to comprehensive pre- and post-market obligations intended to ensure the responsible deployment of these systems. These include obligations to perform routine model evaluations, to conduct adversarial testing of the models to better understand their strengths and weaknesses, and to report serious incidents.

Penalties

The Act establishes a robust penalty framework, delineating three categories of fines based on the severity of the breach or non-compliance. Fines are calibrated to reflect the nature of the violation, ranging from €7.5 million to €35 million or from 1.5% to 7% of the non-compliant party’s total annual turnover, whichever of the two amounts is higher.

Timeline

While the exact timeline for implementing the Act’s provisions is pending confirmation, expectations point towards a gradual entry into force, likely commencing between the second and third quarters of 2024. The prohibition on banned AI systems is slated to take effect six months later, with the Act anticipated to become fully operational between the second and third quarters of 2026, reflecting a measured and deliberate approach to ensure effective implementation and industry adaptation.

In summary, the European Union Artificial Intelligence Act stands as a pioneering regulatory endeavour, poised to navigate the complexities of the evolving AI landscape. By addressing varying risk levels, promoting transparency, and establishing a framework for accountability, the Act seeks to strike a delicate balance between fostering innovation and safeguarding fundamental rights and security within the EU’s digital ecosystem.

Notwithstanding the Act’s comprehensive nature, it will not exist in a regulatory vacuum. In other words, AI developers’ and deployers’ adherence to existing EU legislation concerning, inter alia, cybersecurity, product liability, data protection and privacy, will persist even after the Act comes into force.


Authors: Paul Micallef Grimaud & Matthias Grech


This article was first published in the Times of Malta on 24/01/2024.