Artificial intelligence (AI) in pop culture has rarely been depicted in a trust-building light; it is almost always cast as a disruption, if not a catastrophe. In a recent survey, more than 72% of Americans expressed worry about a future in which machines perform many human jobs.
AI thrives on processing large volumes of data to deliver focused, targeted solutions. In April 2021, the European Commission (EC) unveiled a legal framework for AI, the Artificial Intelligence Act (AI Act), the first of its kind. The AI Act aims to build an ecosystem of trust: a legal framework within which people can use AI-based solutions with confidence, while businesses are encouraged to develop them.
When it comes to technology, Europe has made no secret of its desire to export its values across the world, at least at the level of principles. Like the General Data Protection Regulation (GDPR), which has become the conventional norm, the AI Act could set a global precedent, determining to what extent AI may seep into our general day-to-day functioning, or whether it will remain limited to automated use by larger entities. The AI Act is already making waves internationally: in late September 2021, Brazil's Congress passed a bill creating a legal framework for artificial intelligence.
Needless to say, AI is expected to bring a wide array of economic and societal benefits across sectors. However, the implications of AI systems for the protection of fundamental rights under the EU Charter of Fundamental Rights, as well as the safety risks users face when AI technologies are embedded in products and services, prompted the development of a 'human-centric' legal framework focused on the specific uses of AI systems and the risks attributed to them.
The AI Act defines mandatory requirements applicable to the design and development of AI systems before they are placed on the market. It applies to providers of AI systems established within the EU or in a third country who place AI systems on the EU market or put them into service in the EU, as well as to users of AI systems located within the EU. Notably, it also applies to providers and users of AI systems located in a third country where the output produced by those systems is used in the EU. However, the AI Act does not apply to AI systems developed or used exclusively for military purposes, nor to public authorities in a third country or international organisations using AI systems in the framework of international agreements for law enforcement and judicial cooperation.
As far as the definition of AI is concerned, the EC has chosen not to define AI itself but rather 'AI systems'. Under the AI Act, an AI system is software developed with one or more of the listed techniques and approaches (machine learning; logic- and knowledge-based approaches; statistical approaches) that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments it interacts with.
The AI Act pursues the objective of developing an ecosystem of trustworthy AI. To address the risks of potential biases, errors, and opacity that can adversely affect a number of fundamental rights, it follows a risk-based approach whereby legal intervention is tailored to the level of risk. The AI Act differentiates between AI systems posing (i) unacceptable risk, (ii) high risk, (iii) limited risk, and (iv) low or minimal risk. Under this approach, AI systems are regulated only as strictly as necessary to address the specific level of risk.
Unacceptable risk: Prohibited AI practices
Article 5 of the AI Act explicitly bans harmful AI practices considered a clear threat to people's safety, livelihoods, and rights because of the 'unacceptable risk' they create. Accordingly, it prohibits placing on the market, putting into service, or using in the EU:
- AI systems that deploy harmful manipulative 'subliminal techniques';
- AI systems that exploit specific vulnerable groups (persons with physical or mental disabilities);
- AI systems used by public authorities, or on their behalf, for social scoring purposes; and
- 'Real-time' remote biometric identification systems in publicly accessible spaces for law enforcement purposes, except in a limited number of cases.
High risk: Regulated high-risk AI systems
Article 6 of the AI Act regulates ‘high-risk’ AI systems that create an adverse impact on people’s safety or their fundamental rights. The AI Act distinguishes between two categories of high-risk AI systems:
- High-risk AI systems used as a safety component of a product or as a product falling under Union health and safety harmonisation legislation (e.g. toys, aviation, cars, medical devices, lifts); and
- High-risk AI systems deployed in eight specific areas identified in Annex III of the AI Act, which can be updated as necessary by way of a delegated act: biometric identification and categorisation of natural persons; management and operation of critical infrastructure; education and vocational training; employment, worker management and access to self-employment; access to and enjoyment of essential private services and public services and benefits; law enforcement; migration, asylum and border control management; administration of justice and democratic processes.
Providers of high-risk AI systems are required to register their systems in an EU-wide database managed by the EC before placing them on the market or putting them into service, and to comply with the other obligations stipulated under the AI Act.
Limited Risk: Transparency obligations
AI systems presenting 'limited risk' include systems that interact with humans (e.g. chatbots), emotion recognition systems, biometric categorisation systems, and AI systems that generate or manipulate image, audio, or video content (i.e. deepfakes). Such AI systems are subject to a limited set of transparency obligations.
Low or minimal risk: No obligations
AI systems posing low or minimal risk may be developed and used in the EU subject only to minimal information obligations. However, the AI Act envisions the creation of codes of conduct to reinforce the trustworthiness of such AI systems.
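By way of illustration only, the four-tier structure described above can be sketched as a simple lookup. The Act defines these tiers legally, not programmatically; the function and dictionary names below are hypothetical, and the example use cases are drawn from the categories the Act enumerates.

```python
# Illustrative sketch of the AI Act's risk-based approach: example use
# cases (paraphrased from the draft's categories) mapped to the four
# risk tiers. Hypothetical names; not part of the Act itself.
RISK_TIERS = {
    "unacceptable": [  # Article 5: prohibited practices
        "subliminal manipulation",
        "exploitation of vulnerable groups",
        "public-authority social scoring",
        "real-time remote biometric identification for law enforcement",
    ],
    "high": [  # Article 6 / Annex III areas
        "biometric identification and categorisation",
        "critical infrastructure management",
        "education and vocational training",
        "employment and worker management",
        "essential private and public services",
        "law enforcement",
        "migration, asylum and border control",
        "administration of justice",
    ],
    "limited": [  # transparency obligations only
        "chatbot",
        "emotion recognition",
        "deepfake generation",
    ],
}

def risk_tier(use_case: str) -> str:
    """Return the tier of a known example use case, else 'minimal'."""
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    # Anything not captured above falls into the low/minimal tier,
    # which carries no specific obligations under the draft.
    return "minimal"
```

The point of the sketch is the regulatory gradient: obligations attach only at the tier a use case falls into, with everything uncategorised defaulting to minimal risk.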
Governance and Penalty Regime
The AI Act requires the appointment of national competent authorities to implement the Act, one of which is to be designated as the national supervisory authority and act as a market surveillance authority. Consistent application of the AI Act is to be ensured by a European Artificial Intelligence Board, chaired by the EC.
Should providers act in contravention of the AI Act, they face administrative fines of up to EUR 30,000,000 or, if the offender is a company, up to 6% of its total worldwide annual turnover for the preceding financial year.
Criticism by Relevant Stakeholders
In September 2021, the European Economic and Social Committee published an opinion recommending that third-party assessments be made obligatory for all high-risk AI (as opposed to the suggested self-assessments) and that a complaint and redress mechanism for organisations and citizens that have suffered harm from an AI system be brought within the scope of the AI Act. The European Data Protection Supervisor ("EDPS"), which is set to become the new AI regulator for the EU, also called for a moratorium on the use of remote biometric identification systems in publicly accessible spaces. The EDPS emphasized a stricter approach to automated recognition in public spaces of human features such as faces, fingerprints, DNA, voice, keystrokes, and other biometric or behavioural signals.
Evolving Issues and Challenges
On December 8, 2021, interested stakeholders – the International Federation of Robotics, the VDMA Robotics + Automation Association, EUnited Robotics and REInvest Robotics – called on European policymakers to revisit and amend the proposed regulations. In October 2021, MedTech Europe, the European trade association representing the medical technology industry, also called for a review of the proposed law. Major European and U.S. companies, individually and through trade associations such as Digital Europe, have voiced concerns about the AI Act. According to a report by the Center for Data Innovation, the regulations enumerated under the AI Act will not only limit AI development and use in the EU but also impose significant costs on EU businesses and consumers.
The definition of an AI system is an area of concern for businesses: it is very broad and covers far more than what is commonly understood as AI, including the simplest search, sorting, and routing algorithms, which would consequently become subject to the new rules. Further, the AI Act provides no recourse for people to raise complaints about the impact of an AI system on them, in contrast to the GDPR; affected consumers would presumably have to rely on privacy, consumer protection, or competition laws, depending on the nature of their grievance. There are also concerns that the AI Act does not account for the needs of small-scale providers and start-ups, and that the obligations attached to high-risk AI, which include extensive mandatory third-party certifications, could be burdensome for such enterprises.
Additionally, the powers conferred on the market surveillance authorities are very wide: they may take all appropriate corrective action to bring an AI system into compliance, including but not limited to withdrawing it from or recalling it on the market, commensurate with the nature of the risk.
The AI Act has attracted both applause and heavy criticism. While critics point to its ambiguous, complex regulatory and technical requirements and broad territorial scope, supporters praise it for prohibiting certain AI systems and for the mandatory rules it imposes on providers and users of AI, including risk-assessment obligations.
AI is all but omnipresent. It shapes the information one sees online by predicting what content will be engaging, and it captures and analyses facial data to enforce laws or personalise advertisements. From Apple's Siri and Google Now to Amazon's Alexa and Microsoft's Cortana, AI is woven into every aspect of life. The extensive AI Act is an attempt to address the risks stemming from the various uses of AI systems while promoting innovation in the field of AI.
Ms. Shubhangi Agarwal, Senior Associate
She graduated from ILS Law College, Pune, in 2018 and joined Obhan & Associates, where she became acquainted with the practical aspects of corporate and intellectual property law. Additionally, she has advised an enterprising, entrepreneurial organization, Devise Electronics Private Limited, Pune.
She has experience with multiple due diligence exercises across industries and has worked on transactional matters involving joint ventures and M&A. She has also appeared before the Trademarks Office, Mumbai, defending clients' interests against show-cause notices issued after examination of trademark applications. She has cleared several non-fiction manuscripts before publication.
 AI Act, Title I – General Provisions, Article 2.1
 AI Act, Title I – General Provisions, Article 2.3
 AI Act, Title I – General Provisions, Article 2.4
 AI Act, Title I – General Provisions, Article 3.1
 AI Act, Explanatory Memorandum, Clause 5.2.2
 AI Act, Annex III – High Risk AI systems Referred to in Article 6(2)
 AI Act, Title IV – Transparency Obligations For Certain AI Systems, Article 52
 AI Act, Explanatory Memorandum, Clause 5.2.2
 AI Act, Title VI- Governance, Chapter 2, Article 59
 AI Act, Title VI- Governance, Chapter 2, Article 59
 AI Act, Title VI- Governance, Chapter 1, Article 56
 AI Act, Title X, Confidentiality and Penalties, Article 71