-
What are your country's legal definitions of “artificial intelligence”?
The Italian draft bill on AI, which has been approved by the Senate (Senato della Repubblica) and is currently under review by the Chamber of Deputies (Camera dei Deputati), adopts the definition set out in Article 3(1) of Regulation (EU) 2024/1689 (“AI Act”). Accordingly, the applicable definition is the one provided by the AI Act, which defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.
The European Commission provides an interpretation of this definition in its “Guidelines on the Definition of an Artificial Intelligence System under the AI Act”.
-
Has your country developed a national strategy for artificial intelligence? If so, has there been any progress in its implementation? Are there plans for updates or revisions?
Yes, Italy has developed a national strategy for AI. In April 2024, AgID (Agenzia per l’Italia Digitale, i.e., the Italian authority competent for various digital matters) and the Department for Digital Transformation (a department within the Presidency of the Council of Ministers) published an executive summary of the Italian AI Strategy, which outlines the framework of Italy’s AI strategy for the period 2024-2026, structured around four key pillars:
- Scientific Research: The strategy emphasizes enhancing the national AI research ecosystem by fostering collaboration among universities, research centers, and businesses. It aims to support the development of innovative startups, attract and retain talent, and promote advanced AI research.
- Public Administration: The plan includes using AI to improve the efficiency of public administration and provide better services to citizens. This involves developing AI systems for interoperability, ensuring proper data management, and training public personnel in AI.
- Business and Industry: The strategy aims to integrate AI into Italy’s industrial and entrepreneurial sectors, especially within SMEs, to boost competitiveness and innovation. It supports collaboration between ICT companies and research institutions, enhances regulatory and certification processes, and promotes AI adoption among SMEs through funding and development of AI solutions.
- Education and Training: Addressing the shortage of AI skills, the strategy proposes enhancing AI education across all levels, from schools to PhD programs. It includes initiatives for upskilling and reskilling workers in both the public and private sectors and promoting AI literacy among the general population.
The strategy also underscores the importance of ethical AI, focusing on privacy, security, gender issues, and environmental sustainability. It aims to ensure AI development and deployment adhere to these principles.
The strategy is supported by public investments and involves multiple stakeholders, including the Ministry of Enterprises and Made in Italy, the Ministry of University and Research, and the Ministry of Technological Innovation and Digital Transition.
The document provides a comprehensive understanding of the Italian government’s AI objectives and lays the groundwork for the domestic AI Bill, which is designed to complement the AI Act by addressing specific sectors within Italy.
-
Has your country implemented rules or guidelines (including voluntary standards and ethical principles) on artificial intelligence? If so, please provide a brief overview of said rules or guidelines. If no rules on artificial intelligence are in force in your jurisdiction, please (i) provide a short overview of the existing laws that potentially could be applied to artificial intelligence and the use of artificial intelligence, (ii) briefly outline the main difficulties in interpreting such existing laws to suit the peculiarities of artificial intelligence, and (iii) summarize any draft laws, or legislative initiatives, on artificial intelligence.
Italy has taken steps to address the regulation of AI through a combination of national guidelines and alignment with EU initiatives, in particular in the context of the AI Act.
National Rules and Guidelines
1. National Strategy for Artificial Intelligence (AI) 2024-2026
As mentioned in the answer above, AgID published the executive summary of the National Strategy for AI, which outlines the strategic vision for the development and use of AI in Italy, focusing on four pillars (scientific research, public administration, business and industry, and education and training). The strategy includes non-binding guidelines and policy measures, including ethical recommendations on AI use, a focus on transparency and fairness, and public-private partnerships. The strategy is not legally binding, but aims to guide national efforts and funding allocations (e.g., via Italy’s implementation of Next Generation EU, the Piano Nazionale di Ripresa e Resilienza (“PNRR”) – the National Recovery and Resilience Plan).
2. Guidelines by the Italian Data Protection Authority
The Italian Data Protection Authority has issued guidelines on the ethical and legal use of AI, emphasizing the importance of data protection and privacy. In particular, the Authority has published a Decalogue for the implementation of national health services through AI systems, and the Guidelines for defending personal data from web scraping. Furthermore, the Authority has conducted investigations into ChatGPT, DeepSeek and other generative AI models for compliance with the GDPR. The investigation and subsequent sanction against OpenAI — one of the first enforcement actions in the EU for GDPR violations by a large language model (LLM) provider — has been highly influential. The case demonstrated the determination of the Italian Authority to enforce the GDPR’s applicability in the complex context of generative AI, drawing significant public and regulatory attention across the EU and setting an early precedent for enforcement in this area.
Existing Laws applicable to AI
1. AI Act
The EU AI Act, which applies in Italy given its status as an EU Member State, establishes a harmonized legal framework for AI across the EU. The AI Act adopts a risk-based approach to regulation, categorizing AI applications based on their potential risk to individuals and society. High-risk AI systems are subject to stringent requirements, including mandatory risk assessments, data governance standards, and transparency obligations.
2. GDPR and Italian Data Protection Law
The GDPR, which applies across the European Economic Area, including Italy, together with Legislative Decree No. 196/2003 (the so-called “Privacy Code”), sets out a comprehensive data protection framework that also applies to AI systems – during training, implementation, and/or deployment of the system – whenever personal data are being processed.
3. Consumer Protection and Product Liability Laws
Italian consumer protection laws, which transpose applicable EU consumer protection directives, apply to AI systems to ensure that they do not deceive or harm consumers. These laws mandate transparency and fairness in services and products offered to consumers, including by means of AI systems, or when the product or service itself is an AI system.
4. Anti-Discrimination Laws
Italy has an extensive body of anti-discrimination law, deriving from the transposition of EU anti-discrimination directives. This body of law applies to AI systems used to take decisions affecting natural persons, where such decisions unlawfully discriminate against the person concerned on the basis of one or more of the so-called “protected grounds” set forth by law (e.g., race, sexual orientation, age), which may vary according to the sector in which the AI system is deployed (e.g., an employment relationship, the provision of services, etc.).
Main Difficulties in applying existing Laws to AI
Traditional legal frameworks are often ill-suited to the autonomous and complex nature of AI systems, which can operate without direct human intervention. Applying existing laws (e.g., the general liability provisions of Italy’s Civil Code and Consumer Code) to determine accountability and liability for actions taken by AI systems is therefore challenging.
Furthermore, existing laws were not designed with AI in mind, leading to ambiguities in their application to AI-specific scenarios, such as decision-making transparency and the handling of biased or discriminatory outputs from AI systems.
Draft Laws and Legislative Initiatives
Italian AI Bill
The Italian AI Bill represents Italy’s most comprehensive legislative effort to regulate AI in alignment with the AI Act. Approved by the Senate on 20 March 2025 and pending full parliamentary adoption, the Bill outlines a normative framework encompassing general principles, sector-specific rules, supervisory mechanisms, and delegated powers to the government for further legislative development.
The legislation is built around an anthropocentric approach to AI, promoting responsible innovation while protecting fundamental rights. It mandates compliance with EU rules and integrates core values such as transparency, non-discrimination, human oversight, and cybersecurity. In particular, it mandates that AI systems must be developed and deployed in accordance with constitutional freedoms, the EU Charter of Fundamental Rights, and key democratic principles. The Bill also establishes that the Italian government will support AI-driven economic growth, encourage innovation ecosystems, and facilitate access to high-quality datasets for research and industry.
Sector-specific applications are also addressed. In healthcare, AI is framed as a support tool to enhance diagnostics, treatment, and patient care, without replacing human decision-making, and special safeguards are imposed for minors and individuals with disabilities. In the labour sector, AI must uphold workers’ rights, and a newly established observatory will monitor its impact on employment. The use of AI by liberal professions (e.g., lawyers, architects, medical doctors) is further regulated, including disclosure obligations vis-à-vis the client where the professional wishes to leverage AI in the context of their work. The Bill also sets boundaries for the use of AI in the judiciary, reserving decision-making powers exclusively to judges, and in public administration, where AI is intended as a tool to streamline services without displacing human accountability.
To implement and oversee the AI framework, the Bill designates AgID (Agenzia per l’Italia Digitale) and ACN (Agenzia per la Cybersicurezza Nazionale) as national authorities. These bodies are charged, respectively, with fostering AI innovation and with supervising cybersecurity and market compliance. Sector-specific authorities are also provided for, in particular Banca d’Italia and CONSOB in the banking and financial sectors, and IVASS in the insurance field. Furthermore, the Bill provides for experimental AI use in areas such as foreign affairs and justice, and empowers the government to issue legislative decrees defining the legal regime for algorithmic training data, establishing criminal and civil liability for harm caused by AI, and promoting AI education and literacy.
-
Which rules apply to defective artificial intelligence systems, i.e. artificial intelligence systems that do not provide the safety that the public at large is entitled to expect?
In Italy, the legal framework governing defective AI systems is influenced by several areas of law, including consumer protection, product liability, and special regulations.
In Italy, AI systems embedded as components in products, or AI systems that are themselves products, are subject to the Italian product liability framework when they cause damage to individuals. Today, this framework is governed by Legislative Decree No. 206 of 2005 (the “Consumer Code”), which implements both the EU Product Liability Directive (Directive 85/374/EEC) and the EU General Product Safety Directive (Directive 2001/95/EC).
However, it should be noted that in November 2024 this framework was revamped at the EU level by the entry into force of Directive (EU) 2024/2853. EU Member States, including Italy, must transpose the rules set forth by the new Directive into national law by 9 December 2026 at the latest. The new framework is meant to modernize the Union’s product liability rules, adapting them to the complex chains of responsibility that derive from the data-driven economy. As such, it expressly tackles software, AI systems, and connected products (IoT), setting forth rules that are more favorable to the claimant in terms of both the burden of proof and disclosure rights vis-à-vis the manufacturer.
Still, the current framework – pending the transposition of the new Directive – holds producers and suppliers liable for any damage caused by defective products, which may include AI systems. Consumers can seek compensation from the producer if the AI system does not meet the safety standards they are entitled to expect.
Under the current framework, AI systems are not explicitly covered by liability regulations, even though they can cause damage when used on their own or when integrated into a broader product (e.g., an AI system used as a safety component of a vehicle). Existing rules do not fully address AI-related complexities: for example, the development-risk liability exemption may be ill-suited to AI, and identifying the liable party can be difficult since the rules focus mainly on the producer. Directive (EU) 2024/2853 addresses these issues by expressly including software, AI systems, and AI-enabled goods among “products”, and by easing the burden of proof (i.e., the need, in order to obtain compensation, to prove that the product was defective and that the defect caused the damage suffered), which is challenging for injured persons to satisfy in complex cases (e.g., when the damage is caused by AI-embedded products). The revision is therefore intended to encourage the roll-out and uptake of such new technologies, including AI, while ensuring that claimants enjoy the same level of protection irrespective of the technology involved.
Aside from civil liability, providers and deployers of AI systems may face administrative fines where they fail to comply with applicable rules.
In this respect, although not yet fully applicable, the EU AI Act creates a comprehensive regulatory framework for AI across the EU, including Italy. The regulation adopts a risk-based approach, classifying AI systems into different risk categories and imposing varying levels of obligations and requirements on providers and deployers to ensure safety and compliance. It provides for pecuniary sanctions of up to EUR 35 million or 7% of a company’s global annual turnover, whichever is higher; however, it does not establish an ad hoc right to compensation for individuals who have suffered damage from AI systems.
Data protection remedies can also be invoked by an individual (as “data subject”) who has suffered damage from an AI system as a result of data protection violations occurring where the AI system has processed their personal data. Pecuniary sanctions under the GDPR may reach up to EUR 20 million or 4% of a company’s global annual turnover, whichever is higher. Where the individual has suffered damage as a result of such violations, the GDPR provides for a right to compensation.
-
Please describe any civil and criminal liability rules that may apply in case of damages caused by artificial intelligence systems. Have there been any court decisions or legislative developments clarifying liability frameworks applied to artificial intelligence?
In Italy, while there is no specific liability regime for AI systems, both civil and criminal liability rules may apply when damage is caused by such systems. A combination of traditional legal principles and rules (such as special liability for defective products, liability for dangerous activities, fault-based liability, and liability for things in custody) and recent legislative developments — particularly the 2025 Italian draft AI Bill — shapes the current and future liability framework. Below is an overview.
Civil liability for damages caused by AI systems in Italy has traditionally been governed by a combination of contractual and extra-contractual rules, notably:
- Art. 1218 of the Italian Civil Code (contractual liability): Establishes the debtor’s liability for failure to perform exactly as promised, unless the debtor can prove that the non-performance was due to an event beyond their control (force majeure).
- Art. 2043 of the Italian Civil Code (general clause for tort-based liability): Outside of a contractual relationship, any person who causes unjust harm to another through intentional misconduct or negligence is obliged to compensate for the damage.
- Art. 2050 of the Italian Civil Code (liability for dangerous activities): Applies when the activity involving AI systems is considered inherently dangerous due to the technology’s complexity and potential risks. The operator is liable unless they prove they took all appropriate measures to prevent harm.
- Art. 2051 of the Italian Civil Code (liability for things in custody): May be applied to AI systems considered as “things” under custody, whereby the custodian (e.g., developer, deployer, or user with effective control) is liable for damages unless they prove that the damage resulted from an unforeseeable and unavoidable event (force majeure).
- Product liability rules under Legislative Decree No. 206/2005 (Consumer Code), implementing Directive 85/374/EEC (under revision in light of Directive (EU) 2024/2853): The producer (including manufacturers of software and AI components) is strictly liable for damages caused by product defects, regardless of fault.
The 2025 draft AI Bill introduces a delegation to the Government to create new rules for both:
- The use of data, algorithms, and training techniques (Art. 16);
- Liability for unlawful development or use of AI systems (Art. 24, para. 3).
In exercising the delegation, the Government shall comply with the following principles and guiding criteria:
- Provision of instruments, including precautionary measures, aimed at preventing the dissemination and ensuring the removal of unlawfully generated content, including that created using artificial intelligence systems, supported by a system of effective, proportionate, and dissuasive sanctions;
- Introduction of distinct criminal offences, punishable by intent or negligence, focused on the failure to adopt or update security measures for the production, distribution, and professional use of artificial intelligence systems, where such omissions result in a concrete danger to life, public or individual safety, or national security;
- Clarification of the criteria for attributing criminal liability to natural persons and administrative liability to entities for offences relating to artificial intelligence systems, taking into account the actual level of control exercised by the agent over said systems;
- In cases of civil liability, provision of mechanisms to protect the injured party, including through specific rules on the allocation of the burden of proof, taking into account the classification of artificial intelligence systems and the corresponding obligations as identified by Regulation (EU) 2024/1689.
Concerning criminal liability, the draft Italian AI Bill seeks to introduce a new criminal offence into the Italian Criminal Code (Art. 612-quater, titled “Illicit dissemination of artificially generated or manipulated content”), which provides as follows (authors’ translation):
Anyone who causes unjust harm to a person by disclosing, publishing, or otherwise disseminating – without their consent – images, videos, or audio recordings that have been falsified or altered through the use of artificial intelligence systems and are capable of misleading others about their authenticity, shall be punished with imprisonment from one to five years […].
In this regard, to trigger the criminal provision it is arguably necessary that harm to others (e.g., economic, reputational, moral) occurs through the sending, delivery, assignment, publication, or dissemination of AI-generated audio or audiovisual material. Also, in order to protect other constitutionally guaranteed rights (e.g., freedom of expression, the right to satire), a constitutive element of the offence is the suitability of the material to mislead as to its genuineness or origin.
A similar amendment is proposed for the offence of corporate or banking market rigging (Article 2637 of the Civil Code).
The draft AI Bill also seeks to introduce a new aggravating circumstance under Art. 61 of the Criminal Code, applicable when AI is used as an “insidious means” in the commission of a crime. The aggravating circumstance can attach to any existing criminal offence, leading to a higher sentence for the offender(s) where AI is used in this manner.
As of the time of writing, there have been no landmark court decisions from Italian courts that comprehensively address the liability of AI developers, deployers, or users.
-
Who is responsible for any harm caused by an AI system? And how is the liability allocated between the developer, the deployer, the user and the victim?
In case of non-contractual liability, responsibility for damages is generally determined based on fault or negligence attributed to the party whose actions or omissions contributed directly or indirectly to the harm. Responsibility can thus fall upon the producer, developer, deployer, supplier, or user – any entity within the so-called “liability chain”, depending on the case. Specifically, both the providers (developers or suppliers) and deployers (operators integrating the AI system into products or processes) can be responsible for harm caused by an AI system. Within the “liability chain”, a harmed party may seek compensation from entities higher in the chain (e.g., the user may seek compensation from the supplier, the supplier from the developer or producer).
Under existing civil law principles, the developer of an AI system may be liable if the AI system is defective according to product liability regulations (as implemented in Italian law by Legislative Decree No. 206/2005, the Consumer Code), i.e., when AI systems are included as components of a product, or where the AI system is itself a product that is sold on the market. Such liability applies particularly if the AI system fails to meet legitimate safety expectations, or if the system was negligently designed, trained, tested, or validated, leading to foreseeable and preventable harm. Typically, the burden of proof initially rests on the victim, who must demonstrate the existence of a defect, the damage sustained, and a causal link between the defect and damage.
The deployer (e.g., a business or organization that integrates and operates the AI system) could be held liable for negligence related to the deployment or improper integration of the AI system, insufficient supervision, inadequate human oversight, or failure to apply necessary safety measures as indicated by standards or guidelines.
Liability may also attach to the user in cases where the AI system is utilized contrary to its prescribed terms, guidelines, or intended purpose, or when the user overrides safety mechanisms, fails to monitor adequately, uses the AI recklessly, or contributes directly through negligent or intentional misconduct.
Currently, the victim bears the evidentiary burden of establishing causation, except where strict liability provisions explicitly apply. Establishing causality is notably challenging with autonomous and opaque AI systems, whose limited explainability significantly complicates determining fault and responsibility. This difficulty is a central rationale behind forthcoming regulatory reforms: Directive (EU) 2024/2853, which entered into force in November 2024 and must be transposed by Member States, including Italy, by 9 December 2026 at the latest, expressly tackles software, AI systems, and connected products (IoT), setting forth rules that are more favorable to the claimant in terms of both the burden of proof and disclosure rights vis-à-vis the manufacturer.
The Italian draft AI Bill seeks to introduce significant delegation powers to the Italian Government, enabling it to adopt a more structured, balanced, and effective liability framework specifically tailored to address the complexities and peculiarities associated with AI-driven damages.
-
What burden of proof will have to be satisfied for the victim of the damage to obtain compensation?
In civil liability scenarios, several key conditions must typically be met for someone to be held responsible for damages: proof of fault, proof that damage occurred, and a direct causal relationship between the fault and the damage. Generally, the burden of proof falls on the victim who suffered the damage. In cases involving defective products, however, this allocation changes, as explained below.
According to Art. 114 of the Italian Consumer Code, the producer is liable for damage caused by defects in their product. Liability is therefore not absolutely strict, but presumed: it is not based on fault but on the causal traceability of the damage to the presence of a defect in the product.
To hold a manufacturer accountable for damages caused by their defective products, the injured party must prove the defect, the damage, and the causal connection between them, as stated in Article 120(1) of the Consumer Code. The manufacturer, on the other hand, must demonstrate facts to exclude their liability, such as that the defect did not exist when the product was marketed, or that it arose from the need to comply with mandatory regulations. They can also refute the claims by showing that, based on technical knowledge available at the time of commercialization, the defect could not have been detected. If the dispute concerns a component, the manufacturer can prove that the defect is entirely due to the design of the product in which the part was incorporated or that it complied with instructions from the manufacturer who used it (Article 118).
National jurisprudence holds that liability for defective products is presumed and not strict, meaning it does not depend on proving the manufacturer’s fault but does require proof of the product’s defect and its causal link to the damage. The Consumer Code recognizes mixed and limited strict liability, easing the burden of proof for the injured buyer, while the manufacturer must show diligence in the design phase. The Supreme Court has clarified that even simple presumptions can be used to prove defectiveness if they are serious, precise, and consistent. Mere occurrence of damage does not suffice to prove defectiveness or the product’s danger in normal use conditions.
The manufacturer’s liability can also be excluded where the injured party fails to provide sufficient proof: for example, the damage must have occurred under normal conditions of use and in compliance with the instructions provided.
The right to compensation for damage caused by a defective product cannot be excluded or limited by any prior agreement. If the injured party is partially at fault, compensation is assessed according to Article 1227 of the Civil Code, and it is not due if the product was used with awareness of its defect and the associated risk. The right to compensation is subject to a three-year prescription period from the date the injured party knew or should have known about the damage, the defect, and the identity of the responsible party. If the damage worsens, the prescription period does not begin until the injured party becomes aware of the sufficiently serious damage. The right to compensation expires ten years after the product causing the damage was put into circulation, with the expiration prevented only by legal action against one of the responsible parties, without affecting others (Articles 124-126 of the Consumer Code).
Article 123 of the Consumer Code identifies two main types of damages caused by defective products: death or personal injury and the destruction or deterioration of property other than the defective product, provided it is typically intended for private use or consumption and was primarily used by the injured party.
-
Is the use of artificial intelligence insured and/or insurable in your jurisdiction?
Currently, there is no specific insurance coverage in Italy tailored to the use of AI systems. However, traditional insurance policies designed for civil liability, cybersecurity, product liability, and directors’ liability can be customized to address the risks associated with AI systems.
Under the draft AI Bill, in exercising its delegated powers the Government, in addition to observing the general principles and criteria set out in Article 32 of Law No. 234 of 24 December 2012, may make the amendments, additions, and repeals to current legislation, including insurance legislation, necessary to ensure proper and full compliance with the AI Act.
-
Can artificial intelligence be named an inventor in a patent application filed in your jurisdiction?
Where an invention is not the result of the activity of a human merely making use of an AI tool, nor the result of a complex and organized research and development activity, the following must be considered. AI systems lack “legal subjectivity”, which is necessary for authorship of an invention to be attributed to a “subject”: no matter how technologically innovative, an AI tool must be regarded as a “machine” without legal personality. Accordingly, and given the difficulty of attributing an invention to an AI system, the Italian and EU legal systems (e.g., the Italian Industrial Property Code) expressly provide that the inventor must be designated in the patent application by first and last name, which further confirms that the inventor must be a natural person.
All patent offices around the world that have addressed the issue have ruled out the possibility of patenting inventions that designate an AI system as the inventor. An invention made by AI always constitutes the result of the investment of a company or, in any event, of the person who created and/or instructed the AI system to behave and operate in a certain way.
To date, there is no evidence of an AI capable of creating independently of its initial programming.
Recent debates and legal cases, such as the “DABUS” case, have highlighted this issue internationally and have sparked discussion on whether AI should be recognized as an inventor given its increasing role in innovation. However, unless and until legislative changes explicitly allow AI inventorship, only natural persons can be named as inventors in patent applications in Italy.
-
Do images generated by and/or with artificial intelligence benefit from copyright protection in your jurisdiction? If so, who is the authorship attributed to?
The Italian draft AI Bill sets forth in Article 25 a set of rules that would introduce significant changes to the Italian Copyright Law (Law No. 633/1941). A major proposed amendment to Article 1 of the Copyright Law seeks to allow works generated with the assistance of AI to be protected by copyright, provided that (i) the works originate from human intellect, and (ii) the forms of expression generated with the assistance of AI are the result of the author’s intellectual work. While this provision aims to anchor a human-centric view of AI, it raises interpretive doubts, as it shifts the burden of proving the creativity and relevance of one’s contribution onto the author. Furthermore, the wording is arguably unclear, as it provides no guidance on how the author’s intellectual work should be evaluated, whether quantitatively or qualitatively.
The second point of interest in the draft Bill relating to copyright concerns the introduction of a new Article 70-septies into the Copyright Law, which mirrors the provisions of the AI Act on the extraction of text and data from third-party content for AI training. If this provision were to be adopted, authors wishing to opt-out and reserve rights over their content by prohibiting its use for AI system training would need to do so in a way that makes their choice machine-readable, i.e., understandable to web-crawlers, allowing them to automatically block the extraction of data, content, and information.
Besides the draft Bill, case law provides some insight into regulatory trends in Italy. More precisely, in Ruling No. 1107 of 9 January 2023, the Italian Supreme Court of Cassation held that the reproduction of an image constitutes infringement of the creator’s copyright even where the creative process was carried out with the aid of software. According to the Supreme Court, the use of digital technology in creating a work does not in itself preclude recognizing the work as a product of the intellect, except where the technology has predominantly absorbed the artist’s creative elaboration. The work is therefore protected where the human creative contribution is significant; where that contribution is marginal, traditional protections cannot be invoked. In the former case, both the rights to the economic exploitation of the work and the moral right to be recognized as its author are guaranteed; in the latter case, protection is more controversial.
-
What are the main issues to consider when using artificial intelligence systems in the workplace? Have any new regulations been introduced regarding AI-driven hiring, performance assessment, or employee monitoring?
Besides the issues already mentioned in relation to the use of AI, several labor law provisions must be considered when deploying AI systems in the workplace to ensure compliance.
Workplace surveillance & privacy
Italian labor law imposes strict limits on employers’ ability to monitor employees. Article 4 of the Workers’ Statute (Statuto dei Lavoratori) regulates the use of surveillance equipment, which may include AI systems that process workers’ personal data. The provision prohibits the use of any tool directly intended to monitor or surveil workers’ performance. It further mandates that systems which may indirectly entail the monitoring of workers may be installed only for specific and documented purposes (related to organizational or production needs, occupational safety, or the protection of company assets). Moreover, the deployment of such tools must always be subject to a prior agreement with trade union representatives or, where these are not present or an agreement is not reached, to prior authorization from the competent Labour Inspectorate (Ispettorato del Lavoro), at the local or national level depending on the circumstances.
Discrimination and Bias
AI systems must not be used in a way that discriminates against employees based on protected grounds such as race, gender, age, or religion, pursuant to Italian anti-discrimination law in the workplace, which transposes the relevant EU directives. Ensuring that AI algorithms do not perpetuate unlawful biases is crucial. AI systems used for performance evaluations must be fair, transparent, and non-discriminatory. Employers must ensure that such systems do not lead to discriminatory treatment of the worker.
Health and Safety
The use of AI should not create new health and safety risks for workers. Employers are responsible for ensuring a safe working environment, which includes assessing and mitigating risks associated with new technologies. The introduction of AI can lead to increased stress, anxiety or other kinds of negative emotions amongst employees, so employers should consider the psychological impact and provide appropriate support.
AI Act Compliance
The AI Act classifies as “high-risk” the AI systems used in the context of employment, workers’ management and access to self-employment, pursuant to Article 6(2) and Annex III, point 4.
As such, those AI systems must, inter alia, comply with strict requirements including comprehensive risk assessments, robust human oversight to ensure human intervention when necessary, thorough documentation and record-keeping obligations, accuracy, robustness, cybersecurity standards, transparency obligations regarding the functioning and decision-making processes of the AI, as well as ongoing monitoring and appropriate corrective actions when risks or malfunctions are identified.
The 2025 draft of the Italian AI Bill introduces a dedicated regulatory framework for the use of AI in the workplace (Articles 11-12), establishing clear principles aimed at ensuring that AI enhances, rather than undermines, workers’ rights. It mandates that AI systems be used to improve working conditions, productivity, and the protection of workers’ psychophysical integrity, while explicitly prohibiting practices that compromise human dignity or violate data protection and non-discrimination norms. The draft Bill requires employers to inform workers transparently when AI is used in employment contexts, including hiring and performance evaluation, and emphasizes the necessity of maintaining human oversight over all automated decision-making processes. Furthermore, the legislation establishes a national Observatory on AI in the labor market, tasked with monitoring the effects of AI deployment across sectors, promoting training initiatives, and guiding future labor strategies in alignment with evolving technological standards.
Transparency and Accountability
Employers must ensure that AI systems are transparent and that workers understand how these systems make decisions affecting them.
In Italy, Legislative Decree No. 104/2022 (the so-called “Transparency Decree”), transposing Directive (EU) 2019/1152, introduced specific transparency obligations for employers regarding the use of automated decision-making and monitoring systems. Employers must provide the worker with clear information about how these systems function, their purposes, the types of decisions they make, and their potential implications for employees.
In October 2024, the EU finalized the adoption of the Platform Work Directive (Directive (EU) 2024/2831), which introduces new rules requiring gig economy platforms to ensure, inter alia, appropriate human oversight of AI systems used to manage platform workers, as well as a right to an explanation of important decisions that directly affect workers. Platforms are also forbidden from processing certain types of workers’ personal data, such as data on a person’s emotional or psychological state and personal beliefs. The Directive must be transposed by Member States, including Italy, by 2 December 2026.
-
What privacy issues arise from the development (including training) and use of artificial intelligence?
Italy, as a member of the European Union, is subject to the GDPR. The use of AI must therefore comply with GDPR requirements, and with the complementary Italian privacy provisions (D.lgs 196/2003, the “Privacy Code”).
That said, it should be noted that not all AI systems leverage personal data. For those that do, the privacy issues fall into several areas:
- Security: AI systems often require large datasets, which may include personal information. Ensuring the security of this data is paramount to prevent breaches that could lead to unauthorized access, misuse, or loss of personal data.
- Legal basis for processing data: AI applications must have a legal basis for data processing, from training to deployment. Where consent is relied upon, ensuring it is genuinely informed and freely given is a significant challenge.
- Data Minimization and Purpose Limitation: According to GDPR principles, data collected should be adequate, relevant, and limited to what is necessary for the purposes for which it is processed. AI systems should not collect excessive data and must clearly define the purpose of data collection to avoid misuse.
- Transparency and Accountability: AI systems have to be transparent in their operations. This includes providing explanations about how AI decisions are made, which can be challenging given the complexity of many AI algorithms. Ensuring accountability involves keeping detailed records of data processing activities and decisions made by AI systems.
- Right to Rectification and Erasure: Under GDPR, individuals have the right to have inaccurate personal data rectified and to request the erasure of their data. Implementing these rights within AI systems can be complex, especially if the data has been integrated into decision-making processes.
- Automated Decision-Making and Profiling: GDPR restricts decisions based solely on automated processing, including profiling, that produce legal effects or similarly significantly affect individuals.
- Surveillance and Tracking: AI technologies, such as facial recognition and predictive analytics, can be used for extensive surveillance and tracking of individuals, leading to concerns about invasion of privacy.
-
How is data scraping regulated in your jurisdiction from an IP, privacy and competition point of view? Are there any recent precedents addressing the legality of data scraping for AI training?
Data scraping is not specifically regulated in Italy; rather, it falls under several different bodies of law, notably the legislation on personal data protection, unfair competition, and intellectual property, including copyright and the sui generis right protecting databases. For instance, EU law grants sui generis protection to database makers based on a substantial investment in obtaining, verifying, or presenting the database contents. Directive 96/9/EC grants database makers exclusive rights, allowing them to charge for database use and select licensees. In this scenario, legal disputes often arise over whether a scraped website constitutes a protected database, with courts assessing the substantiality of both the investment and the extraction.
Moreover, data scraping may be lawful under certain exceptions, most notably the text and data mining exception (see Directive (EU) 2019/790 on copyright in the Digital Single Market), which Italy incorporated in 2021 into Article 70-ter of the Italian Copyright Law (Law 22 April 1941, No. 633).
The text and data mining (TDM) exception allows the use of copyrighted works and other materials to automatically analyze large amounts of text and data in order to extract information, patterns, or trends.
Under EU and Italian law, there is a mandatory exception for scientific research by research organizations and a broader optional exception for other purposes, provided rights holders have not expressly reserved their rights. Italy has not yet produced landmark judgments specifically on data scraping for AI training. However, in May 2024 the Italian Data Protection Authority published a document titled “Guidance to Protect Personal Data from Web Scraping”, addressed to website owners (in their capacity as data controllers) and suggesting several measures to protect their content against web scraping.
-
To what extent is the prohibition of data scraping in the terms of use of a website enforceable?
A significant challenge in assessing the legality of data scraping lies in interpreting websites’ terms of service and determining whether they constitute enforceable contracts. Most scraping activities fall under ‘browsewrap’ agreements, raising questions about their enforceability. Courts have grappled with whether scrapers can be held liable for violating terms of service to which they never explicitly agreed, highlighting the complexities of regulating online behavior.
In general, a ban on web crawlers expressed by a website exclusively through the “robots.txt” file is not usually considered legally enforceable per se.
In any case, pursuant to Article 70-ter of the Italian Copyright Law (Law 22 April 1941, No. 633), in line with Directive (EU) 2019/790, website owners can legally prevent crawlers from extracting data from their websites by expressly reserving that right.
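By way of illustration, a website owner wishing to signal such a reservation to web crawlers might use robots.txt directives (which, as noted above, are a technical signal rather than a legally binding measure in themselves, and should be paired with an express reservation of rights, e.g. in the site’s terms of service). A minimal sketch, using user-agent tokens published by the respective crawler operators:

```text
# robots.txt — illustrative sketch of a machine-readable opt-out
# (technical signal only; pair with an express legal reservation of rights)

# OpenAI's web crawler
User-agent: GPTBot
Disallow: /

# Common Crawl's crawler
User-agent: CCBot
Disallow: /

# Google's control token for AI-training uses
User-agent: Google-Extended
Disallow: /
```

Whether such a file alone satisfies the “machine-readable” reservation contemplated by Article 70-ter and the AI Act remains debated; it is, however, the most widely recognized signal that crawlers can process automatically.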
-
Have the privacy authorities of your jurisdiction issued guidelines on artificial intelligence?
Yes. The Italian Data Protection Authority has issued guidelines on the ethical and legal use of AI, emphasizing the importance of data protection and privacy. In particular, the Authority has published a Decalogue for the implementation of national health services through AI systems. Moreover, in May 2024 the Authority published a document titled “Guidance to Protect Personal Data from Web Scraping”, addressed to website owners (in their capacity as data controllers) and suggesting several measures to protect their content against web scraping.
The Garante has actively participated in international discussions on AI governance. Notably, during the G7 Data Protection and Privacy Authorities Roundtable held in Rome in October 2024, the Authority coordinated discussions focusing on the role of data protection authorities in AI governance. The roundtable emphasized the importance of ensuring that AI development aligns with shared ethical and legal principles, particularly concerning the protection of minors and the need for trustworthy AI systems.
-
Have the privacy authorities of your jurisdiction discussed cases involving artificial intelligence? If yes, what are the key takeaways from these cases?
The Italian Data Protection Authority (Garante per la protezione dei dati personali) has taken several measures against OpenAI to ensure compliance with privacy regulations concerning its famous generative AI system ‘ChatGPT’. Here is a summary of the main actions taken:
- Temporary Suspension of ChatGPT: In March 2023, the Garante temporarily suspended the use of ChatGPT in Italy, citing concerns about the processing of personal data and the lack of adequate information provided to users and non-users whose data was being collected.
- Compliance Request: The Garante required OpenAI to provide more information on how personal data is collected and processed, and to adopt measures to ensure compliance with the General Data Protection Regulation (GDPR). This included the need to verify the age of users to prevent access by those under 13 years old.
- Changes to the Privacy Policy: Following the Garante’s requests, OpenAI implemented changes to its privacy policy to make it more transparent and understandable.
- Service Restoration: At the end of April 2023, after adopting the requested changes, the ChatGPT service was restored in Italy. Among other things, OpenAI introduced measures to facilitate the exercise of data subjects’ rights, implemented tools for age verification, and improved its privacy policy.
- Final Decision and Fine: On 2 November 2024, the Garante issued its final decision, imposing a € 15 million fine on OpenAI and requiring it to carry out a six-month public awareness campaign to inform the public about data collection for GPT training and individuals’ data protection rights. The fine was based on several GDPR violations ascertained since the first provisional decision of 2023, including lack of age verification, inaccurate outputs, processing without legal basis, inadequate privacy information, and failure to implement a prior awareness campaign. OpenAI was given 30 days to pay and 60 days to propose the campaign. During the investigation, OpenAI moved its EU headquarters to Ireland, making the Irish Data Protection Commissioner the lead authority for future matters. After the fine, OpenAI appealed to the Court of Rome, which provisionally suspended the Garante’s decision and fine on 21 March 2025, pending final judgment.
-
Have your national courts already managed cases involving artificial intelligence? If yes, what are the key takeaways from these cases?
To date, Italian courts have had a few occasions to deal with cases involving AI systems.
In the public sector, the Council of State examined a case concerning an administrative decision automated through an algorithm. In the specific case, the Council of State’s ruling No. 2270 of 2019 concerned a teacher hiring procedure handled entirely by an algorithm, which led to irrational outcomes, such as assignments that did not conform to preferences and appointments in distant provinces. The plaintiffs challenged the automated procedure for lacking transparency and reasoning, and for the absence of individual evaluation by a public official.
The Council of State upheld the appeal, pointing out the violation of the principles of impartiality and transparency. The court said that although the use of algorithms in administrative decisions can improve efficiency, this must be done within well-defined limits, ruling out automation in the presence of administrative discretion.
The Council of State also ruled that the algorithm must be “knowable” and understandable, and the responsibility for the decision must remain with the administration, which must ensure the transparency of the decision-making process. Finally, the Court emphasized the importance of legal oversight of algorithmic decisions, requiring a multidisciplinary approach and ensuring the “human in the loop” principle at every stage of the decision-making cycle.
In the private sector, the Court of Bologna upheld an appeal filed by workers’ unions against Deliveroo. The dispute concerned the algorithm used by the platform to organize the work of its riders. This algorithm, known as “Frank”, determined the distribution of work among riders in an automated manner, based on a system for booking work sessions and a score assigned to each rider. However, the algorithm penalized riders who did not comply with the booked work sessions, making it difficult for them to exercise their right to strike. The court ruled that the application of this algorithm was unlawful and ordered Deliveroo to pay damages to the plaintiff unions.
As of the time of writing, there are no reported criminal cases in Italy where AI was the core subject of the prosecution. However, the Italian AI Bill foresees new criminal offenses – including those involving unlawful dissemination of AI-generated content (deepfakes) or omission of security safeguards in the development and deployment of high-risk AI systems. These legislative changes are expected to lead to criminal liability court cases in the coming years.
-
Does your country have a regulator or authority responsible for supervising the use and development of artificial intelligence?
According to the draft AI Bill complementing the EU AI Act, Italy will have two agencies responsible for supervising the use and development of AI: AgID (Agenzia per l’Italia Digitale) and ACN (Agenzia per la Cybersicurezza Nazionale). In particular, AgID will act as the notifying authority, with competence for evaluating and appointing the entities tasked with assessing the conformity of high-risk AI systems, where required by the AI Act. ACN, on the other hand, will act as the Italian market surveillance authority, except in the banking sector, where this competence will be exercised by the Bank of Italy and CONSOB, and in the insurance sector, where it will be exercised by IVASS.
-
How would you define the use of artificial intelligence by businesses in your jurisdiction? Is it widespread or limited? Which sectors have seen the most rapid adoption of AI technologies?
As of 2025, the adoption of AI among Italian businesses is growing, though it remains uneven across different company sizes and sectors. According to the Italian National Institute of Statistics (ISTAT), only 8% of Italian enterprises were utilizing AI technologies in 2024, a figure that lags behind other European countries such as Germany, where the adoption rate was nearly 20%. This disparity is particularly evident among small and medium-sized enterprises (SMEs), which often face challenges such as limited digital skills and resources.
Conversely, larger Italian companies are more actively embracing AI. A study by Minsait revealed that 63% of large enterprises in Italy have already adopted or are planning to adopt AI technologies, with expectations of significant productivity gains. Sectors leading in AI adoption include manufacturing, finance, healthcare, and retail, where AI is applied for automation, predictive analytics, and customer engagement.
The Italian government is actively promoting AI development through substantial investments. Notably, Microsoft announced a €4.3 billion investment to enhance AI and cloud infrastructure in northern Italy, aiming to strengthen the country’s digital capabilities. Additionally, the establishment of the Italian Institute of Artificial Intelligence for Industry (AI4I) in Turin underscores the national commitment to advancing AI research and industrial application.
In summary, while AI adoption in Italy is progressing, particularly among large enterprises and in specific sectors, a significant gap remains among SMEs. Continued efforts in digital education, infrastructure development, and supportive policies are essential to foster broader AI integration across the Italian business landscape.
-
Is artificial intelligence being used in the legal sector, by lawyers and/or in-house counsels? If so, how? Are AI-driven legal tools widely adopted, and what are the main regulatory concerns surrounding them?
Italian law firms and legal departments are increasingly recognizing the benefits of integrating AI into their workflows. Some notable examples include:
- Legal Research: AI platforms can assist lawyers by providing comprehensive and accurate legal research. These tools can search vast legal databases, case law, statutes, and regulations to find pertinent information, saving time and improving the quality of research. For instance, there are AI-powered tools that summarize court rulings.
- Automation of Routine Tasks: Routine legal tasks such as drafting standard documents and organizing case files can be automated using AI, freeing up lawyers to focus on more complex and value-added activities. AI-powered tools can quickly review large volumes of documents, identify relevant information, and highlight potential risks or inconsistencies. This is particularly useful in due diligence processes in the context of mergers and acquisitions.
- Contract Analysis and Management: AI solutions can analyze and manage contracts by extracting key terms, identifying obligations, and flagging non-standard clauses. This helps streamline contract lifecycle management and ensure compliance with legal standards.
The use of AI in the Italian legal sector raises significant regulatory and ethical issues, particularly concerning data protection, confidentiality, accuracy, and accountability. Lawyers must comply with the GDPR and uphold professional secrecy, especially when using AI systems that process or store sensitive client data externally. There is concern about the accuracy and reliability of AI-generated outputs, as errors or “hallucinations” could expose professionals to liability.
Importantly, Article 13 of the 2025 draft Italian AI Bill addresses these issues in the context of intellectual professions, including legal services. It states that AI may only be used to support and assist professional activities, without replacing the predominant human intellectual input. To preserve the trust-based relationship between professional and client, the draft Bill requires that professionals clearly inform clients — using plain and clear language — about any AI systems used in the delivery of services. This provision reinforces transparency and professional responsibility in the adoption of AI tools, but raises questions of compatibility with the AI Act, which already regulates AI transparency without requiring similar disclosures.
-
What are the 5 key challenges and the 5 key opportunities raised by artificial intelligence for lawyers in your jurisdiction?
Challenges:
- Job displacement and role changes: AI’s potential to automate routine tasks may displace jobs or reshape roles, requiring lawyers to prepare for and manage this transition through adaptation and upskilling.
- Overreliance on potentially wrong outputs: Relying on AI-generated outputs without critical assessment can lead to erroneous legal advice and flawed strategies, jeopardizing client interests. Balancing AI insights with human expertise is crucial to maintaining reliable legal practice and mitigating AI’s limitations.
- Data protection and cybersecurity: Lawyers will increasingly handle matters in which data is processed by AI systems, while needing to maintain strict data privacy and security standards themselves, also taking into account the heightened cybersecurity risks inherent to AI.
- Legal liability: lawyers will address responsibility and accountability issues, including allocating liability among human operators, AI developers, and AI systems.
- Discrimination: AI algorithms may carry inherent biases that could lead to unfair or discriminatory outcomes, requiring mechanisms to ensure the transparency and explainability of AI decisions and to correctly allocate accountability among the stakeholders involved.
Opportunities:
- Enhanced Efficiency and Productivity: AI can automate routine and time-consuming tasks such as document review, legal research, and contract analysis, allowing lawyers to focus on more complex and strategic work and enhancing overall productivity while reducing operational costs.
- Predictive Analytics: AI enables lawyers to analyze legal data, extract valuable insights, and make informed predictions. For instance, AI can assist in determining the likelihood of success for actions, predicting rulings, and automating legal research and drafting. This can increase efficiency, reduce costs, and broaden access to legal services.
- Document Automation and Contract Analysis: AI-powered tools can help automate the drafting of legal documents, saving time and reducing certain human errors.
- Legal Research and Analysis: AI can assist lawyers in conducting legal research, analyzing precedents, and identifying relevant case laws or regulations. Legal research platforms can use AI to enhance the accuracy of legal research outcomes.
- New Practice Areas: AI creates new practice opportunities for lawyers in areas like legal tech consulting, AI policy and ethics, compliance automation, and intellectual property related to AI innovations. These areas allow lawyers to diversify their expertise and lead in shaping the legal landscape amidst technological advancements.
-
Where do you see the most significant legal developments in artificial intelligence in your jurisdiction in the next 12 months? Are there any ongoing initiatives that could reshape AI governance?
In the next 12 months, significant legal developments in AI in Italy are likely to focus on several key areas:
1. EU AI Act Applicability and Italian Draft AI Bill
Italian stakeholders already need to comply with the AI Act: its provisions on prohibited practices have been directly applicable since February 2025. Other provisions will become directly applicable in the coming months (regarding general-purpose AI models) and years (concerning the rules on high-risk AI systems).
In the meantime, Italy is in the process of approving its draft Bill on AI. AI operators established or operating in Italy will need to pay close attention to this Bill, as it will lay down some nuances regarding the Italian implementation of the AI Act.
2. Intellectual Property (IP) and AI
Questions around the ownership of AI-generated works and inventions are already becoming more pressing. Italian law may see updates or new interpretations in this area, especially concerning the protection of IP created by, or with the assistance of, AI.
3. AI in Employment and Labor
The impact of AI on the workforce will continue to be a significant area of legal concern. This includes issues related to AI-driven automation and worker rights. Legal frameworks might be updated to address the challenges and opportunities presented by AI in the labor market.
In this respect, in the upcoming months AI operators established or operating in Italy should monitor the Italian transposition of the Platform Work Directive (Directive (EU) 2024/2831), which introduces new rules requiring gig economy platforms to ensure, inter alia, appropriate human oversight of AI systems used to manage platform workers, as well as a right to an explanation of important decisions that directly affect workers. Platforms are also forbidden from processing certain types of workers’ personal data, such as data on a person’s emotional or psychological state and personal beliefs. The Directive must be transposed by Member States, including Italy, by 2 December 2026.
4. Sector-Specific Regulations
Different sectors such as finance and public administration may see specific AI regulations tailored to their unique needs and challenges. This could involve guidelines for the use of AI in financial services to prevent fraud, or regulations in the mobility sector to ensure safety and compliance with traffic laws.
Italy: Artificial Intelligence
This country-specific Q&A provides an overview of Artificial Intelligence laws and regulations applicable in Italy.
-
How would you define the use of artificial intelligence by businesses in your jurisdiction? Is it widespread or limited? Which sectors have seen the most rapid adoption of AI technologies?
-
Is artificial intelligence being used in the legal sector, by lawyers and/or in-house counsels? If so, how? Are AI-driven legal tools widely adopted, and what are the main regulatory concerns surrounding them?
-
What are the 5 key challenges and the 5 key opportunities raised by artificial intelligence for lawyers in your jurisdiction?
-
Where do you see the most significant legal developments in artificial intelligence in your jurisdiction in the next 12 months? Are there any ongoing initiatives that could reshape AI governance?