-
What are your country's legal definitions of "artificial intelligence"?
Currently, there is no universally recognized legal definition of "artificial intelligence" in PRC laws, administrative regulations, or departmental rules. However, recommended national standards and local regulations provide definitions of "artificial intelligence" based on its functionalities. For instance:
- Artificial intelligence systems are a class of engineering systems designed with specific goals defined by humans, generating outputs such as content, predictions, recommendations, or decisions. (GB/T 41867-2022 Information technology – Artificial intelligence – Terminology)
- Artificial intelligence refers to the theoretical framework, methods, techniques, and application systems that use computers or computer-controlled machines to simulate, extend, and enhance human intelligence. It involves perceiving the environment, acquiring knowledge, and using that knowledge to achieve optimal results. (Article 2 of the Shanghai Municipal Regulation on Promoting the Development of the Artificial Intelligence Industry)
In substance, artificial intelligence has extensive applications and may be involved in a variety of emerging scenarios. In light of this, China has adopted a legislative strategy that combines addressing "urgent needs" with "system construction." Apart from regulating application scenarios such as automated decision-making and personal identity recognition in public places under general legislation such as the Personal Information Protection Law, China has formulated specific regulatory rules for key domains of artificial intelligence application, including algorithmic recommendations (e.g., personalized content recommendations on short-video platforms), deep synthesis (e.g., deepfakes), and generative artificial intelligence (e.g., ChatGPT). These rules include definitions of the respective artificial intelligence applications.
-
Has your country developed a national strategy for artificial intelligence?
China attaches great importance to the development of the AI industry, which has been elevated to the level of national strategy. In July 2017, the central government promulgated the New Generation Artificial Intelligence Development Plan, which for the first time made systematic, nation-wide strategic arrangements for the development of the artificial intelligence industry. The plan identifies AI as a new engine of economic development and commits the state to vigorously developing the AI industry. At the level of national institutional arrangements, it calls for establishing AI-related legislation, regulations, and ethical norms; an AI safety monitoring and assessment system; and an AI technical standards and intellectual property rights system.
-
Has your country implemented rules or guidelines (including voluntary standards and ethical principles) on artificial intelligence? If so, please provide a brief overview of said rules or guidelines. If no rules on artificial intelligence are in force in your jurisdiction, please (i) provide a short overview of the existing laws that potentially could be applied to artificial intelligence and the use of artificial intelligence, (ii) briefly outline the main difficulties in interpreting such existing laws to suit the peculiarities of artificial intelligence, and (iii) summarize any draft laws, or legislative initiatives, on artificial intelligence.
Currently, China's legal framework for artificial intelligence is transitioning from addressing "urgent needs" to "constructing a comprehensive system." An Artificial Intelligence Law has recently been included in the legislative agenda. At the current stage, regulation of key domains and important issues in artificial intelligence applications relies primarily on specialized legislation.
In the field of cybersecurity and data protection, under the framework established by the Cybersecurity Law, Data Security Law, and the Personal Information Protection Law, China has formulated specialized regulations for particular high-risk artificial intelligence applications. For instance:
- The Administrative Provisions on Recommendation Algorithms in Internet-based Information Services ("Algorithm Recommendation Regulations"): These regulations establish requirements for information security management, the management of user models and tags, and the protection of user rights and interests. Importantly, providers of algorithmic recommendation services with public opinion attributes or social mobilization capabilities are required to file their algorithms with regulatory authorities and conduct security assessments in accordance with relevant laws and regulations. The scope of "algorithmic recommendation technology" defined in the regulations is broad, covering generation and synthesis, personalized pushing, ranking and selection, retrieval and filtering, and dispatching and decision-making technologies. For this reason, these regulations are generally considered the foundational rules for AI/algorithm governance in China at this stage.
- The Administrative Provisions on Deep Synthesis in Internet-based Information Services ("Deep Synthesis Regulations"): These regulations primarily govern deep synthesis technologies such as deepfakes, with provisions on information security accountability, labeling of deep synthesis content, and other related matters.
- The Provisional Measures for the Administration of Generative Artificial Intelligence Services ("Measures for Generative AI"): These measures primarily regulate model applications such as ChatGPT, with provisions on the legitimacy of training data processing activities, requirements for generated content, and the operational management of such services.
In the realm of science and technology ethics, within the framework established by the Science and Technology Progress Law and the Opinions on Strengthening the Governance of Science and Technology Ethics, China has been developing specific ethical norms for artificial intelligence. For instance:
- Measures for Ethical Review of Science and Technology (Trial Implementation) (Draft for Comments): This draft explicitly lists artificial intelligence as an area subject to ethical review and sets out rules on the subjects, procedures, contents, and standards of such review. It also includes specific AI research and development activities in the List of Scientific and Technological Activities Requiring Expert Review.
- Ethical Guidelines for Next-Generation Artificial Intelligence: This guideline establishes fundamental ethical norms for various AI activities and sets ethical requirements for the management, research and development, supply, and use of artificial intelligence.
-
Which rules apply to defective artificial intelligence systems, i.e. artificial intelligence systems that do not provide the safety that the public at large is entitled to expect?
China has not yet developed specific regulations targeting defective artificial intelligence systems. That said, in certain cases, artificial intelligence systems may be deemed as products, thereby subjecting them to the requirements of product liability as outlined in the Civil Code and the Product Quality Law.
-
Please describe any civil and criminal liability rules that may apply in case of damages caused by artificial intelligence systems.
Currently, China has few explicit special rules regarding civil liability for harm/damage caused by artificial intelligence systems. Damages resulting from artificial intelligence systems are generally handled under traditional civil tort rules, applying general principles of fault liability for most cases. However, in certain areas such as product liability, motor vehicle accidents, and medical malpractice, where there are specialized tort rules, those rules would be applied.
China has not yet established specific criminal rules targeting harm caused by artificial intelligence systems. However, if artificial intelligence systems are used as tools for criminal activities, the unlawful use of such systems may potentially constitute multiple criminal offenses under the Criminal Law, including but not limited to crimes such as “unlawfully obtaining computer information system data” and “infringing upon citizens’ personal information.”
-
Who is responsible for any harm caused by an AI system? And how is the liability allocated between the developer, the user and the victim?
As mentioned in the response to Question 5, harm caused by artificial intelligence systems is typically addressed using traditional civil tort rules. Under tort rules, liability is attributed to the party at fault, while special tort rules may impose different forms of liability, such as strict liability, depending on the specific domain.
In particular, certain local regulations have stipulated further provisions regarding liability attribution in the field of intelligent-connected vehicles. For example:
- Article 53 of the Regulations on the Management of Intelligent-Connected Vehicles in Shenzhen Special Economic Zone: If a traffic accident occurs involving an intelligent-connected vehicle with a human driver and the responsibility lies with that vehicle, the driver shall bear liability for compensation. If a fully autonomous intelligent-connected vehicle operating without a human driver causes damage in a traffic accident and the responsibility lies with that vehicle, the owner and operator of the vehicle shall bear liability for compensation.
- Article 43, Paragraph 2 of the Measures for the Testing and Application Management of Intelligent-Connected Vehicles in Shanghai: If an intelligent-connected vehicle operating in autonomous driving mode causes damage in a traffic accident and the responsibility is determined to lie with the vehicle, the entity conducting the testing and application activities shall bear the corresponding liability for compensation in accordance with the law, and may seek recourse from the relevant liable party in accordance with the law.
-
What burden of proof will have to be satisfied for the victim of the damage to obtain compensation?
China has not yet established specific rules for allocating the burden of proof in liability claims involving artificial intelligence. In legal disputes related to artificial intelligence, the general rule that "the burden of proof lies with the party making the claim" typically applies. However, there may also be instances where the burden of proof is reversed as explicitly stipulated by law. For example, under the Personal Information Protection Law, if a personal information processor causes harm to individuals' personal information rights and interests and cannot prove its lack of fault, it shall bear liability for damages and other infringement responsibilities.
-
Is the use of artificial intelligence insured and/or insurable in your jurisdiction?
China attaches great importance to the development of an artificial intelligence insurance system. According to the New Generation Artificial Intelligence Development Plan issued by the central government in July 2017, insurance is considered one of the effective approaches to addressing the social issues brought about by artificial intelligence. Local regulations such as the Shanghai Municipal Regulation on Promoting the Development of the Artificial Intelligence Industry and the Regulation on Promoting the Development of the Artificial Intelligence Industry in Shenzhen Special Economic Zone have further stipulated provisions encouraging the development of artificial intelligence insurance.
In addition, considering that accidents involving intelligent-connected vehicles may result in severe personal and property damages, intelligent-connected vehicles, as a high-risk scenario, are subject to mandatory insurance requirements. In July 2021, the Ministry of Industry and Information Technology and other regulatory authorities jointly promulgated the Management Specifications for Intelligent-Connected Vehicle Road Testing and Demonstration Applications (for Trial Implementation), which require operators of intelligent-connected vehicles in passenger demonstration applications to purchase necessary commercial insurance covering the occupants. Additionally, local regulations such as the Beijing Municipality Implementation Rules for the Management of Autonomous Vehicle Road Testing (for Trial Implementation) and the Regulations on the Management of Intelligent-Connected Vehicles in Shenzhen Special Economic Zone have also specifically stipulated mandatory insurance requirements.
It should be noted that, according to the Regulations on the Supervision of Liability Insurance Business, the scope of coverage for liability insurance may not include risks or losses relating to criminal fines, administrative penalties, and the like, so as to preserve the punitive effect of such sanctions.
-
Can artificial intelligence be named an inventor in a patent application filed in your jurisdiction?
It is the prevailing view that artificial intelligence cannot be named as an inventor in patent applications in China. According to the Guidelines for Patent Examination published by the China National Intellectual Property Administration ("CNIPA"), an inventor must be a natural person and must be identified by their true name in the application. In 2019, Stephen L. Thaler filed a patent application with the CNIPA naming the AI system "DABUS" as the inventor; however, the current status of that application cannot be determined through public inquiries.
-
Do images generated by and/or with artificial intelligence benefit from copyright protection in your jurisdiction? If so, who is the authorship attributed to?
Currently, it remains unclear whether AI-generated images can be protected by copyright under Chinese law. Regarding the copyright issues of AI-generated content (including but not limited to images), Chinese courts have shown inconsistent rulings in a limited number of cases:
- “Feilin Case”: In this case, the court held that the involvement of a natural person is a necessary condition for the creation of a work. Although there were natural persons involved in the development and use of the software in question, the generated article did not reflect the original expression of a natural person’s thoughts or emotions. Since the generated article was not created by a natural person, even if it had originality, it did not qualify as a work under the Copyright Law.
- “Tencent Case”: In this case, the court held that the processes involved in the generation of the articles in question, such as “inputting data types and processing data formats, setting trigger conditions, selecting article framework templates and language corpus, training intelligent verification algorithm models, etc.,” reflected the choices and arrangements made by the editing, product, and technical development teams of the Dreamwriter software. Even if there was a certain time gap between these choices and the actual generation of the articles, it did not hinder them from constituting written works protected under the Copyright Law.
-
What are the main issues to consider when using artificial intelligence systems in the workplace?
The use of artificial intelligence systems in workplaces may involve a variety of issues, including but not limited to:
- Issue of Trade Secrets. The system may collect confidential enterprise information during its operation, which poses a risk of information leakage. In April 2023, the Payment and Clearing Association of China issued a notice advocating caution among professionals in the payment industry when using intelligent tools (e.g., ChatGPT). The notice emphasized the importance of not uploading critical sensitive information and strengthening internal controls for information security.
- Issue of Personal Information Protection. The collection and processing of employee personal information shall comply with corresponding laws and regulations such as the Personal Information Protection Law. Specifically, personal information processing activities shall be based on an appropriate legal basis (e.g., obtaining consent of individuals, or being necessary for human resource management as per labor regulations and collective contracts signed in accordance with the law, etc.).
- Issue of Fairness. Artificial intelligence systems may exhibit discriminatory behavior, particularly in work scenarios such as recruitment and performance evaluation, which significantly affect the rights and interests of individuals. It is therefore important to ensure transparency in the process and fairness in outcomes. Under Article 24 of the Personal Information Protection Law, individuals have the right to refuse decisions made solely by automated decision-making that significantly affect their rights and interests.
- Compliance Obligations for Specialized Industries. The use of artificial intelligence systems in specialized industries, such as the news industry, requires adherence to compliance obligations of the specialized industry.
-
What privacy issues arise from the use of artificial intelligence?
Where the use of artificial intelligence involves the processing of personal information, it is necessary to comply with applicable data privacy regulations, such as the Personal Information Protection Law, which stipulates general requirements including, but not limited to, informing data subjects and obtaining a legal basis such as individual consent. Beyond these general requirements, specific application scenarios may involve additional considerations. For instance:
- Generative AI: Training data, user input, and generated content may all involve personal information. The Measures for Generative AI further clarify service providers' personal information protection obligations, including the legality of training data processing activities and of the content generated. They also stipulate specific requirements for service providers regarding the protection of user input and usage records.
- Personalized Recommendations: Activities such as advertising, marketing, and product recommendations based on user profiles can significantly affect users' rights and interests. The E-Commerce Law, the Personal Information Protection Law, and the Administrative Provisions on Recommendation Algorithms require service providers to implement measures safeguarding users' decision-making rights.
- Facial Recognition: The misuse of facial recognition technology has attracted considerable attention from regulatory authorities and the public in China, resulting in the introduction of multiple related regulations. For instance, Article 26 of the Personal Information Protection Law stipulates that the installation of any image-capturing or personal identification equipment in a public place must be necessary for maintaining public security and be accompanied by a prominent sign indicating the equipment. Additionally, Article 10 of the Provisions of the Supreme People's Court on Several Issues concerning the Application of Law in the Trial of Civil Cases involving the Processing of Personal Information Using Facial Recognition Technology stipulates that property owners or users have the right to refuse facial recognition as the sole means of verification for entry into service areas of the property.
-
What are the rules applicable to the use of personal data to train artificial intelligence systems?
Using personal data to train artificial intelligence systems, even without the intention of identifying specific natural persons, is generally considered "personal information processing" under the Personal Information Protection Law and is therefore subject to its provisions, including the requirement to have a legal basis for processing personal information.
Common legal bases that can be used for model training include “individual consent” and “processing publicly available personal information within a reasonable scope.” However, it is crucial to note that “legitimate interests” is not considered a legal basis under Chinese law. Additionally, if personal information can be anonymized before being used for model training, the requirements of the Personal Information Protection Law may not apply.
-
Have the privacy authorities of your jurisdiction issued guidelines on artificial intelligence?
A variety of departments, including the offices of the Cyberspace Administration, share responsibility for regulating personal information protection. The Cyberspace Administration of China is responsible for coordinating and overseeing overall personal information protection efforts and the relevant supervision. Other departments, such as the Ministry of Industry and Information Technology, the Ministry of Public Security, and the State Administration for Market Regulation, also have regulatory responsibilities within their respective areas of expertise. However, to date, none of these departments has issued specific guidelines on artificial intelligence.
-
Have the privacy authorities of your jurisdiction discussed cases involving artificial intelligence?
As one of the primary regulatory authorities for personal information protection, the offices of the Cyberspace Administration currently regulate the application of artificial intelligence through various means, including pre-assessment and filing, as well as post-incident enforcement inspections. For instance:
- As of June 2023, the Cyberspace Administration of China had issued four batches of domestic internet information service algorithm filing lists (containing 262 algorithms) as well as a filing list for domestic deep synthesis services (containing 41 algorithms). The filing process includes disclosing information such as the fundamental principles, operating mechanisms, application scenarios, and intended purposes of the filed algorithms.
- In March 2021, the offices of the Cyberspace Administration and the Ministry of Public Security held regulatory talks with 11 enterprises, including Alibaba, Tencent, and ByteDance, regarding voice-based social networking apps and applications involving "deepfake" technology that had not undergone security assessment procedures. Regulatory authorities urged these enterprises to conduct thorough security assessments, enhance risk prevention and control mechanisms, and promptly remediate any security vulnerabilities identified during the assessments.
-
Have your national courts already managed cases involving artificial intelligence?
In the typical cases published by the Supreme People’s Court, there are several specific applications of artificial intelligence (AI), such as:
- “AI Companion Software” Infringing Personality Rights Case: In this case, the court recognized that projecting a comprehensive image of a natural person, including their name, likeness, and personality traits, onto an AI character involved the individual’s personality rights. The inclusion of features like “training” also touched upon the individual’s dignity and the interest in having their personality respected.
- Privacy Infringement by Facial Recognition Device Case: In this case, the court ruled that the video doorbell installed by the defendant, which had dual modes of facial recognition and backend control, permitted the collection of private information and activities inside the plaintiff's residence and therefore infringed upon the plaintiff's privacy rights. The case emphasized the priority given to protecting privacy rights when the use of AI devices conflicts with the enjoyment of privacy.
- Guo Bing vs. Hangzhou Wildlife World Service Contract Dispute (“First Facial Recognition Dispute”): In this case, the court recognized that biometric information belongs to sensitive personal information, and operators can only collect and use it with the informed consent of consumers, following the principles of legality, legitimacy, and necessity. The unilateral change of the verification method from fingerprint recognition to facial recognition by the operator constituted a breach of contract. Consumers had the right to request the deletion of personal information corresponding to the breach.
Additionally, the two AI-generated content copyright disputes mentioned in question 10 were also recognized by the Supreme People’s Court as exemplary cases.
-
Does your country have a regulator or authority responsible for supervising the use and development of artificial intelligence?
China currently lacks a unified regulatory department for AI, and relevant regulatory authorities are divided among various departments, including the Cyberspace Administration, the Ministry of Industry and Information Technology, the Ministry of Public Security, the Ministry of Science and Technology, the State Administration for Market Regulation and the National Development and Reform Commission. Here is a breakdown of their respective roles:
- Cyberspace Administration: The Cyberspace Administration takes the lead in formulating regulations such as the Regulations on Algorithm Recommendations and the Regulations on Deepfake Technologies. They are responsible for coordinating and supervising governance and regulatory work related to these regulations, playing an important role in artificial intelligence regulation in China.
- Ministry of Industry and Information Technology: The Ministry of Industry and Information Technology is primarily responsible for industry management in the fields of telecommunications, internet services, network security, electronic information manufacturing, software industry, and other related areas.
- Ministry of Public Security: The Ministry of Public Security is mainly responsible for public security management and combating crime. For example, the Gansu Public Security Bureau cracked the first domestic case of using artificial intelligence technology to fabricate false information in May 2023.
- Ministry of Science and Technology: The Ministry of Science and Technology has drafted the Measures for Ethical Review of Science and Technology (Draft for Public Comments), which includes ethical review requirements for AI technology activities.
- State Administration for Market Regulation: The State Administration for Market Regulation is primarily responsible for comprehensive market supervision and enforcement. For instance, the use of algorithmic recommendation services to engage in monopolistic or unfair competition practices falls within their regulatory purview.
- National Development and Reform Commission: The National Development and Reform Commission is mainly responsible for formulating and implementing strategies on national economic and social development, medium and long-term development plans and annual plans. The National Data Administration is one of its subordinate agencies, tasked with coordinating and promoting the construction of basic data systems, integrating and sharing data resources, and facilitating the development and utilization of data.
-
How would you define the use of artificial intelligence by businesses in your jurisdiction? Is it widespread or limited?
Currently, artificial intelligence is widely applied in business scenarios in China, as the number and variety of filed algorithms demonstrate. Under the Regulations on Algorithm Recommendations, providers of algorithm recommendation services with public opinion attributes or social mobilization capabilities are obligated to file their algorithms with regulatory authorities. Similarly, under the Regulations on Deepfake Technologies, technology providers of deep synthesis services are also required to fulfill the algorithm filing obligation.
According to the list of filed algorithms published by the Cyberspace Administration, as of June 2023, a total of 303 algorithms had been filed. The main filing entities are internet companies such as Alibaba and Tencent, and the application scenarios cover areas such as information recommendation, ranking and sorting, information retrieval, order dispatch, image and video editing, content filtering, and intelligent customer service.
-
Is artificial intelligence being used in the legal sector, by lawyers and/or in-house counsels? If so, how?
Due to the strict requirements for authenticity, accuracy, and logical reasoning in the legal industry, people tend to approach the application of artificial intelligence in the legal field with caution. Currently, AI applications in the legal industry primarily focus on providing services such as legal research and question answering, as well as document generation and translation.
-
What are the 5 key challenges and the 5 key opportunities raised by artificial intelligence for lawyers in your jurisdiction?
For the legal profession, artificial intelligence technology is a double-edged sword, with both pros and cons to its influence. The key lies in lawyers correctly understanding the role of AI and making reasonable use of it to maximize its benefits:
- Employment and Workforce: The use of artificial intelligence to replace basic legal tasks may lead to an overall contraction in the legal job market, but it also offers the potential for law firms to reduce labor costs.
- Legal Application and Interpretation: The emergence of new legal issues brought about by the development of artificial intelligence presents challenges to lawyers’ learning abilities and knowledge transfer, but it also opens up possibilities for expanding into emerging practice areas.
- Information Acquisition and Discernment: Using artificial intelligence tools like ChatGPT to answer legal questions can significantly reduce lawyers' information-gathering costs. However, it may also expose them to the risk of being misled by "hallucinated" information.
- Service Risks and Benefits: Providing legal services to clients using artificial intelligence technology can involve information security risks, but it also offers the potential to reduce service costs and enhance the overall client experience.
- Work Capabilities and Efficiency: The use of artificial intelligence as an assistant in legal work may lead to over-reliance on artificial intelligence and a decline in fundamental legal skills. However, it can also help lawyers improve work efficiency and focus their energy on creative tasks.
-
Where do you see the most significant legal developments in artificial intelligence in your jurisdiction in the next 12 months?
The legislative progress of the Artificial Intelligence Law in China is worth paying close attention to in the upcoming year. In May 2023, the central government released the Legislative Work Plan of the State Council for 2023, which includes the preparation to submit the draft of the Artificial Intelligence Law for review by the Standing Committee of the National People’s Congress.
The Artificial Intelligence Law is expected to serve as a foundational legal framework for the governance of artificial intelligence in China, aiming to establish a unified framework for the classification and regulation of artificial intelligence in the country.
China: Artificial Intelligence
This country-specific Q&A provides an overview of Artificial Intelligence laws and regulations applicable in China.