Focus on…
Data Protection Law in China – China’s First Generative Artificial Intelligence Regulation

The explosive growth of generative AI technology since late 2022 has left regulators worldwide racing to adopt legislation in what was a largely unregulated area, attempting to balance robust and adequate legal safeguards with sufficient breathing space for creativity, innovation and commercialization. In line with these global efforts, on July 10, 2023, China’s top internet watchdog, the Cyberspace Administration of China (“CAC”), together with six other regulators, issued its highly anticipated generative artificial intelligence (“AI”) regulation, the Interim Measures on the Management of Generative Artificial Intelligence Services (the “Measures”).
The Measures, which take effect on August 15, 2023, follow a draft released for public consultation in April 2023. They strike a softer regulatory tone than that draft, suggesting that regulators will support and encourage the development of generative AI technology in China while still mandating a range of stringent security obligations.
This article provides an overview of the Measures and discusses their implications for companies operating in the generative AI space in China.
1. Regulatory Approach
Responsive and prudent regulation
In China, as elsewhere, the generative AI industry is still at an early stage of its technological evolution. There is not yet a consensus on the ultimate trajectory of AI technology, or on the boundaries of its application across industrial sectors and daily life. Consequently, the Measures outline an inclusive and prudent regulatory framework, leaving flexibility in scope, institutional design and compliance requirements, and thereby giving generative AI in China space to adapt to the developing trends of the global AI industry.
At the same time, the term “interim” indicates that the Measures were formulated in response to the rapid advancement of generative AI and are transitional in nature. Once the Measures are in effect, the Chinese government can evaluate them and revise them as required. The State Council has also included a draft Artificial Intelligence Law in its 2023 legislative work plan, to be submitted to the Standing Committee of the National People’s Congress for deliberation. In the future, generative AI in China may therefore be governed by a holistic legal framework anchored in an umbrella statute.
Classified and graded supervision
Much like the European Union’s proposed Artificial Intelligence Act, which categorizes AI systems into unacceptable-risk, high-risk, limited-risk and minimal-risk tiers with different compliance requirements and obligations for each, the Measures recognize that generative AI services differ in model structure, training data and possible applications, and may therefore pose different levels and types of risk. The Measures accordingly institute a system of classified and graded supervision, a relatively common approach of Chinese regulators when dealing with emerging fields of technology.
Under this system, different types of generative AI services are subject to different regulatory requirements. Depending on the type of services provided, these requirements may include undergoing a security assessment with the CAC and/or filing the proprietary algorithms used in the technology with the regulator (both requirements are elaborated below).
2. Scope of Application
Generative AI providers
The Measures target generative AI services, with the providers of those services as the primary focus of supervision. Under the Measures, generative AI service providers include not only entities that offer services directly using generative AI technology, but also those that enable others to generate text, images, audio and other content, for example through the provision of application programming interfaces (APIs).
“Public-facing” element
To fall within the regulatory scope of the Measures, generative AI services must be provided to the public in China. Entities that merely research and develop generative AI technology, or that do not provide generative AI services to the public in China, are therefore not subject to the Measures.
Extraterritorial reach
The Measures also reach generative AI services originating from outside China. As long as the services are provided to the public in China, they are subject to the Measures regardless of where the provider is located. The CAC is empowered to suspend or terminate network access for offshore providers of generative AI services to the public in China where such services violate Chinese laws, administrative regulations or the provisions of the Measures.
3. Compliance Obligations of Generative AI Service Providers
Pre-trained data and foundation models
According to Article 7 of the Measures, during pre-training and optimization training, generative AI providers must use training data and foundation models that come from legitimate sources. Generative AI providers will therefore need to examine the legality and legitimacy of their training data sources from multiple perspectives, including, for example, respect for the intellectual property rights of others and a lawful basis for processing any personal information contained in the data.
Content management obligations
Article 9 of the Measures explicitly provides that generative AI providers should assume responsibilities as “network information content producers” and fulfill obligations related to network information security. Although the Measures do not specify these responsibilities or obligations, it can be inferred (by reference to similar wording in related Chinese regulations) that generative AI providers are responsible for the content they generate and shall ensure that it does not contain illegal or harmful information explicitly prohibited by relevant laws or regulations.
Further, Article 14 of the Measures requires generative AI providers to monitor for and rectify illegal content. Upon detecting illegal content, providers should promptly take the necessary steps to stop its generation and dissemination and avoid further harm, correct their algorithms and optimize their models to address the problem, and report the measures taken to the regulators.
Data protection obligations
Article 11 of the Measures requires generative AI providers to protect the information that users input when using their services, as well as users’ usage records. Because input information may contain trade secrets, personal information or other sensitive information, providers must not unlawfully provide such information or records to others, nor unlawfully retain input information or usage records that can be used to identify users.
When collecting personal information, generative AI providers must also fulfill the responsibilities of personal information handlers under China’s Personal Information Protection Law. They must inform individuals of, and explain, their data collection and processing activities, obtain consent from the individuals whose information is collected, ensure the security of the data, promptly respond to and address requests from individuals to access, copy, correct, supplement or delete their personal information, and adhere to the principle of data minimization.
4. Security Assessment and Algorithm Filing
Security assessment
It is noteworthy that the Measures do not indiscriminately require security assessments for all generative AI services. They require a security assessment and algorithm filing only for “generative AI services with attributes of public opinion or social mobilization.” This rule echoes the risk classification approach discussed earlier, and similar requirements for security assessment and algorithm filing can be found in, for example, China’s Provisions on the Administration of Deep Synthesis of Internet-based Information Services (the “Deep Synthesis Provisions”), which have become commonly known outside China (in somewhat oversimplified terms) as China’s deepfake rules.
In practice, “generative AI services with attributes of public opinion or social mobilization” likely refer to information services such as forums, blogs, microblogs, chat rooms, communication groups, public accounts, short videos, live streaming, information sharing, and mini-programs. It may also include other internet information services that provide channels for public opinion and expression or have the ability to mobilize the public for specific activities.
The security assessment outlined under the Measures focuses on the compliance of training data, data security and content management. When preparing a submission, generative AI providers will need to assemble sufficient evidence and explanation in these areas and then submit a security assessment report to the CAC and public security authorities at the municipal level or above. Multinationals operating in China that have recently undergone regulatory security assessments of their outbound cross-border data transfers will have a sense of the potential complexity and detail of the reports and submissions required.
Algorithm filing
The algorithm filings required under the Measures must follow China’s existing algorithm rules. China’s Provisions on the Administration of Algorithm Recommendation of Internet Information Services stipulate that providers of algorithm recommendation services with attributes of public opinion or social mobilization must complete the filing procedures through the official online filing system within ten working days of beginning to provide the relevant service.
To complete the required filing, generative AI providers need to establish an internal algorithm security body, formulate adequate internal rules and procedures, conduct a self-assessment of algorithm security, and prepare a series of technical materials for submission. Providers that have completed the filing should also prominently display the filing number on their websites. To date, the CAC has published lists of algorithm filings on three occasions, in August and October 2022 and in January 2023, covering more than two hundred filings in total.
Further, on June 20, 2023, the CAC published an announcement of algorithm filing information under the Deep Synthesis Provisions, disclosing the filings made for domestic deep synthesis services. According to the announcement, a total of 41 deep synthesis service algorithms were included, covering popular applications from Chinese tech giants such as Baidu, Meituan and Alibaba.
5. Conclusion
In response to the explosive growth of AI technology in recent years, China has taken a proactive regulatory approach to generative AI services through the Measures. The Measures aim to strike a balance between the need for legal safeguards and the encouragement of development and innovation in generative AI technology. Despite the rigorous security obligations, there is considerable room for innovation and commercialization, presenting significant opportunities for businesses entering China’s burgeoning generative AI sector. For companies aiming to maintain a competitive edge in this space, a thorough understanding of, and compliance with, the Measures will be crucial to navigating this evolving and intricate market.
***