China’s Interim Measures for the Management of Generative AI Services: A Comparison Between the Final and Draft Versions of the Text
Authors: Yirong Sun and Jingxian Zeng
Edited by Josh Lee Kok Thong (FPF) and Sakshi Shivhare (FPF)
The following is a guest post to the FPF blog by Yirong Sun, research fellow at the Guarini Institute for Global Legal Studies at NYU School of Law: Global Law & Tech, and Jingxian Zeng, research fellow at the University of Hong Kong Philip K. H. Wong Centre for Chinese Law. The guest blog reflects the opinions of the authors only. Guest blog posts do not necessarily reflect the views of FPF.
On August 15, 2023, the Interim Measures for the Management of Generative AI Services (Measures) – China’s first binding regulation on generative AI – came into force. The Interim Measures were jointly issued by the Cyberspace Administration of China (CAC), along with six other agencies, on July 10, 2023, following a public consultation on an earlier draft of the Measures that concluded in May 2023.
This blog post is a follow-up to an earlier guest blog post, “Unveiling China’s Generative AI Regulation” published by the Future of Privacy Forum (FPF) on June 23, 2023, that analyzed the earlier draft of the Measures. This post compares the final version of the regulation with the earlier draft version and highlights key provisions.
Notable changes in the final version of the Measures include:
- A shift in institutional dynamics, with the CAC playing a less prominent role;
- Clarification of the Measures’ applicability and scope;
- Introduction of responsibilities for users;
- Introduction of additional responsibilities for providers, such as taking effective measures to improve the quality of training data, signing service agreements with registered users, and promptly addressing illegal content;
- Assignment of responsibilities to government agencies to strengthen the management of generative AI services; and
- Introduction of a transparency requirement for generative AI services, in addition to the existing responsibilities for providers to increase the accuracy and reliability of generated content.
Introduction
The stated purpose of the Measures, a binding administrative regulation within the People’s Republic of China (PRC), is to promote the responsible development and regulate the use of generative AI technology, while safeguarding the PRC’s national interests and citizens’ rights. Notably, the Measures should be read in the context of other Chinese regulations addressing AI and data, including the Cybersecurity Law, the Data Security Law, the Personal Information Protection Law, and the Law on Scientific and Technological Progress.
Central to the Measures is the principle of balancing development and security. The Measures aim to encourage innovation while also addressing potential risks stemming from generative AI technology, including the manipulation of public opinion and the dissemination of sensitive or misleading information at scale. The Measures also:
- Address a range of societal concerns, including data breaches, fraudulent activities, privacy violations, and intellectual property infringements,
- Provide mechanisms for oversight inspections, the right to file complaints, and penalties for non-compliance, and
- Coordinate different stakeholders involved in generative AI.
The next section provides some context on the finalization process of the Measures.
The final Measures were shaped significantly by private and public input
The initial draft of the Measures was released for public consultation on April 11, 2023. Following the conclusion of the consultation period on May 10, 2023, the final version of the Measures received internal approval from the CAC on May 23, 2023, and were subsequently made public on July 10, 2023 before formally coming into force on August 15, 2023.
Several significant changes in the final version of the Measures appear attributable to feedback from various industry stakeholders and legal experts. These include leading tech and AI companies such as Baidu, Xiaomi, SenseTime, YITU, Megvii, and CloudWalk, as well as research institutes affiliated with authorities such as the MIIT. The stakeholders’ input, including public statements on the draft Measures (which were referred to in FPF’s earlier guest blog), appears to have played a role in influencing the revisions made in the final version of the Measures.
In addition, certain changes may also have been influenced by industry policies and standards at the central and local government levels. In particular, between May 2023 and July 2023, China’s National Information Security Standardization Technical Committee (also known as “TC260”) published two “wishlists” (here and here), outlining 48 upcoming national recommended standards. Among these standards, three were specifically focused on generative AI, with the aim of shaping the enforcement of the requirements specified in the final version of the Measures.
The next few paragraphs highlight changes to the overall contours of the Measures.
A key change in the final Measures is the allocation of regulatory responsibility for generative AI
A major difference between the draft and final versions of the Measures is in the allocation of administrative responsibility for generative AI. The final version of the Measures allows for greater collaboration amongst public institutions compared to the draft version, with the CAC playing a less prominent role. The other six agencies involved in issuing the final version of the Measures are the National Development and Reform Commission (NDRC); the Ministry of Education; the Ministry of Science and Technology (MoST); the Ministry of Industry and Information Technology (MIIT); the Ministry of Public Security; and the National Radio and Television Administration.
Notably, the task to promote AI advancement amid escalating concerns is to be overseen by authorities other than the CAC, such as MoST, MIIT, and NDRC.
Another significant difference is the inclusion of three pro-business provisions – namely, Articles 3, 5, and 6 – in the final version of the Measures. These Articles provide as follows:
- Article 3: “The state is to adhere to the principle of placing equal emphasis on development and security, merging the promotion of innovation with governance in accordance with law; employing effective measures to encourage innovation and development in generative AI, and carrying out tolerant and cautious graded management by category of generative AI services.” [emphasis added]
- Article 5: “Encourage the innovative application of generative AI technology in each industry and field, generate exceptional content that is positive, healthy, and uplifting, and explore the optimization of usage scenarios in building an application ecosystem.
Support industry associations, enterprises, education and research institutions, public cultural bodies, and relevant professional bodies, etc. to coordinate in areas such as innovation in generative AI technology, the establishment of data resources, applications, and risk prevention.” [emphasis added]
- Article 6: “Encourage independent innovation in basic technologies for generative AI such as algorithms, frameworks, chips, and supporting software platforms, carry out international exchanges and cooperation in an equal and mutually beneficial way, and participate in the formulation of international rules related to generative AI.
Promote the establishment of generative AI infrastructure and public training data resource platforms. Promote collaboration and sharing of algorithm resources, increasing efficiency in the use of computing resources. Promote the orderly opening of public data by type and grade, expanding high-quality public training data resources. Encourage the adoption of safe and reliable chips, software, tools, computational power, and data resources.” [emphasis added]
These provisions impose fewer obligations on generative AI service providers than those in the draft version of the Measures. They emphasize the balance between development and security in generative AI, the promotion of innovation while ensuring compliance with the law, support for the application of AI across industries to generate positive content, and collaboration among various entities. They also emphasize independent innovation in AI technologies, international cooperation, and the establishment of infrastructure for sharing data resources and algorithms.
These shifts may be attributed to the above-mentioned feedback received on the draft version of the Measures from industry stakeholders and legal experts.
This article now turns to changes in specific provisions in the final Measures and their implications.
1. The Measures see significant changes in respect of their domestic and extraterritorial applicability
The Measures narrow the scope of “the public” by excluding certain entities and service providers not offering services in the PRC
The Measures apply to organizations that provide generative AI services to “the public in the territory of the People’s Republic of China”. While the Measures do not define “generative AI services”, Article 2 clarifies that the Measures apply to services that use models and related technologies to generate text, images, audio, video, and other content.
The Measures appear to address some concerns raised in the previous article about the ambiguity surrounding the undefined term “public”. For example, one of the questions raised in the previous article (in respect of the draft Measures) was whether a service licensed exclusively to a Chinese private entity for internal use would fall within the scope of the Measures, considering scenarios where a generative AI service might be made available only to certain public institutions or customized for individual customers. The Measures appear to partially address this ambiguity by removing certain entities from the scope of “the public”. Specifically, Article 2 now clarifies that the Measures do not apply to certain entities (industrial organizations, enterprises, educational and scientific research institutions, public cultural institutions, and related specialized agencies) if they research, develop, and use generative AI technologies but do not provide generative AI services to the public in the PRC. Further clarification may be found in an expert opinion published on the CAC’s public WeChat account supporting the internal use of generative AI technologies and the vertical supply of generative AI technologies among these entities.
This change also significantly narrows the scope of the Measures compared with other existing Chinese technology regulations. In comparison, the rules on deep synthesis and recommendation algorithms apply to any service that uses generative AI technologies, regardless of whether these services are used by individuals, enterprises or “the public”.
Future AI regulation in China may not share the Measures’ focus on “the public”. For instance, the recent China AI Model Law Proposal, an initiative of the Chinese Academy of Social Sciences (CASS) and a likely precursor to a more comprehensive AI law, does not appear to have such a limitation on its scope.
The Measures now have extraterritorial effect to address foreign provision of generative AI services to PRC users
The Measures also appear to have been tweaked to apply extraterritorially. Specifically, Article 2 provides that the Measures apply to a generative AI service so long as it is accessible to the public in the PRC, regardless of where the service provider is located.
This change appears to have been prompted by users circumventing restrictions that overseas generative AI service providers imposed to avoid the application of Chinese regulations. Specifically, to avoid compliance with Chinese regulators, several foreign generative AI service providers have limited access to their services from users in the PRC, such as by requiring foreign phone numbers for registration or requiring international credit cards during subscription. In practice, however, users have been able to access the services of these foreign generative AI service providers by following online tutorials or purchasing foreign-registered accounts on the “black market”. For example, though ChatGPT does not accept registrations from users in China, ChatGPT logins were available for sale on Taobao shortly after its initial release. Such activity has drawn the attention of the Chinese government, which took enforcement action against such platforms even before the Measures were formulated.
In practice, the CAC is expected to adopt a “technical enforcement” strategy against foreign generative AI services. Article 20 of the Measures empowers the CAC to take action against foreign service providers that do not comply with relevant Chinese regulations, including the Measures. Under this provision, the CAC may notify relevant agencies to take “technical measures and other necessary actions” to block Chinese users’ access to these services. A similar provision is found in Article 50 of the Cybersecurity Law, which addresses preventing the spread of illegal information outside of the PRC.
2. The Measures relax providers’ obligations while assigning users with new responsibilities
As elaborated below, the CAC adjusted the balance of obligations between generative AI service providers and users in the final version of the Measures. To recap, Article 22 of the final version of the Measures defines “providers” as companies that offer services using generative AI technologies, including those offered through application programming interfaces (APIs). It also defines “users” as organizations and individuals that use generative AI services to generate content.
The Measures adopt a more relaxed stance on generative AI hallucination
The Measures seek to address hallucinations of generative AI in two ways.
First, the Measures shift providers’ obligations from outcome-based to conduct-based. Whereas the draft version of the Measures adopted a strict compliance approach, the final version focuses on the actions that generative AI service providers take to address hallucinations, imposing a more flexible duty of conduct. In the draft version of the Measures, Article 7 required providers to ensure the authenticity, accuracy, objectivity, and diversity of the data used for pre-training and optimization training. The final version of the Measures has softened this stance, expecting providers simply to “take effective measures to improve” the quality of data. This revision recognizes the technical challenges of developing generative AI, including the heavy reliance on data made available on the Internet (which makes ensuring the authenticity, accuracy, objectivity, and diversity of training data practically impossible).
Second, the Measures no longer require generative AI service providers to prevent “illegal content” (which is not defined in Article 14, but is likely to refer to “content that is prohibited by laws and administrative regulations” under Article 4.1) from being re-generated within three months. Instead, Article 14.1 of the Measures merely requires providers to immediately stop the generation of illegal content, cease its transmission, and remove it. The Measures also require generative AI service providers to report the illegal content to the CAC (Article 14).
The Measures relax penalties for generative AI service providers, but mandate other regulatory requirements
The Measures relax penalties for violations, notably removing all references to service termination or fines. Specifically, Article 20.2 of the draft Measures had provided for the suspension or termination of generative AI services and the imposition of fines of between 10,000 and 100,000 yuan where generative AI service providers refused to cooperate or committed serious violations. By contrast, Article 21 of the final Measures merely provides for suspension of services.
The relaxed penalty regime, however, appears to be balanced against the imposition of mandatory security assessment and algorithm filings in certain cases. Article 17 of the Measures requires generative AI service providers providing generative AI services “with public opinion properties or the capacity for social mobilization” to carry out security assessments and file their algorithms based on the requirements set out under the “Provisions on the Management of Algorithmic Recommendations in Internet Information Services” (which regulate algorithmic recommendation systems in, inter alia, social media platforms). This targeted approach thus avoids a blanket requirement for all services to undergo a security assessment based on a presumption of potential influence on the public.
While the practical impact of this added assessment and filing requirement remains unclear, it is notable that by September 4, 2023 (less than a month after the Measures came into force), it was reported that eleven companies had completed algorithmic filings and “received approval” to provide their generative AI services to the public. Given that these filings are usually also tied to a security assessment, this development suggests that the companies had also passed their security assessments. From the report, however, it is unclear whether these companies were required under the Measures to file their generative AI services; some may have voluntarily completed these processes to reduce future compliance risks.
The Measures also adopt narrower, albeit more stringent, inspection requirements. Under Article 19, when subject to “oversight inspections”, generative AI service providers are required to cooperate with the relevant competent authorities and provide details of the source, scale, and types of training data, annotation rules, and algorithmic mechanisms. They are also required to provide the necessary technical and data support during the inspection. This appears to have been narrowed from the corresponding provision in the draft Measures (specifically, draft Article 17), which additionally required generative AI service providers to provide details such as “the description of the source, scale, type, quality, etc. of manually annotated data, foundational algorithms and technical systems” on top of those required under Article 19. However, Article 19 introduces greater stringency by explicitly requiring providers to supply the actual training data and algorithms, whereas draft Article 17 only required descriptions. Article 19 also introduces a section outlining the responsibilities of enforcement authorities and staff in relation to data protection.
The Measures also introduce provisions that impact users of generative AI services
The Measures introduce provisions that impact the balance of obligations between generative AI service providers and their users in three main areas:
1. Use of user input data to profile users: Article 11 contains a notable difference between the final and draft version of the Measures as regards the ability for generative AI service providers to profile users based on their input data. Specifically, while the draft Measures had strictly prohibited providers from profiling users based on their input data and usage patterns, this restriction is noticeably absent in the final Measures. The implication appears to be that generative AI service providers now have greater leeway to utilize users’ data input to profile them.
2. Providers to enter into service agreements with users: The second paragraph of Article 9 requires generative AI service providers to enter “service agreements” with users that clarify their respective rights and obligations. While the introduction of this provision may indicate a stance towards allowing private risk allocation, it is still subject to several limitations. First, this provision should be read in conjunction with the first paragraph of Article 9, which states that providers ultimately “bear responsibility” for producing online content and handling personal information in accordance with the law. Thus, the Measures do not permit providers to fully shift liability to users via service agreements. Second, even when the parties outline their respective rights and obligations, whether they can allocate their rights and obligations fairly and efficiently will depend on various factors, such as the resources available to them and the existence of information asymmetries between parties.
3. Responsibilities of Users: Article 4(1) appears to extend obligations to users to ensure that generative AI services “(u)phold the Core Socialist Values”. This means that users must also refrain from creating or disseminating content that incites subversion, glorifies terrorism, promotes extremism, encourages ethnic discrimination or hatred, or is violent, obscene, or pornographic, or contains misleading and harmful information. This provision is significant given that the draft Measures did not include obligations for users.
3. The Measures assign responsibility to generative AI service providers as producers of online information content, although the scope of obligation remains unclear
Under Article 9, the Measures state that generative AI service providers shall bear responsibility as the “producers of online information content (网络信息内容生产者)”. This terminology aligns with the CAC’s 2019 Provisions on the Governance of the Online Information Content Ecosystem (2019 Provisions), in which the CAC outlined an online information content ecosystem consisting of content producers, content service platforms, and service users, each with shared but distinct obligations in relation to content. In its ‘detailed interpretation’ of the 2019 Provisions, the CAC defined content producers as entities (individuals or organizations) that create, reproduce, and publish online content. Service platforms are defined as entities that offer online content dissemination services, while users are individuals who engage with online content services and may express their opinions through posts, replies, messages, or pop-ups.
This allocation of responsibility as online information content producers under the Measures can be contrasted with the position under the draft Measures, which referred to generative AI service providers as “generated content producers (生成内容生产者)”. This designation was legally unclear, as it was a new and undefined term.
However, the legal position following this allocation of responsibility under the Measures remains unclear. Unlike content producers as defined under the 2019 Provisions, generative AI service providers have a less direct relationship with the content produced by their generative AI services, given that content generation is prompted not by these service providers but by their users.
To further complicate matters, Article 9 also imposes “online information security obligations” on generative AI service providers. These obligations are set out in Chapter IV of China’s Cybersecurity Law. This means that the scope of generative AI service providers’ online information security obligations can only be determined by jointly reading the Cybersecurity Law, the Measures, the 2019 Provisions, as well as user agreements between generative AI service providers and their users.
In sum, while there is slightly greater legal clarity on generative AI service providers’ responsibilities as regards content generated by their services, more clarity is needed on the exact scope of these obligations. It may only become clearer when the CAC carries out an investigation under the Measures.
Conclusion: While clearer than before, the precise impact of the Measures will only be fully understood in the context of other regulations and global developments
Notwithstanding the greater clarity provided in the Measures, their full significance cannot be understood in isolation. Instead, they need to be read closely with existing laws and regulations in China, including the existing regulations introduced by the CAC on recommendation algorithms and deep synthesis services. Nevertheless, the Measures will give the CAC additional regulatory firepower to deal with prominent societal concerns around algorithmic abuses, youth Internet addiction, and issues such as deepfake-related fraud, fake news, and data misuse.
Further, while China’s AI industry contends with the Measures and their implications, it may soon have to contend with another regulation: an overarching comprehensive AI law. In May 2023, China’s State Council discreetly announced plans to draft an AI Law. This was followed by the release of a draft model law by the Chinese Academy of Social Sciences, a state research institute and think tank. Key features of the model law include a balanced approach to development and security through an adjustable ‘negative list,’ the establishment of a National AI Office, adherence to existing technical standards and regulations, and a clearer delineation of responsibilities within the AI value chain. In addition, the proposed rules indicate strong support for innovation through the introduction of preemptive regulatory sandboxes, broad ex post non-enforcement exemptions, and various support measures for AI development, including government-led initiatives to promote AI adoption.
In addition, the impact of the Measures will need to be studied alongside international developments, such as the EU AI Act and the UK’s series of AI Safety Summits. Regardless of how these international developments unfold, it is clear that the Measures – and other regulations introduced by the CAC on AI – are helping it build a position of thought leadership globally, as seen from the UK’s invitation to China to its inaugural AI Safety Summit. As governments around the world rush to comprehend rapid generative AI developments, China has certainly left an impression for being the first jurisdiction globally to introduce hard regulations on generative AI.