China’s Interim Measures for the Management of Generative AI Services: A Comparison Between the Final and Draft Versions of the Text

Authors: Yirong Sun and Jingxian Zeng

Edited by Josh Lee Kok Thong (FPF) and Sakshi Shivhare (FPF)

The following is a guest post to the FPF blog by Yirong Sun, research fellow at the New York University School of Law Guarini Institute for Global Legal Studies: Global Law & Tech, and Jingxian Zeng, research fellow at the University of Hong Kong Philip K. H. Wong Centre for Chinese Law. The guest blog reflects the opinions of the authors only. Guest blog posts do not necessarily reflect the views of FPF.

On August 15, 2023, the Interim Measures for the Management of Generative AI Services (Measures) – China’s first binding regulation on generative AI – came into force. The Interim Measures were jointly issued by the Cyberspace Administration of China (CAC), along with six other agencies, on July 10, 2023, following a public consultation on an earlier draft of the Measures that concluded in May 2023. 

This blog post is a follow-up to an earlier guest blog post, “Unveiling China’s Generative AI Regulation” published by the Future of Privacy Forum (FPF) on June 23, 2023, that analyzed the earlier draft of the Measures. This post compares the final version of the regulation with the earlier draft version and highlights key provisions.

Notable changes in the final version of the Measures include:

Introduction

The stated purpose of the Measures, a binding administrative regulation within the People’s Republic of China (PRC), is to promote the responsible development and regulate the use of generative AI technology, while safeguarding the PRC’s national interests and citizens’ rights. Notably, the Measures should be read in the context of other Chinese regulations addressing AI and data, including the Cybersecurity Law, the Data Security Law, the Personal Information Protection Law, and the Law on Scientific and Technological Progress. 

Central to the Measures is the principle of balancing development and security. The Measures aim to encourage innovation while also addressing potential risks stemming from generative AI technology, including the manipulation of public opinion and the dissemination of sensitive or misleading information at scale. The Measures also: 

The next section provides some context on the finalization process of the Measures.

The final Measures were shaped significantly by private and public input

The initial draft of the Measures was released for public consultation on April 11, 2023. Following the conclusion of the consultation period on May 10, 2023, the final version of the Measures received internal approval from the CAC on May 23, 2023, and were subsequently made public on July 10, 2023 before formally coming into force on August 15, 2023. 

Several significant changes in the final version of the Measures appear attributable to feedback from various industry stakeholders and legal experts. These include leading tech and AI companies such as Baidu, Xiaomi, SenseTime, YITU, Megvii, and CloudWalk, as well as research institutes affiliated with authorities such as the MIIT. The stakeholders’ input, including public statements on the draft Measures (which were referred to in FPF’s earlier guest blog), appears to have played a role in influencing the revisions made in the final version of the Measures. 

In addition, certain changes may also have been influenced by industry policies and standards at the central and local government levels. In particular, between May 2023 and July 2023, China’s National Information Security Standardization Technical Committee (also known as “TC260”) published two “wishlists” (here and here), outlining 48 upcoming national recommended standards. Among these standards, three were specifically focused on generative AI, with the aim of shaping the enforcement of the requirements specified in the final version of the Measures.

The next few paragraphs highlight changes to the overall contours of the Measures.

A key change in the final Measures is the allocation of regulatory responsibility for generative AI

A major difference between the draft and final versions of the Measures is in the allocation of administrative responsibility for generative AI. The final version of the Measures allowed for greater collaboration amongst public institutions compared to the draft version, with the CAC playing a less prominent role. The other six agencies involved in issuing the final version of the Measures are the National Development and Reform Commission (NDRC); the Ministry of Education; the Ministry of Science and Technology (MoST); the Ministry of Industry and Information Technology (MIIT); the Ministry of Public Security; and the National Radio and Television Administration. 

Notably, the task of promoting AI advancement amid escalating concerns is to be overseen by authorities other than the CAC, such as MoST, MIIT, and NDRC. 

Another significant difference is the inclusion of three pro-business provisions – namely, Articles 3, 5, and 6 – in the final version of the Measures. These Articles provide as follows:

“Support industry associations, enterprises, education and research institutions, public cultural bodies, and relevant professional bodies, etc. to coordinate in areas such as innovation in generative AI technology, the establishment of data resources, applications, and risk prevention.” [emphasis added]

“Promote the establishment of generative AI infrastructure and public training data resource platforms. Promote collaboration and sharing of algorithm resources, increasing efficiency in the use of computing resources. Promote the orderly opening of public data by type and grade, expanding high-quality public training data resources. Encourage the adoption of safe and reliable chips, software, tools, computational power, and data resources.” [emphasis added]

These provisions impose fewer obligations on generative AI service providers than those in the draft version of the Measures. They emphasize the balance between development and security in generative AI, the promotion of innovation while ensuring compliance with the law, support for the application of AI across industries to generate positive content, and collaboration among various entities. They also emphasize independent innovation in AI technologies, international cooperation, and the establishment of infrastructure for sharing data resources and algorithms.

These shifts may be attributed to the above-mentioned feedback received on the draft version of the Measures from industry stakeholders and legal experts. 

This article now turns to changes in specific provisions in the final Measures and their implications.

1. The Measures see significant changes in respect of their domestic and extraterritorial applicability

The Measures narrow the scope of “public” by excluding certain entities and service providers that do not provide services in the PRC 


The Measures apply to organizations that provide generative AI services to “the public in the territory of the People’s Republic of China”. While the Measures do not define “generative AI services”, Article 2 clarifies that the Measures apply to services that use models and related technologies to generate text, images, audio, video, and other content. 

The Measures appear to address some concerns raised in the previous article about the ambiguity surrounding the undefined term “public”. For example, one of the questions raised in the previous article (in respect of the draft Measures) was whether a service licensed exclusively to a Chinese private entity for internal use would fall within the scope of the Measures, considering scenarios where a generative AI service might be made available only to certain public institutions or customized for individual customers. The Measures appear to partially address this ambiguity by removing certain entities from the scope of “the public”. Specifically, Article 2 now clarifies that the Measures do not apply to certain entities (industrial organizations, enterprises, educational and scientific research institutions, public cultural institutions, and related specialized agencies) if they research, develop, and use generative AI technologies but do not provide generative AI services to the public in the PRC. Further clarification may be found in an expert opinion published on the CAC’s public WeChat account supporting the internal use of generative AI technologies and the vertical supply of generative AI technologies among these entities.

This change also significantly narrows the scope of the Measures compared with other existing Chinese technology regulations. In comparison, the rules on deep synthesis and recommendation algorithms apply to any service that uses generative AI technologies, regardless of whether these services are used by individuals, enterprises or “the public”. 

Future AI regulation in China may not share the Measures’ focus on “the public”. For instance, the recent China AI Model Law Proposal, an initiative of the Chinese Academy of Social Sciences (CASS) and a likely precursor to a more comprehensive AI law, does not appear to have such a limitation on its scope.

The Measures now have extraterritorial effect to address foreign provision of generative AI services to PRC users

The Measures also appear to have been tweaked to apply extraterritorially. Specifically, Article 2 provides that the Measures apply to a generative AI service so long as it is accessible to the public in the PRC, regardless of where the service provider is located. 

This change appears to have been prompted by PRC-based users circumventing restrictions that overseas generative AI service providers had put in place to avoid being subject to Chinese regulation. Specifically, to avoid compliance with Chinese regulators, several foreign generative AI service providers have limited access to their services from users in the PRC, such as by requiring foreign phone numbers for registration or requiring international credit cards during subscription. In practice, however, users have been able to access the services of these foreign generative AI service providers by following online tutorials or purchasing foreign-registered accounts on the “black market”. For example, though ChatGPT does not accept registrations from users in China, ChatGPT logins were available for sale on Taobao shortly after its initial release. Such activity has drawn the attention of the Chinese government, which had to take enforcement action against such platforms even before the Measures were formulated.

In practice, the CAC is expected to adopt a “technical enforcement” strategy against foreign generative AI services. Article 20 of the Measures empowers the CAC to take action against foreign service providers that do not comply with relevant Chinese regulations, including the Measures. Under this provision, the CAC may notify relevant agencies to take “technical measures and other necessary actions” to block Chinese users’ access to these services. A similar provision is found in Article 50 of the Cybersecurity Law, which addresses preventing the spread of illegal information from outside the PRC.

2. The Measures relax providers’ obligations while assigning users with new responsibilities

As elaborated below, the CAC adjusted the balance of obligations between generative AI service providers and users in the final version of the Measures. To recap, Article 22 of the final version of the Measures defines “providers” as companies that offer services using generative AI technologies, including those offered through application programming interfaces (APIs). It also defines “users” as organizations and individuals that use generative AI services to generate content. 

The Measures adopt a more relaxed stance on generative AI hallucination

The Measures seek to address hallucinations by generative AI in two ways.

First, the Measures shift focus from outcome-based to conduct-based obligations for providers. While the draft version of the Measures adopted a strict compliance approach, the final version focuses on the actions that generative AI service providers take to address hallucinations, in other words, a more flexible duty of conduct. In the draft version of the Measures, Article 7 required providers to ensure the authenticity, accuracy, objectivity and diversity of the data used for pre-training and optimization training. The final version of the Measures softens this stance, expecting providers simply to “take effective measures to improve” the quality of that data. This revision recognizes the technical challenges of developing generative AI, including the heavy reliance on data made available on the Internet, which makes ensuring the authenticity, accuracy, objectivity and diversity of the training data practically impossible. 

Second, the Measures no longer require generative AI service providers to prevent “illegal content” (which is not defined in Article 14, but is likely to refer to “content that is prohibited by laws and administrative regulations” under Article 4.1) from being re-generated within three months. Instead, Article 14.1 of the Measures merely requires providers to immediately stop the generation of illegal content, cease its transmission, and remove it. The Measures also require generative AI service providers to report the illegal content to the CAC (Article 14). 

The Measures relax penalties for generative AI service providers, but mandate other regulatory requirements

The Measures relax penalties for violations, notably removing all references to service termination and fines. Specifically, Article 20.2 of the draft Measures had provided for the suspension or termination of generative AI services and the imposition of fines of between 10,000 and 100,000 yuan where generative AI service providers refused to cooperate or committed serious violations. Article 21 of the final Measures, however, merely provides for the suspension of services. 

The relaxed penalty regime, however, appears to be balanced against the imposition of mandatory security assessment and algorithm filings in certain cases. Article 17 of the Measures requires generative AI service providers providing generative AI services “with public opinion properties or the capacity for social mobilization” to carry out security assessments and file their algorithms based on the requirements set out under the “Provisions on the Management of Algorithmic Recommendations in Internet Information Services” (which regulate algorithmic recommendation systems in, inter alia, social media platforms). This targeted approach thus avoids a blanket requirement for all services to undergo a security assessment based on a presumption of potential influence on the public. 

While the practical impact of this added assessment and filing requirement remains unclear, it is notable that by September 4, 2023 (less than a month after the Measures came into force), it was reported that eleven companies had completed algorithmic filings and “received approval” to provide their generative AI services to the public. Given that these filings are usually also tied to a security assessment, this development suggests that the companies had also passed their security assessments. From the report, however, it is unclear whether these companies were required under the Measures to file their generative AI services; some may have voluntarily completed these processes to reduce future compliance risks. 

The Measures also adopt narrower, albeit more stringent, inspection requirements. Under Article 19, when subject to “oversight inspections”, generative AI service providers are required to cooperate with the relevant competent authorities and provide details of the source, scale and types of training data, annotation rules and algorithmic mechanisms. They are also required to provide the necessary technical and data support during the inspection. This is narrower than the corresponding provision in the draft Measures (Article 17 of the draft), which had additionally required generative AI service providers to provide details such as “the description of the source, scale, type, quality, etc. of manually annotated data, foundational algorithms and technical systems”. However, Article 19 introduces greater stringency by explicitly requiring providers to furnish the actual training data and algorithms, whereas draft Article 17 only required descriptions. Article 19 also introduces a section outlining the responsibilities of enforcement authorities and staff in relation to data protection. 

The Measures also introduce provisions that impact users of generative AI services

The Measures introduce provisions that impact the balance of obligations between generative AI service providers and their users in three main areas:

1. Use of user input data to profile users: Article 11 contains a notable difference between the final and draft versions of the Measures as regards the ability of generative AI service providers to profile users based on their input data. Specifically, while the draft Measures had strictly prohibited providers from profiling users based on their input data and usage patterns, this restriction is noticeably absent from the final Measures. The implication appears to be that generative AI service providers now have greater leeway to use users’ input data to profile them. 

2. Providers to enter into service agreements with users: The second paragraph of Article 9 requires generative AI service providers to enter “service agreements” with users that clarify their respective rights and obligations. While the introduction of this provision may indicate a stance towards allowing private risk allocation, it is still subject to several limitations. First, this provision should be read in conjunction with the first paragraph of Article 9, which states that providers ultimately “bear responsibility” for producing online content and handling personal information in accordance with the law. Thus, the Measures do not permit providers to fully shift liability to users via service agreements. Second, even when the parties outline their respective rights and obligations, whether they can allocate their rights and obligations fairly and efficiently will depend on various factors, such as the resources available to them and the existence of information asymmetries between parties.

3. Responsibilities of Users: Article 4(1) appears to extend obligations to users to ensure that generative AI services “(u)phold the Core Socialist Values”. This means that users must also refrain from creating or disseminating content that incites subversion, glorifies terrorism, promotes extremism, encourages ethnic discrimination or hatred, and any content that is violent, obscene, pornographic, or contains misleading and harmful information. This provision is significant given that the draft Measures did not initially include the obligations of users.

    3. The Measures assign responsibility to generative AI service providers as producers of online information content, although the scope of obligation remains unclear

    Under Article 9, the Measures state that generative AI service providers shall bear responsibility as the “producers of online information content (网络信息内容生产者)”. This terminology aligns with the CAC’s 2019 Provisions on the Governance of the Online Information Content Ecosystem (2019 Provisions), in which the CAC outlined an online information content ecosystem consisting of content producers, content service platforms, and service users, each with shared but distinct obligations in relation to content. In its ‘detailed interpretation’ of the 2019 Provisions, the CAC defined content producers as entities (individuals or organizations) that create, reproduce, and publish online content. Service platforms are defined as entities that offer online content dissemination services, while users are individuals who engage with online content services and may express their opinions through posts, replies, messages, or pop-ups.

    This allocation of responsibility as online information content producers under the Measures can be contrasted with the position under the draft Measures, which referred to generative AI service providers as “generated content producers (生成内容生产者)”. This designation was legally unclear, as it was a new and undefined term.

    However, the legal position following this allocation of responsibility under the Measures remains unclear. Unlike content producers as defined under the 2019 Provisions, generative AI service providers have a less direct relationship with the content produced by their generative AI services, given that content generation is prompted not by these service providers but by their users.

    To further complicate matters, Article 9 also imposes “online information security obligations” on generative AI service providers. These obligations are set out in Chapter IV of China’s Cybersecurity Law. This means that the scope of generative AI service providers’ online information security obligations can only be determined by jointly reading the Cybersecurity Law, the Measures, the 2019 Provisions, as well as user agreements between generative AI service providers and their users. 

    In sum, while there is slightly greater legal clarity on generative AI service providers’ responsibilities as regards content generated by their services, more clarity is needed on the exact scope of these obligations. It may only become clearer when the CAC carries out an investigation under the Measures. 

    Conclusion: While clearer than before, the precise impact of the Measures will only be fully understood in the context of other regulations and global developments. 

    Notwithstanding the greater clarity provided in the Measures, their full significance cannot be understood in isolation. Instead, they need to be read closely with existing laws and regulations in China. These include existing regulations introduced by the CAC on recommendation algorithms and deep synthesis services. Nevertheless, the Measures will give the CAC additional regulatory firepower to deal with prominent societal concerns around algorithmic abuses, youth Internet addiction, and issues such as deepfake-related fraud, fake news, and data misuse.  

    Further, while China’s AI industry contends with the Measures and its implications, they may soon have to contend with another regulation: an overarching comprehensive AI law. In May 2023, China’s State Council discreetly announced plans to draft an AI Law. This was followed by the release of a draft model law by the Chinese Academy of Social Sciences, a state research institute and think tank. Key features of the model law include a balanced approach to development and security through an adjustable ‘negative list,’ the establishment of a National AI Office, adherence to existing technical standards and regulations, and a clearer delineation of responsibilities within the AI value chain. In addition, the proposed rules indicate strong support for innovation through the introduction of preemptive regulatory sandboxes, broad ex post non-enforcement exemptions, and various support measures for AI development, including government-led initiatives to promote AI adoption. 
    In addition, the impact of the Measures will need to be studied alongside international developments, such as the EU AI Act and the UK’s series of AI Safety Summits. Regardless of how these international developments unfold, it is clear that the Measures – and other regulations introduced by the CAC on AI – are helping it build a position of thought leadership globally, as seen from the UK’s invitation to China to its inaugural AI Safety Summit. As governments around the world rush to comprehend rapid generative AI developments, China has certainly left an impression for being the first jurisdiction globally to introduce hard regulations on generative AI.

    Explaining the Crosswalk Between Singapore’s AI Verify Testing Framework and The U.S. NIST AI Risk Management Framework

    On October 13, 2023, Singapore’s Infocomm Media Development Authority (IMDA) and the U.S. National Institute of Standards and Technology (NIST) published a “Crosswalk” of IMDA’s AI Verify testing framework and NIST’s AI Risk Management Framework (AI RMF). Developed under the aegis of the Singapore–U.S. Partnership for Growth and Innovation, the Crosswalk is a mapping document that guides users on how adopting one framework can be used to meet the criteria of the other. Similar to the crosswalk initiatives that NIST has undertaken with other leading AI frameworks (such as ISO/IEC FDIS 23894, the proposed EU AI Act, the OECD Recommendation on AI, Executive Order 13960, and the Blueprint for an AI Bill of Rights), this Crosswalk aims to harmonize “international AI governance frameworks to reduce industry’s cost to meet multiple requirements.”

    The aim of this blog post is to provide further clarity on the Crosswalk and what it means for organizations developing and deploying AI systems. The blog post is structured into four parts. 

    AI Verify – Singapore’s AI governance testing framework and toolkit

    AI Verify is an AI governance testing framework and toolkit launched by the IMDA and the Personal Data Protection Commission of Singapore (PDPC). First announced in May 2022, AI Verify enables organizations to conduct a voluntary self-assessment of their AI systems through a combination of technical tests and process-based checks. In turn, this allows companies who use AI Verify to objectively and verifiably demonstrate to stakeholders their responsible and trustworthy deployment of AI systems.

    At the outset, there are several key characteristics of AI Verify that users should be mindful of. 

    AI Verify comprises two parts: (1) a Testing Framework, which references 11 internationally-accepted AI ethics and governance principles grouped into 5 pillars; and (2) a Toolkit that organizations can use to execute technical tests and to record process checks from the Testing Framework. The 5 pillars and 11 principles under the Testing Framework are:

    1. Transparency on the use of AI and AI systems
      1. Principle 1 – Transparency: Providing appropriate information to individuals impacted by AI systems
    2. Understanding how an AI model reaches a decision
      1. Principle 2 – Explainability: Understanding and interpreting the decisions and output of an AI system
      2. Principle 3 – Repeatability/reproducibility: Ensuring consistency in AI output by being able to replicate an AI system, either internally or through a third party
    3. Ensuring safety and resilience of the AI system
      1. Principle 4 – Safety: Ensuring safety by conducting impact/risk assessments, and ensuring that known risks have been identified / mitigated
      2. Principle 5 – Security: Ensuring the cyber-security of AI systems
      3. Principle 6 – Robustness: Ensuring that the AI system can still function despite unexpected input
    4. Ensuring fairness
      1. Principle 7 – Fairness: Avoiding unintended bias, ensuring that the AI system makes the same decision even if a certain attribute is changed, and ensuring that the data used to train the model is representative
      2. Principle 8 – Data governance: Ensuring the source and quality of data by adopting good data governance practices when training AI models
    5. Ensuring proper (human) management and oversight of the AI system
      1. Principle 9 – Accountability: Ensuring proper management oversight during AI system development
      2. Principle 10 – Human agency and oversight: Ensuring that the AI system is designed in a way that will not diminish the ability of humans to make decisions
      3. Principle 11 – Inclusive growth, societal and environmental well-being: Ensuring beneficial outcomes for people and the planet.

    As mentioned earlier, FPF’s previous blog post on AI Verify provides more detail on the objectives and mechanics of AI Verify’s Testing Framework and Toolkit. This summary merely sets the context for readers to better appreciate how the Crosswalk document should be understood.

    AI Risk Management Framework – U.S. NIST’s industry-agnostic voluntary guidance on managing AI risks

    The AI RMF was issued by NIST in January 2023. Currently in its first version, the goal of the AI RMF is “to offer a resource to organizations designing, developing, deploying or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems.”

    The AI RMF underscores the perspective that responsible AI risk management tools can assist organizations in cultivating public trust in AI technologies. Intended to be sector-agnostic, the AI RMF is voluntary, flexible, structured (in that it provides taxonomies of risks), measurable and “rights-focused”. The AI RMF outlines mechanisms and processes for measuring and managing AI systems and provides guidance on measuring accuracy.

    The AI RMF itself is broken into two parts. The first part outlines various risks presented by AI. The second part provides a framework for considering and managing those risks, with a particular focus on stakeholders involved in the testing, evaluation, verification and validation processes throughout the lifecycle of an AI system.

    The AI RMF outlines several AI-related risks

    The AI RMF outlines the following risks presented by AI: (1) Harm to people – e.g. harm to an individual’s civil liberties, rights, physical or psychological safety or economic opportunity; (2) Harm to organizations – e.g. harm to an organization’s reputation and business operations; and (3) Harm to an ecosystem – e.g. harm to the global financial system or supply chain. It also notes that AI risk management presents unique challenges for organizations, including system transparency, lack of uniform methods or benchmarks, varying levels of risk tolerance and prioritization, and integration of risk management into organizational policies and procedures. 

    The AI RMF also provides a framework for considering and managing AI-related risks

    The “core” of the AI RMF contains a framework for considering and managing these risks. It comprises four functions: “Govern”, “Map”, “Measure”, and “Manage.” These provide organizations and individuals with specific recommended actions and outcomes to manage AI risks.

    The AI RMF also comes with an accompanying “playbook” that provides additional recommendations and actionable steps for organizations. Notably, NIST has already produced “crosswalks” to ISO/IEC standards, the proposed EU AI Act, and the US Executive Order on Trustworthy AI.

    The Crosswalk is a mapping document that guides users on how adopting one framework can be used to meet the criteria of the other

    To observers familiar with AI governance documentation, it should be apparent that there is complementarity between both frameworks. For instance, the AI Verify framework contains processes that would overlap with the RMF framework for managing AI risks. Both frameworks also adopt risk-based approaches and aim to strike a pragmatic balance between promoting innovation and managing risks.

    Similar to other crosswalk initiatives that NIST has already done with other frameworks, this Crosswalk is aimed at harmonizing international AI governance frameworks to reduce fragmentation, facilitate ease of adoption, and reduce industry costs in meeting multiple requirements. Insiders have noted that at the time when the AI Verify framework was released in 2022, NIST was in the midst of organizing public workgroups for the development of the RMF. From there, the IMDA and NIST began to work together, with a common goal of jointly developing the Crosswalk to meet different industry requirements.

    Understanding the methodology of the Crosswalk

    Under the Crosswalk, AI Verify’s testable criteria and processes are mapped to the AI RMF’s categories within the Govern, Map, Measure and Manage functions. Specifically, the Crosswalk first lists the individual categories and subcategories under these four functions. As the four core functions collectively address individual governance and trustworthiness characteristics (such as safety, accountability, transparency, explainability and fairness), the second column of the Crosswalk, which denotes the AI Verify Testing Framework, sets out the individual principle, testable criteria, and process and/or technical test that correlate to the relevant core function under the AI RMF. 

    A point worth noting is that the mapping is not “one-to-one”; each NIST AI RMF category may have multiple equivalents. Thus, for instance, AI Verify’s Process 9.1.1 for Accountability (indicated in the Crosswalk as “Accountability 9.1.1”) appears under both “Govern 4” and “Govern 5” of the AI RMF. This reflects the difference in nature between the two documents: while the AI RMF is a risk management framework for the development and use of AI, AI Verify is a testing framework to assess the performance of an AI system and the practices associated with its development and use. To achieve this mapping, the IMDA and NIST compared both frameworks at a granular level, down to individual elements within the AI Verify Testing Framework, to align them. This can be seen from the Annex below, which sets out the “crosswalked” elements for comparison and identifies the individual testable criteria and processes in the AI Verify Testing Framework. 
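
    To make the many-to-many nature of this mapping concrete, the sketch below shows one hypothetical way such crosswalk rows could be represented in code. It is an illustration only: apart from the “Accountability 9.1.1” example cited above, the entries and naming are assumptions and are not taken from the actual Crosswalk document.

```python
# Illustrative sketch only: a toy encoding of crosswalk rows. Apart from the
# "Accountability 9.1.1" example discussed in the text, the entries below are
# hypothetical placeholders, not contents of the actual IMDA-NIST Crosswalk.
from collections import defaultdict

# NIST AI RMF category -> AI Verify Testing Framework elements mapped to it
crosswalk = {
    "Govern 4": ["Accountability 9.1.1"],                 # example cited above
    "Govern 5": ["Accountability 9.1.1"],                 # same element reused
    "Measure 2": ["Fairness 7.1.1", "Robustness 6.1.1"],  # hypothetical entries
}

# Inverting the mapping shows that one AI Verify element can help satisfy
# several AI RMF categories, i.e. the mapping is not one-to-one.
by_verify_element = defaultdict(list)
for rmf_category, verify_elements in crosswalk.items():
    for element in verify_elements:
        by_verify_element[element].append(rmf_category)

print(dict(by_verify_element))
# e.g. {'Accountability 9.1.1': ['Govern 4', 'Govern 5'], 'Fairness 7.1.1': ['Measure 2'], ...}
```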

    Other aspects of understanding the Crosswalk document are set out below (in a Q&A format):

    The Crosswalk shows that practical international cooperation in AI governance and regulation is possible 

    The global picture on AI regulation and governance is shifting rapidly. Since the burst of activity around the development of AI ethical principles and frameworks in the late 2010s, the landscape is becoming increasingly complex. 

    It is now defined within the broad strokes of the development of AI-specific regulation (in the form of legislation, such as the proposed EU AI Act, Canada’s AI and Data Act, or Brazil’s AI Bill), the negotiation of an international treaty on AI under the aegis of the Council of Europe, executive action putting the onus on government bodies when contracting AI systems (with President Biden’s Executive Order as a chief example), the provision of AI-specific governance frameworks as self-regulation, and guidance by regulators (such as Data Protection Authorities issuing guidance on how providers and deployers of AI systems can rely on personal data while respecting data protection laws). This varied landscape leaves little room for a coherent global approach to governing a quintessentially borderless technology. 

    In this context, the Crosswalk, as a government-to-government effort, shows that it is possible to find a common language between prima facie different self-regulatory AI governance frameworks, paving the way for interoperability, or the cross-border interchangeable use of frameworks. Its practical relevance for organizations active in both the US and Singapore cannot be overstated. 

    The Crosswalk also provides a model for future crosswalks or similar mapping initiatives that will support a more coherent approach to AI governance across borders, potentially opening the path for more instances of meaningful and practical international cooperation in this space.   

    Annex: Crosswalk Combined with Descriptions from Individual Elements of the AI Verify Process Checklist

    Navigating Cross-Border Data Transfers in the Asia-Pacific region (APAC): Analyzing Legal Developments from 2021 to 2023

    Today, the Future of Privacy Forum (FPF) published an Issue Brief comparatively analyzing cross-border data transfer provisions in new data protection laws in the Asia-Pacific. Titled Navigating Cross-Border Data Transfers in the Asia-Pacific region (APAC): Analyzing Legal Developments from 2021 to 2023, the Issue Brief outlines key developments in cross-border data transfers in the Asia-Pacific in the last few years, and explores the potential impact on businesses operating in the APAC region.

    Today, cross-border data transfers are pivotal in enabling the global digital economy and facilitating digital trade. These transfers allow businesses to provide services globally, while allowing individuals access to a wide range of digital services and platforms. Yet, cross-border data transfers also raise legitimate concerns regarding the protection of individuals’ privacy and security.

    Amidst this tension, data protection laws attempt to strike a balance by requiring organizations to satisfy certain conditions to ensure that personal data is appropriately protected when it is transferred out of jurisdiction, absent special circumstances. Common conditions include:

    The APAC region has seen a significant acceleration in data protection regulatory activity in recent years, including the enactment of new data protection laws. In particular, since 2021, China, Indonesia, Japan, South Korea, Thailand, and Vietnam have newly enacted or amended their data protection laws and regulations.

    An analysis of the data protection laws and regulations in these six jurisdictions indicates that there is a degree of alignment between Indonesia, Japan, South Korea, and Thailand regarding legal bases for cross-border data transfers, but China and Vietnam appear to be outliers with their own unique requirements. Notably:

    These divergent approaches to regulating cross-border data transfers likely reflect the different policy considerations in each jurisdiction and the tension between enabling cross-border data transfers to facilitate digital trade and national considerations, such as protecting national security and sovereignty. These divergences could complicate efforts by organizations operating in multiple jurisdictions to align their regional compliance programs. Nonetheless, there are promising avenues for increasing interoperability in the region, such as standardized or model contractual clauses, the growing recognition of regional certification schemes such as the APEC Cross Border Privacy Rules and Privacy Recognition for Processors systems, and, to a more limited extent, the possibility that some jurisdictions may obtain adequacy decisions from the European Union in the future.

    For deeper analysis of these points and of the cross-border data transfer provisions for each of the six jurisdictions covered, download the Issue Brief here.

    For inquiries about this Issue Brief, please contact Josh Lee Kok Thong, Managing Director (APAC), at [email protected], or Dominic Paulger, Policy Manager (APAC), at [email protected].

    FPF is grateful to the following contributors for their assistance in ensuring the accuracy of this report:

    Please note that nothing in this Issue Brief should be construed as legal advice.
    Further reading: In November 2022, FPF’s APAC office concluded a year-long project on consent and alternative legal bases for processing data in APAC that culminated in a report comparing relevant requirements in 14 APAC jurisdictions.

    AI Verify: Singapore’s AI Governance Testing Initiative Explained

    In recent months, global interest in AI governance and regulation has expanded dramatically. Many identify a need for new governance and regulatory structures in response to the impressive capabilities of generative AI systems, such as OpenAI’s ChatGPT and DALL-E, Google’s Bard, Stable Diffusion, and more. While much of this attention focuses on the upcoming EU AI Act, there are other significant initiatives around the world proposing different AI governance models or frameworks.

    This blog post covers “AI Verify,” Singapore’s AI governance testing framework and toolkit, announced in May 2022. Our analysis has three key parts. First, we summarize Singapore’s overall approach to AI governance and the key initiatives that the Singapore Government released regarding AI governance prior to the launch of AI Verify. Second, we explain the key components of AI Verify. Finally, as we approach the anniversary of AI Verify’s roll-out, we explore what the future may hold for AI Verify and Singapore’s approach to AI governance and regulation. Briefly, the key takeaways are:

    1. Singapore’s overall approach to AI governance

    In Singapore’s high-level strategy for AI, the National AI Strategy (NAIS), the country announced that it aims to be “at the forefront of development and deployment of scalable, impactful AI solutions,” hoping to cement itself as “a global hub for developing, test-bedding, deploying, and scaling AI solutions.” Among the five “ecosystem enablers” identified in the strategy to increase AI adoption is the development of a “progressive and trusted environment” for AI – one that strikes a balance between innovation and the minimization of societal risks. 

    To create this “progressive and trusted environment,” Singapore has so far adopted a light-touch and voluntary approach to AI regulation. This approach recognizes two practical realities about Singapore’s AI ambitions. First, the Singapore Government sees AI as a key strategic enabler for developing its economy and improving the quality of life of its citizens. This explains why Singapore is not taking a heavy-handed approach to regulating AI, lest it stifle innovation and investment. Second, given its size, Singapore is aware that it is likely to be a price-taker rather than a price-setter as AI governance discourse, frameworks and regulations develop globally. Thus, rather than introducing new AI principles afresh, the current approach is to “take the world where it is, rather than where it hopes the world to be.”

    Before the release of AI Verify in 2022, Singapore’s approach to AI regulation – as overseen by the Personal Data Protection Commission of Singapore (PDPC) – had three pillars: 

    1. The Model AI Governance Framework (Model Framework). 
    2. The Advisory Council on the Ethical Use of AI and Data (Advisory Council).
    3. The Research Programme on the Governance of AI and Data Use (Research Program). 

    As we aim to highlight the substantive aspects of Singapore’s AI regulatory approach, the following paragraphs will focus on the Model Framework. 

    The Model Framework

    The Model Framework, first launched at the World Economic Forum Annual Meeting (WEF) in 2019, is a voluntary and non-binding framework that guides organizations in the responsible deployment of AI solutions at scale, noting that this framework does not concern the development phase of these technologies. As a guide, the Model Framework sets out practical recommendations for AI deployments for private sector entities, as the public sector’s use of AI is governed by internal guidelines and AI and data governance toolkits. The Model Framework is billed as a “living document,” as it is meant to evolve through future editions alongside technological and societal developments. The Model Framework is also technology-, industry-, scale- and business-model agnostic. 

    Substantively, the Model Framework is guided by two fundamental principles to promote trust and understanding in AI. First, organizations using AI in decision-making should ensure that the decision-making process is explainable, transparent and fair. Second, AI systems should be human-centric: the protection of human well-being and safety should be primary considerations in designing, developing and using AI.

    The Framework translates these guiding principles to implementable practices in four key areas of an organization’s decision-making and technology-development processes:

    (a) Internal governance structures and measures;

    (b) Determining the level of human involvement in AI-augmented decision-making;

    (c) Operations management; and

    (d) Stakeholder interaction and communication.

    The table below summarizes some of the suggested considerations, practices, and measures falling under each of these key areas.

    Internal governance structures and measures:
    - Clear roles and responsibilities: use existing or set up new corporate governance and oversight processes; ensure staff are appropriately trained and equipped.
    - Internal controls: a monitoring and reporting system to ensure awareness at the appropriate level of management; managing personnel risk; periodic reviews.

    Human involvement in AI-augmented decision-making:
    - Appropriate level of human intervention: use a probability-severity of harm matrix to determine the level of human involvement (see the sketch after this table).
    - Incorporate corporate and societal values in decision-making.

    Operations management:
    - Good data accountability: data lineage, quality, accuracy, completeness, veracity, relevance, integrity, etc.
    - Minimizing bias in data / model: heterogeneous datasets; separate training, testing and validation datasets; repeatability assessments, counterfactual testing, etc.; regular review and tuning.

    Stakeholder interaction and communication:
    - General disclosure: being transparent when AI is used in products and services; using simple language, with communication appropriate to the audience, purpose and context.
    - Increased transparency: information on how AI decisions may affect individuals.
    - Feedback channels: avenues for feedback and review of decisions.
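
    The probability-severity of harm matrix mentioned above can be pictured as a simple lookup from two risk dimensions to a recommended level of human oversight. The sketch below is a hypothetical illustration only; the bands, thresholds, and mapping are assumptions and are not prescribed by the Model Framework.

```python
# Hypothetical sketch of a probability-severity of harm matrix used to pick a
# level of human involvement (human-in-the-loop, human-over-the-loop,
# human-out-of-the-loop). The bands and mapping are illustrative assumptions;
# the Model Framework does not prescribe specific values.
from enum import Enum

class Oversight(Enum):
    HUMAN_IN_THE_LOOP = "human-in-the-loop"          # human approves each decision
    HUMAN_OVER_THE_LOOP = "human-over-the-loop"      # human monitors and can intervene
    HUMAN_OUT_OF_THE_LOOP = "human-out-of-the-loop"  # fully automated

def band(value: float) -> str:
    """Collapse a 0-1 score into low / medium / high (assumed thresholds)."""
    return "low" if value < 0.33 else "medium" if value < 0.66 else "high"

# Rows: probability-of-harm band; columns: severity-of-harm band.
MATRIX = {
    ("low", "low"): Oversight.HUMAN_OUT_OF_THE_LOOP,
    ("low", "medium"): Oversight.HUMAN_OVER_THE_LOOP,
    ("low", "high"): Oversight.HUMAN_IN_THE_LOOP,
    ("medium", "low"): Oversight.HUMAN_OVER_THE_LOOP,
    ("medium", "medium"): Oversight.HUMAN_OVER_THE_LOOP,
    ("medium", "high"): Oversight.HUMAN_IN_THE_LOOP,
    ("high", "low"): Oversight.HUMAN_OVER_THE_LOOP,
    ("high", "medium"): Oversight.HUMAN_IN_THE_LOOP,
    ("high", "high"): Oversight.HUMAN_IN_THE_LOOP,
}

def recommended_oversight(probability_of_harm: float, severity_of_harm: float) -> Oversight:
    return MATRIX[(band(probability_of_harm), band(severity_of_harm))]

print(recommended_oversight(0.2, 0.9))  # Oversight.HUMAN_IN_THE_LOOP
```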

    Other initiatives accompanying the Model Framework

    When Singapore released the second edition of the Model Framework at the WEF in 2020, it was released alongside two other documents: the Implementation and Self-Assessment Guide for Organisations (ISAGO) and the Compendium of Use Cases (Compendium – Volume 1 and Volume 2). The ISAGO is a checklist helping organizations assess the alignment of their AI governance processes with the Model Framework. The Compendium provides real-life examples of the adoption of the Model Framework’s recommendations across various sectors, use cases, and jurisdictions. 

    Collectively, the Model Framework and its suite of accompanying documents anchored and outlined substantive thinking on AI regulation in Singapore. These initiatives led to Singapore winning a United Nations World Summit on the Information Society Prize in 2019, recognizing its efforts as a frontrunner in AI governance. 

    2. AI Verify in a Nutshell

    January 2020 marked a turning point for global discourse on AI regulation. On January 17, 2020, a leaked white paper from the European Commission brought international attention to the increasing possibility of government regulation of AI technology. In February 2020, the European Commission formally issued a White Paper on Artificial Intelligence, which, among other things, set out plans to create a regulatory framework for AI. In the following months, the European Commission began to make available drafts of a forthcoming AI Act. For the first time, a major government was making a serious attempt to introduce substantive rules to horizontally regulate the development and use of AI systems. Due to the expected extraterritorial nature of the AI Act, companies developing AI systems outside of Europe could potentially be covered by the new law. 

    These developments influenced thinking about the future of Singapore’s AI regulatory and governance landscape. While the PDPC maintained its voluntary and light-touch approach to AI regulation, it acknowledged a future in which AI faces heightened oversight. The PDPC seemed to also be mindful of growing consumer awareness and demand for trustworthiness from AI systems and developers, a need for international standards on AI to benchmark and assess AI systems against regulatory requirements, and an increasing need for interoperability of AI regulatory frameworks. With these in mind, Singapore began developing the framework that eventually coalesced into AI Verify.

    What is AI Verify?

    Launched by the Infocomm Media Development Authority (IMDA), a statutory board under the Singapore Ministry of Communications and Information, and the PDPC, AI Verify is an AI governance testing framework and toolkit. By using AI Verify, organizations are able to use a combination of technical tests and process-based checks to conduct a voluntary self-assessment of their AI systems. The system, in turn, helps companies attempt to objectively and verifiably demonstrate to stakeholders that their AI systems have been implemented in a responsible and trustworthy manner. 

    Given that AI testing methodologies, standards, metrics and tools continue to develop, AI Verify is currently at a “Minimum Viable Product” (MVP) stage. This has two implications. First, the MVP version has several technical limitations, including limits on the types and sizes of AI models and datasets that it can test or analyze. Second, AI Verify is expected to evolve as AI testing capabilities mature. 

    The four aims for developing an MVP version of AI Verify are:

    (a) First, IMDA hopes that organizations are able to use AI Verify to determine performance benchmarks for their AI systems, and demonstrate these claimed benchmarks to stakeholders such as consumers and employees, thereby helping organizations enhance trust.

    (b) Second, given that it was developed with various AI regulatory and governance frameworks, as well as common trustworthy AI principles, in mind, AI Verify seeks to help organizations find commonalities across various global AI governance frameworks and regulations. IMDA is also continuing to engage regulators and standards organizations to map AI Verify’s testing framework onto established frameworks. These efforts are aimed at allowing businesses to operate and offer AI-enabled products and services in multiple markets, while allowing Singapore to act as a hub for AI governance and regulatory testing.

    (c) Third, as organizations trial AI Verify and use its testing framework, IMDA will be able to collate industry practices, benchmarks and metrics. These can serve as input into the development of international standards on AI governance, given Singapore’s participation in global AI governance platforms such as the Global Partnership on AI and ISO/IEC JTC1/SC 42.

    (d) Fourth, IMDA hopes AI Verify will allow Singapore to create a local AI testing community, consisting of AI developers and system owners (who are seeking to test AI systems), technology providers (who are developing AI governance implementation and testing solutions), advisory service providers (specializing in testing and certification support), and researchers (who are developing testing technologies, benchmarks and practices). 

    It is also important to clarify several potential misconceptions about AI Verify. First, AI Verify is not an attempt to define ethical standards. It also does not attempt to classify AI systems with a clear bright line. Instead, AI Verify provides verifiability, as it allows AI system developers and owners to demonstrate their claims about the performance of their AI systems. Second, an organization’s use of AI Verify does not guarantee that tested AI systems are free from risks or biases, nor that they are completely “safe” or “ethical.” Third, AI Verify is designed to prevent organizations from unintentionally divulging sensitive information about their AI systems (such as their underlying code or training data). One key safeguard is that AI Verify is used by AI system developers and owners themselves to conduct self-testing, which allows the organization’s data and models to remain within the organization’s operating environment. 

    How does AI Verify work?

    AI Verify consists of two parts. The first is a Testing Framework, which references eleven internationally accepted AI ethics and governance principles, grouped into five pillars. The second is a Toolkit that organizations use to execute technical tests and to record process checks from the Testing Framework.

    AI Verify’s Testing Framework

    The five pillars and eleven principles in AI Verify’s Testing Framework, as well as their expected assessment, are:

    Pillar: Transparency on use of AI and AI systems – disclosing to individuals that AI is used in a technological system, so that they can be aware and make informed choices about whether to use the AI-enabled system.
    - Transparency: Providing appropriate information to individuals impacted by AI systems.
      Assessment: Process checks of documentary evidence (e.g., company policy and communication collaterals) providing appropriate information to individuals who may be impacted by the AI system. The information includes (subject to the need to avoid compromising IP, safety, and system integrity) the use of AI in the system, its intended use, limitations, and risk assessments.

    Pillar: Understanding how an AI model reaches a decision – allowing individuals to understand the factors contributing to an AI model’s output, while also ensuring output consistency and accuracy in similar conditions.
    - Explainability: Understanding and interpreting the decisions and output of an AI system.
      Assessment: A combination of technical tests and process checks. Technical tests are conducted to identify the factors contributing to an AI model’s output. Process checks include verifying documentary evidence of the considerations given to the choice of models, such as the rationale, risk assessments, and trade-offs of the AI model.
    - Repeatability / reproducibility: Ensuring consistency in AI output by being able to replicate an AI system, either internally or through a third party.
      Assessment: Process checks of documentary evidence, including evidence of AI model provenance, data provenance, and the use of versioning tools.

    Pillar: Ensuring safety and resilience of the AI system – helping individuals understand that the AI system will not cause harm, is reliable, and will perform according to its intended purpose even when encountering unexpected input.
    - Safety: Ensuring safety by conducting impact / risk assessments, and ensuring that known risks have been identified / mitigated.
      Assessment: Process checks of documentary evidence of materiality assessment and risk assessment, including how known risks of the AI system have been identified and mitigated.
    - Security: Ensuring the cyber-security of AI systems.
      Assessment: Presently not available.
    - Robustness: Ensuring that the AI system can still function despite unexpected input.
      Assessment: A combination of technical tests and process checks. Technical tests attempt to assess whether a model performs as expected even when provided with unexpected inputs. Process checks include verifying documentary evidence and reviewing factors that may affect the performance of the AI model, including adversarial attacks.

    Pillar: Ensuring fairness – evaluating whether the data used to train the AI model is sufficiently representative, and testing to ensure that the AI system will not unintentionally discriminate.
    - Fairness: Avoiding unintended bias, ensuring that the AI system makes the same decision even if a certain attribute is changed, and ensuring that the data used to train the model is representative.
      Assessment: Mitigation of unintended discrimination is assessed through a combination of technical tests and process checks. Technical tests check that the AI model does not produce biased results based on protected or sensitive attributes specified by the system owner, by checking the model output against the ground truth. Process checks include verifying documentary evidence that there is a strategy for selecting fairness metrics aligned with the desired outcomes of the AI system’s intended application, and that the definition of sensitive attributes is consistent with legislation and corporate values.
    - Data governance: Ensuring the source and quality of data by adopting good data governance practices when training AI models.
      Assessment: Presently not available.

    Pillar: Ensuring proper (human) management and oversight of the AI system – assessing human accountability and control in the development and/or deployment of AI systems, and whether the AI system is aimed at beneficial purposes for general society.
    - Accountability: Ensuring proper management oversight during AI system development.
      Assessment: Process checks of documentary evidence, including evidence of clear internal governance mechanisms for the proper management and oversight of the AI system’s development and deployment.
    - Human agency and oversight: Ensuring that the AI system is designed in a way that will not diminish the ability of humans to make decisions.
      Assessment: Process checks of documentary evidence that the AI system is designed in a way that will not reduce humans’ ability to make decisions or to take control of the system. This includes defining the role of humans in the oversight and control of the AI system, such as human-in-the-loop, human-over-the-loop, or human-out-of-the-loop.
    - Inclusive growth, societal and environmental well-being: Ensuring beneficial outcomes for people and the planet.
      Assessment: Presently not available.

    The actual Testing Framework has several key components:

    (a) Definitions: The Testing Framework provides easy-to-understand definitions for each of the AI principles. For example, explainability is defined as the “ability to assess the factors that led to (an) AI system’s decision, its overall behavior, outcomes and implications.”

    (b) Testable criteria: For each principle, a set of testable criteria is provided. These criteria are a mix of technical and/or non-technical (e.g. processes, procedures, or organizational structures) factors that contribute to the achievement of the desired outcomes of that governance principle.

    Using the example of explainability, two testable criteria are provided. A developer can run explainability methods to help users understand the drivers of the AI model. A developer can also demonstrate a development preference for AI models that can explain their decisions or that are interpretable by default.  

    (c) Testing process: For each testable criterion, AI Verify provides the processes or actionable steps to be carried out. The steps could be quantitative (such as statistical or technical tests) or qualitative (such as producing documented evidence during process checks). 

    For explainability, a technical test could involve empirically analyzing and determining feature contributions to a model’s output. A process-based test would be to document the rationale, risk assessments, and trade-offs of an AI model. 

    (d) Metrics: These are quantitative or qualitative parameters used to measure, or provide evidence for, each testable criterion.

    Using the explainability example above, the metric for determining feature contributions could be the contributing features of a model’s output as obtained from a technical tool (such as SHAP or LIME); a short illustrative sketch follows this list. The process-based metric could be documented evidence of the evaluations made when choosing the final model, such as risk assessments and trade-off weighing exercises.

    (e) Thresholds (where applicable): Where available, the Testing Framework will provide recognized values or benchmarks for selected metrics. Such values or benchmarks could be defined by regulators, industry associations, or other recognized standard-setting organizations. For the MVP model of AI Verify, thresholds are not provided given the rapid evolution of AI technologies, their use cases, as well as methods to test AI systems. Nevertheless, as the space of AI governance matures and the use of AI Verify increases, IMDA intends to collate and develop context-specific metrics and thresholds to be added to the Testing Framework.
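
    To illustrate the kind of technical test and metric described above for explainability, the short sketch below computes per-feature contributions with the open-source SHAP library on a toy scikit-learn model. It is a minimal sketch under assumed conditions; the dataset, model, and reporting step are illustrative and are not part of AI Verify’s Testing Framework or Toolkit.

```python
# Illustrative sketch only: computing feature contributions with SHAP as an
# explainability-style technical test. The dataset and model are assumptions
# for demonstration and do not reproduce AI Verify's actual tests.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes per-feature SHAP contributions for tree-based models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:200])   # shape: (200, n_features)

# Mean absolute contribution per feature, which could be recorded as
# documented evidence of which features drive the model's output.
mean_abs = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(X.columns, mean_abs), key=lambda t: t[1], reverse=True):
    print(f"{name}: {score:.2f}")
```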

    AI Verify’s Toolkit

    While AI Verify’s Toolkit is currently only available to organizations that have successfully registered for AI Verify’s MVP program, IMDA describes the Toolkit as a “one-stop” tool for organizations to conduct technical tests. Specifically, the Toolkit packages widely used open-source testing libraries. These include SHAP (SHapley Additive exPlanations) for explainability, the Adversarial Robustness Toolbox for robustness, and AIF360 and Fairlearn for fairness.

    Users of AI Verify can deploy the Toolkit within their internal environment. Users are guided by a user interface to navigate the testing process. For example, the Toolkit contains a “guided fairness tree” for users to identify the fairness metrics relevant to their use case. At the end, AI Verify produces a summary report that helps system developers and owners interpret test results. For process checks, the report provides a checklist stating whether the documentary evidence specified in the Testing Framework is present. The test results are then packaged into a Docker® container for easy deployment. 
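
    As an indication of what a fairness-oriented technical test using these libraries might look like, the sketch below uses Fairlearn (one of the open-source packages mentioned above) to compute a selection-rate gap and per-group accuracy on synthetic data. The data, model, sensitive attribute, and metrics chosen here are assumptions for demonstration; they do not reproduce AI Verify’s actual tests, guided fairness tree, or reports.

```python
# Illustrative sketch only: a fairness-style technical check using Fairlearn,
# one of the open-source libraries the AI Verify Toolkit packages. The data,
# model, sensitive attribute, and metrics below are assumptions for
# demonstration and do not reproduce AI Verify's actual tests or reports.
import numpy as np
from fairlearn.metrics import MetricFrame, demographic_parity_difference
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000
sensitive = rng.integers(0, 2, size=n)                 # toy protected attribute
X = np.column_stack([rng.normal(size=n), sensitive])   # feature set includes the attribute
y = (X[:, 0] + 0.5 * sensitive + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# Selection-rate gap between the groups defined by the sensitive attribute.
dpd = demographic_parity_difference(y, pred, sensitive_features=sensitive)

# Per-group accuracy, similar in spirit to checking model output against ground truth.
frame = MetricFrame(metrics=accuracy_score, y_true=y, y_pred=pred,
                    sensitive_features=sensitive)

print(f"Demographic parity difference: {dpd:.3f}")
print(frame.by_group)
```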

    3. Conclusion

    When IMDA released AI Verify, the wave of interest in generative AI seen today had yet to materialize. With the wave currently upon us, interest in demonstrating governance, testability and trustworthiness of AI systems has grown significantly. Initiatives like AI Verify appear poised to respond to this interest.

    Singapore has previously demonstrated its ability to contribute to global discourse and thought leadership on AI governance and regulation, namely through the Model Framework. The stakes for AI Verify are high, but so is the global need for such an initiative. To succeed, AI Verify will likely require greater recognition and adoption. This depends on several factors. First, the tool’s accessibility is critical: AI-driven organizations hoping to use AI Verify will need to be able to access it at little or no cost. Second, convincing organizations of its value is key. This will require IMDA to demonstrate that AI Verify is technically and procedurally sound, that it can be effectively used on more (and newer) kinds and sizes of AI models and datasets, and that it does not impinge on commercial sensitivities around proprietary AI models and datasets. Third, and perhaps most importantly, AI Verify must remain relevant to international regulatory frameworks. IMDA will need to ensure that AI Verify can continue to help organizations address and interoperate with key emerging global AI regulatory frameworks, such as the EU AI Act, Canada’s AI and Data Act, the NIST AI Risk Management Framework in the US, and even Singapore’s own Model Framework.