South Korea’s New AI Framework Act: A Balancing Act Between Innovation and Regulation
On 21 January 2025, South Korea became the first jurisdiction in the Asia-Pacific (APAC) region to adopt comprehensive artificial intelligence (AI) legislation. Taking effect on 22 January 2026, the Framework Act on Artificial Intelligence Development and Establishment of a Foundation for Trustworthiness (AI Framework Act or, simply, the Act) introduces specific obligations for “high-impact” AI systems in critical sectors, including healthcare, energy, and public services, as well as mandatory labeling requirements for certain applications of generative AI. The Act also provides substantial public support for private-sector AI development and innovation, including support for AI data centers, funding for projects that create and provide access to training data, and the encouragement of technological standardization to help SMEs and start-ups foster AI innovation.
In the broader context of South Korean public policies designed to support the advancement of AI, the Act is notable for its layered, transparency-focused approach to regulation, its moderate enforcement posture compared to the EU AI Act, and its significant public support intended to foster AI innovation and development. We cover these in Parts 2 to 4 below.
Key features of the law include:
Broad extraterritorial reach, applying to AI activities impacting South Korea’s domestic market or users;
Government support for AI development through infrastructure (AI data centers) and learning resources;
Focused oversight of “high-impact” AI systems in critical sectors such as healthcare, energy, and public services; providers of most AI systems, including those that are not high-impact, are not regulated, and the Act provides express carve-outs for AI used in national defense or security;
Transparency obligations for providers of generative AI products and services, including mandatory labeling of AI-generated content, and
A moderate enforcement approach with administrative fines up to KRW 30 million (approximately USD 21,000).
In Part 5, we compare the Act to the European Union (EU)’s AI Act (EU AI Act). While the AI Framework Act shares some common elements with the EU AI Act, including tiered classification and transparency mandates, South Korea’s regulatory approach differs in its simplified risk categorization (including the absence of prohibited AI practices), its comparatively lower financial penalties, and its establishment of initiatives and government bodies aimed at promoting the development and use of AI technologies. This comparison is intended to help practitioners understand and analyze the key commonalities and differences between the two laws.
Finally, Part 6 of this article places the Act within South Korea’s broader AI innovation strategy and discusses the challenge of regulatory alignment between the Ministry of Science and ICT (MSIT) and the country’s data protection authority, the Personal Information Protection Commission (PIPC), in South Korea’s evolving AI governance landscape.
1. Background
On 26 December 2024, South Korea’s National Assembly passed the Framework Act on Artificial Intelligence Development and Establishment of a Foundation for Trustworthiness (AI Framework Act or Act).
The AI Framework Act was officially promulgated on 21 January 2025 and will take effect on 22 January 2026, following a one-year transition period to prepare for compliance. During this period, MSIT will assist with the issuance of Presidential Decrees and other sub-regulations and guidelines to clarify implementation details.
South Korea was the first country in the Asia-Pacific region to introduce a comprehensive AI bill, in 2021: the Bill on Fostering Artificial Intelligence and Creating a Foundation of Trust. However, the legislative process faced significant hurdles, including political uncertainty surrounding the April 2024 general elections, which raised concerns that the bill could be scrapped entirely.
By November 2024, South Korea’s AI policy landscape had grown increasingly complex, with 20 separate AI governance bills introduced by different members since the National Assembly began its new term in June 2024. That month, the Information and Communication Broadcasting Bill Review Subcommittee conducted a comprehensive review of these AI-related bills and consolidated them into a single framework, leading to the passage of the AI Framework Act.
At its core, the AI Framework Act adopts a risk-based approach to AI regulation. In particular, it introduces specific obligations for high-impact AI systems and generative AI applications. The AI Framework Act also has extraterritorial reach: it applies to AI activities that impact South Korea’s domestic market or users.
This blog post examines the key provisions of the Act, including its scope, regulatory requirements, and implications for organizations developing or deploying AI systems.
2. The Act establishes a layered approach to AI regulation
2.1 Definitions lay the foundation for how different AI systems will be regulated under the Act
Article 2 of the Act provides three AI-related definitions.
First, AI is defined as “an electronic implementation of human intellectual abilities such as learning, reasoning, perception, judgment and language comprehension.”
Second, an AI system is defined as “an artificial intelligence-based system that infers results such as predictions, recommendations and decisions that affect real and virtual environments for a given goal with various levels of autonomy and adaptability.”
Third, AI technology is defined as “hardware, software technology, or utilization technology necessary to implement artificial intelligence.”
At the core of the Act’s layered approach is its definition of “high-impact AI” (which is subject to more stringent requirements). “High-impact AI” refers to AI systems “that may have a significant impact on or pose a risk to human life, physical safety, and basic rights” and that are utilized in critical sectors identified under the AI Framework Act, including energy, healthcare, nuclear operations, biometric data analysis, public decision-making, and education, or in other areas that have a significant impact on the safety of human life and body and the protection of basic rights, as prescribed by Presidential Decree.
The Act also introduces specific provisions for “generative AI.” The Act defines generative AI as AI systems that create text, sounds, images, videos, or other outputs by imitating the structure and characteristics of the input data.
The Act also defines an “AI Business Operator” as corporations, organizations, government agencies, or individuals conducting business related to the AI industry. The Act subdivides AI Business Operators into two sub-categories (which effectively reflect a developer-deployer distinction):
“AI Development Business Operators” that develop and provide AI systems, and
“AI Utilization Business Operators” that offer products or services using AI developed by AI Development Business Operators.
Currently, as will be covered in more detail below, the obligations under the Act apply to both categories of AI Business Operators, regardless of their specific roles in the AI lifecycle. For example, transparency-related obligations apply to all AI Business Operators, regardless of whether they are involved in the development and/or deployment phases of AI systems. It remains to be seen if forthcoming Presidential Decrees to implement the Act will introduce more differentiated obligations for each type of entity.
While the Act expressly excludes AI used solely for national defense and security from its scope, the Act applies to both government agencies and public bodies when they are involved in the development, provision, or use of AI technology in a business-related context. More broadly, the Act also assigns the government a significant role in shaping AI policy, providing support, and overseeing the development and use of AI.
2.2 The AI Framework Act has broad extraterritorial reach
Under Article 4(1), the Act applies not only to acts conducted within South Korea but also to those conducted abroad that impact South Korea’s domestic market, or users in South Korea. This means that foreign companies providing AI systems or services to users in South Korea will be subject to the Act’s requirements, even if they lack a physical presence in the country.
However, Article 4(2) of the Act introduces a notable exemption for AI systems developed and deployed exclusively for national defense or security purposes. These systems, which will be designated by Presidential Decree, fall outside the Act’s regulatory framework.
For global organizations, the Act’s jurisdictional scope raises key compliance considerations. Companies will likely need to assess whether their AI activities fall under South Korea’s regulatory reach, particularly if they:
Offer AI-powered services to South Korean users;
Process data or make algorithmic decisions affecting South Korean businesses or individuals; or
Indirectly impact the Korean market through AI-driven analytics or decision-making.
This last criterion appears to be a novel policy proposition and differentiates the AI Framework Act from the EU AI Act, potentially making it broader in reach. This is because it does not seem necessary for an AI system to be placed on the South Korean market for the condition to be triggered, but simply for the AI-related activity of a covered entity to “indirectly impact” the South Korean market.
2.3 The Act establishes a multi-layered approach to AI safety and trustworthiness requirements
(i) The Act emphasizes oversight of high-impact AI but does not prohibit particular AI uses
For most AI Business Operators, compliance obligations under the AI Framework Act are minimal. There are, however, noteworthy obligations – relating to transparency, safety, risk management and accountability – that apply to AI Business Operators deploying high-impact AI systems.
Under Article 33, AI Business Operators providing AI products and services must “review in advance” (presumably, before the relevant product or service is released into a live environment or goes to market) whether their AI systems are considered “high-impact AI.” Businesses may request confirmation from the MSIT on whether their AI system is to be considered “high-impact AI.”
Under Article 34, organizations that offer high-impact AI, or products or services using high-impact AI, must meet much stricter requirements, including:
1. Establishing and operating a risk management plan.
2. Establishing and operating a plan to provide explanation for AI-generated results within technical limits, including key decision criteria and an overview of training data.
3. Establishing and operating “user protection measures.”
4. Ensuring human oversight and supervision of high-impact AI.
5. Preserving and storing documents that demonstrate measures taken to ensure AI safety and reliability.
6. Following any additional requirements imposed by the National AI Committee (established under the Act) to enhance AI safety and reliability.
Under Article 35, AI Business Operators are also encouraged to conduct impact assessments for high-impact AI systems to evaluate their potential effects on fundamental rights. While the language of the Act (i.e., “shall endeavor to conduct an impact assessment”) suggests that these assessments are not mandatory, the Act introduces an incentive: where a government agency intends to use a product or service using high-impact AI, the agency is to prioritize AI products or services that have undergone impact assessments in public procurement decisions. Legislatively stipulating the use of public procurement processes to incentivize businesses to conduct impact assessments appears to be a relatively novel move and arguably reflects the innovation-risk duality seen across the Act.
(ii) The Act prioritizes user awareness and transparency for generative AI products and services
The AI Framework Act introduces specific transparency obligations for generative AI providers. Under Article 31(1), AI Business Operators offering high-impact or generative AI-powered products or services must notify users in advance that the product or service utilizes AI. Further, under Article 31(2), AI Business Operators providing generative AI as a product or service must also indicate that the output was generated by generative AI.
Beyond general disclosure, Article 31(3) of the Act mandates that where an AI Business Operator uses an AI system to provide virtual sounds, images, video or other content that are “difficult to distinguish from reality,” the AI Business Operator must “notify or display the fact that the result was generated by an (AI) system in a manner that allows users to clearly recognize it.”
However, the provision also provides flexibility for artistic and creative expression: it permits notifications or labeling to be displayed in ways that do not hinder creative expression or appreciation. This approach appears aimed at balancing the creative utility of generative AI with transparency requirements. Technical details, such as how notification or labeling should be implemented, will be prescribed by Presidential Decree.
(iii) The Act establishes other requirements that apply when certain thresholds are met
The following requirements focus on safety measures and operational oversight, including specific provisions for foreign AI providers.
Under Article 32, AI Business Operators that operate AI systems whose computational learning capacity exceeds prescribed thresholds are required to identify, assess, and mitigate risks throughout the AI lifecycle, and establish a risk management system to monitor and respond to AI-related safety incidents. AI Business Operators must document and submit their findings to the MSIT.
For accountability, Article 36 provides that AI Business Operators that lack a domestic address or place of business and that exceed certain user-number or revenue thresholds (to be prescribed) must appoint a “domestic representative” with an address or place of business in South Korea. The details of the domestic representative must be provided to the MSIT.
These domestic representatives take on significant responsibilities, including:
Submitting safety measure implementation results;
Managing high-impact AI confirmation processes; and
Supporting the implementation of safety and trustworthiness measures.
3. The Act grants the MSIT significant investigative and enforcement powers
3.1 The legislation empowers the MSIT with broad authority to investigate potential violations of the Act
Under Article 40 of the Act, the MSIT is empowered to investigate businesses that it suspects of breaching any of the following requirements under the Act:
Notification and labeling requirements for generative AI outputs;
Implementation of safety measures and submission of compliance results for AI systems exceeding computational thresholds set by Presidential Decree, and
Adherence to safety and reliability standards for high-impact AI systems.
When potential breaches are identified, the MSIT may carry out necessary investigations, including the authority to conduct on-site investigations and to compel AI Business Operators to submit relevant data. During these inspections, authorized officials can examine business records, operational documents, and other critical materials, following established administrative investigation protocols.
If violations are confirmed, the MSIT can issue corrective orders, requiring businesses to immediately halt non-compliant practices and implement necessary remediation measures.
3.2 The Act takes a relatively moderate approach to penalties compared to other global AI regulations
Under Article 43 of the Act, administrative fines of up to KRW 30 million (approximately USD 20,707) may be imposed for:
Failure to comply with corrective or cease-and-desist orders issued by the MSIT.
Non-fulfillment of notification obligations related to high-impact AI or generative AI systems.
Failure to designate a required domestic representative, as mandated for certain foreign AI providers operating in South Korea.
This enforcement structure caps fines at lower amounts than other global AI regulations.
4. The Act promotes the development of AI technologies through strategic support for data infrastructure and learning resources
The MSIT is responsible for developing comprehensive policies to support the entire lifecycle of AI training data, ensuring that businesses have access to high-quality datasets essential for AI development. To achieve this, the Act mandates government-led initiatives to:
Support the production, collection, management, distribution, and utilization of AI training data.
Select and fund projects that generate and provide training data.
Establish an integrated system for managing and providing AI training data to the private sector.
A key initiative under the Act can be found in Article 25, which provides for the promotion of policies to establish and operate AI Data Centers. Under Article 25(2), the South Korean government may provide administrative and financial support to facilitate the construction and operation of data centers. These centers will provide infrastructure for AI model training and development, ensuring that businesses of all sizes – including small and medium-sized enterprises (SMEs) – have access to these resources.
The Act also promotes the advancement and safe use of AI by encouraging technological standardization (Articles 13 and 14), supporting SMEs and start-ups, and fostering AI-driven innovation. It also facilitates international collaboration and market expansion while establishing a framework for AI testing and verification (Articles 13 and 14). Together, these measures aim to strengthen South Korea’s broader AI ecosystem and ensure its responsible development and deployment.
5. Comparing the approaches of South Korea’s AI Framework Act and the EU’s AI Act reveals both convergences and divergences
As South Korea is only the second jurisdiction globally to enact comprehensive national AI regulation, comparing its AI Framework Act with the EU AI Act helps illuminate both its distinctive features and its place in the emerging landscape of global AI governance. As many companies will need to navigate both frameworks, understanding their similarities and differences is essential for global compliance strategies.
South Korea’s AI Framework Act is the first omnibus AI regulation in the APAC region. The South Korean model is notable for establishing an alternative approach to AI regulation: one that seeks to balance the promotion of AI innovation, development, and use with safeguards for high-impact AI.
6.1 Though the Act establishes a framework for direct regulation of AI, several critical areas require further definition through Presidential Decree
The areas that are expected to be clarified through Presidential Decree include:
Thresholds for computational capacity, which determine when AI systems face additional obligations;
Revenue and user criteria that trigger domestic representative requirements for foreign AI Business Operators; and
Detailed criteria for identifying high-impact AI systems, ensuring consistent risk-based regulation.
The interpretation and implementation of these provisions will significantly shape compliance expectations, influencing how AI businesses—both domestic and international—navigate the regulatory landscape.
6.2 The Act must also be considered in the context of South Korea’s broader efforts to position the country as a leader in AI innovation
The first – and arguably most significant – of these efforts is a bill recently introduced by members of the National Assembly that seeks to amend the Personal Information Protection Act (PIPA) by creating a new legal basis for the processing of personal information specifically for the development and use of AI. The bill introduces a new Article 28-12, which would permit the use of personal information beyond its original purpose of collection specifically for the development and improvement of AI systems. This amendment would allow such processing provided that:
The nature of the data is such that anonymizing or pseudonymizing it would make it difficult to use in AI development;
Appropriate technical, administrative, and physical safeguards are implemented;
The purpose of AI development aligns with objectives such as promoting public interest, protecting individuals or third parties, or fostering AI innovation;
There is minimal risk of harm to data subjects or third parties, and
The PIPC has confirmed that each of the above requirements has been met (note that the PIPC may also attach further conditions, if necessary).
Second, South Korea’s government is also reportedly exploring other legal reforms to its data protection law to facilitate the development of AI. According to PIPC Chairman Haksoo Ko’s recent interview with a global regulatory news outlet, these reforms could potentially include reforming the “legitimate interests” basis for processing personal information under the PIPA.
South Korea’s Minister for Science and ICT Yoo Sang-im has also reportedly urged the National Assembly to swiftly pass a law on the management and use of government-funded research data to advance scientific and technological development in the AI era.
Third, while creating these pathways for innovation, the PIPC has simultaneously been developing mechanisms to provide oversight over AI systems. For instance, the PIPC’s comprehensive policy roadmap for 2025 (Policy Roadmap) announced in January 2025 outlines an ambitious regulatory framework for AI governance and data protection. In particular, the Policy Roadmap envisions the implementation of specialized regulatory and oversight provisions for the use of unmodified personal data in AI development.
The Policy Roadmap is supplemented by the PIPC’s Work Direction for Investigations in 2025 (Work Direction). Published in January 2025, the Work Direction includes measures intended to provide additional oversight over AI services, including conducting preliminary onsite inspections of AI-powered services, such as AI agents, and reviewing the use of personal information in AI-based legal and human resources services.
A possible instance of this additional emphasis on oversight arose in February 2025, when the PIPC announced a temporary suspension of new downloads of the Chinese generative AI application DeepSeek over concerns about potential breaches of the PIPA.
Fourth, South Korea is seeking to strengthen the accountability of foreign organizations. The PIPC expressed its support for a bill amending the PIPA’s domestic representative system for foreign organizations; the amendment was subsequently passed and took effect on April 1, 2025. It addresses a significant gap in the previous system, which allowed foreign companies to designate unrelated third parties as their domestic agents in South Korea, often resulting in what one lawmaker described as “formal” compliance without meaningful accountability.
The new requirements mandate that foreign companies with established business units in South Korea designate those local entities as their representatives, while imposing explicit obligations on foreign headquarters to properly manage and supervise these domestic agents. The amendment also establishes sanctions for violations of these requirements, including fines of up to KRW 20 million (approximately USD 14,000).
Fifth, South Korea is seeking to position itself as a global leader in privacy and AI governance through international cooperation and thought leadership. As South Korea prepares to host the annual Global Privacy Assembly in September 2025 – an event involving participants from 95 countries – the PIPC is positioning itself as a bridge between different regional approaches to data protection and AI governance.
6.3 However, these efforts highlight a persistent challenge: ensuring clear alignment between key regulatory authorities in South Korea’s AI governance landscape
While the AI Framework Act assigns primary responsibility for AI governance to the MSIT, it does not appear to address or acknowledge the PIPC’s role in the regulatory landscape. This creates a situation in which two parallel AI regulators – one de jure and the other de facto – will likely continue to operate: the MSIT overseeing general AI system safety and trustworthiness under the AI Framework Act, and the PIPC maintaining its oversight of personal data processing in AI systems under the PIPA.
As a result, organizations developing or deploying AI systems in South Korea may need to navigate compliance requirements from both authorities, particularly when their AI systems process personal data. How this dual regulatory structure evolves and whether a more unified governance approach emerges will be a critical factor in determining the success of South Korea’s ambitious AI strategy in the coming years.
Despite these practical challenges, South Korea’s approach to AI regulation offers a potential governance model for other APAC jurisdictions. Regardless, the success of the Act will ultimately depend on how effectively it balances its dual objectives — fostering AI innovation while ensuring responsible deployment. As AI governance evolves globally, the South Korean experience will provide valuable insights for policymakers, regulators, and industry stakeholders worldwide.
Note: The summary of the AI Framework Act above is based on an English machine translation, which may contain inaccuracies. The information should not be considered legal advice; for specific legal guidance, please consult a qualified lawyer practicing in South Korea.
The authors would like to thank Josh Lee Kok Thong, Dominic Paulger, and Vincenzo Tiani for their contributions to this post.
Little Rock, Minor Rights: Arkansas Leads with COPPA 2.0-Inspired Law
With thanks to Daniel Hales and Keir Lamont for their contributions.
Shortly before the close of its 2025 session, the Arkansas legislature unanimously passed HB 1717, the Arkansas Children and Teens’ Online Privacy Protection Act. As the name suggests, Arkansas modeled this legislation after Senator Markey’s federal “COPPA 2.0” proposal, which passed the U.S. Senate as part of a broad child online safety package last year. Presuming enactment by Governor Sarah Huckabee Sanders, HB 1717 will take effect on July 1, 2026. The Arkansas law, or “Arkansas COPPA 2.0,” establishes privacy protections for teens aged 13 to 16, introduces substantive data minimization requirements including prohibitions on targeted advertising, and provides teens with new rights to access, delete, and correct personal information. The legislature also considered an Arkansas version of the federal Kids Online Safety Act, but that proposal ultimately failed, with the bill’s sponsor noting some uncertainties about its constitutionality.
What to know about Arkansas HB 1717:
Expanded protections to teens: The original Children’s Online Privacy Protection Act of 1998 establishes national privacy protections for children under 13. It requires companies to give notice and obtain verifiable parental consent before data from children is collected. Arkansas COPPA 2.0 goes further by covering not only children but also teens 13 to 16. In doing so, Arkansas will join just New York in adopting specific privacy protections for children and teens in the absence of a comprehensive law protecting the data of all residents.
Similar scope to federal COPPA – mostly: The law applies to “operators,” defined as entities that operate or provide a website, online service, online application, or mobile application that is either “directed at” children or teens or that has actual knowledge that it is collecting personal information from a child or teen. Notably, Arkansas COPPA 2.0 exempts (but does not define) “interactive gaming platforms” from coverage if they comply with the requirements of the federal COPPA statute, even though, as mentioned above, the federal law does not provide protections for teens.
Prohibiting targeted advertising: HB 1717 prohibits operators from collecting personal information from a child or teen for targeted advertising or allowing another person to collect, use, disclose, or maintain this information for targeted advertising to children or teens. The framework’s definition of “targeted advertising” includes common carveouts for activities such as contextual advertising and processing data to measure advertising performance, reach, and frequency.
Right to correction: The federal COPPA does not create a right to challenge the accuracy of personal information and have inaccuracies corrected—a right commonly found in other privacy frameworks and a gap that Arkansas COPPA 2.0 fills.
Age verification disclaimer: The law clarifies that there is no requirement to implement age gating or age verification. The federal COPPA already does not require age verification, but this clarification may be in response to an Arkansas social media age verification law from 2023 that was declared unconstitutional.
Vestigial terms? There are various drafting quirks in Arkansas COPPA 2.0. For example, the law defines the term “social media platform” but does not further use the term in any way. Like the federal COPPA, the law uses terms like “personal information” and “operator,” but in a few instances switches to “personal data” and “controller,” perhaps from borrowing language from more modern privacy laws like the Virginia Consumer Data Protection Act.
The substantive data minimization trend continues
While the federal COPPA framework is largely focused on consent, former Commissioner Slaughter noted in 2022 that people “may be surprised to know that COPPA provides for perhaps the strongest, though under-enforced, data minimization rule in US privacy law.” Arkansas builds on these requirements and follows the recent shift towards substantive data minimization with a complex web of layered requirements that operators must satisfy to use both child and teen data:
Collecting child and teen data must be consistent with the “context” of a particular service or the “relationship” between an operator and a child or teen user. The provision goes on to say “including without limitation collection that is necessary to… provide a product or service” requested by the child, teen, or parent of a child or teen. It is unclear how the “consistent with the context” language modifies the rest of this requirement or whether it may be unnecessary.
Operators must also obtain verifiable parental consent to process child data.
Operators must obtain either verifiable parental consent or consent from a teen to process teen data, unless the processing is for one of seven permitted purposes, such as conducting internal business operations or preventing security incidents.
Finally, Arkansas COPPA 2.0 limits retention of child or teen data to no longer than reasonably necessary to fulfill a transaction or provide a requested service, as required for the safety or integrity of the service, or as authorized by law.
In practice, the interaction between these distinct requirements may raise difficult questions of statutory interpretation.
Differences from federal COPPA 2.0
As originally introduced, Arkansas’s bill was nearly identical to last year’s federal COPPA 2.0 bill. Arkansas’s framework went through various, largely business-friendly amendments (and one bill number switch) during its legislative journey. Though HB 1717 maintains the same general framework as COPPA 2.0, it includes several important divergences:
No reliance on existing COPPA guidance and rule: It is important to remember that federal COPPA 2.0 amends an existing statute, which is supported by extensive Federal Trade Commission (FTC) guidance and an FTC rule that is periodically updated. An underlying difference between the two frameworks is that Arkansas COPPA 2.0 declines to reference these existing resources to provide further clarity on what certain terms mean or what compliance obligations might look like. A key example is that there is no definition of what is considered “directed at” a teen. The FTC has issued guidance on factors for assessing what is “directed to children,” but it is unclear whether these factors would apply when assessing what is directed at a teen in Arkansas, particularly given that there is likely to be overlap between what is “teen directed” and what is “adult directed.”
Narrower knowledge standard: One of the most hotly debated aspects of youth privacy is the “knowledge standard”: under what circumstances a business is required to apply heightened protections for young users, and what obligations a service has to determine the age of its users. Arkansas COPPA 2.0 maintains a narrow “actual knowledge” standard concerning teens. In practice, this means companies will only be in scope of the law when they actually know they are collecting information from a teen. As passed, HB 1717 rejects COPPA 2.0’s broader “actual knowledge or knowledge fairly implied on the basis of objective circumstances” approach, which inches closer to a constructive knowledge standard.
“Consent” vs. “Verifiable consent” (and when it’s needed): The federal COPPA framework requires “verifiable” parental consent, defined as affirmative express consent “reasonably designed in light of available technology to ensure that the person giving the consent is the child’s parent.” Arkansas COPPA 2.0 abandons the “verifiable” modifier but still appears to establish more prescriptive requirements for what constitutes valid consent than typical state privacy laws. Curiously, the section on obtaining consent appears to apply only when an operator has actual knowledge that it is collecting personal information from a teen, rather than also to services directed at teens. Rather than prescribe specific methods for obtaining consent, Arkansas borrows from the COPPA Rule and allows for “any reasonable effort, taking into consideration available technology.”
Narrower targeted advertising restriction: Arkansas’s “targeted advertising” definition is substantially similar to COPPA 2.0’s “individual-specific advertising.” However, Arkansas explicitly allows for targeted advertising to minors based solely on data collected in a first-party context, while the federal proposal would prohibit this type of advertising to minors.
Could COPPA preempt the Arkansas law?
One question likely to emerge from Arkansas COPPA 2.0 is whether certain provisions, or the entire law, may be subject to federal preemption under the existing COPPA statute. COPPA includes an express preemption clause that prohibits state laws from imposing requirements that are inconsistent with COPPA. This is relevant in two ways as the Arkansas law will both (1) extend protections to teens and (2) introduce new substantive limitations on the use of children’s and teens’ data, such as limits on targeted advertising and strict data minimization requirements, that go beyond COPPA’s scope.
The question of COPPA preemption was recently explored in Jones v. Google, with the FTC filing an amicus brief arguing that state laws that “supplement” or “require the same thing” as COPPA are not inconsistent with it. The FTC references the Congressional record from when COPPA was contemplated, arguing that “Congress viewed ‘the States as partners’. . . rather than as potential intruders on an exclusively federal arena,” and that “the state law protections at issue ‘complement–rather than obstruct–Congress’ ‘full purposes and objectives in enacting the statute.’” Also worth keeping in mind is that the FTC has been in the process of finalizing an update to the COPPA Rule, which could introduce additional inconsistencies, or at least compliance confusion, between the new final Rule and Arkansas COPPA 2.0 on key terms such as the definition of personal information or whether targeted advertising is allowed with consent.
A trend to watch?
The passage of Arkansas COPPA 2.0 may signal an emerging trend towards a potentially more constitutionally resilient approach to protecting children and teens online. Unlike age-appropriate design codes or social media age verification mandates, which have faced significant First Amendment challenges, Arkansas COPPA 2.0 takes a more targeted approach focused on privacy and data governance, rather than access, online safety, or content. Questions of preemption and drafting quirks aside, this approach may be on firmer ground by focusing on data protection practices and building on a longstanding federal privacy framework. As states explore new ways to safeguard youth online without triggering constitutional pitfalls, privacy-focused legislation modeled on COPPA standards could become a popular path forward.
Chatbots in Check: Utah’s Latest AI Legislation
With the close of Utah’s short legislative session, the Beehive State is once again an early mover in U.S. tech policy. In March, Governor Cox signed into law several bills related to the governance of generative artificial intelligence systems. Among them, SB 332 and SB 226 amend Utah’s 2024 Artificial Intelligence Policy Act (AIPA), while HB 452 establishes new regulations for mental health chatbots.
The Future of Privacy Forum has released a chart detailing key elements of these new laws.
Amendments to the Artificial Intelligence Policy Act
SB 332 and SB 226 update Utah’s Artificial Intelligence Policy Act (SB 149), which took effect May 1, 2024. The AIPA requires entities using consumer-facing generative AI services to interact with individuals within regulated professions (those requiring a state-granted license such as accountants, psychologists, and nurses) to disclose that individuals are interacting with generative AI, not a human. The Act was initially set to automatically repeal on May 7, 2025.
SB 332 extends the AIPA’s expiration date by two years, ensuring its provisions remain in effect until July 2027, while SB 226 narrows the law’s scope by limiting generative AI disclosure requirements only to instances when directly asked by a consumer or supplier, or during a “high-risk” interaction. The bill defines “high-risk” interactions to include instances where a generative AI system collects sensitive personal information and involves significant decisionmaking, such as in financial, legal, medical, and mental health contexts. SB 226 includes a safe harbor for AI suppliers if they provide clear disclosures at the start or throughout an interaction, ensuring users are aware they are engaging with AI.
Mental Health Chatbots
Though HB 452 does not directly amend the AIPA, it is closely linked to the broader AI governance framework established by the law. As part of AIPA, Utah established a regulatory sandbox program and created the Office of Artificial Intelligence Policy to oversee AI governance and innovation in the state. One of the AI Office’s early priorities has been assessing the role of AI-driven mental health chatbots in licensed medical practice.
To address concerns surrounding these chatbots, the AI Office convened stakeholders to explore potential regulatory approaches. These discussions, along with the state’s first regulatory mitigation agreement under the AIPA’s sandbox program involving a student-focused mental health chatbot, helped shape the passage of HB 452. The bill establishes new rules governing the use of AI-driven mental health chatbots in Utah, including:
Scope: Applies to mental health chatbots, defined as an AI technology that uses generative AI to engage in conversations that a reasonable person would believe can provide mental health therapy.
Business Obligations: Suppliers of mental health chatbots must refrain from advertising any products or services during user interactions unless explicitly disclosed. Suppliers are also prohibited from selling or sharing individually identifiable health information gathered from users.
Enforcement: Suppliers have an affirmative defense if they maintain proper documentation and develop a detailed policy outlining key safeguards. Among other topics, this policy must describe the involvement of licensed mental health professionals in chatbot development, processes for regular testing and review of chatbot performance, and measures to prevent discriminatory treatment of users.
Utah’s latest round of legislation reflects a continued focus on targeted and risk-based regulation for emerging AI systems. Building on the foundation set by the 2024 Artificial Intelligence Policy Act, the new laws reflect an emerging national trend towards affirmatively supporting AI development and innovation while focusing regulatory interventions on particularly high-risk sectors such as healthcare. Utah’s approach to balancing innovation, regulation, and consumer protection in the AI space may produce lessons and influence legislators in other states.
FPF Publishes Infographic, Readiness Checklist To Support Schools Responding to Deepfakes
Today, the Future of Privacy Forum (FPF) released an infographic and readiness checklist to help schools better understand and prepare for the risks posed by deepfakes. Deepfakes are realistic, synthetic media, including images, videos, audio, and text, created using a type of Artificial Intelligence (AI) called deep learning. By manipulating existing media, deepfakes can make it appear as though someone is doing or saying something that they never actually did.
Deepfakes, while relatively new, are quickly becoming prevalent in K-12 schools. Schools have a responsibility to create a safe learning environment, and a deepfake incident – even if it happens outside of school – poses real risks to that environment, including bullying and harassment, the spread of misinformation and disinformation, personal safety and privacy concerns, and broken trust.
FPF’s infographic describes the different types of deepfakes – video, text, image, and audio – and the varied risks and considerations posed by each in a school setting, from the potential for fabricated phone calls and voice messages impersonating teachers to sharing forged, non-consensual intimate imagery (NCII).
“Deepfakes create complicated ethical and security challenges for K-12 schools that will only grow as the technology becomes more accessible and sophisticated, and the resulting images harder to detect,” said Jim Siegl, Senior Technologist with FPF’s Youth & Education Privacy team. “Schools should understand the risks, their responsibilities and protocols in place to respond, and how they will protect students, staff, and administrators while addressing an incident.”
FPF has also developed a readiness checklist to support schools in assessing and preparing response plans. The checklist outlines a series of considerations for school leaders, from the need for education and training, to determining how existing technology, policies, and procedures might apply, to engaging legal counsel and law enforcement.
The infographic maps out the various stages of a school’s response to an example scenario – a student reporting that they received a sexually explicit photo of a friend and that the image is circulating among a group of students – inviting school leaders to consider the following:
How can your school leverage internal investigative tools or processes used for other technology violations?
What process does your school use to reduce distribution, ensure the privacy of all students involved in the investigation, and provide appropriate support to the targeted individual?
How might the potential of a deepfake impact the investigation and response?
What policies and procedures does your school have that may apply?
What policies does your school have to ensure students’ privacy and minimize reputational harm when communicating?
As an additional resource for school leaders and policymakers navigating the rapid deployment of AI and related technologies in schools, FPF has developed an infographic highlighting its varied use cases in an educational setting. While deepfakes are a new and evolving challenge, edtech tools using AI have been in schools for years.
FPF Privacy Papers for Policymakers: A Celebration of Impactful Privacy Research and Scholarship
The Future of Privacy Forum (FPF) hosted its 15th Privacy Papers for Policymakers (PPPM) event at its Washington, D.C., headquarters on March 12, 2025. This prestigious event recognized six outstanding research papers that offer valuable insights for policymakers navigating the ever-evolving landscape of privacy and technology. The evening featured engaging discussions and a shared commitment to advancing informed policymaking in digital privacy.
Daniel Hales, FPF Policy Fellow, kicked off the event as the emcee and recognized the contributions of FPF Board President Alan Raul and Board Secretary-Treasurer Debra Berlyn, along with the FPF staff who helped organize the gathering. Alan Raul, in his opening remarks, emphasized the significance of privacy scholarship and its relevance to policymakers worldwide. He noted that the PPPM event has, for 15 years, successfully brought together scholars, regulators, and industry leaders to discuss privacy research with real-world implications.
Lee Matheson, FPF Deputy Director for Global Privacy, opened the discussion by introducing Professor Mark Jia (Georgetown University Law Center), who explored the evolution of privacy law in China. His paper, Authoritarian Privacy, challenges the notion that privacy is solely a Western concept and argues that China’s privacy framework has been shaped not only by state interests but also by public concerns. Professor Jia discussed the role of the Cyberspace Administration of China (CAC) and how privacy regulations have been influenced by social unrest and legitimacy concerns within the government. He emphasized that China’s Personal Information Protection Law (PIPL) is enforceable and not merely symbolic. Their discussion also touched on public “flashpoints” that have prompted government responses and the broader implications for understanding regulatory trends in authoritarian regimes.
Professor Mark MacCarthy (Georgetown University) introduced Alice Xiang (Sony AI) to discuss her paper Mirror, Mirror, on the Wall, Who’s the Fairest of Them All?, which examines algorithmic bias in artificial intelligence models. Ms. Xiang’s research critiques the assumption that fair data sets automatically lead to fair AI outcomes and highlights the challenges in defining fairness. She noted that while engineers often bear the responsibility of addressing bias, broader policy frameworks are needed. Their discussion explored the tension between AI neutrality and the necessity for companies to engage with ethical and social justice considerations. Ms. Xiang argued that AI systems mirror existing societal inequalities rather than solve them and called for stronger regulatory oversight to ensure transparency and accountability in AI decision-making.
Next, Jocelyn Aqua (PwC) conversed with Miranda Bogen (Center for Democracy and Technology), whose paper Navigating Demographic Measurement for Fairness and Equity addresses the paradox of measuring fairness in AI while protecting individuals’ privacy. Ms. Bogen categorized fairness assessment into three key areas: measuring disparities, selecting appropriate metrics, and implementing mitigation strategies. She pointed out that privacy laws like GDPR and CCPA create barriers to demographic data collection, complicating efforts to assess bias in AI systems. The conversation emphasized the need for alternative privacy-preserving methods, such as statistical inference and qualitative analysis, to reconcile fairness assessments with privacy protections. Bogen called for policymakers to establish clearer guidelines that allow for responsible demographic measurement while ensuring compliance with privacy laws.
The discussion then turned to Brenda Leong (ZwillGen), who introduced Tom Zick (Orrick, Herrington & Sutcliffe LLP) and Tobin South (Stanford University), two of the co-authors of the paper, Personhood Credentials: Artificial intelligence and the value of privacy-preserving tools to distinguish who is real online. Their paper explores the concept of “personhood credentials,” proposing a decentralized approach to verifying online identities while balancing security and privacy. The authors highlighted the risks posed by AI-driven identity fraud and the need for robust authentication mechanisms that protect user privacy. The conversation covered potential issuers of personhood credentials, including governments and private organizations, and the challenges of industry-wide adoption. Ultimately, the paper argues for the importance of developing privacy-first verification solutions that minimize data exposure while maintaining trust in digital interactions.
Turning to another critical issue, Professor Daniel J. Solove (George Washington University Law School) discussed his paper (co-authored by Boston University Professor Woodrow Hartzog) The Great Scrape: The Clash Between Scraping and Privacy with Jennifer Huddleston (Cato Institute). Professor Solove examined the legal and ethical complexities of data scraping, arguing that while scraping has long existed in a legal gray area, the rise of AI has heightened privacy concerns. He challenged the perception that publicly available data is free for unrestricted use, noting that privacy laws are evolving to address these issues. The discussion explored potential regulatory solutions, emphasizing the importance of distinguishing between beneficial scraping and harmful practices that exploit personal data. Professor Solove advocated for a public interest standard to determine when scraping should be permissible and called for clearer legal frameworks to protect individuals from data misuse.
In the last discussion, Professor James C. Cooper (Antonin Scalia Law School – George Mason University) joined Professor Alicia Solow-Niederman (George Washington University Law School) to discuss her paper The Overton Window and Privacy Enforcement. Professor Solow-Niederman explained how internal norms, congressional oversight, judicial rulings, and public sentiment collectively shape the Federal Trade Commission’s (FTC) approach to privacy enforcement. The conversation also highlighted recent cases where the FTC has expanded its enforcement scope, including actions against data brokers and algorithmic decision-making. The paper argues that policymakers need to balance their legal authority with the evolving public expectations to ensure effective privacy enforcement.
John Verdi, FPF’s Senior Vice President for Policy, closed the event by thanking the winning authors, discussants, event team, and FPF’s Daniel Hales for their contributions. He highlighted FPF’s role in bringing together academia, policy, and industry experts to promote meaningful discussions on privacy.
FPF Releases Report on the Adoption of Privacy Enhancing Technologies by State Education Agencies
The Future of Privacy Forum (FPF) released a landscape analysis of the adoption of Privacy Enhancing Technologies (PETs) by State Education Agencies (SEAs). As agencies face increasing pressure to leverage sensitive student and institutional data for analysis and research, PETs offer a potential solution: advanced technologies designed to protect data privacy while maintaining the utility of analytical results.
FPF worked with AEM Corporation to conduct a landscape analysis, including an overview of current PETs adoption, current challenges, and considerations for enhancing data protection measures. The landscape analysis, first previewed in a late 2024 webinar and expert panel discussion, evaluated the organizational readiness and critical use cases for PETs within SEAs and the broader education sector, ultimately highlighting the need to raise awareness of what PETs are and what they are not, the range of available types of PETs, their potential use cases, and considerations for the effective adoption and sustainable implementation of these technologies.
“Intentional PETs implementation can boost community trust, enhance data analysis, and effectively ensure critical privacy protections,” said Jim Siegl, FPF Senior Technologist for Youth & Education Privacy. “But as our landscape analysis highlights, despite the advances PETs offer to SEAs in utilizing the data they steward, a gap persists in applying these technologies and realizing their potential benefits.”
Key findings outlined in the report include:
PETs are not one-size-fits-all solutions but are evolving tools aimed at enabling the sustainable utility of data without sacrificing confidentiality or security.
There is a significant gap in technical knowledge relating to PETs.
There is a lack of awareness of relevant use cases surrounding PETs among practitioners.
Successful PET implementation requires substantial investment in infrastructure, technical capabilities, and ongoing training.
Legal and regulatory requirements complicate PET adoption, with institutions often cautious about deployment due to a lack of clarity and formal guidance.
The report also outlines a series of recommendations to support PET adoption at scale, including establishing a shared vocabulary, creating trusted introductory resources, and curating relevant use cases to raise collective awareness about the capabilities and limitations of PETs. Additional recommendations include developing a PETs readiness model, focusing on core capabilities, and providing targeted technical assistance to support sustainable PET adoption and implementation.
Recognizing the need for a deeper understanding of the potential and limitations of these technologies, FPF has actively contributed to shaping policymaking around PETs through discussion papers, reports, and stakeholder engagement. FPF’s PETs Repository, launched in November 2024, is a centralized, trusted, and up-to-date resource where individuals and organizations interested in these technologies can find practical and useful information.
Singapore Management University and Future of Privacy Forum Form Partnership to Advance Expertise in Digital Law and Data Governance in Asia-Pacific
March 10, 2025 — Singapore Management University (SMU) and the Future of Privacy Forum (FPF) have signed a Memorandum of Understanding (MOU) to strengthen collaboration in data governance, privacy, and emerging technology regulation across the Asia-Pacific region.
By combining SMU’s expertise in digital law with FPF’s global leadership in data protection, privacy and emerging technology governance, this partnership aims to drive impactful research and thought leadership. Through this MOU, SMU and FPF will collaborate on a variety of initiatives, including joint events, research publications, and advisory participation, while also expanding stakeholder networks across academia, industry, and government.
SMU’s Yong Pung How School of Law (YPHSL), ranked among the top 100 globally in the QS World University Rankings, is home to the Centre for Digital Law (CDL), which aims to become Asia’s premier law and technology research hub by integrating expertise from law, computer science, and digital humanities.
“This partnership with SMU’s Yong Pung How School of Law marks an important step in our mission to foster meaningful collaborations with leading academic institutions in the region,” said Josh Lee Kok Thong, FPF Managing Director for APAC. “As two organizations that share a common vision of fostering greater digital trust and innovation, we are excited to forge a strong partnership that will maximize our collective strengths and capabilities.”
With the rapid evolution of AI, digital finance, and cross-border data governance, this collaboration will play a key role in shaping regional and global conversations on responsible and forward-looking digital governance.
“Privacy and data protection is a fundamental aspect of each of our research pillars at the SMU CDL – society, economy, and government. We are excited to announce this closer collaboration with FPF after several years of informal collaboration, including taking part in many of FPF’s excellent events, and to work together to build a community of interest with diverse stakeholders in the region, bringing our regional voice to the global conversation,” said Jason Grant Allen, Director of the Centre for Digital Law.
FPF has established a global presence across the US, Europe, Africa, the Asia-Pacific, India, Israel, and Latin America, monitoring policy developments and providing stakeholders with key insights. Its partnership with SMU strengthens this strategy, advancing its expertise and thought leadership in data protection and emerging technology regulation.
“FPF remains committed to leveraging our global reach and expertise in data governance to contribute meaningfully to policy discussions and research,” said Gabriela Zanfir-Fortuna, VP for Global Privacy.
As digital regulation continues to evolve, this collaboration will provide critical insights and policy guidance to ensure balanced, responsible and forward-thinking governance in the Asia-Pacific and beyond.
Data Sharing for Research Tracker
Co-authored by Hannah Babinski, former FPF Intern
In celebration of International Open Data Day, FPF is proud to launch the Data Sharing For Research Tracker, a growing list of organizations that make data available for researchers. It provides information about the company, the data, any access restrictions, and relevant links.
One of the most difficult, time-consuming, and expensive parts of the research process is collecting data; using existing data can help researchers reduce the time and cost involved.
Research by the Future of Privacy Forum and others has shown that companies have the potential to make significant contributions to research by sharing their data with researchers. This kind of data sharing carries innate legal, ethical, and privacy risks that must be planned for in advance. Despite these challenges, data sharing for research is well worth the effort: It’s led to scientific breakthroughs in topics ranging from diabetes risk prediction models to wildfire evacuation planning.
FPF’s new resource is intended to help researchers find data for secondary analysis. It also provides a platform for organizations looking to raise awareness about their data sharing programs and benchmark them against what other organizations offer. FPF’s related publications explain why data sharing is important and how to share data for research while maintaining privacy and ethics.
Chile’s New Data Protection Law: Context, Overview, and Key Takeaways
On August 26, 2024, the Chilean Congress approved Law 21.719, on the Protection of Personal Data (“LPPD”) after eight years of legislative debate. The legislation was published on December 13, 2024, and will become fully effective twenty-four months after that date (in December 2026).
The LPPD was introduced in the Senate in 2017 to replace Law 19.628, Ley sobre Protección de la Vida Privada (hereinafter referred to as “LPVP”), which was adopted in 1999 as Chile’s first national data protection framework, as well as the first such law in Latin America.
The LPVP provided a foundational framework for personal data protection for nearly 24 years. However, the evolving demands of technological development and globalization gradually highlighted the LPVP’s lack of compatibility with newer and more comprehensive global standards for data protection adopted by partner countries.
In particular, stronger data protection standards reflected in the European Union’s Directive 95/46/EC significantly influenced post-LPVP legislation in Latin America, with Argentina passing comprehensive data protection legislation in 2000 and Mexico in 2010, for example. A similar structural effect followed the enactment of the EU’s General Data Protection Regulation (GDPR), which has influenced recent proposals including Brazil’s Lei Geral de Proteção de Dados (LGPD) and Chile’s LPPD, although each nation has approached this era of policymaking in a unique way.
Prior congressional attempts to update the LPVP reflect the country’s efforts to align to best global standards and meet international commitments1. According to the Chilean government, the approved LPPD pursues the dual objective of (i) providing stronger protection for data subjects and (ii) regulating and promoting the country’s digital economy.2
This blog covers some of the new features in the LPPD, including:
Extraterritoriality: the new law applies to private and public organizations processing personal data of individuals residing in Chile, regardless of where the processing takes place;
Stronger and new data subject rights: the LPPD expands regulation on previously recognized rights of access, rectification, suppression, and opposition, and adds new rights to portability and to block the processing of one’s data;
Additional lawful grounds for processing: the law introduces new legitimate bases for processing data as exceptions or alternatives to consent;
New obligations for controllers and processors: the LPPD imposes security incident reporting and confidentiality obligations and requires the implementation of technical and organizational measures consistent with explicitly recognized principles of legality, accountability, and privacy by design and by default, among others;
Cross-border data transfer regulation: the law recognizes several mechanisms for international transfers, including its own exception regime when transferring to non-adequate countries or in the absence of appropriate safeguards;
Data Protection Authority (DPA): the LPPD creates, for the first time, a DPA vested with supervisory, regulatory, and sanctioning powers to enforce the data protection framework;3
Stronger sanctioning regime: the new law incorporates sanctions for data protection violations that can range between 2% and 4% of an entity’s total annual revenue, and creates a national registry of sanctioned entities.
Read further for a deeper insight into the key features of the new Chilean data protection law and how they differ from its predecessor and other data protection laws in the region.
1. Scope, covered actors, and extraterritoriality
The LPPD regulates the form and conditions under which the processing of personal data of natural persons may be carried out, under Article 19 of the Chilean Constitution, which recognizes the right to personal data protection.4
Similar to other laws in the region (and to the model articulated in the GDPR), the LPPD applies extraterritorially to natural and legal persons, including public and private bodies, when the processing is carried out:
By a controller or processor established in Chilean territory;
When the processor or third party, regardless of its place of establishment or incorporation, processes personal data on behalf of a controller established or incorporated in the national territory; or
When the controller or processor is not established in Chilean territory, but the processing operations are intended to offer goods or services to data subjects in Chile – regardless of whether they are required to pay – or to monitor the behavior of data subjects in Chile, such as analysis, tracking, profiling, and behavior prediction (Art. 1 bis).
2. Covered data
Under Article 2(f) of the LPPD, “personal data” is broadly defined as “any information linked to or referring to an identified or identifiable natural person.” The LPPD establishes that an “identifiable” individual is one “whose identity can be determined, directly or indirectly, in particular by means of one or more identifiers, such as name, identity card number, analysis of elements of the physical, physiological, genetic, psychological, economic, cultural or social identity of such person.” In addition, to determine whether an individual is identifiable, the law requires “all objective means and factors that could reasonably be used to identify the individual at the time of the processing” be considered.
The LPPD’s approach to anonymized data is initially consistent with the GDPR’s approach to the subject: anonymized data is information that does not relate to an identified or identifiable person, and thus is not personal data5. A similar initial definition is found in Brazil’s LGPD, though the Brazilian legislation explicitly recognizes that anonymization might be a reversible process6. The key differentiating feature of LPPD’s approach to “anonymization” is the term’s definition as an “irreversible process” that does not allow for the identification of a natural person.7 In that sense, the LPPD’s definition of anonymization seems stricter than the language found in both the GDPR and the LGPD concerning anonymized data. Future guidance is likely to shed light on the requirements for “irreversibility” under Chilean law.
Concerning “pseudonymization,” the LPPD follows a similar approach to that found in the GDPR and LGPD. Chilean law defines it as a process carried out in a way that “[data] can no longer be attributed to a data subject without additional information, provided that such information is separate and subject to technical and organizational measures to ensure the data is not attributable to a natural person.” This approach points to the possibility of considering pseudonymized data as personal data as long as it can be linked to an identifiable individual through additional information.
Standards and guidance on anonymization and pseudonymization continue to be explored globally by authorities in the context of data protection frameworks. However, some laws explicitly recognize these techniques as a way to comply with data protection principles. The LPPD explicitly refers to pseudonymization as a technique relevant to comply with the security principle. Article 14 quinquies of the LPPD indicates that controllers shall implement “technical and organizational measures to ensure a level of security appropriate to the risk” such as pseudonymization and encryption of personal data, among other security measures.
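To make the distinction concrete, the minimal Python sketch below illustrates one common pseudonymization approach: direct identifiers are replaced with keyed, deterministic tokens, and the key (the “additional information”) is stored separately from the pseudonymized records. The key handling and field names are illustrative assumptions, not anything prescribed by the LPPD.

```python
import hashlib
import hmac

# Hypothetical secret kept separately from the pseudonymized records
# (e.g., in a key management system); the records alone should not
# allow re-attribution to a natural person.
PSEUDONYMIZATION_KEY = b"store-this-key-separately-from-the-data"

def pseudonymize(identifier: str, key: bytes = PSEUDONYMIZATION_KEY) -> str:
    """Replace a direct identifier with a keyed, deterministic token."""
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"national_id": "12.345.678-9", "purchase_amount_clp": 45990}
pseudonymized_record = {
    "subject_token": pseudonymize(record["national_id"]),
    "purchase_amount_clp": record["purchase_amount_clp"],
}
print(pseudonymized_record)
```

Because whoever holds the key can still re-attribute the tokens, data processed this way remains personal data under the LPPD; only an irreversible process would amount to anonymization and take the data out of scope.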
3. Data Subject Rights: “ARCO” rights, data portability, and the right to block the processing of data
The LPPD includes two new data subject rights – the right to data portability and the right to block the processing of one’s data – in addition to the previous rights granted in the former LPVP: access, rectification, suppression, and opposition, also regionally known as the “ARCO” rights.
Similar to GDPR-inspired laws that have recently incorporated the right to portability, the LPPD indicates the data subject has the right to request and receive a copy of their data in an “electronic, structured, generic and commonly used format,” which allows the data to be read by different systems and the data subject to communicate or transfer the data to another data controller, when (i) the processing is carried out by automated means; and (ii) the processing is based on the consent of the data subject. When technically feasible, the LPPD mandates that portability be performed directly between controllers.
In addition, the LPPD indicates the controller must use the “most expeditious and least onerous means” and communicate to the data subject in a “clear and precise manner” the necessary measures to carry out the portability. Notably, under Chilean law, the right to portability does not automatically entail the deletion of the data by the transferring controller, which means the data subject must separately request the deletion of their data once portability is carried out (Art. 9).
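As a rough illustration of what an “electronic, structured, generic and commonly used format” could look like in practice, the hypothetical sketch below exports one data subject’s records as JSON, a format readable by different systems. The field names and envelope structure are assumptions for illustration only.

```python
import json
from datetime import datetime, timezone

# Hypothetical records held by the transferring controller about one data subject.
subject_data = {
    "subject_id": "CL-0001",
    "email": "persona@example.cl",
    "orders": [
        {"date": "2025-03-01", "total_clp": 19990},
        {"date": "2025-04-15", "total_clp": 45990},
    ],
}

def export_for_portability(data: dict) -> str:
    """Serialize a data subject's records in a structured, commonly used format (JSON)."""
    envelope = {
        "exported_at": datetime.now(timezone.utc).isoformat(),
        "format_version": "1.0",
        "data": data,
    }
    return json.dumps(envelope, ensure_ascii=False, indent=2)

print(export_for_portability(subject_data))
```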
The “right to block the processing of personal data” is the other new right added by the LPPD, which resembles the GDPR’s Article 18 “right to restriction of processing” and Brazil’s LGPD Article 18 “right to blocking unnecessary or excessive data.” Under Article 8 ter of the LPPD, this right is understood as a “temporary suspension of any processing operation” that pertains to a data subject when they make a rectification, erasure, or opposition request. The temporary suspension applies as long as the subject’s request remains open. This suggests that under the “right to block processing,” a data subject can immediately and effectively suspend the processing of their data before the rectification, erasure, or opposition request is processed by the controller. The controller is thus restricted from further processing, although it may continue storing the affected personal data.
Closely linked to the right of opposition, the LPPD introduces the “right to object and not be subject to decisions based on automated processing,” including profiling, when such processing produces legal effects on the data subject or significantly affects them (Art. 8 bis). Under the LPPD, “profiling” refers to “any form of automated processing of personal data that consists of using such data to evaluate, analyze or predict aspects relating to the professional performance, economic situation, health, personal preferences, interests, reliability, behavior, location or movements of a natural person” (Art. 2, (w)).
The LPPD hews closer to the GDPR in the sense that it expressly recognizes the “right to object and not be subject” to automated processing, unlike Brazil’s LGPD, which only recognizes a data subject’s “right to review” automated processing. Similar to the GDPR, Article 8 bis of the LPPD restricts the exercise of this right under certain circumstances, such as when: (i) the decision is necessary for the conclusion or execution of a contract between the subject and the agent; (ii) there is prior and express consent; or (iii) as indicated by law, to the extent that it provides safeguards for the rights and freedoms of the data subject. The operationalization of this right must safeguard the data subject’s rights to information and transparency, obtain an explanation and human intervention, express their point of view, and request a review of the decision. This set of rights and freedoms is encapsulated within the right to object and not be subject to automated processing.
4. Lawful grounds for processing and consent requirements
The LPPD maintains consent as the general basis for the processing of personal data – similar to how it was regulated by the former LPVP. Consent must be “free, informed and specific as to its purpose” and given “in advance and unequivocally” by means of a verbal or written statement, or expressed through electronic means or an affirmative act that “clearly shows” the data subject’s intent. The data subject can revoke consent without retroactive effects, and the means to grant or revoke consent should be expeditious, reliable, free of charge, and permanently available (Art. 12).
In line with the principle of purpose limitation, the LPPD presumes consent is not “freely given” when it is requested for the performance of a contract or the provision of a service for which the data collection is not necessary. However, this presumption does not apply when a person or entity offers goods, services, or benefits in exchange solely for the data subject’s consent to process their data (Art. 12). Notably, this scenario applies to many “free” online services, such as social media or messaging platforms, where consent to process an individual’s data for advertising or profiling purposes is often required for the provision of the service.
Without consent of the data subject, the LPPD recognizes the following lawful grounds for processing:
When the processing refers to data relating to obligations of an economic, financial, banking, or commercial nature;
When the processing is necessary for the performance or fulfillment of a legal obligation or is required by law;
When the processing is necessary for the conclusion or performance of a contract, or the execution of pre-contractual measures at the request of the data subject;
When the processing is necessary for the satisfaction of the legitimate interests of the controller or a third party, provided that the rights and freedoms of the data subject are not affected – the subject may request to be informed about the processing and the legitimate interest under which the processing is carried out; or
When the processing is necessary for the formulation, exercise, or defense of a right before the courts or public bodies.
Processing sensitive data and children’s and adolescents’ data
Similar to other comprehensive frameworks, the LPPD distinguishes sensitive data from personal data of a general nature. Under Article 2 (g) of the LPPD, “sensitive data” encompasses data that refers to “physical or moral characteristics of persons or to facts or circumstances of their private life or intimacy, that reveal ethnic or racial origin, political, union or trade union affiliation, socioeconomic situation, ideological or philosophical convictions, religious beliefs, data related to health, human biological profile, biometric data, and information related to sexual life, sexual orientation and gender identity of a natural person.”
Chile’s sensitive data definition is comparable to definitions found in other laws in the region such as Brazil’s LGPD and Ecuador’s Ley Orgánica de Protección de Datos (LOPD), which base the nature of sensitivity on the potential of discrimination or impact on an individual’s rights and freedoms if such information is mishandled or unlawfully accessed.
As a general rule, sensitive data may only be processed with the consent of the data subject. Exceptionally, controllers may process sensitive data without consent in the following circumstances (Art. 16):
When the processing refers to sensitive data that has been made public by the data subject and its processing is related to the purposes for which it was published;
When the processing is based on a legitimate interest carried out by a non-profit entity under public or private law and when certain conditions are met;8
When the processing is indispensable to safeguard the life, health, or integrity of the data subject or another person, or when the subject is physically or legally prevented from giving their consent;
When the processing is necessary for the exercise or defense of a right before courts or an administrative body;
When the processing is necessary for the exercise of rights or fulfillment of an obligation related to labor or social security; and
When the processing is expressly authorized or mandated by law.
Under Article 16 bis of the LPPD, health data and biometric data may only be processed for the purposes provided by the applicable laws or with the data subject’s consent, unless one of the following scenarios applies:
There is an official sanitary alert;
When the processing is for historical, statistical, or scientific purposes, based on public interest;
When the processing is necessary for preventive or occupational medicine, evaluation of an employee’s capacity to work, medical diagnosis, or provision and management of health or social care services (Art. 16 bis).
Article 16 ter defines biometric data as data “obtained from a specific technical treatment, related to the physical, physiological or behavioral characteristics of a person that allow or confirm the unique identification of the person, such as fingerprint, iris, hand or facial features and voice.” When processing biometric data, the controller is required to disclose the biometric system used, the purpose of the collection, the period during which the data will be processed, and the manner in which the subject can exercise their rights.
Similar to other frameworks in the region like Brazil’s LGPD, Article 16 quater of the LPPD incorporates the “best interests of the child” standard for the processing of children’s data. As a general rule, such data may only be processed in the child’s best interest and with respect for their “progressive autonomy” – a concept introduced, yet not defined, by the LPPD. The lawful processing of children’s data must be based on consent granted by the parents or legal guardian unless expressly authorized by law.
The LPPD introduces a notable distinction between the processing rules applicable to data of children (under 14 years old) and adolescents (between 14 and 18 years old). Under Chilean law, adolescents’ data may be processed following the general rules applicable to adults’ data, except when the information is sensitive and the adolescent is under 16 years of age. This means that to process sensitive data of adolescents under 16, controllers must still obtain consent from the parents or legal guardian. For non-sensitive data, controllers may process adolescents’ data following the general rules of the LPPD, though still subject to the “best interests” standard. This distinction is a novelty of Chilean law and is not found in Brazil’s LGPD or Ecuador’s LOPD.
5. Duties and Obligations of Data Controllers
The LPPD’s provisions follow principles of lawfulness, fairness, purpose limitation, proportionality, quality, accountability, security, transparency, and confidentiality. These principles, along with other specific duties, guide the obligations of data controllers and are consistent with other modern data protection frameworks.
For instance, under Article 14 ter, controllers must inform and make available “background information” that proves the lawfulness of the data processing and promptly deliver such information when requested by data subjects or the authority. This suggests that regardless of whether the information is requested or not, controllers should keep this information readily available. This obligation relates to the “duty of information and transparency,” under which controllers must provide and keep “permanently available to the public” its processing policy, the categories of personal data subject to processing, a generic description of its databases, and the security measures to safeguard the data, among other information.
Notably, Article 14 quater also introduces the “duty of protection by design and by default,” resembling GDPR Article 25. Under the LPPD, this duty refers to the application of “appropriate technical and organizational measures” before and during the processing. Drawing inspiration from the GDPR, the LPPD indicates the measures should consider the state of the art, costs, nature, scope, context, purpose, and risks associated with the data processing.
Although the LPPD does not expressly recognize a “right to anonymization” like Brazil’s LGPD, it sets out the controller’s obligation to anonymize personal data when it was obtained for the execution of pre-contractual measures (Art. 14, (e)). This obligation is closely linked to the general data protection principles, and effective compliance with this duty would take the resulting data outside the scope of the LPPD.
In relation to the security principle, Article 14 quinquies of the LPPD provides that controllers must adopt necessary security measures to ensure the confidentiality, integrity, availability, and resilience of the data processing systems, as well as to prevent alteration, destruction, loss, or unauthorized access to the data. Both controller and processor must take technical and organizational measures, appropriate to the risks associated with the processing, to ensure its security, such as:
Applying pseudonymization and encryption of personal data where possible (see the sketch after this list);
Maintaining the ability to restore access to the personal data in the event of a physical or technical incident; and
Conducting regular assessments of the effectiveness of technical and organizational measures.
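The non-authoritative sketch below illustrates two of these measures: it encrypts a personal-data record at rest and later restores access to it. It assumes the third-party `cryptography` package and its Fernet recipe; the library choice and key handling are illustrative assumptions, not requirements of the LPPD.

```python
from cryptography.fernet import Fernet

# Hypothetical key; in practice it would live in a key management system,
# with backups supporting the ability to restore access after an incident.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a personal-data record before storing it.
plaintext = b'{"name": "Ana Rojas", "rut": "12.345.678-9"}'
ciphertext = fernet.encrypt(plaintext)

# Later (e.g., when recovering from a technical incident), restore access
# to the data from the stored ciphertext and the safeguarded key.
restored = fernet.decrypt(ciphertext)
assert restored == plaintext
```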
Security Incident Notification
Under Article 14 sexies of the LPPD, the responsible agent must report to the Agency by the “most expeditious means possible and without undue delay” any incident that may cause the accidental or unlawful destruction, breach, loss, or alteration of personal data, or the unauthorized communication of or access to such data, when there is a “reasonable risk to the rights and freedoms of the data subjects.” Since the law does not set a specific notification timeframe, the Agency is expected to regulate this area further.
The law also requires the controller to record these communications and describe the nature of the incident, its potential or demonstrated effects, the type of affected data, the approximate number of affected data subjects, and measures taken to manage and prevent future incidents.
When the security incident concerns sensitive or children’s data, or data relating to economic, financial, banking, or commercial obligations, the controller must also communicate the incident to the affected data subjects in “clear and simple” language. If the notification cannot be made personally, the controller must issue a mass notice in at least one of the main national media outlets.
Notably, Article 14 septies includes different standards of compliance with the “duty of information and transparency” and the “duty to adopt security measures” for controllers, based on whether they are a natural or legal person, their size, the activity they carry out, and the volume, nature, and purposes of their processing. The Agency will issue further regulation on the operationalization of these different standards.
For organizations not incorporated in Chile, Articles 10 and 14 of the LPPD establish that the controller must provide the Agency, in writing, with the email address of the natural or legal person authorized to act on their behalf, so that the Agency can communicate with them and data subjects can exercise their rights.
Similar to other frameworks, Article 15 bis restricts the processor to carrying out the data processing in accordance with the instructions given by the controller. If the processor or a third party processes the data for a different purpose or transfers the data without authorization, the processor will be considered the data controller for all legal purposes. The processor will be personally liable for any infringements incurred, and jointly and severally liable with the controller for any damages caused. Importantly, the “duty of confidentiality” and the “duty to adopt security measures” extend to the processor in the same terms applicable to the controller.
Data Protection Impact Assessment
Similar to the GDPR, under Article 15 ter of the LPPD, controllers must carry out a personal data protection impact assessment (DPIA) where the data processing is “likely to result in a high risk to the rights of data subjects” and in the following cases:
When the operation involves a systematic and exhaustive evaluation of personal aspects of the data subjects based on automated processing, such as profiling;
Massive or large-scale data processing;
Processing that involves systematic observation or monitoring of a publicly accessible area; or
Processing of sensitive or specially protected information.
The Agency will publish a list indicating the processing operations that may require a DPIA under the LPPD. The law also requires the Agency to issue guidance on the specific requirements for conducting DPIAs, so forthcoming regulation on this matter is expected once the Agency begins to operate. Notably, Article 15 ter sets out DPIA requirements similar to the GDPR’s, requiring data controllers to include a description of the processing operations and their purpose, an assessment of the necessity and proportionality of the processing in relation to its purpose, an assessment of the risks it may pose, and the mitigation measures to be adopted.
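Purely as a loose illustration, the hypothetical structure below captures the four elements Article 15 ter expects a DPIA to document. The field names, the completeness check, and the example values are assumptions, not anything prescribed by the law or the Agency.

```python
from dataclasses import dataclass, field

@dataclass
class DPIARecord:
    """Hypothetical container for the elements Article 15 ter expects a DPIA to cover."""
    processing_description: str   # description of the processing operations and their purpose
    necessity_assessment: str     # necessity and proportionality of the processing vs. its purpose
    risk_assessment: str          # risks the processing may pose to data subjects
    mitigation_measures: list[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        """Rudimentary check that every required element has been filled in."""
        return all([
            self.processing_description.strip(),
            self.necessity_assessment.strip(),
            self.risk_assessment.strip(),
            self.mitigation_measures,
        ])

dpia = DPIARecord(
    processing_description="Large-scale profiling of loyalty-programme customers",
    necessity_assessment="Profiling limited to the data strictly needed for the stated purpose",
    risk_assessment="Risk of discriminatory pricing and re-identification",
    mitigation_measures=["pseudonymization", "access controls", "retention limits"],
)
print(dpia.is_complete())
```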
Voluntary Appointment of a Data Protection Officer
Unlike other modern comprehensive data protection laws, the LPPD does not require the appointment of a Data Protection Officer (DPO). However, Article 49 indicates that controllers may voluntarily appoint a DPO who meets the requirements of suitability, capacity, and independence. Furthermore, the law indicates that controllers may adopt a “compliance program” that sets out, among other things, the appointment of the DPO and their powers and duties under that program. However, if the organization adopts a compliance program, it must be expressly incorporated into all employment or service provision contracts of the entity acting as data controller or processor.
6. Cross-Border Data Transfers
Similar to other frameworks in the region and the GDPR, cross-border data transfers made to a person, entity, or organization are generally authorized by the LPPD under the following mechanisms: (i) adequacy; (ii) contractual clauses, binding corporate rules, or other legal instruments entered into between the transferor and transferee; or (iii) a compliance model or certification mechanism, along with adequate guarantees. The Agency will be in charge of publishing on its official website a list of “adequate” countries under the criteria set forth by the law, as well as model contractual clauses and other legal instruments for international data transfers, although the LPPD does not provide a specific timeline for these publications.
In the absence of an adequacy decision or proper safeguards, a “specific and non-customary” transfer may still be made under the following circumstances:
With the express consent of the data subject;
When it refers to a bank, financial, or stock exchange transfer under the applicable laws;
When the transfer is necessary to comply with international obligations under treaties and conventions ratified by the Chilean State;
When the transfer is necessary for cooperation between public bodies for the fulfillment of their functions or for international judicial cooperation;
When the transfer is necessary for the conclusion or performance of a contract or pre-contractual measures between the data subject and the data controller; or
When the transfer is necessary for urgent medical or sanitary measures or management of health services (Art. 27).
Notwithstanding the previous exceptions, Article 28 of the LPPD also includes a broader authorization for transfers that do not fall under any of these scenarios. Under Chilean law, an international data transfer may still be authorized when the transferor and transferee demonstrate “appropriate guarantees” to protect the rights and interests of the data subjects and the security of the information. This provision leaves broad room to transfer personal data outside the traditional mechanisms and listed purposes, as long as the Agency determines that appropriate measures are in place for the transfer.
7. Infractions and Civil Liability
Violations of the principles and obligations set out in the LPPD may be subject to administrative and civil liability. The LPPD classifies violations as “minor” (e.g., failing to respond to data subjects’ requests or to communicate with the Agency), “serious” (e.g., processing data without a legal basis or for a purpose different from that for which the data was collected), and “very serious” (e.g., fraudulent or malicious processing of personal data, or knowingly transferring sensitive data in contravention of the law). Notably, “very serious” violations seem to require a showing of intent by the infringing party.
Penalties under the LPPD can range from 5,000 national tax units (approximately USD 387,000) to 20,000 tax units (approximately USD 1,550,000). In the case of repeated “very serious” violations, the Agency may also order the total or partial suspension of processing activities for up to thirty (30) days, a period during which the infringing party must demonstrate the adoption of the measures necessary to comply with the law. For entities that are not considered “small businesses”9 and that commit repeated serious or very serious violations, the Agency may impose a fine of 2% or 4% of their annual income in the last calendar year.
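To make the order of magnitude concrete, the back-of-the-envelope sketch below converts the fine ranges using a tax-unit value of roughly USD 77.5, which is what the approximations quoted above imply (USD 387,000 divided by 5,000 units). The unit value fluctuates over time, and the revenue figure used for the 2%/4% illustration is purely hypothetical.

```python
# Approximate tax-unit value implied by the figures above (USD 387,000 / 5,000 units).
USD_PER_TAX_UNIT = 77.5

def fine_in_usd(tax_units: int) -> float:
    """Convert a fine expressed in tax units into an approximate USD amount."""
    return tax_units * USD_PER_TAX_UNIT

print(fine_in_usd(5_000))   # ~387,500 USD (lower bound)
print(fine_in_usd(20_000))  # ~1,550,000 USD (upper bound)

# Revenue-based fines for repeated serious or very serious violations by
# entities that are not "small businesses": 2% or 4% of annual income.
hypothetical_annual_income_usd = 50_000_000
print(hypothetical_annual_income_usd * 0.02)  # 1,000,000 USD
print(hypothetical_annual_income_usd * 0.04)  # 2,000,000 USD
```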
Furthermore, as a dissuasive mechanism, the LPPD also creates the National Registry of Sanctions and Compliance, which will record all data controllers sanctioned for data protection violations and indicate the seriousness of the infringement, as well as aggravating or mitigating circumstances, for five (5) years.
Towards Stronger Data Protection in Chile
With the passage of the LPPD, Chile enters an era of stronger data protection requirements and enforcement. The new law expands existing data subject rights and interests and incorporates new ones, sets out relevant obligations consistent with the evolving nature and demands of offering goods and services in the digital ecosystem, aligns with other global standards of personal data protection, and incorporates higher fines and dissuasive mechanisms.
Although the LPPD draws structural inspiration from the GDPR, it also maintains certain provisions unique to its predecessor law, the LPVP, such as specific regulations for the commercial and banking sectors, and broader exceptions to the lawful grounds for processing of personal data, including sensitive and children’s data.
The LPPD may again position Chile as a regional data protection trend-setter. Other countries currently seeking to update their data protection frameworks, such as Argentina and Colombia, could be influenced by the landmark passage of the LPPD, potentially ushering in a new wave of “second generation” data protection laws in Latin America.
The Chilean Congress previously analyzed at least two similar proposals under different administrations in 2008 and 2012. Two of the recurring motivations for updating the data protection framework were to achieve adequacy under the EU’s regime and comply with Chile’s commitment to update its legislation after becoming an OECD member in 2010.
See: press release from government after approval of LPPD.
The Agency will be managed by a Directive Council composed of three Councilors designated by the Executive and ratified by the Senate. The first Councilors are expected to be appointed within sixty (60) days after the formal enactment of the law.
Article 19, sec. 4, of the Chilean Constitution recognizes the right to private life, human dignity, and personal data protection.
For this exception to apply, the entity must have a political, philosophical, religious, or cultural purpose, or be a trade union; the processing refers exclusively to the entity’s members or affiliates and fulfills the purposes of the entity; the entity grants necessary guarantees to avoid unauthorized use or access to the data; and the personal data is not transferred to third parties.
Geopolitical fragmentation, the AI race, and global data flows: the new reality
Most countries in the world have data protection or privacy laws and there is growing cross-border enforcement cooperation between data protection authorities, which might lead one to believe that the protection of global data flows and transfers is steadily advancing. However, instability and risks arising from wars, trade disputes, and the weakening of the rule of law are increasing, and are causing legal systems that protect data transferred across borders to become more inward-looking and to grow farther apart.
Fragmentation refers to the multiplicity of legal norms, courts and tribunals (including data protection authorities), and regulatory practices regarding privacy and data protection that exist around the world. This diversity is understandable in that it reflects different legal and cultural values regarding privacy and data protection, but it can also create conflicts between legal systems and increased burdens for data flows.
While this new reality affects all regions of the world, it can be illustrated by considering recent developments in three powerful geopolitical players, namely the European Union, the People’s Republic of China, and the United States. Dealing with these risks requires that greater attention be paid to geopolitical crises and legal fragmentation as a threat to protections for the free flow of data across borders.
The end of the ‘Brussels effect’?
There has been much talk of the ‘Brussels effect’ that has allowed the EU to export its regulatory approach, including its data protection law, to other regions. However, the rules on international data transfers contained in Chapter V of the EU General Data Protection Regulation (‘GDPR’) face challenges that may diminish their global influence.
These challenges are in part homemade. The standard of ‘essential equivalence’ with EU law that is required for a country to receive a formal adequacy decision from the European Commission allowing personal data to flow freely to it is difficult for many third countries to attain and sometimes leads to legal and political conflicts. The protection of data transfers under the GDPR has been criticised in the recent Draghi report as overly bureaucratic, and there have been calls to improve harmonisation of the GDPR’s application in order to increase economic growth. In particular, the approval of adequacy decisions is lengthy and untransparent, and other legal bases for data transfers are plagued by disagreements about key concepts between data protection authorities. The GDPR also applies to EU legislation dealing with AI (see the EU AI Act, Article 2(7)), so that problems with data transfers under the GDPR also affect AI-related transfers.
These factors indicate that the EU approach to data transfers may gradually lose traction with other countries. Although many of them still seek EU adequacy decisions and are happy to cooperate with the EU on data protection matters, they may also simultaneously explore other options. For example, some countries that are already subject to an EU adequacy decision or decisions (such as Canada, Japan, Korea, and the UK which has received adequacy decisions under both the GDPR and Law Enforcement Directive) have also joined a group that is establishing ‘Global Cross-Border Privacy Rules’ as a more flexible alternative system for data transfers.
Political challenges to the EU’s personal data transfer regime are now also present. Some companies are encouraging new US President Trump to challenge the enforcement of EU law against them, and some far-right parties in Europe have called for the repeal of the GDPR.
China’s new cross-border data flow initiative
China has already enacted many data-related laws, including some dealing with data transfers, after first introducing sweeping data localization requirements in 2017. It was therefore all the more surprising that in November 2024 the Chinese government announced that it would launch a ‘global cross-border data flow cooperation initiative.’ In a speech given at the same time, Chinese leader Xi Jinping said that China ‘is willing to deepen cooperation with all parties to jointly promote efficient, convenient and secure cross-border data flows’.
Exactly what this means is presently unclear. However, China is a member of the BRICS group, which includes countries with nearly half of the world’s population, and has also enacted many regulations dealing with AI. If China is able to use its political and economic clout to influence the agenda for cross-border data flows, as some scholars hypothesize, this could bring the BRICS countries and others deeper into its regulatory orbit for both privacy and AI.
The arrival of data transfer rules in the US
The United States government has recently relaxed its traditional opposition to controls on data transfers and enacted rules restricting certain transfers based on US national security concerns.
In February 2024 former US President Biden issued an executive order limiting bulk sales of personal data to ‘countries of concern.’ The Department of Justice then issued a Final Rule in December 2024 setting out a regulatory program to address the ‘urgent and extraordinary national security threat posed by the continuing efforts of countries of concern (and covered persons that they can leverage) to access and exploit Americans’ bulk sensitive personal data and certain U.S. Government-related data.’
It is no secret that these initiatives are primarily focused on data transfers to China, which is one of the six ‘countries of concern’ determined by the Attorney General, with the concurrence of the Secretaries of State and Commerce (the other five are Venezuela, Cuba, North Korea, Iran and Russia, according to Section 202.211 of the Final Rule). While some scholars have expressed skepticism about whether these initiatives will really bring their intended benefits, it is significant that national security has been used as a basis both for regulating data flows and for a shift in US trade policy.
It is too soon to tell if President Trump will continue this focus. However, some of the actions that his administration has already taken have drawn the attention of digital rights groups in Europe who believe they may imperil the EU-US data privacy framework that serves as the basis for the EU adequacy decision allowing free data flows to the US. It is also questionable whether the EU will put resources into negotiating further agreements to facilitate data transfers to the US in light of the current breakdown in transatlantic relations.
Conclusions
We have entered a new era of instability where geopolitical tensions and the AI race have a significant impact on the protection of data flows. To be sure, political factors have long influenced the legal climate for data transfers, such as in the disputes between the EU and the US that led to the EU Court of Justice invalidating EU adequacy decisions in its two Schrems judgments (Case C-362/14 and Case C-311/18). The European Commission has also admitted that political and economic factors influence its approach to data flows. However, in the past political disputes about data transfers largely remained within the limits of disagreements between friends and allies, whereas the tensions that currently threaten them often arise from serious international conflicts that can quickly spiral out of control.
The fragmentation of data transfer rules along regional and sectoral lines will likely increase with the development of AI and similar technologies that require completely borderless data flows, and with increased cross-border enforcement of data protection law in cases involving AI. Initiatives to regulate data transfers used in AI have already been proposed at the regional level, such as in the Continental Artificial Intelligence Strategy published in August 2024 by the African Union, which refers to cooperation ‘to create capacity to enable African countries to self-manage their data and AI and take advantage of regional initiatives and regulated data flows to govern data appropriately’. This will likely also give additional impetus to digital sovereignty initiatives in different regions, which will lead to even greater fragmentation.
The growing influence of geopolitics demonstrates that the protection of data flows requires a strong rule of law, which is currently under threat around the world. The regulation of data transfers is too often regarded as a technocratic exercise that focuses on steps such as filling out forms and compiling impact assessments. However, such exercises can only provide protection within a legal system that is underpinned by the rule of law. The weakening of factors that comprise the rule of law, such as the separation of powers and a strong and independent judiciary, drives uncertainty and the fragmentation of data transfer regulation even more.
The approaches to data transfer regulation pursued by the leading geopolitical players each have their strengths and weaknesses. The EU approach has attained considerable influence around the world, but is coming under pressure largely because of homegrown problems. The US emphasis on national security is inward-looking, but could become popular in other countries as well. China’s new initiative to regulate data transfers seems poised to attain greater international influence, though this may be mainly limited to the Asia-Pacific region.
Although complying with data transfer regulation has always required attention to risk, geopolitical risk has been broadly overlooked so far, perhaps because it can seem overwhelming and impossible to predict. Indeed, events that have disrupted data flows such as Brexit and the Russian invasion of Ukraine were sometimes dismissed before they happened. However, this new reality requires incorporating the management of geopolitical risk into assessing the viability and legal certainty of international data transfers by organizations active across borders. There are steps that can be taken to manage geopolitical risk, such as those identified by the World Economic Forum, namely: assessing risks to understand them better; looking at ways to reduce the risks; ringfencing risks when possible; and developing plans to deal with events if they occur.
Parties involved in data transfers already need to perform risk assessments, but geopolitical events present a larger scale of risk than many will be used to. Risk reduction and ringfencing for unpredictable ‘black swan events’ such as wars or sudden international crises are difficult, and may require drastic measures such as halting data flows or changing supply chains that need to be prepared in advance.
Major geopolitical events and the AI race are having a significant effect on data protection and data flows, making it essential to anticipate them as much as possible and to develop plans to cope with them should they occur. The only safe prediction is that further geopolitical developments are in store with the potential to bring massive changes to the data protection landscape and disrupt global data flows, which is why they deserve a prominent place in risk analysis when transferring data.