FPF Publishes Infographic, Readiness Checklist To Support Schools Responding to Deepfakes

Today, the Future of Privacy Forum (FPF) released an infographic and readiness checklist to help schools better understand and prepare for the risks posed by deepfakes. Deepfakes are realistic, synthetic media, including images, videos, audio, and text, created using a type of Artificial Intelligence (AI) called deep learning. By manipulating existing media, deepfakes can make it appear as though someone is doing or saying something that they never actually did. 

Deepfakes, while relatively new, are quickly becoming prevalent in K-12 schools. Schools have a responsibility to create a safe learning environment, and a deepfake incident – even if it happens outside of school – poses real risks to that, including through bullying and harassment, the spread of misinformation and disinformation, personal safety and privacy concerns, and broken trust.

FPF’s infographic describes the different types of deepfakes – video, text, image, and audio – and the varied risks and considerations posed by each in a school setting, from the potential for fabricated phone calls and voice messages impersonating teachers to sharing forged, non-consensual intimate imagery (NCII).

“Deepfakes create complicated ethical and security challenges for K-12 schools that will only grow as the technology becomes more accessible and sophisticated, and the resulting images harder to detect,” said Jim Siegl, Senior Technologist with FPF’s Youth & Education Privacy team. “Schools should understand the risks, their responsibilities and protocols in place to respond, and how they will protect students, staff, and administrators while addressing an incident.”

FPF has also developed a readiness checklist to support schools in assessing and preparing response plans. The checklist outlines a series of considerations for school leaders, from the need for education and training, to determining how existing technology, policies, and procedures might apply, to engaging legal counsel and law enforcement.

The infographic maps out the various stages of a school’s response to an example scenario – a student reporting that they received a sexually explicit photo of a friend and that the image is circulating among a group of students – inviting school leaders to consider the following:

As an additional resource for school leaders and policymakers navigating the rapid deployment of AI and related technologies in schools, FPF has developed an infographic highlighting AI's varied use cases in an educational setting. While deepfakes are a new and evolving challenge, edtech tools using AI have been in schools for years.

FPF Privacy Papers for Policymakers: A Celebration of Impactful Privacy Research and Scholarship

The Future of Privacy Forum (FPF) hosted its 15th Privacy Papers for Policymakers (PPPM) event at its Washington, D.C., headquarters on March 12, 2025. This prestigious event recognized six outstanding research papers that offer valuable insights for policymakers navigating the ever-evolving landscape of privacy and technology. The evening featured engaging discussions and a shared commitment to advancing informed policymaking in digital privacy.


FPF Board President Alan Raul

Daniel Hales, FPF Policy Fellow, kicked off the event as the emcee and recognized the contributions of FPF Board President Alan Raul and Board Secretary-Treasurer Debra Berlyn, along with the FPF staff who helped organize the gathering. Alan Raul, in his opening remarks, emphasized the significance of privacy scholarship and its relevance to policymakers worldwide. He noted that the PPPM event has, for 15 years, successfully brought together scholars, regulators, and industry leaders to discuss privacy research with real-world implications.


Daniel Hales

Lee Matheson, FPF Deputy Director for Global Privacy, opened the discussion by introducing Professor Mark Jia (Georgetown University Law Center), who explored the evolution of privacy law in China. His paper, Authoritarian Privacy, challenges the notion that privacy is solely a Western concept and argues that China’s privacy framework has been shaped not only by state interests but also by public concerns. Professor Jia discussed the role of the Cyberspace Administration of China (CAC) and how privacy regulations have been influenced by social unrest and legitimacy concerns within the government. He emphasized that China’s Personal Information Protection Law (PIPL) is enforceable and not merely symbolic. Their discussion also touched on public “flashpoints” that have prompted government responses and the broader implications for understanding regulatory trends in authoritarian regimes.


Professor Mark Jia and Lee Matheson

Professor Mark MacCarthy (Georgetown University) introduced Alice Xiang (Sony AI) to discuss her paper Mirror, Mirror, on the Wall, Who’s the Fairest of Them All?, which examines algorithmic bias in artificial intelligence models. Ms. Xiang’s research critiques the assumption that fair data sets automatically lead to fair AI outcomes and highlights the challenges in defining fairness. She noted that while engineers often bear the responsibility of addressing bias, broader policy frameworks are needed. Their discussion explored the tension between AI neutrality and the necessity for companies to engage with ethical and social justice considerations. Ms. Xiang argued that AI systems mirror existing societal inequalities rather than solve them and called for stronger regulatory oversight to ensure transparency and accountability in AI decision-making.


Alice Xiang and Professor Mark MacCarthy

Next, Jocelyn Aqua (PwC) conversed with Miranda Bogen (Center for Democracy and Technology), whose paper Navigating Demographic Measurement for Fairness and Equity addresses the paradox of measuring fairness in AI while protecting individuals’ privacy. Ms. Bogen categorized fairness assessment into three key areas: measuring disparities, selecting appropriate metrics, and implementing mitigation strategies. She pointed out that privacy laws like GDPR and CCPA create barriers to demographic data collection, complicating efforts to assess bias in AI systems. The conversation emphasized the need for alternative privacy-preserving methods, such as statistical inference and qualitative analysis, to reconcile fairness assessments with privacy protections. Bogen called for policymakers to establish clearer guidelines that allow for responsible demographic measurement while ensuring compliance with privacy laws.


Miranda Bogen and Jocelyn Aqua

The discussion then turned to Brenda Leong (ZwillGen), who introduced Tom Zick (Orrick, Herrington & Sutcliffe LLP) and Tobin South (Stanford University), two of the co-authors of the paper, Personhood Credentials: Artificial intelligence and the value of privacy-preserving tools to distinguish who is real online. Their paper explores the concept of “personhood credentials,” proposing a decentralized approach to verifying online identities while balancing security and privacy. The authors highlighted the risks posed by AI-driven identity fraud and the need for robust authentication mechanisms that protect user privacy. The conversation covered potential issuers of personhood credentials, including governments and private organizations, and the challenges of industry-wide adoption. Ultimately, the paper argues for the importance of developing privacy-first verification solutions that minimize data exposure while maintaining trust in digital interactions.


Tobin South, Tom Zick, and Brenda Leong

Turning to another critical issue, Professor Daniel J. Solove (George Washington University Law School) discussed his paper (co-authored by Boston University Professor Woodrow Hartzog) The Great Scrape: The Clash Between Scraping and Privacy with Jennifer Huddleston (Cato Institute). Professor Solove examined the legal and ethical complexities of data scraping, arguing that while scraping has long existed in a legal gray area, the rise of AI has heightened privacy concerns. He challenged the perception that publicly available data is free for unrestricted use, noting that privacy laws are evolving to address these issues. The discussion explored potential regulatory solutions, emphasizing the importance of distinguishing between beneficial scraping and harmful practices that exploit personal data. Professor Solove advocated for a public interest standard to determine when scraping should be permissible and called for clearer legal frameworks to protect individuals from data misuse.


Professor Daniel J. Solove and Jennifer Huddleston

In the last discussion, Professor James C. Cooper (Antonin Scalia Law School – George Mason University) joined Professor Alicia Solow-Niederman (George Washington University Law School) to discuss her paper The Overton Window and Privacy Enforcement. Professor Solow-Niederman explained how internal norms, congressional oversight, judicial rulings, and public sentiment collectively shape the Federal Trade Commission’s (FTC) approach to privacy enforcement. The conversation also highlighted recent cases where the FTC has expanded its enforcement scope, including actions against data brokers and algorithmic decision-making. The paper argues that policymakers need to balance their legal authority with the evolving public expectations to ensure effective privacy enforcement.


Professor Alicia Solow-Niederman and Professor James C. Cooper

John Verdi, FPF’s Senior Vice President for Policy, closed the event by thanking the winning authors, discussants, event team, and FPF’s Daniel Hales for their contributions. He highlighted FPF’s role in bringing together academia, policy, and industry experts to promote meaningful discussions on privacy.


Read the 15th Annual Privacy Papers for Policymakers Digest

FPF Releases Report on the Adoption of Privacy Enhancing Technologies by State Education Agencies

The Future of Privacy Forum (FPF) released a landscape analysis of the adoption of Privacy Enhancing Technologies (PETs) by State Education Agencies (SEAs). As agencies face increasing pressure to leverage sensitive student and institutional data for analysis and research, PETs offer a potential solution: advanced technologies designed to protect data privacy while preserving the utility of analytical results.
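To make the concept concrete, the short sketch below illustrates one widely cited category of PET, differential privacy, applied to a hypothetical aggregate reporting task. The scenario, field names, and epsilon value are illustrative assumptions, not examples drawn from the report.

```python
import numpy as np

def dp_count(values, epsilon=1.0):
    """Return a differentially private count by adding Laplace noise.

    The noise scale 1/epsilon calibrates the privacy-utility trade-off:
    a smaller epsilon gives stronger privacy but a noisier result.
    """
    true_count = len(values)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical example: publish how many students met a benchmark without
# letting any single student's record determine the exact published figure.
student_scores = [88, 92, 75, 64, 97, 81]
met_benchmark = [s for s in student_scores if s >= 80]
print(round(dp_count(met_benchmark, epsilon=0.5), 1))
```

The trade-off framed by epsilon reflects the balance the landscape analysis describes: protecting individual records while keeping aggregate results useful.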

FPF worked with AEM Corporation to conduct the landscape analysis, which includes an overview of current PETs adoption, current challenges, and considerations for enhancing data protection measures. The analysis, first previewed in a late 2024 webinar and expert panel discussion, evaluated organizational readiness and critical use cases for PETs within SEAs and the broader education sector. It highlights the need to raise awareness of what PETs are and what they are not, the range of available PETs, their potential use cases, and considerations for the effective adoption and sustainable implementation of these technologies.

“Intentional PETs implementation can boost community trust, enhance data analysis, and effectively ensure critical privacy protections,” said Jim Siegl, FPF Senior Technologist for Youth & Education Privacy. “But as our landscape analysis highlights, despite the advances PETs offer to SEAs in utilizing the data they steward, a gap persists in applying these technologies and realizing their potential benefits.”

Key findings outlined in the report include:

The report also outlines a series of recommendations to support PET adoption at scale, including establishing a shared vocabulary, creating trusted introductory resources, and curating relevant use cases to raise collective awareness about the capabilities and limitations of PETs. Additional recommendations include developing a PETs readiness model, focusing on core capabilities, and providing targeted technical assistance to support sustainable PET adoption and implementation. 

Recognizing the need for a deeper understanding of the potential and limitations of these technologies, FPF has actively contributed to shaping policymaking around PETs through discussion papers, reports, and stakeholder engagement. FPF’s PETs Repository, launched in November 2024, is a centralized, trusted, and up-to-date resource where individuals and organizations interested in these technologies can find practical and useful information.

Singapore Management University and Future of Privacy Forum Form Partnership to Advance Expertise in Digital Law and Data Governance in Asia-Pacific

March 10, 2025 — Singapore Management University (SMU) and the Future of Privacy Forum (FPF) have signed a Memorandum of Understanding (MOU) to strengthen collaboration in data governance, privacy, and emerging technology regulation across the Asia-Pacific region. 

By combining SMU’s expertise in digital law with FPF’s global leadership in data protection, privacy and emerging technology governance, this partnership aims to drive impactful research and thought leadership. Through this MOU, SMU and FPF will collaborate on a variety of initiatives, including joint events, research publications, and advisory participation, while also expanding stakeholder networks across academia, industry, and government. 

SMU’s Yong Pung How School of Law (YPHSL), ranked among the top 100 globally in the QS World University Rankings, is home to the Centre for Digital Law (CDL), which aims to become Asia’s premier law and technology research hub by integrating expertise from law, computer science, and digital humanities.

“This partnership with SMU’s Yong Pung How School of Law marks an important step in our mission to foster meaningful collaborations with leading academic institutions in the region,” said Josh Lee Kok Thong, FPF Managing Director for APAC. “As two organizations that share a common vision of fostering greater digital trust and innovation, we are excited to forge a strong partnership that will maximize our collective strengths and capabilities.”

With the rapid evolution of AI, digital finance, and cross-border data governance, this collaboration will play a key role in shaping regional and global conversations on responsible and forward-looking digital governance.

“Privacy and data protection is a fundamental aspect of each of our research pillars at the SMU CDL – society, economy, and government. We are excited to announce this closer collaboration with FPF after several years of informal collaboration, including taking part in many of FPF’s excellent events, and to working together to build a community of interest with diverse stakeholders in the region and bringing our regional voice to the global conversation,” said Jason Grant Allen, Director, Centre for Digital Law.

FPF has established a global presence across the US, Europe, Africa, the Asia-Pacific, India, Israel, and Latin America, monitoring policy developments and providing stakeholders with key insights. Its partnership with SMU strengthens this strategy, advancing its expertise and thought leadership in data protection and emerging technology regulation.

“FPF remains committed to leveraging our global reach and expertise in data governance to contribute meaningfully to policy discussions and research,” said Gabriela Zanfir-Fortuna, VP for Global Privacy.

As digital regulation continues to evolve, this collaboration will provide critical insights and policy guidance to ensure balanced, responsible and forward-thinking governance in the Asia-Pacific and beyond. 

Data Sharing for Research Tracker

Co-authored by Hannah Babinski, former FPF Intern

In celebration of International Open Data Day, FPF is proud to launch the Data Sharing For Research Tracker, a growing list of organizations that make data available for researchers. It provides information about each organization, the data it offers, any access restrictions, and relevant links.

Collecting data is one of the most difficult, time-consuming, and expensive parts of the research process; using existing data can help researchers reduce that time and cost.

Research by the Future of Privacy Forum and others has shown that companies have the potential to make significant contributions to research by sharing their data with researchers. This kind of data sharing carries inherent legal, ethical, and privacy risks that must be planned for in advance. Despite these challenges, data sharing for research is well worth the effort: it has led to scientific breakthroughs in topics ranging from diabetes risk prediction models to wildfire evacuation planning.

FPF’s new resource is intended to help researchers find data for secondary analysis. It also provides a platform for organizations looking to raise awareness about their data sharing programs and benchmark them against what other organizations offer. Check out these publications to learn more about why data sharing is important and how to share data for research while maintaining privacy and ethics:

Chile’s New Data Protection Law: Context, Overview, and Key Takeaways

On August 26, 2024, the Chilean Congress approved Law 21.719, on the Protection of Personal Data (“LPPD”) after eight years of legislative debate. The legislation was published on December 13, 2024, and will become fully effective twenty-four months after that date (in December 2026). 

The LPPD was introduced in the Senate in 2017 to replace Law 19.628, Ley sobre Protección de la Vida Privada (hereinafter referred to as “LPVP”), which was adopted in 1999 as Chile’s first national data protection framework, as well as the first such law in Latin America. 

The LPVP provided a foundational framework for personal data protection for nearly 24 years. However, the evolving demands of technological development and globalization gradually highlighted the LPVP’s lack of compatibility with newer and more comprehensive global standards for data protection adopted by partner countries. 

In particular, stronger data protection standards reflected in the European Union’s Directive 95/46/EC significantly influenced post-LPVP legislation in Latin America, with Argentina passing comprehensive data protection legislation in 2000 and Mexico in 2010, for example. A similar structural effect followed the enactment of the EU’s General Data Protection Regulation (GDPR), which has influenced recent proposals including Brazil’s Lei Geral de Proteção de Dados (LGPD) and Chile’s LPPD, although each nation has approached this era of policymaking in a unique way.

Prior congressional attempts to update the LPVP reflect the country’s efforts to align with global best standards and meet international commitments.1 According to the Chilean government, the approved LPPD pursues the dual objective of (i) providing stronger protection for data subjects and (ii) regulating and promoting the country’s digital economy.2 

This blog covers some of the new features in the LPPD, including:  

Read further for a deeper insight into the key features of the new Chilean data protection law and how they differ from its predecessor and other data protection laws in the region. 

1. Scope, covered actors, and extraterritoriality

The LPPD regulates the form and conditions under which the processing of personal data of natural persons may be carried out, in accordance with Article 19 of the Chilean Constitution, which recognizes the right to personal data protection.4 

Similar to other laws in the region (and to the model articulated in the GDPR), the LPPD applies extraterritorially to natural and legal persons, including public and private bodies, when the processing is carried out:

2. Covered data

Under Article 2(f) of the LPPD, “personal data” is broadly defined as “any information linked to or referring to an identified or identifiable natural person.” The LPPD establishes that an “identifiable” individual is one “whose identity can be determined, directly or indirectly, in particular by means of one or more identifiers, such as name, identity card number, analysis of elements of the physical, physiological, genetic, psychological, economic, cultural or social identity of such person.” In addition, to determine whether an individual is identifiable, the law requires “all objective means and factors that could reasonably be used to identify the individual at the time of the processing” be considered.

The LPPD’s approach to anonymized data is initially consistent with the GDPR’s approach to the subject: anonymized data is information that does not relate to an identified or identifiable person, and thus is not personal data.5 A similar initial definition is found in Brazil’s LGPD, though the Brazilian legislation explicitly recognizes that anonymization might be a reversible process.6 The key differentiating feature of the LPPD’s approach to “anonymization” is the term’s definition as an “irreversible process” that does not allow for the identification of a natural person.7 In that sense, the LPPD’s definition of anonymization seems stricter than the language found in both the GDPR and the LGPD concerning anonymized data. Future guidance will likely shed light on the requirements for “irreversibility” under Chilean law. 

Concerning “pseudonymization,” the LPPD follows a similar approach to that found in the GDPR and LGPD. Chilean law defines it as a process carried out in a way that “[data] can no longer be attributed to a data subject without additional information, provided that such information is separate and subject to technical and organizational measures to ensure the data is not attributable to a natural person.” This approach points to the possibility of considering pseudonymized data as personal data as long as it can be linked to an identifiable individual through additional information. 

Standards and guidance on anonymization and pseudonymization continue to be explored globally by authorities in the context of data protection frameworks. However, some laws explicitly recognize these techniques as a way to comply with data protection principles. The LPPD explicitly refers to pseudonymization as a technique relevant to compliance with the security principle. Article 14 quinquies of the LPPD indicates that controllers shall implement “technical and organizational measures to ensure a level of security appropriate to the risk,” such as pseudonymization and encryption of personal data, among other security measures.
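As an illustration of the distinction the LPPD draws, the minimal sketch below pseudonymizes a direct identifier with a keyed hash; the key plays the role of the “additional information” that must be stored separately under technical and organizational safeguards. The identifier format and field names are hypothetical, and this is only one of several possible pseudonymization techniques.

```python
import hashlib
import hmac
import secrets

# The key is the "additional information": kept separate from the dataset
# and protected by technical and organizational measures.
PSEUDONYM_KEY = secrets.token_bytes(32)

def pseudonymize(national_id: str) -> str:
    """Replace a direct identifier with a keyed hash (a pseudonym).

    Without the separately stored key, the pseudonym cannot reasonably be
    attributed to a person; with it, the controller can still link records.
    """
    return hmac.new(PSEUDONYM_KEY, national_id.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"subject": pseudonymize("12.345.678-9"), "grade": "A"}
print(record)
```

By contrast, anonymization as defined in the LPPD would require an irreversible transformation, one in which no key or other additional information could re-establish the link to a natural person.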

3. Data Subject Rights: “ARCO” rights, data portability, and the right to block the processing of data

    The LPPD includes two new data subject rights – the right to data portability and the right to block the processing of one’s data – in addition to the rights previously granted under the former LPVP: access, rectification, suppression, and opposition, regionally known as the “ARCO” rights.

    Similar to GDPR-inspired laws that have recently incorporated the right to portability, the LPPD indicates the data subject has the right to request and receive a copy of their data in an “electronic, structured, generic and commonly used format,” which allows the data to be read by different systems and the data subject to communicate or transfer the data to another data controller, when (i) the processing is carried out in an automated form; and (ii) the processing is based on the consent of the data subject. When technically feasible, the LPPD mandates that portability be performed directly from controller to controller. 

    In addition, the LPPD indicates the controller must use the “most expeditious and least onerous means” and communicate to the data subject in a “clear and precise manner” the necessary measures to carry out the portability. Notably, under Chilean law, the right to portability does not necessarily entail the deletion of the data by the transferring controller, which means a data subject who also wants their data deleted must request the deletion once the portability is carried out (Art. 9).
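As a purely illustrative reading of the “electronic, structured, generic and commonly used format” requirement, the sketch below exports a data subject’s records as JSON; the LPPD does not prescribe any particular format, and the fields shown are hypothetical.

```python
import json

def export_portable_copy(subject_record: dict, path: str) -> None:
    """Write a data subject's records in a structured, machine-readable
    format (here JSON) that another controller's systems could ingest."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(subject_record, f, ensure_ascii=False, indent=2)

export_portable_copy(
    {"email": "subject@example.com", "preferences": {"newsletter": True}},
    "portability_export.json",
)
```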

    The “right to block the processing of personal data” is the other new right added by the LPPD, which resembles the GDPR’s Article 19 “right to restriction of processing” and Brazil’s LGPD Article 18 “right to blocking unnecessary or excessive data.” Under Article 8 ter of the LPPD, this right is understood as a “temporary suspension of any processing operation” that pertains to a data subject when they make a rectification, erasure, or opposition request. The temporary suspension applies as long as the subject’s request remains open. This suggests that under the “right to block processing,” a data subject can immediately and effectively suspend the processing of their data before the rectification, erasure, or opposition request is processed by the controller. The controller is thus restricted from further processing, although it may continue storing the affected personal data. 

    Closely linked to the right of opposition, the LPPD introduces the “right to object and not be subject to decisions based on automated processing,” including profiling, when such processing produces legal effects on the data subject or significantly affects them (Art. 8 bis). Under the LPPD, “profiling” refers to “any form of automated processing of personal data that consists of using such data to evaluate, analyze or predict aspects relating to the professional performance, economic situation, health, personal preferences, interests, reliability, behavior, location or movements of a natural person” (Art. 2, (w)). 

    The LPPD hews closer to the GDPR in the sense that it expressly recognizes the “right to object and not be subject” to automated processing, unlike Brazil’s LGPD, which only recognizes a data subject’s “right to review” automated processing. Similar to the GDPR, Article 8 bis of the LPPD restricts the exercise of this right under certain circumstances, such as when: (i) the decision is necessary for the conclusion or execution of a contract between the subject and the agent; (ii) there is prior and express consent; or (iii)  as indicated by law, to the extent that it provides safeguards for the rights and freedoms of the data subject. The operationalization of this right must safeguard the data subject’s rights to information and transparency, obtain an explanation and human intervention, express their point of view, and request a review of the decision. This set of rights and freedoms is encapsulated within the right to object and not be subject to automated processing.

    4. Lawful grounds for processing and consent requirements

      The LPPD maintains consent as the general basis for the processing of personal data – similar to how it was regulated by the former LPVP. Consent must be “free, informed and specific as to its purpose” and given “in advance and unequivocally” by means of a verbal or written statement, or expressed through electronic means or an affirmative act that “clearly shows” the data subject’s intent. The data subject can revoke consent without retroactive effect, and the means to grant or revoke consent must be expeditious, reliable, free of charge, and permanently available (Art. 12).

      In line with the principle of purpose limitation, the LPPD presumes consent is not “freely given” when collected for the performance of a contract or the provision of a service, where the collection is not necessary to serve those purposes. However, this presumption is not applicable when a person or entity offering goods, services, or benefits solely requires the data subject’s consent to process their data (Art. 12). Notably, this scenario applies to many “free” online services, such as social media or messaging platforms, where consent to process an individual’s data for advertising or profiling purposes is often required for the provision of service.

      In the absence of the data subject’s consent, the LPPD recognizes the following lawful grounds for processing:

      Processing sensitive data and children’s and adolescents’ data 

      Similar to other comprehensive frameworks, the LPPD distinguishes sensitive data from personal data of a general nature. Under Article 2 (g) of the LPPD, “sensitive data” encompasses data that refers to “physical or moral characteristics of persons or to facts or circumstances of their private life or intimacy, that reveal ethnic or racial origin, political, union or trade union affiliation, socioeconomic situation, ideological or philosophical convictions, religious beliefs, data related to health, human biological profile, biometric data, and information related to sexual life, sexual orientation and gender identity of a natural person.” 

      Chile’s sensitive data definition is comparable to definitions found in other laws in the region, such as Brazil’s LGPD and Ecuador’s Ley Orgánica de Protección de Datos (LOPD), which base the nature of sensitivity on the potential for discrimination or impact on an individual’s rights and freedoms if such information is mishandled or unlawfully accessed. 

      As a general rule, sensitive data may only be processed with the consent of the data subject. Exceptionally, controllers may process sensitive data without consent in the following circumstances (Art. 16):

      Under Article 16 bis of the LPPD, health data and biometric data may only be processed for the purposes provided by the applicable laws or with the data subject’s consent, unless one of the following scenarios applies:  

      Article 16 ter defines biometric data as “obtained from a specific technical treatment, related to the physical, physiological or behavioral characteristics of a person that allow or confirm the unique identification of the person, such as fingerprint, iris, hand or facial features and voice.” When processing biometric data, the controller is required to disclose the biometric system used, the purpose of the collection, the period during which the data will be processed, and the manner in which the subject can exercise their rights.

      Similar to other frameworks in the region, like Brazil’s LGPD, Article 16 quater of the LPPD incorporates the “best interest of the child” standard for processing children’s data. As a general rule, such processing may only be conducted in the child’s best interest and with respect for their “progressive autonomy” – a concept introduced, yet not defined, by the LPPD. The lawful processing of children’s data must be based on consent granted by the parents or legal guardian, unless expressly authorized by law.

      The LPPD introduces a notable distinction between the processing rules applicable to data from children (under 14 years old) and adolescents (between 14 and 18 years old). Under Chilean law, adolescents’ data may be processed following the general rules applicable to adults’ data, except when the information is sensitive and the adolescent is under 16 years of age. This means that, to process sensitive data of adolescents under 16, controllers must still obtain consent from the parents or legal guardian. For other, non-sensitive data, controllers may process adolescents’ data following the general rules of the LPPD, but remain subject to the “best interest” standard. This distinction is an innovation of Chilean law and is not found in Brazil’s LGPD or Ecuador’s LOPD.

      5. Duties and Obligations of Data Controllers

        The LPPD’s provisions follow principles of lawfulness, fairness, purpose limitation, proportionality, quality, accountability, security, transparency, and confidentiality. These principles, along with other specific duties, guide the obligations of data controllers and are consistent with other modern data protection frameworks. 

        For instance, under Article 14 ter, controllers must inform and make available “background information” that proves the lawfulness of the data processing and promptly deliver such information when requested by data subjects or the authority. This suggests that regardless of whether the information is requested or not, controllers should keep this information readily available. This obligation relates to the “duty of information and transparency,” under which controllers must provide and keep “permanently available to the public” its processing policy, the categories of personal data subject to processing, a generic description of its databases, and the security measures to safeguard the data, among other information.

        Notably, Article 14 quater also introduces the “duty of protection by design and by default,” resembling GDPR Article 25. Under the LPPD, this duty refers to the application of “appropriate technical and organizational measures” before and during the processing. Drawing inspiration from the GDPR, the LPPD indicates the measures should consider the state of the art, costs, nature, scope, context, purpose, and risks associated with the data processing.

        Although the LPPD does not expressly recognize a “right to anonymization” like Brazil’s LGPD, it sets out the controller’s obligation to anonymize personal data obtained for the execution of pre-contractual measures (Art. 14, (e)). This obligation is closely linked to the general data protection principles, and effective compliance with this duty would place the anonymized data outside the scope of the LPPD.

        In relation to the security principle, Article 14 quinquies of the LPPD provides that controllers must adopt necessary security measures to ensure the confidentiality, integrity, availability, and resilience of the data processing systems, as well as to prevent alteration, destruction, loss, or unauthorized access to the data. Both controller and processor must take technical and organizational measures to ensure the security of the processing, in consideration of the risks associated with the processing, such as: 

        Security Incident Notification 

        Under Article 14 sexies of the LPPD, the responsible agent must report to the Agency by the “most expeditious means possible and without undue delay” any incident that can cause the accidental or unlawful destruction, breach, loss, or alteration of the personal data or the unauthorized communication or access to such data, when there is a “reasonable risk to the rights and freedoms of the data subjects.” Since the law is not clear on a specific timeframe for notification, it is expected the Agency will further regulate this area.

        The law also requires the controller to record these communications and describe the nature of the incident, its potential or demonstrated effects, the type of affected data, the approximate number of affected data subjects, and measures taken to manage and prevent future incidents.

        When the security incident concerns sensitive or children’s data, or data relating to economic, financial, banking, or commercial obligations, the controller must also communicate the incident to the affected data subjects in “clear and simple” language. If the notification cannot be made personally, the controller must publish a mass notice in at least one of the main national media outlets.

        Notably, Article 14 septies includes different standards of compliance with the “duty of information and transparency” and the “duty to adopt security measures” for controllers, based on whether they are a natural or legal person, their size, the activity they carry out, and the volume, nature, and purposes of their processing. The Agency will issue further regulation on the operationalization of these different standards.

        For organizations not incorporated in Chile, Articles 10 and 14 of the LPPD establish that the controller must indicate to the Agency in writing an email address of the legal or natural person authorized to act on their behalf, so that the Agency can establish communications with them and data subjects can exercise their rights.

        Similar to other frameworks, Article 15 bis limits the processor to carry out the data processing in accordance with the instructions given by the controller. If the processor or a third party processes the data for a different purpose or transfers the data without authorization, the processor will be considered the data controller for all legal purposes. The processor will be personally liable for any infringements incurred, and jointly and severally liable with the controller for any damages caused. Importantly, the “duty of confidentiality” and the “duty to adopt security measures” extend to the processor in the same terms applicable to the controller. 

        Data Protection Impact Assessment

        Similar to the GDPR, under Article 15 ter of the LPPD, controllers must carry out a personal data protection impact assessment (DPIA) where the data processing is “likely to result in a high risk to the rights of data subjects” and in the following cases: 

        The Agency will publish a list indicating the processing operations that may require a DPIA under the LPPD. In addition, the law obligates the Agency to issue guidance on the specific requirements for conducting DPIAs, so forthcoming regulation on this matter is expected once the Agency begins to operate. Notably, Article 15 ter sets out DPIA requirements similar to those in the GDPR: data controllers must include a description of the processing operations and their purpose, an assessment of the necessity and proportionality of the processing in relation to its purpose, an assessment of the risks it may pose, and the mitigation measures to be adopted.  

        Voluntary Appointment of a Data Protection Officer

        Unlike other modern comprehensive data protection laws, the LPPD does not require the appointment of a Data Protection Officer (DPO). However, Article 49 indicates that controllers may voluntarily appoint a DPO who meets requirements of suitability, capacity, and independence. Furthermore, the law indicates that controllers may adopt a “compliance program” that specifies, among other things, the appointment of the DPO and their powers and duties under that program. If the organization adopts a compliance program, it must be expressly incorporated into all employment or service provision contracts of the entity acting as data controller or processor.

        6. Cross-Border Data Transfers

        Similar to other frameworks in the region and the GDPR, cross-border data transfers made to a person, entity, or organization are generally authorized by the LPPD under the following mechanisms: (i) adequacy; (ii) contractual clauses, binding corporate rules, or other legal instruments entered into between the transferor and transferee; or (iii) a compliance model or certification mechanism, along with adequate guarantees. Although the LPPD does not provide a specific timeline, it indicates that the Agency will publish on its official website a list of countries deemed “adequate” under the criteria set forth by the law, as well as model contractual clauses and other legal instruments for international data transfers. 

        In the absence of an adequacy decision or proper safeguards, a “specific and non-customary” transfer may still be made under the following circumstances: 

        Notwithstanding the previous exceptions, Article 28 of the LPPD also includes a broader authorization for transfers that do not fall under any of these scenarios. Under Chilean law, an international data transfer may still be authorized when the transferor and transferee demonstrate “appropriate guarantees” to protect the rights and interests of the data subjects and the security of the information. This provision leaves open a broad possibility of transferring personal data without any of the traditional mechanisms, or outside the purposes listed above, as long as the Agency determines that appropriate measures are in place for the transfer. 

        7. Infractions and Civil Liability

        Violations of the principles and obligations set out in the LPPD may give rise to administrative and civil liability. The LPPD classifies violations as “minor” (e.g., failing to respond to data subjects’ requests or to communicate with the Agency), “serious” (e.g., processing data without a legal basis or for a purpose different from that for which the data was collected), and “very serious” (e.g., fraudulent or malicious processing of personal data, or knowingly transferring sensitive data in contravention of the law). Notably, “very serious” violations seem to require a showing of intent by the infringer. 

        Penalties under the LPPD can range from 5,000 national tax units (around USD 387,000) to 20,000 tax units (around USD 1,550,000). In the case of repeated “very serious” violations, the Agency may also order the total or partial suspension of processing activities for up to thirty (30) days, a period during which the infringer must demonstrate the adoption of the measures necessary to comply with the law. For entities that are not considered “small businesses”9 and that commit repeated serious or very serious violations, the Agency may impose a fine of 2% or 4% of their annual income in the last calendar year.

        Furthermore, as a dissuasive mechanism, the LPPD also creates the National Registry of Sanctions and Compliance, which will record all data controllers sanctioned for data protection violations and indicate the seriousness of the infringement, as well as aggravating or mitigating circumstances, for five (5) years.

        Towards Stronger Data Protection in Chile 

        With the passage of the LPPD, Chile enters an era of stronger data protection requirements and enforcement. The new law expands existing data subject rights and interests and incorporates new ones, sets out relevant obligations consistent with the evolving nature and demands of offering goods and services in the digital ecosystem, aligns with other global standards of personal data protection, and incorporates higher fines and dissuasive mechanisms. 

        Although the LPPD draws structural inspiration from the GDPR, it also maintains certain provisions unique to its predecessor law, the LPVP, such as specific regulations for the commercial and banking sectors, and broader exceptions to the lawful grounds for processing of personal data, including sensitive and children’s data. 

        The LPPD may again position Chile as a regional data protection trend-setter. Other countries currently seeking to update their existing data protection frameworks, such as Argentina and Colombia, could be influenced by the landmark passage of the LPPD, facilitating a new wave of “second generation” data protection laws in Latin America. 

        1. The Chilean Congress previously analyzed at least two similar proposals under different administrations in 2008 and 2012. Two of the recurring motivations for updating the data protection framework were to achieve adequacy under the EU’s regime and comply with Chile’s commitment to update its legislation after becoming an OECD member in 2010.
        2. See: press release from the government after approval of the LPPD.
        3. The Agency will be managed by a Directive Council composed of three Councilors designated by the Executive and ratified by the Senate. The first Councilors are expected to be appointed within sixty (60) days after the formal enactment of the law.
        4. Article 19, sec. 4, of the Chilean Constitution recognizes the right to private life, human dignity, and personal data protection.
        5. EU Regulation 2016/679 (GDPR), Recital 26.
        6. Lei Geral de Proteção de Dados (LGPD), Article 12.
        7. Law No. 21.719 (LPPD), Art. 2(k).
        8. For this exception to apply, the entity must have a political, philosophical, religious, or cultural purpose, or be a trade union; the processing must refer exclusively to the entity’s members or affiliates and fulfill the purposes of the entity; the entity must grant the necessary guarantees to avoid unauthorized use of or access to the data; and the personal data must not be transferred to third parties.
        9. As defined under Article 2 of Law No. 20.416.

        Geopolitical fragmentation, the AI race, and global data flows: the new reality

        Most countries in the world have data protection or privacy laws and there is growing cross-border enforcement cooperation between data protection authorities, which might lead one to believe that the protection of global data flows and transfers is steadily advancing. However, instability and risks arising from wars, trade disputes, and the weakening of the rule of law are increasing, and are causing legal systems that protect data transferred across borders to become more inward-looking and to grow farther apart. 

        The geopolitical race to take a leading role in the development of AI (the ‘AI race’), a technology which requires borderless access to data for the best performing systems and models, is also fundamentally reshaping the international data flows landscape and leading to increased regulatory fragmentation. These two areas (privacy and data protection on the one hand and AI on the other) are intimately connected, as privacy and data protection law form the basis for AI regulation in many regions of the world.

        Fragmentation refers to the multiplicity of legal norms, courts and tribunals (including data protection authorities), and regulatory practices regarding privacy and data protection that exist around the world. This diversity is understandable in that it reflects different legal and cultural values regarding privacy and data protection, but it can also create conflicts between legal systems and increased burdens for data flows.

        While this new reality affects all regions of the world, it can be illustrated by considering recent developments in three powerful geopolitical players, namely the European Union, the People’s Republic of China, and the United States. Dealing with these risks requires that greater attention be paid to geopolitical crises and legal fragmentation as a threat to protections for the free flow of data across borders. 

        The end of the ‘Brussels effect’?

        There has been much talk of the ‘Brussels effect’ that has allowed the EU to export its regulatory approach, including its data protection law, to other regions. However, the rules on international data transfers contained in Chapter V of the EU General Data Protection Regulation (‘GDPR’) face challenges that may diminish their global influence.

        These challenges are in part homemade. The standard of ‘essential equivalence’ with EU law that is required for a country to receive a formal adequacy decision from the European Commission allowing personal data to flow freely to it is difficult for many third countries to attain and sometimes leads to legal and political conflicts. The protection of data transfers under the GDPR has been criticised in the recent Draghi report as overly bureaucratic, and there have been calls to improve harmonisation of the GDPR’s application in order to increase economic growth. In particular, the approval of adequacy decisions is lengthy and opaque, and other legal bases for data transfers are plagued by disagreements between data protection authorities about key concepts. The GDPR also applies to EU legislation dealing with AI (see the EU AI Act, Article 2(7)), so that problems with data transfers under the GDPR also affect AI-related transfers. 

        These factors indicate that the EU approach to data transfers may gradually lose traction with other countries. Although many of them still seek EU adequacy decisions and are happy to cooperate with the EU on data protection matters, they may also simultaneously explore other options. For example, some countries that are already subject to one or more EU adequacy decisions (such as Canada, Japan, Korea, and the UK, which has received adequacy decisions under both the GDPR and the Law Enforcement Directive) have also joined a group that is establishing ‘Global Cross-Border Privacy Rules’ as a more flexible alternative system for data transfers. 

        Political challenges to the EU’s personal data transfer regime are now also present. Some companies are encouraging new US President Trump to challenge the enforcement of EU law against them, and some far-right parties in Europe have called for its repeal.

        Meanwhile, partly in response to the increased need for access to data in the AI race and partly under a novel digital sovereignty paradigm in this new geopolitical reality, the EU has also begun introducing restrictions on transfers of non-personal data outside the EU, such as through the Data Act, the Data Governance Act, and data localization requirements under the European Health Data Space Regulation. In addition, under the Data Act ‘data holders,’ regardless of where they are based in the world, must make data related to the use of connected devices readily available to EU-based users and recipients. Initiatives to promote the EU’s digital sovereignty and minimise the need to transfer data to centralized foreign platforms can also be expected to gain momentum.

        The rise of China

        China has already enacted many data-related laws, including some dealing with data transfers, after first introducing sweeping data localization requirements in 2017. It was all the more surprising that in November 2024 the Chinese government announced that it will launch a ‘global cross-border data flow cooperation initiative,’ and that it is ‘willing to deepen cooperation with all parties to promote efficient, convenient, and secure cross-border data flows.’ In a speech he gave at the same time, Chinese leader Xi Jinping said that China ‘is willing to deepen cooperation with all parties to jointly promote efficient, convenient and secure cross-border data flows’. 

        Exactly what this means is presently unclear. However, China is a member of the BRICS group, which includes countries with nearly half of the world’s population, and has also enacted many regulations dealing with AI. If China is able to use its political and economic clout to influence the agenda for cross-border data flows, as some scholars hypothesize, this could bring the BRICS countries and others deeper into its regulatory orbit for both privacy and AI.

        The arrival of data transfer rules in the US

        The United States government has recently relaxed its traditional opposition to controls on data transfers and adopted rules restricting certain transfers based on US national security concerns.

        In February 2024 former US President Biden issued an executive order limiting bulk sales of personal data to ‘countries of concern.’ The Department of Justice then issued a Final Rule in December 2024 setting out a regulatory program to address the ‘urgent and extraordinary national security threat posed by the continuing efforts of countries of concern (and covered persons that they can leverage) to access and exploit Americans’ bulk sensitive personal data and certain U.S. Government-related data.’

        It is no secret that these initiatives are primarily focused on data transfers to China, which is one of the six ‘countries of concern’ determined by the Attorney General, with the concurrence of the Secretaries of State and Commerce (the other five are Venezuela, Cuba, North Korea, Iran and Russia, according to Section 202.211 of the Final Rule). While some scholars have expressed skepticism about whether these initiatives will really bring their intended benefits, it is significant that national security has been used as a basis both for regulating data flows and for a shift in US trade policy.

        It is too soon to tell if President Trump will continue this focus. However, some of the actions that his administration has already taken have drawn the attention of digital rights groups in Europe who believe they may imperil the EU-US data privacy framework that serves as the basis for the EU adequacy decision allowing free data flows to the US. It is also questionable whether the EU will put resources into negotiating further agreements to facilitate data transfers to the US in light of the current breakdown in transatlantic relations.

        Conclusions

        We have entered a new era of instability where geopolitical tensions and the AI race have a significant impact on the protection of data flows. To be sure, political factors have long influenced the legal climate for data transfers, such as in the disputes between the EU and the US that led to the EU Court of Justice invalidating EU adequacy decisions in its two Schrems judgments (Case C-362/14 and Case C-311/18). The European Commission has also admitted that political and economic factors influence its approach to data flows. However, in the past political disputes about data transfers largely remained within the limits of disagreements between friends and allies, whereas the tensions that currently threaten them often arise from serious international conflicts that can quickly spiral out of control.

        The fragmentation of data transfer rules along regional and sectoral lines will likely increase with the development of AI and similar technologies that require completely borderless data flows, and with increased cross-border enforcement of data protection law in cases involving AI. Initiatives to regulate data transfers used in AI have already been proposed at the regional level, such as in the Continental Artificial Intelligence Strategy published in August 2024 by the African Union, which refers to cooperation ‘to create capacity to enable African countries to self-manage their data and AI and take advantage of regional initiatives and regulated data flows to govern data appropriately’. This will likely also give additional impetus to digital sovereignty initiatives in different regions, which will lead to even greater fragmentation.

        Data protection authorities have also begun sanctioning companies for improper data transfers in connection with the use of AI systems, as happened recently in a case where the South Korea Personal Information Protection Commission ordered the Chinese fintech company Alipay to destroy AI models containing personal information transferred to China in violation of South Korean data protection law (see press release no. 135).

        The growing influence of geopolitics demonstrates that the protection of data flows requires a strong rule of law, which is currently under threat around the world. The regulation of data transfers is too often regarded as a technocratic exercise that focuses on steps such as filling out forms and compiling impact assessments. However, such exercises can only provide protection within a legal system that is underpinned by the rule of law. The weakening of factors that comprise the rule of law, such as the separation of powers and a strong and independent judiciary, drives uncertainty and the fragmentation of data transfer regulation even more.

        The approaches to data transfer regulation pursued by the leading geopolitical players each have their strengths and weaknesses. The EU approach has attained considerable influence around the world, but is coming under pressure largely because of homegrown problems. The US emphasis on national security is inward-looking, but could become popular in other countries as well. China’s new initiative to regulate data transfers seems poised to attain greater international influence, though this may be mainly limited to the Asia-Pacific region.

        Although complying with data transfer regulation has always required attention to risk, geopolitical risk has been broadly overlooked so far, perhaps because it can seem overwhelming and impossible to predict. Indeed, events that have disrupted data flows such as Brexit and the Russian invasion of Ukraine were sometimes dismissed before they happened. However, this new reality requires incorporating the management of geopolitical risk into assessing the viability and legal certainty of international data transfers by organizations active across borders. There are steps that can be taken to manage geopolitical risk, such as those identified by the World Economic Forum, namely: assessing risks to understand them better; looking at ways to reduce the risks; ringfencing risks when possible; and developing plans to deal with events if they occur. 

        Parties involved in data transfers already need to perform risk assessments, but geopolitical events present a larger scale of risk than many will be used to. Risk reduction and ringfencing for unpredictable 'black swan' events such as wars or sudden international crises are difficult, and may require drastic measures, such as halting data flows or changing supply chains, that need to be prepared in advance.
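        To make that kind of preparation concrete, the following is a minimal, hypothetical sketch in Python of how an organization might record the four World Economic Forum-style steps for each cross-border data flow. The class, field names, and example entries are illustrative assumptions, not FPF or WEF guidance.

# Hypothetical sketch only: names, categories, and the example entries are
# illustrative assumptions, not FPF or World Economic Forum guidance.
from dataclasses import dataclass, field

@dataclass
class GeopoliticalRiskEntry:
    """One cross-border data flow and the four WEF-style risk-management steps."""
    flow: str                                                 # the transfer being assessed
    assessed_risk: str = "unassessed"                         # step 1: assess the risk
    reduction_measures: list = field(default_factory=list)    # step 2: reduce it
    ringfencing: list = field(default_factory=list)           # step 3: ringfence it
    contingency_plan: str = ""                                # step 4: plan for disruptive events

    def is_prepared(self) -> bool:
        # A flow counts as "prepared" only if every one of the four steps has content.
        return (self.assessed_risk != "unassessed"
                and bool(self.reduction_measures)
                and bool(self.ringfencing)
                and bool(self.contingency_plan))

register = [
    GeopoliticalRiskEntry(
        flow="EU customer data -> US analytics vendor",
        assessed_risk="medium",
        reduction_measures=["standard contractual clauses", "encryption in transit and at rest"],
        ringfencing=["EU-hosted fallback environment"],
        contingency_plan="suspend the transfer and switch to the EU fallback within 30 days",
    ),
    GeopoliticalRiskEntry(flow="HR data -> APAC payroll processor"),  # not yet assessed
]

for entry in register:
    status = "prepared" if entry.is_prepared() else "gaps remain"
    print(f"{entry.flow}: {status}")

        Even a simple register like this makes gaps visible: any flow without a contingency plan is flagged before a crisis forces an improvised response.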

        Major geopolitical events and the AI race are having a significant effect on data protection and data flows, making it essential to anticipate them as much as possible and to develop plans to cope with them should they occur. The only thing that can be safely predicted is that further geopolitical developments are in store, with the potential to bring massive changes to the data protection landscape and to disrupt global data flows; they therefore deserve a prominent place in risk analysis when transferring data.

        FPF Submits Comments to the California Privacy Protection Agency on Proposed Rulemaking

        On February 19, the Future of Privacy Forum (FPF) submitted comments to the California Privacy Protection Agency (CPPA) concerning draft regulations governing cybersecurity audits, risk assessments, and automated decisionmaking technology (ADMT) access and opt-out rights under the California Consumer Privacy Act.

        FPF’s comments identified opportunities to bring additional clarity to key elements of the proposed regulations and to support interoperability with other US legal frameworks. In particular, FPF recommended that the CPPA:

        1. Clarify the “substantially facilitate” standard for in-scope ADMT systems, to provide more certainty for businesses and focus requirements on the highest-risk uses of ADMT;
        2. Ensure that carve-outs for narrowly used, low-risk AI systems are appropriately tailored to avoid unintended impacts on socially beneficial technologies and use cases;
        3. Clarify the intended scope of the definition of “significant decision” to include decisions that result in “access to” the specified goods and services;
        4. Consider whether the application of requirements to training ADMT systems that are “capable” of being used for certain purposes, rather than intended or reasonably likely to be used for such purposes, is too broad;
        5. Clarify what it means for an ADMT or AI system to be used for “establishing individual identity”;
        6. Clarify that requests to opt out of having one’s personal information processed to train ADMT or AI systems do not require businesses to retrain models when the requests are submitted after processing has begun;
        7. Consider whether requiring businesses to identify “technology to be used in the processing” in risk assessments is overly broad;
        8. Clarify that, in conducting risk assessments, the benefits of processing activities should be weighed against the risks to individuals’ privacy as mitigated by safeguards;
        9. Consider whether it is appropriate to require board members to certify a business’s cybersecurity audits; and
        10. Provide flexibility to support the delivery of effective and context-appropriate privacy notices, particularly with respect to virtual and augmented reality environments.

        FPF’s comments also included a comparison chart highlighting similarities and differences between the CPPA’s proposed risk assessment regulations, data protection assessment regulations pursuant to the Colorado Privacy Act, and data protection impact assessment requirements under the General Data Protection Regulation.

        FPF Releases Infographic Highlighting the Spectrum of AI in Education  

        To highlight the wide range of current use cases for Artificial Intelligence (AI) in education and future possibilities and constraints, the Future of Privacy Forum (FPF) today released a new infographic, Artificial Intelligence in Education: Key Concepts and Uses. While generative AI tools that can write essays, generate and alter images, and engage with students have brought increased attention to the topic, schools have been using AI-enabled applications for years.

        The AI in Education infographic builds on FPF’s 2023 The Spectrum of Artificial Intelligence report and infographic, and illustrates a sample of the use cases these technologies support, tailored to the school environment.

        “AI encompasses a broad range of technologies, and understanding the main types of AI, how they interrelate, and how they use student data is critical for educators, school leaders, and policymakers evaluating their risks and benefits in the educational environment,” said Jim Siegl, FPF Senior Technologist for Youth & Education Privacy. “Understanding the use case and context is critically important, and we hope this infographic underscores the need for nuance when setting AI policies in schools.”

        Although popular edtech tools powered by machine learning (ML), large language models (LLM), and generative AI (GEN) are transforming education by personalizing learning experiences and automating administrative tasks, AI is not limited to these models. It spans various other forms, including knowledge engineering, symbolic AI, natural language processing, and reinforcement learning, each contributing in its own way to enhancing human capabilities on specific tasks in the school context.
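        As a simple illustration of that spectrum, the following minimal, hypothetical Python sketch pairs each AI category named above with an example school use case. The pairings are assumptions for illustration only and are not drawn from the infographic.

# Hypothetical sketch only: the pairings below are illustrative assumptions,
# not a reproduction of FPF's infographic.
AI_SPECTRUM = {
    "machine learning (ML)": "flagging students who may need early reading support",
    "large language models (LLM)": "drafting first-pass feedback on student essays",
    "generative AI (GEN)": "creating practice questions and example images for lessons",
    "knowledge engineering / symbolic AI": "rule-based scheduling and prerequisite checks",
    "natural language processing": "transcribing and captioning recorded lectures",
    "reinforcement learning": "adapting the difficulty of practice problems over time",
}

for ai_type, example_use in AI_SPECTRUM.items():
    print(f"{ai_type}: {example_use}")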

        The infographic takes a closer look at several common AI use cases in schools, including: 

        AI in Education: Key Concepts and Uses is the latest infographic resource in a series from FPF; previously released infographics related to youth and education privacy include Encryption Keeps Everyone Safe, Unpacking Age Assurance: Technologies and Tradeoffs, Understanding Student Monitoring, and Youth Data and Privacy Protection 101. To view all of FPF’s infographics, click here.

        To support schools seeking to vet AI tools for legal compliance, FPF released a checklist and guide last year. To access all of FPF’s Youth & Education Privacy resources, visit StudentPrivacyCompass.org.

        Why data protection legislation offers a powerful tool for regulating AI

        For some, it may have come as a surprise that the first existential legal challenges large language models (LLMs) faced after their market launch were under data protection law, a legal field that can seem arcane to those enthralled by novel Artificial Intelligence (AI) law or by AI ethics and governance principles. But data protection law was created in the 1960s and 1970s specifically in response to automation, computers, and the idea of future “thinking machines”.

        The fact that it is now immediately relevant to AI systems, including the most complex ones, is not an unintended consequence. To some extent, the current wave of AI law and governance principles could be seen as the next generation of data protection law. Yet if it is not developed in parallel and if it fails to build coherently on the existing body of data protection laws, practice and thinking, it risks missing the mark.

        Read the full blog by Dr. Gabriela Zanfir-Fortuna, published February 10, 2025, on LSE European Politics and Policy.