Geopolitical fragmentation, the AI race, and global data flows: the new reality
Most countries in the world have data protection or privacy laws and there is growing cross-border enforcement cooperation between data protection authorities, which might lead one to believe that the protection of global data flows and transfers is steadily advancing. However, instability and risks arising from wars, trade disputes, and the weakening of the rule of law are increasing, and are causing legal systems that protect data transferred across borders to become more inward-looking and to grow farther apart.
Fragmentation refers to the multiplicity of legal norms, courts and tribunals (including data protection authorities), and regulatory practices regarding privacy and data protection that exist around the world. This diversity is understandable in that it reflects different legal and cultural values regarding privacy and data protection, but it can also create conflicts between legal systems and increased burdens for data flows.
While this new reality affects all regions of the world, it can be illustrated by considering recent developments in three powerful geopolitical players, namely the European Union, the People’s Republic of China, and the United States. Dealing with these risks requires that greater attention be paid to geopolitical crises and legal fragmentation as a threat to protections for the free flow of data across borders.
The end of the ‘Brussels effect’?
There has been much talk of the ‘Brussels effect’ that has allowed the EU to export its regulatory approach, including its data protection law, to other regions. However, the rules on international data transfers contained in Chapter V of the EU General Data Protection Regulation (‘GDPR’) face challenges that may diminish their global influence.
These challenges are in part homemade. The standard of ‘essential equivalence’ with EU law that is required for a country to receive a formal adequacy decision from the European Commission allowing personal data to flow freely to it is difficult for many third countries to attain and sometimes leads to legal and political conflicts. The protection of data transfers under the GDPR has been criticised in the recent Draghi report as overly bureaucratic, and there have been calls to improve harmonisation of the GDPR’s application in order to increase economic growth. In particular, the approval of adequacy decisions is lengthy and untransparent, and other legal bases for data transfers are plagued by disagreements about key concepts between data protection authorities. The GDPR also applies to EU legislation dealing with AI (see the EU AI Act, Article 2(7)), so that problems with data transfers under the GDPR also affect AI-related transfers.
These factors indicate that the EU approach to data transfers may gradually lose traction with other countries. Although many of them still seek EU adequacy decisions and are happy to cooperate with the EU on data protection matters, they may also simultaneously explore other options. For example, some countries that are already subject to an EU adequacy decision or decisions (such as Canada, Japan, Korea, and the UK which has received adequacy decisions under both the GDPR and Law Enforcement Directive) have also joined a group that is establishing ‘Global Cross-Border Privacy Rules’ as a more flexible alternative system for data transfers.
Political challenges to the EU’s personal data transfer regime are now also present. Some companies are encouraging new US President Trump to challenge the enforcement of EU law against them, and some far-right parties in Europe have called for its repeal.
China has already enacted many data-related laws, including some dealing with data transfers, after first introducing sweeping data localization requirements in 2017. It was all the more surprising that in November 2024 the Chinese government announced that it will launch a ‘global cross-border data flow cooperation initiative,’ and that it is ‘willing to deepen cooperation with all parties to promote efficient, convenient, and secure cross-border data flows.’ In a speech he gave at the same time, Chinese leader Xi Jinping said that China ‘is willing to deepen cooperation with all parties to jointly promote efficient, convenient and secure cross-border data flows’.
Exactly what this means is presently unclear. However, China is a member of the BRICS group, which includes countries with nearly half of the world’s population, and has also enacted many regulations dealing with AI. If China is able to use its political and economic clout to influence the agenda for cross-border data flows, as some scholars hypothesize, this could bring the BRICS countries and others deeper into its regulatory orbit for both privacy and AI.
The arrival of data transfer rules in the US
The United States government has recently relaxed its traditional opposition to controls on data transfers and enacted regulations to regulate certain transfers based on US national security concerns.
In February 2024 former US President Biden issued an executive order limiting bulk sales of personal data to ‘countries of concern.’ The Department of Justice then issued a Final Rule in December 2024 setting out a regulatory program to address the ‘urgent and extraordinary national security threat posed by the continuing efforts of countries of concern (and covered persons that they can leverage) to access and exploit Americans’ bulk sensitive personal data and certain U.S. Government-related data.’
It is no secret that these initiatives are primarily focused on data transfers to China, which is one of the six ‘countries of concern’ determined by the Attorney General, with the concurrence of the Secretaries of State and Commerce (the other five are Venezuela, Cuba, North Korea, Iran and Russia, according to Section 202.211 of the Final Rule). While some scholars have expressed skepticism about whether these initiatives will really bring their intended benefits, it is significant that national security has been used as a basis both for regulating data flows and for a shift in US trade policy.
It is too soon to tell if President Trump will continue this focus. However, some of the actions that his administration has already taken have drawn the attention of digital rights groups in Europe who believe they may imperil the EU-US data privacy framework that serves as the basis for the EU adequacy decision allowing free data flows to the US. It is also questionable whether the EU will put resources into negotiating further agreements to facilitate data transfers to the US in light of the current breakdown in transatlantic relations.
Conclusions
We have entered a new era of instability where geopolitical tensions and the AI race have a significant impact on the protection of data flows. To be sure, political factors have long influenced the legal climate for data transfers, such as in the disputes between the EU and the US that led to the EU Court of Justice invalidating EU adequacy decisions in its two Schrems judgments (Case C-362/14 and Case C-311/18). The European Commission has also admitted that political and economic factors influence its approach to data flows. However, in the past political disputes about data transfers largely remained within the limits of disagreements between friends and allies, whereas the tensions that currently threaten them often arise from serious international conflicts that can quickly spiral out of control.
The fragmentation of data transfer rules along regional and sectoral lines will likely increase with the development of AI and similar technologies that require completely borderless data flows, and with increased cross-border enforcement of data protection law in cases involving AI. Initiatives to regulate data transfers used in AI have already been proposed at the regional level, such as in the Continental Artificial Intelligence Strategy published in August 2024 by the African Union, which refers to cooperation ‘to create capacity to enable African countries to self-manage their data and AI and take advantage of regional initiatives and regulated data flows to govern data appropriately’. This will likely also give additional impetus to digital sovereignty initiatives in different regions, which will lead to even greater fragmentation.
The growing influence of geopolitics demonstrates that the protection of data flows requires a strong rule of law, which is currently under threat around the world. The regulation of data transfers is too often regarded as a technocratic exercise that focuses on steps such as filling out forms and compiling impact assessments. However, such exercises can only provide protection within a legal system that is underpinned by the rule of law. The weakening of factors that comprise the rule of law, such as the separation of powers and a strong and independent judiciary, drives uncertainty and the fragmentation of data transfer regulation even more.
The approaches to data transfer regulation pursued by the leading geopolitical players each have their strengths and weaknesses. The EU approach has attained considerable influence around the world, but is coming under pressure largely because of homegrown problems. The US emphasis on national security is inward-looking, but could become popular in other countries as well. China’s new initiative to regulate data transfers seems poised to attain greater international influence, though this may be mainly limited to the Asia-Pacific region.
Although complying with data transfer regulation has always required attention to risk, geopolitical risk has been broadly overlooked so far, perhaps because it can seem overwhelming and impossible to predict. Indeed, events that have disrupted data flows such as Brexit and the Russian invasion of Ukraine were sometimes dismissed before they happened. However, this new reality requires incorporating the management of geopolitical risk into assessing the viability and legal certainty of international data transfers by organizations active across borders. There are steps that can be taken to manage geopolitical risk, such as those identified by the World Economic Forum, namely: assessing risks to understand them better; looking at ways to reduce the risks; ringfencing risks when possible; and developing plans to deal with events if they occur.
Parties involved in data transfers already need to perform risk assessments, but geopolitical events present a larger scale of risk than many will be used to. Risk reduction and ringfencing for unpredictable ‘black swan events’ such as wars or sudden international crises are difficult, and may require drastic measures such as halting data flows or changing supply chains that need to be prepared in advance.
Major geopolitical events and the AI race are having a significant effect on data protection and data flows, making it essential to anticipate them as much as possible and to develop plans to cope with them should they occur. The only thing that can be safely predicted is that further geopolitical developments are in store with the potential to bring massive changes to the data protection landscape and disrupt global data flows, making it essential to give them a prominent place in risk analysis when transferring data.
FPF Submits Comments to the California Privacy Protection Agency on Proposed Rulemaking
On February 19, the Future of Privacy Forum (FPF) submitted comments to the California Privacy Protection Agency (CPPA) concerning draft regulations governing cybersecurity audits, risk assessments, automated decisionmaking technology (ADMT) access and opt-out rights under the California Consumer Privacy Act.
FPF’s comments identified opportunities to bring additional clarity to key elements of the proposed regulations as well as support interoperability with other US legal frameworks. In particular, FPF recommended that the CPPA—
Clarify the “substantially facilitate” standard for in-scope ADMT systems, to provide more certainty for businesses and focus requirements to the highest-risk uses of ADMT;
Ensure that carve-outs for narrowly used, low-risk AI systems are appropriately tailored to avoid unintended impacts to socially beneficial technologies and use cases;
Clarify the intended scope of definition “significant decision” to include decisions that result in “access to” the specified goods and services;
Consider whether application of requirements to training ADMT systems that are “capable” of being used for certain purposes, rather than intended or reasonably likely to be used for such purposes, is too broad;
Clarify what it means for an ADMT or AI system to be used for “establishing individual identity”;
Clarify that requests to opt-out of having one’s personal information processed to train ADMT or AI systems submitted after processing has begun do not require businesses to retrain models;
Consider whether requiring businesses to identify “technology to be used in the processing” in risk assessments is overly broad;
Clarify that, in conducting risk assessments, the benefits from processing activities should be weighed against the risks to individuals’ privacy as mitigated by safeguards;
Consider whether it is appropriate to require board members to certify a business’s cybersecurity audits; and
Provide flexibility to support the delivery of effective and context-appropriate privacy notices, particularly with respect to virtual and augmented reality environments.
FPF’s comments also included a comparison chart highlighting similarities and differences between the CPPA’s proposed risk assessment regulations, data protection assessment regulations pursuant to the Colorado Privacy Act, and data protection impact assessment requirements under the General Data Protection Regulation.
FPF Releases Infographic Highlighting the Spectrum of AI in Education
To highlight the wide range of current use cases for Artificial Intelligence (AI) in education and future possibilities and constraints, the Future of Privacy Forum (FPF) today released a new infographic, Artificial Intelligence in Education: Key Concepts and Uses. While generative AI tools that can write essays, generate and alter images, and engage with students have brought increased attention to the topic, schools have been using AI-enabled applications for years.
“AI encompasses a broad range of technologies, and understanding the main types of AI, how they interrelate, and how they use student data is critical for educators, school leaders, and policymakers evaluating their risks and benefits in the educational environment,” said Jim Siegl, FPF Senior Technologist for Youth & Education Privacy. “Understanding the use case and context is critically important, and we hope this infographic underscores the need for nuance when setting AI policies in schools.”
Although popular edtech tools powered by machine learning (ML), large language models (LLM), and generative AI (GEN) are transforming education by personalizing learning experiences and automating administrative tasks, AI is not limited to these models. It spans various other forms, including knowledge engineering, symbolic AI, natural language processing, and reinforcement learning, each contributing uniquely to enhancing human capabilities in the completion of specific tasks in the school context.
The infographic takes a closer look at several common AI use cases in schools, including:
Automated grading and feedback, allow teachers to spend more time focused on instruction and student support. While these tools may aid in consistency and objectivity in grading, they should be designed to comply with student privacy laws, and the output should be reviewed by the teacher for risk of bias.
Student monitoring, via integrated systems designed to assist schools in monitoring activity on school-issued divides, accounts, and district internet connections. Student monitoring is done to detect threats to student safety, comply with state and federal regulations, respond to community concerns, and identify at-risk students.
Curriculum development, through systems that can help educators design effective curricula to meet current education standards and student needs. Additionally, AI can help generate quizzes, worksheets, and reading materials tailored to the curriculum.
Intelligent tutoring systems can create customized learning plans and provide students with additional support outside of classroom hours, helping to reinforce concepts through an interactive and engaging learning experience.
School security includes using facial recognition and advanced video analytics to enable faster and more accurate responses to potential threats and ultimately help ensure a safe learning environment.
To support schools seeking to vet AI tools for legal compliance, FPF released a checklist and guide last year. To access all of FPF’s Youth & Education Privacy resources, visit StudentPrivacyCompass.org.
Why data protection legislation offers a powerful tool for regulating AI
For some, it may have come as a surprise that the first existential legal challenges large language models (LLMs) faced after their market launch were under data protection law, a legal field that looks arcane in the eyes of those enthralled by novel Artificial Intelligence (AI) law, or AI ethics and governance principles. But data protection law was created in the 1960s and 1970s specifically in response to automation, computers and the idea of future “thinking machines”.
The fact that it is now immediately relevant to AI systems, including the most complex ones, is not an unintended consequence. To some extent, the current wave of AI law and governance principles could be seen as the next generation of data protection law. Yet if it is not developed in parallel and if it fails to build coherently on the existing body of data protection laws, practice and thinking, it risks missing the mark.
FPF Celebrates Safer Internet Day with Newly Released Encryption Infographic
Future of Privacy Forum (FPF) is thrilled to celebrate Safer Internet Day 2025 with the release of a new infographic, “Encryption Keeps Young People Safe.” Safer Internet Day is an annual event and part of a larger global mission to create a safer online environment, especially for young people. FPF’s new infographic explains how encryption technology plays a crucial role in ensuring data privacy and online safety for a new generation of teens and kids. FPF will host leading experts at a virtual event on Feb. 11 at 10 am ET to discuss the state of encryption technology and policy.
Data encryption is central to online security, privacy, and safety, and of particular importance for particularly vulnerable groups, such as young people. Gen Z and Gen Alpha have lived their entire lives in the age of the commercial internet, social media, electronic records, and internet-connected devices. They have grown up in a world where everything from insulin pumps to cars are internet-connected. Encryption is the best protection to ensure that personal communications, transactions and devices are safe and secure. The 2025 infographic illustrates encryption’s role in protecting data in places young people frequent, such as sports parks, shopping centers, and health clinics.
Encryption is often used to secure or authenticate sensitive documents. Encryption applies a mathematical formula, which obfuscates plaintext information and transforms the plaintext into unreadable ciphertext. Each use of encryption generates a long number that is the mathematical solution to the formula and can unscramble the protected sensitive information. If a private key is not kept secret, anyone with the key can access the private data or impersonate the authenticated person or organization.
This infographic is the latest in FPF’s longstanding work on encryption, which includes a 2020 infographic explaining how encryption more broadly protects enterprises, individuals, and governments—and what may happen when data and devices fail to use strong encryption and are compromised by bad actors. The infographic series advances FPF’s mission of promoting data privacy for every user by showcasing the vital role encryption plays in ensuring online safety, and the detrimental effects of an online world without its protections.
FPF will host a virtual event at 10 am ET today, featuring a Keynote address from Patricia Kosseim, Ontario Information and Privacy Commissioner. There will also be a panel of experts to dive into how encryption protects young people not just online, but in the physical world as well, by preventing malicious actors from gaining access to the devices and spaces they rely on for health, education, convenience, and more. Register now to join the event!
If you’re interested in learning more about encryption or other issues driving the future of privacy, sign up for our monthly briefing, check out one of our upcoming events, or follow us on X, LinkedIn, or Instagram.
Minding Mindful Machines: AI Agents and Data Protection Considerations
Thank you for the contributions of Rob van Eijk, Marlene Smith, and Katy Wills
We are now in 2025, the year of AI agents. In the last few weeks, leading large language model (LLM) developers (including OpenAI, Google, Anthropic) have released early versions of technologies described as “AI agents.” Unlike earlier automated systems and even LLMs, these systems go beyond previous technology by having autonomy over how to achieve complex, multi-step tasks, such as navigating on a user’s web browser to take actions on their behalf. This could enable a wide range of useful or time-saving tasks, from making restaurant reservations and resolving customer service issues to coding complex systems. However, AI agents also raise greater and novel data protection risks related to the collection and processing of personal data. Their technical characteristics could also present challenges, such as those around safety testing and human oversight, for organizations seeking to develop or deploy AI agents.
This analysis unpacks the defining characteristics of the newest AI agents and identifies some of the data protection considerations that practitioners should be mindful of when designing and deploying these systems. Specifically:
While agents are not new, emerging definitions across industries describe them as AI systems that are capable of completing more complex, multi-step tasks, and exhibit greater autonomy over how to achieve these goals, such as shopping online and making hotel reservations.
Advanced AI agents raise many of the same data protection questions raised by LLMs, such as challenges related to thecollection and processing of personal data for model training, operationalizing data subject rights, and ensuring adequate explainability.
In addition, the unique design elements and characteristics of the latest agents may exacerbate or raise novel data protection compliance challenges around the collection and disclosure of personal data, security vulnerabilities, the accuracy of outputs, barriers to alignment, and explainability and human oversight.
What are AI Agents?
The concept of “AI Agents” or “Agentic AI” arose as early as the 1950s and has many meanings in technical and policy literature. In the broadest sense, for example, it can include systems that rely on fixed rules and logic to produce consistent and predictable outcomes on a person’s behalf, such as email auto-replies or privacy preferences.
Advances in AI research, particularly around machine and deep learning techniques and the advent of LLMs, have enabled organizations to develop agents that can tackle novel use cases, such as purchasing retail goods and recommending and executing transactions. From finance to hospitality, these technologies could help individuals, businesses, and governments save time they would otherwise dedicate to completing tedious or monotonous tasks.
Companies, civil society, and academia have defined the latest iteration of AI agents, examples of which are provided in the table below:
“[A]n entity that senses percepts (sound, text, image, pressure etc.) using sensors and responds (using effectors) to its environment. AI agents generally have the autonomy (defined as the ability to operate independently and make decisions without constant human intervention) and authority (defined as the granted permissions and access rights to perform specific actions within defined boundaries) to take actions to achieve a set of specified goals, thereby modifying their environment.”
“AI agents [are] systems capable of pursuing complex goals with limited supervision,” having “greater autonomy, access to external tools or services, and an increased ability to reliably adapt, plan, and act open-endedly over long time-horizons to achieve goals.”
“Agents,” Sept. 2024, Julia Wiesinger, Patrick Marlow, and Vladimir Vuskovic, Google
“[A] Generative AI agent can be defined as an application that attempts to achieve a goal by observing the world and acting upon it using the tools that it has at its disposal. Agents are autonomous and can act independently of human intervention, especially when provided with proper goals or objectives they are meant to achieve. Agentscan also be proactive in their approach to reaching their goals. Even in the absence ofexplicit instruction sets from a human, an agent can reason about what it should do next to achieve its ultimate goal.”
Defining long-term planning agents as “an algorithm designed to produce plans, and to prefer plan A to plan B, when it expects that plan A is more conducive to a given goal over a long time horizon.”
“An artificial intelligence (AI) agent refers to a system or program that is capable of autonomously performing tasks on behalf of a user or another system by designing its workflow and utilizing available tools.
Table 1. Definitions of “AI agents”
These definitions highlight common characteristics of new AI agents, including:
Autonomy and adaptability: Users generally provide an agent with the task they want it to achieve, but neither they nor the agent’s designers specify how to accomplish the task, leaving those decisions to the agent. For example, upon being instructed by a business to project the sales revenue of its flagship product for the next six months, the agent may decide that it needs sales figures from the last two years and use certain tools (e.g., a text retriever) to obtain these details. If it cannot find these figures or if they contain errors, it may determine that the next step is to seek information from other documentation. Agentic systems may incorporate human review and approval over some or all decisions.
These characteristics enable advanced agents to achieve goals that are beyond the capabilities of other AI models and systems. However, they also raise questions for practitioners about the data protection issues organizations may encounter when developing or deploying these technologies.
Emerging Privacy and Data Protection Issues with Agentic AI
While the latest AI agents may raise similar risks to consequential decision-making and LLMs, they can also exacerbate or pose novel privacy and data protection considerations. The economic and social impact of AI agents is a topic of heated debate and significant financial investment, but there has been less attention on the potential impact of agents on privacy and data protection. In order to effectuate tasks and decision making with autonomy, especially for consumer-facing tools and services, AI agents will need access to data and systems. In fact, much like human assistants, AI agents may be at their most valuable when they are able to assist with tasks that involve highly sensitive data (e.g., managing a person’s email, calendar, or financial portfolio, or assisting with healthcare decision-making).
As a result, many of the same risks relating to consequential decision-making and LLMs (or to machine learning generally) are likely to be present in the context of agents with greater autonomy and access to data. For example, like some LLMs, some AI agents transmit data to the cloud due to the computing requirements of the most powerful models, which may expose the data to unauthorized third parties (e.g., the recent Data protection impact assessment on the processing of personal data with Microsoft 365 Copilot for Education). As with chatbots that use LLMs, AI agents with anthropomorphic qualities may be able to steer individuals towards or away from conducting certain actions against the user’s best interest. Other examples of cross-cutting data protection issues include challenges related to having a lawful basis for model training, operationalizing data subject rights, and ensuring adequate explainability. These legal and policy issues for LLMs, which are the subject of ongoing debate and legal guidance, are only heightened in the context of agentic systems with enhanced capabilities.
In addition, more recent AI agents may present some novel privacy implications or exacerbate data protection issues that go beyond those associated with LLMs.
Data collection and disclosure considerations:The latest AI agents may need to capture data about a person and their environment, including sensitive information, in order to power different use cases. As with LLMs, the collection of personal data by agents will often trigger the need for having a lawful ground in place for such processing. When the personal data collected is sensitive, additional requirements for lawfully processing them often apply too. While current LLM-based systems may train and operate using personal data, they lack the tools (e.g., application programming interfaces, data stores, and extensions) to access external systems and data. The latest AI agents may be equipped with these tools, which could enable them to obtain real-time information about individuals. For example, some agents may take screenshots of a user’s browser window in order to populate a virtual shopping cart, from which intimate details about a person’s life could be inferred. As the number of individuals using AI agents and its use cases grow, so too could AI agents’ access to personal data. For example, AI agents may collect many types of granular telemetry data as part of their operations (e.g., user interaction data, action logs, and performance metrics). Increasingly complex agents may collect large quantities of telemetry information, which may qualify as personal data under data privacy legal regimes.
Security vulnerabilities: Advanced AI agents’ design features and characteristics may make them susceptible to new kinds of security threats. Adversarial attacks on LLMs, such as the use of prompt injection attacks to get these models to reveal sensitive information (e.g., credit card information), can impact AI agents too. Besides causing an agent to reveal sensitive information without permission, prompt injection attacks can also override the system developer’s safety instructions. While prompt injection is not a threat unique to the latest AI agents, new kinds of injection attacks could take advantage of the way agents work to perpetuate harm, such as installing malware or redirecting them to deceptive websites.
Accuracy of outputs: Hallucinations, compounding errors, and unpredictable behavior may impact the accuracy of an agents’ outputs. LLM hallucinations—the making up of factually untrue information that looks correct—may affect the accuracy of an agent’s outputs. These hallucinations are closely tied to the “temperature” parameter that controls randomness in the model’s attention mechanism: higher temperatures increase creativity and the risk of hallucinations, while lower temperatures reduce hallucinations but may limit the agent’s adaptability. However, errors that affect agent outputs may have different implications for individuals, such as misrepresenting a user’s characteristics and preferences when it fills out a consequential form. In addition to hallucinations, the latest AI agents may experience compounding errors, which could occur while the systems perform a sequence of actions to complete a task (e.g., managing a customer’s account). Compounding errors is the phenomenon where the agent’s accuracy decreases the more steps a task takes. For example, an AI agent creating a travel experience may experience an error while making a one-day hotel booking, which cascades into misaligned restaurant reservations and museum tickets. This holds true even when the model’s accuracy is high. Some AI agents may act in unpredictable ways due to dynamic operational environments and agents’ non-deterministic nature—producing probabilistic outcomes, adapting to new situations, learning from data, and exhibiting complex decision-making—leading to malfunctions that affect output accuracy. These accuracy issues may be challenging to redress through risk management testing and assessments and exacerbated when different AI agents interact with each other.
Barriers to “alignment”: Some AI agents may pursue tasks in ways that conflict with human interests and values, including data protection considerations. AI alignment refers to designing AI models and systems to pursue a designer’s goals, such as prioritizing human well-being and conforming to ethical values. Misalignment problems are not new to AI, but continued technological advances with agents may make it challenging for organizations to achieve alignment through safeguards and safety testing. LLMs can fake alignment by strategically mimicking training objectives to avoid undergoing behavioral modifications. These challenges have data protection implications for the latest AI agents. For example, an agent may decide that it needs to access or share sensitive personal data in order to complete a task. Such behavior could implicate an individual’s data protection interest in having control over their data when personal data is processed during deployment. Practitioners must be mindful of the need for safeguards to constrain this behavior, although research into model alignment has focused more on safety issues rather than privacy.
Explainability and human oversight challenges:Explainability barriers arise when users cannot understand an agent’s decisions, even if these decisions are correct. Users and developers may encounter difficulties in understanding how some AI agents reach decisions due to their complex processes. The black box problem, or the challenge of understanding how an AI model or system makes decisions, is not unique to agents. However, the speed and complexity of AI agents’ decision-making processes may create heightened roadblocks to realizing meaningful explainability and human oversight. AI agents utilizing language models can provide some of their reasoning in natural language, but these “chain-of-thought” insights are becoming more complicated and are not always indicative of the agent’s actual reasoning. These challenges may make it more difficult to reliably interrogate agents’ decision-making processes and manage risks.
Looking Ahead
Recent advances in AI agents could expand the utility of these technologies across the private and public sectors, but they also raise many data protection considerations. While practitioners may be aware of some of these considerations due to the relationship between LLMs and the latest AI agents, the unique design elements and characteristics of these agents may exacerbate or raise new compliance challenges. For example, an agent may manage privacy settings (e.g., accepting cookies so that it can continue working on a task) as part of its operations, although companies can establish safeguards to address this risk. In closing, practitioners should remain abreast of technological advances that expand AI agents’ capabilities, use cases, and contexts where they can operate, as these may raise novel data protection issues.
This year’s Winning Privacy Papers to be Honored at the Future of Privacy Forum’s 15th Annual Privacy Papers for Policymakers Event
The Future of Privacy Forum’s 15th Annual Privacy Papers for Policymakers Award Recognizes Influential Privacy Research
The PPPM Awards recognize leading U.S. and international privacy scholarship that is relevant to policymakers in the U.S. Congress, federal agencies, and international data protection authorities. Six winning papers, two honorable mentions, one student submission, and a student honorable mention were selected by a diverse group of leading academics, advocates, and industry privacy professionals from FPF’s Advisory Board.
Authors of the papers will have the opportunity to showcase their work at the Privacy Papers for Policymakers ceremony on March 12, in conversations with discussants, including James Cooper, Professor of Law, Director, Program on Economics & Privacy, Antonin Scalia Law School, George Mason University, Jennifer Huddleston, Senior Fellow in Technology Policy, Cato Institute, and Brenda Leong, Director, AI Division, ZwillGen.
“Data protection and artificial intelligence regulations are increasingly at the forefront of global policy conversations,” said FPF CEO Jules Polonetsky. “And it’s important to recognize the academic research that explores the nuances surrounding data privacy, data protection, and artificial intelligence issues. Our award winners have explored these complex areas ― to all of our benefits.”
FPF’s 2025 Privacy Papers for Policymakers Award winners are:
Privacy laws are traditionally associated with democracy. Yet autocracies increasingly have them. Why do governments that repress their citizens also protect their privacy? This Article answers this question through a study of China. China is a leading autocracy and the architect of a massive surveillance state. But China is also a major player in data protection, having enacted and enforced a number of laws on information privacy. To explain how this came to be, the Article first discusses several top-down objectives often said to motivate China’s privacy laws: advancing its digital economy, expanding its global influence, and protecting its national security. Although each has been a factor in China’s turn to privacy law, even together, they tell only a partial story. Through privacy law, China’s leaders have sought to interpose themselves as benevolent guardians of privacy rights against other intrusive actors—individuals, firms, and even state agencies and local governments. This Article adds to our understanding of privacy law, complicates the relationship between privacy and democracy, and points toward a general theory of authoritarian privacy.
The Great Scrape: The Clash between Scraping And Privacy by Daniel J. Solove, George Washington University Law School and Woodrow Hartzog, Boston University School of Law and Stanford Law School Center for Internet and Society
Artificial intelligence (AI) systems depend on massive quantities of data, often gathered by “scraping” – the automated extraction of large amounts of data from the internet. A great deal of scraped data is about people. This personal data provides the grist for AI tools such as facial recognition, deep fakes, and generative AI. Although scraping enables web searching, archival, and meaningful scientific research, scraping for AI can also be objectionable or even harmful to individuals and society. Organizations are scraping at an escalating pace and scale, even though many privacy laws are seemingly incongruous with the practice. In this Article, we contend that scraping must undergo a serious reckoning with privacy law. Scraping has evaded a reckoning with privacy law largely because scrapers act as if all publicly available data were free for the taking. But the public availability of scraped data shouldn’t give scrapers a free pass. Privacy law regularly protects publicly available data, and privacy principles are implicated even when personal data is accessible to others. This Article explores the fundamental tension between scraping and privacy law.
Debates in AI ethics often hinge on comparisons between AI and humans: which is more beneficial, which is more harmful, which is more biased, the human or the machine? These questions, however, are a red herring. They ignore what is most interesting and important about AI ethics: AI is a mirror. If a person standing in front of a mirror asked you, “Who is more beautiful, me or the person in the mirror?” the question would seem ridiculous. Sure, depending on the angle, lighting, and personal preferences of the beholder, the person or their reflection might appear more beautiful, but the question is moot. AI reflects patterns in our society, just and unjust, and the worldviews of its human creators, fair or biased. The question then is not which is fairer, the human or the machine, but what can we learn from this reflection of our society and how can we make AI fairer? This essay discusses the challenges to developing fairer AI, and how they stem from this reflective property.
On paper, the Federal Trade Commission’s consumer protection authority seems straightforward: the agency is empowered to investigate and prevent unfair or deceptive acts or practices. This flexible and capacious authority, coupled with the agency’s jurisdiction over the entire economy, has allowed the FTC to respond to privacy challenges both online and offline. The contemporary question is whether the FTC can draw on this same authority to curtail the data-driven harms of commercial surveillance or emerging technologies like artificial intelligence. This Essay contends that the legal answer is yes and argues that the key determinants of whether an agency like the Federal Trade Commission will be able to confront emerging digital technologies are social, institutional, and political. Specifically, it proposes that the FTC’s privacy enforcement occurs within an “Overton Window of Enforcement Possibility.”
Anonymity is an important principle online. However, malicious actors have long used misleading identities to conduct fraud, spread disinformation, and carry out other deceptive schemes. With the advent of increasingly capable AI, bad actors can amplify the potential scale and effectiveness of their operations, intensifying the challenge of balancing anonymity and trustworthiness online. In this paper, we analyze the value of a new tool to address this challenge: “personhood credentials” (PHCs), digital credentials that empower users to demonstrate that they are real people—not AIs—to online services, without disclosing any personal information. After surveying the benefits of personhood credentials, we also examine deployment risks and design challenges. We conclude with actionable next steps for policymakers, technologists, and standards bodies to consider in consultation with the public.
Governments and policymakers increasingly expect practitioners developing and using AI systems in both consumer and public sector settings to proactively identify and address bias or discrimination that those AI systems may reflect or amplify. Central to this effort is the complex and sensitive task of obtaining demographic data to measure fairness and bias within and surrounding these systems. This report provides methodologies, guidance, and case studies for those undertaking fairness and equity assessments — from approaches that involve more direct access to data to ones that don’t expand data collection. Practitioners are guided through the first phases of demographic measurement efforts, including determining the relevant lens of analysis, selecting what demographic characteristics to consider, and navigating how to hone in on relevant sub-communities. The report then delves into several approaches to uncover demographic patterns.
FPF also selected a paper for the Student Paper Award: Data Subjects’ Reactions to Exercising Their Right of Accessby Arthur Borem, Elleen Pan, Olufunmilola Obielodan, Aurelie Roubinowitz, Luca Dovichi, and Blase Ur at the University of Chicago; and Michelle L. Mazurek from the University of Maryland. A Student Paper Honorable Mention went to Artificial Intelligence is like a Perpetual Stew by Nathan Reitinger, University of Maryland – Department of Computer Science.
In reviewing the submissions, winning papers were awarded based on the strength of their research and proposed policy solutions for policymakers and regulators in the U.S. and abroad.
The Privacy Papers for Policymakers Award event will be held on March 12, 2025, at FPF’s offices in Washington, D.C. The event is free and registration is open to the public.
###
About Future of Privacy Forum (FPF)
The Future of Privacy Forum (FPF) is a global non-profit organization that brings together academics, civil society, government officials, and industry to evaluate the societal, policy, and legal implications of data use, identify the risks, and develop appropriate protections.
FPF believes technology and data can benefit society and improve lives if the right laws, policies, and rules are in place. FPF has offices in Washington D.C., Brussels, and Singapore. Learn more at fpf.org.
5 Ways to Be a Top Dog in Data Privacy
Data Privacy Day, or Data Protection Day in Europe, is recognized annually on January 28 to mark the anniversary of Convention 108, the first binding international treaty to protect personal data. To raise awareness for the day and promote best practices for data privacy, we’ve partnered with Snap to create a Data Privacy Day Snapchat Lens that lets you choose what type of privacy pup best reflects your personality. Check it out by scanning the Snapchat code!
Once you’ve determined which privacy pup you are, learn more about protecting your privacy with these 5 quick, easy steps.
1. Share your Information with Websites and Apps you Trust
Today, almost everything we do online involves companies collecting personal information about us. When we’re subscribing to marketing emails, making online purchases, filling out surveys, or even applying for jobs online, websites are collecting more information than ever before, and we’ve become accustomed to sharing personal information daily. However, it’s important to be cautious and trust a website or app before sharing any personal information with it.
There are a few ways to evaluate a website or app before sharing personal information. The first is to check the website’s url and domain name. Confirm that both the name of the website is spelled properly, and the domain name ending in .org, .com, .edu, or .gov, which are typically (but not always) more credible. Next, you can look for clear information about the leaders of the organization and their contact information on the website. If that information isn’t available, or is difficult to find, be cautious because you may not know who will be responsible for your personal information. Lastly, take a few minutes to evaluate a company’s privacy policy. The policy should clearly state the company’s full name, explain how they will be using your information, and may include information about the security measures in place. Many states also require companies to let you submit a data access request, and it’s helpful to check that the company is complying with their state law and displaying that information.
2. Update your passwords and multi-factor authentication regularly
Password re-use is one of the top ways that unwanted eyes can get into your accounts: once one service where you used a password is breached, criminals will likely try the same username and password combination on other services just to see if it works. To get a sense of the scale of the risk, you check your info on web service “Have I Been Pwned” (available at haveibeenpwned.com), which allows you to enter your email address and see what data breaches that email has been included in.
Because of the risks involved in recycling passwords, using unique passwords is an essential step for keeping personal information private. You can also consider utilizing a password manager. Password managers save passwords as you create and log in to your accounts, often alerting you of duplicates and suggesting the creation of a stronger password. And no, the name of your dog is not a strong password.
For example, if you use an Apple product when signing up for new accounts and services, you can allow your iPhone, Mac, or iPad to generate strong passwords and safely store them in iCloud Keychain for later access. Some of the best third-party password managers can be found here.
When possible, you should also utilize multi-factor authentication along with a password. This extra step ensures that simply inputting a compromised password is not enough to provide access to your account without an extra step, typically the connection of a device like a yubikey or submission of a numeric code sent to a phone number, e-mail address, or authentication application on your phone. While some forms of multi-factor authentication may be more protective and more resilient than others, any choice will significantly increase the security in comparison to a password alone. You can see how easy it is to set up multi-factor authentication on Snapchat using their easy-to-understand articles, available online.
3. Respect other peoples’ privacy
It’s important to be mindful about the information you share and see on social media. Consider the reach of your own posts, and avoid sharing anything you wouldn’t want to be saved or widely shared, whether it’s about you or someone else. Many social media sites like Instagram, Facebook, and Snapchat allow you to share images and chat with a closed group or limited number of friends, and it’s important to honor when someone chooses to keep information non-public when they share it in closed or private settings. Don’t screenshot or reshare private stories or messages from others.
4. Review all social media settings
Many social media sites include options on how to tailor your privacy settings to limit how data is collected or used. Snap provides privacy options that control who can contact you and many other options. Start with the Snapchat Privacy Center to review your settings. You can find those choices here.
Snap also provides options for you to view any data they have collected about you, including account information and your search history. Downloading your data allows you to view what information has been collected and modify your settings accordingly.
Instagram allows you to manage various privacy settings, including who has access to your posts, who can comment on or like your posts, and manage what happens to posts after you delete them. You can view and change your settingshere.
TikTok allows you to decide between public and private accounts, allows you to change your personalized ad settings, and more. You can check your settingshere.
X allows you to manage what information you allow other people on the platform to see and lets you choose your ad preferences. Check your settings here.
Facebook provides a range of privacy settings that can be found here.
In addition, you can check the privacy and security settings for other popular applications such as Reddit and Pinteresthere. Be sure to also check your privacy settings if you have a profile on a popular dating app such as Bumble, Hinge, or Tinder.
What other social media apps do you use often? Check to see which settings they provide!
5. Use incognito settings to keep personal information about you hidden
Many browsers and apps allow you to turn on a setting that lets you continue to use the service without sharing as much personal information as you normally would.
On Chrome, you can browse the web more privately using incognito mode. To activate, open Chrome, under “More,” click “New Incognito Window.”
Using Safari, you can choose “private browsing” by opening Safari, clicking “File” and then “New Private Window.” If you have the app, you can choose to always browse privately by clicking “Settings” and then for the option “Safari opens with” pop-up menu, choose “a new private window.”
Mozilla also has options for using Firefox in “private browsing mode.” Click Firefox’s menu button, and then click “New private window.” You can also choose to always be in private browsing mode by choosing “Use custom settings for history” from the Firefox’s menu and checking the “Always use private browsing mode” setting.
Browsers like DuckDuckGo and Brave also default to private browsing mode. You can read more about DuckDuckGo’s anonymous browsing settings here, and Brave’s privacy protections here.
Using Snapchat, you can turn on Ghost Mode. While using it, your location won’t be visitable to anyone, including friends you may have previously shared your location with on Snapchat’s Snap Map. To turn it on, open the Map, tap the ⚙️ button at the top of the map screen, toggle Ghost Mode to on, and select how long you’d like to enable Ghost Mode.
If you’re interested in learning more about one of the topics discussed here or other issues driving the future of privacy, sign up for our monthly briefing, check out one of our upcoming events, or follow us on X, LinkedIn, or Instagram.
FPF brings together some of the top minds in privacy to discuss how we can all benefit from the insights gained from data while respecting the individual right to privacy.
What to Expect in Global Privacy in 2025
Next year, in 2026, we will celebrate a decade after the adoption of the GDPR, a law with an unprecedented regulatory impact around the world, from California to Brazil, across the African continent, to India, to China, and everywhere in between. The field of data protection and privacy has become undeniably global, with GDPR-inspired laws (from a lesser to a bigger degree) adopted or updated in many jurisdictions around the world throughout the past years. This could not have happened in a more transformative decade for technologies relying on data, with AI decidedly getting out of its winter, and “connected-everything,” from cars to eyewear, increasingly shaping our surroundings.
While jurisdictions around the world were catching up with the GDPR or gearing their own approach to data protection legislation, the EU leaped in the past five years towards comprehensive (and sometimes incomprehensible) regulation of multiple dimensions of the digital economy: AI itself, online platforms through intermediary liability, content moderation and managing systemic risks on very large online platforms and search engines, online advertising in electoral campaigns, digital gatekeepers and competition, data sharing and connected devices, data altruism and even algorithms used in the gig economy.
Against this backdrop, I asked my colleagues in FPF’s offices around the world, who passionately monitor, understand, and explain legislative, regulatory, and enforcement developments across regions, what we should expect in 2025 in Global Privacy. From data-powered technological shifts and their impact on human autonomy, to enforcement and sectoral implementation of general data protection laws adopted in the past years, to AI regulation, cross-border data transfers, and the clash of online safety and children’s privacy, this is what we think you should keep on your radar:
1. AI becoming ubiquitous will put self-determination and control in the center of global privacy debates
“Expect AI to become ubiquitous in everything we do online,” signals Dr. Rob van Eijk, FPF Managing Director for Europe. This will not only bring excitement for tech enthusiasts but also a host of challenges, heightened by the expected increase in consumers using AI agents. “The first challenge is maintaining personal autonomy in the face of technological development, particularly regarding AI,” weighs in Rivki Dvash, Senior Fellow with ITPI – FPF Israel.
Rivki foresees two prominent dimensions of this topic: first, at the ethical level, and second, at the regulatory level, particularly concerned “with the limits of the legitimacy of the use of AI while trying to contour the uniqueness of a person over a machineand the desire to preserve personal autonomy in a space of choice.” “What does it mean to be a human in an Agentic AI future?” is a question that Rob says will ignite a lot of thinking in the policy world in 2025. This makes me think of an older paper from Prof. Mireille Hildebrandt, “Privacy as Protection of the Incomputable Self: From Agnostic to Agonistic Machine Learning” (2019), where she described a framework that could “provide the best means to achieve effective protection against overdetermination of individuals by machine inferences.”
I expect the idea of “control” over one’s persona and personal information in the world of Generative and Agentic AI to increasingly permeate and fuel regulatory debates. In its much-expected Opinion on AI systems and data protection law published over the Holidays, the European Data Protection Board (EDPB) identified “the interest in self-determination and retaining control over one’s own personal data” as chief among individuals’ interests that must be taken into account and balanced, both when personal data is gathered for the development of AI models and with regards to personal data processed once the model is deployed.
Putting self-determination and control at the center of AI governance will not be just academic. For instance, the EDPB asked for an “unconditional opt-out from the outset,” “a discretionary right to object before the processing takes place” for developing and deploying AI systems, “beyond the conditions of Article 21 GDPR,” in order for legitimate interests to be considered as a valid lawful ground legitimizing consentless processing of personal data for AI models.
Rob adds that in 2025, we will see users “becoming increasingly reliant on AI companions for decision-making, from small choices like what to watch on streaming services to larger life decisions.” He highlights what will be one of the key privacy and data protection implications of all this: “AI companions will get unprecedented access to sensitive personal data, from financial transactions to private conversations and daily routines.” Protecting sensitive data in this context, especially with inferences broadly recognized as being covered by such enhanced safeguards under data protection law regimes, will be a key challenge that will keep privacy experts busy this year.
But the ideas of “control,” “self-determination,” and “autonomy” in relation to one’s own personal data are particularly fragile when it comes to non-users or bystanders whose data is collected through another person’s use of a service or device. This is one of the big issues that Lee Matheson, FPF Deputy Director for Global Privacy, sees as defining an enforcement push from Data Protection Authorities (DPAs) from Canada to Europe this year, particularly as it relates to Augmented Reality and connected devices: “It’s a cross-cutting technology that implicates lawful bases for collection/processing, AI and automated decision-making (particularly facial recognition), secondary uses, and data transfers (as unlike smartphones, activity is less likely to be kept on-device). I think a particular focus could be on how to vindicate the rights of non-user data subjects whose information is captured by these kinds of devices.”
2. Three different speeds for AI legislation: Moderation in APAC, Implementation in Europe, Acceleration in Latin America
AI governance and data protection are closely linked, as shown above, which makes AI legislation a particularly poignant topic to follow. “Whether through hard or soft law approaches, preventing significant fragmentation of AI rules globally will be high on the agenda,” observes Bianca-Ioana Marcu, FPF Deputy Director for Global Privacy. Bianca has been closely following initiatives of international organizations and networks in the AI governance space throughout the last year, like the efforts of the UN, the OECD, or the G7 in this space, and she believes that in 2025, “international fora and the principles and guidelines agreed upon within such groups will act as the driving force behind AI standard-setting.” Bianca adds that we might see efforts towards “harmonizing regional data protection rules in the interests of supporting the governance and availability of AI training data.” I can see this happening, for instance, across economic regions in Africa, or even at the ASEAN level.
As for legislative efforts around the world targeting AI, the team identifies three different speeds. In the Asia-Pacific (APAC) region, Josh Lee Kok Thong, FPF Managing Director for APAC, foresees a “possible cooling down” of the race to adopt AI laws and other regulatory efforts. “There will be signs of slight regulatory fatigue in AI governance and regulatory initiatives in APAC. This is especially so among the more mature jurisdictions, such as Japan, Singapore, China, and Australia. Rather than developing new headline regulatory or governance initiatives, efforts are likely to focus on the development of tools for evaluation and content provenance,” he says. Josh notes that jurisdictions across APAC will be closely watching how the implementation of the EU AI Act unfolds, as well as the US regulatory stance towards AI under President Trump’s administration before deciding what steps to take.
In contrast, Latin America will likely move full speed ahead toward AI legislation. Maria Badillo, Policy Counsel for Global Privacy, explains that “this year will mark significant progress on initiatives to govern and regulate AI across multiple countries in Latin America. Brazil has taken a leading role and is getting closer to adopting a comprehensive law in 2025 after the Senate’s recent approval of the AI bill. Other countries like Chile, Colombia, and Argentina have introduced similar frameworks.” Maria says that this will happen mainly under the influence of the EU AI Act, but also from Brazil’s AI bill.
When it comes to AI legislation, the EU is catching its breath this year, focusing on the implementation of the EU AI Act, which was adopted last year and whose application starts rolling out in a month. Necessary Codes of Conduct – like the one dedicated to general purpose AI, implementing acts, and specific standards are expected to flow within the next 18 months or so. But this year, we will certainly see the first signs of whether this new law will successfully achieve its goals. A good indicator will be observing in practice the intricate web of authorities tasked by the EU AI Act with oversight, implementation, and enforcement of the law. “The lack of a one-stop-shop mechanism and the presence of several authorities in the same jurisdiction will be a first test of the efficiency of the AI Act and the authorities’ ability to coordinate,” highlights Vincenzo Tiani, Senior Policy Counsel in FPF’s Brussels office.
Meanwhile, it is expected that DPAs will gain a more prominent role in enforcing the law on matters at the intersection of the GDPR with the various new EU acts regulating the digital space, including the EU AI Act. “DPAs will be increasingly called to step up and drive enforcement actions on a broad number of issues also falling under other EU regulatory acts, but which involve the processing of personal data and the GDPR,” says Andreea Serban, FPF Policy Analyst in Brussels. This will be particularly evident regarding AI systems, after a first infringement decision in a series of complaints surrounding ChatGPT was issued by the Italian DPA, the Garante, at the end of 2024.
The space in AI governance that the GDPR occupies will visibly expand this year, including into issues where copyright is considered central. Vincenzo explains that “the licenses provided by newspapers to providers of LLMs, at least so far, do not cover the protection of personal data contained therein.” The Italian DPA has already raised the flag on this issue.
Countervailing some of the biggest risks of Generative AI beyond the processing of personal data will keep regulators across Europe busy, be they DPAs, the European Commission’s AI Office, or other national EU AI Act implementers. Dr. Desara Dushi, Policy Counsel in our Brussels office, anticipates “a sharp focus on controlling the use of synthetic data that fuels harmful content, with the rise of advanced emotional chatbots and the proliferation of deepfakes.” This could happen through “more robust and specific guidelines targeting generative AI’s risks.”
3. International Data Transfers will come back on top of the Global Privacy agenda
As I anticipated last year in my 2024 predictions, international data transfers started intertwining with the broader geopolitical goals of countries caught in the AI race. This trend will become even more visible in 2025, when we expect that issues related to international data transfers will come back to the top of the Global Privacy agenda, fueled this time not only by the geopolitics of AI development, but also by the broader dynamic between a new European Commission in Brussels and a new administration in Washington DC.
“I think transatlantic data transfers issues will be brought back to center stage in the dynamics of EU’s implementation of digital regulations like the DSA and the DMA on one side, and the priorities of the new administration in the US on the other side,” foresees Lee Matheson, who is based in our Washington DC office and who closely follows international data transfers. But, this time around, the pressure on the continuity of data flows between the US and the EU might first come from the US side.
Lee thinks we should follow closely what happens with Executive Order (E.O.) 14117 “Preventing Access to Americans’ Bulk Sensitive Personal Data and United States Government-Related Data by Countries of Concern,” an instrument adopted last year which bans transfers of bulk sensitive data of Americans outside of the U.S. in specific circumstances and only towards designated countries of concern (currently China, Iran, Russia, Venezuela, Cuba and North Korea). The Executive Order could be left as is, amended, repealed, or replaced by the new administration in Washington. But an interesting point Lee raises is that “E.O. 14117 and its associated DOJ Rules, in particular, provide a framework that could be extended to additional jurisdictions.”
On the other hand, the General Court of the CJEU started early this year with a decision that recognized plaintiffs can obtain compensation for non-material damage if their personal data have been transferred unlawfully, in a case involving transfers made by the European Commission to the U.S. before the Data Privacy Framework became effective. This clarification made by the Court could increase the appetite for challenging the lawfulness of international data transfers. In part due to pressure on more traditional data transfer mechanisms, Lee thinks “the world will see alternative systems for international data transfers, such as the Global Cross Border Privacy Rules system, become substantially more prominent.”
Indeed, transatlantic data flows will only be one of many cross-border data flow stories to follow. “We may well see continuing fragmentation of the cross-border data transfer landscape globally and in APAC into clusters of likeminded jurisdictions, ranging from those like Singapore and Japan that are working to promote trusted data flows (especially through initiatives like the Global CBPRs) to those like Indonesia, India, and Vietnam that have recently renewed their interest in adopting data localization measures,” adds Dominic Paulger, FPF Deputy Director for APAC, from our Singapore office. He also thinks that geopolitical and regulatory trends in the US and the EU will affect dynamics in APAC. “While there will be tension between data localization requirements in some jurisdictions, navigating the right balance will be crucial in shaping both regulatory strategies and business practices across the region in 2025,” concludes Sakshi Shivhare, Policy Associate for FPF APAC.
4. Convergence of youth privacy and online safety will take the spotlight around the world
Convergence of children’s and teen’s privacy and online safety issues into new legislative action, regulatory initiatives, or public policy measures is being emphatically highlighted as a top issue to watch in 2025 by my colleagues across APAC, India, EU, and, to some extent, Latin America.
Dominic explains that jurisdictions in APAC are increasingly incorporating online safety provisions into data protection laws, with some focusing on age verification or age-appropriate design requirements. This highlights tensions between real concerns about young people’s online safety and the substantial privacy risks that are posed by age assurance technologies and related mandates. Experts have raised the need for more cross-cutting conversations to identify and address privacy and security risks created by regulatory efforts. He expects the focus on youth safety to continue throughout 2025, “especially following Australia’s recent ban on social media use for under-16s.” This approach has been criticized by some youth safety and privacy experts while being lauded by others. Several jurisdictions, including Singapore, are considering emulating this model, and many more will be watching to see how it plays out.
“The dialogue around online youth safety will likely intensify in the EU as well, with a notable focus on children’s overall well-being and how that intersects with youth privacy rights,” foresees Desara, who comes to FPF’s Brussels office with extensive research and policy work in this space. “The narrative may broaden to encompass a more holistic approach to child protection, leading toward ‘child rights by design’ requirements,” she adds.
The Child Sexual Abuse Regulation (CSAR) proposal in the EU will continue to be the subject of fierce debate in 2025. The CSAR debate has been characterized by proponents noting the measure’s noble goals and critics characterizing the proposal as technically unworkable and certain to undermine core privacy and security measures. Desara concludes: “With early insights emerging from the UK’s Online Safety Act, the ongoing intersection of privacy and youth safety promises to be a defining issue in the year ahead.”
5. We have a new law, now what? Implementation and groundwork for enforcement will be central in APAC, LatAm, Africa, and EU
Several jurisdictions across all regions will focus on starting the implementation of recently adopted data protection laws. Perhaps this is most visible in the APAC region, which “is seeing a significant maturation of data protection frameworks,” as Sakshi Shivhare notes. Examples include “the promulgation of India’s DPDPA Rules, the phased implementation of Malaysia’s PDPA amendments, the much-awaited finalization of implementing regulations for Indonesia’s PDP Law, and the implementation of Australia’s first tranche of Privacy Act amendments,” explains Josh Lee.
This year, significant attention will be paid to India’s DPDPA Implementing Rules. “With the draft rules now released, attention will shift to public consultations and how the government addresses feedback,” notes Bilal Mohamed, FPF Policy Analyst based in New Delhi. He points out that some of the key concerns discussed so far relate to “the possible reintroduction of data localization norms, (Rules 12(4) and 14) and the practical concerns with the implementation of Verifiable Parental Consent,” also adding to two of the trends we identified above related to international data transfers and children’s privacy and online safety. “Together, these shifts suggest that 2025 will be pivotal for creating a more cohesive, though not necessarily uniform, privacy landscape across APAC,” concludes Sakshi.
Jurisdictions across Africa will face similar challenges this year. Mercy King’ori, FPF Policy Manager for Africa, based in Nairobi, thinks we should expect “more sectoral regulations as controllers and processors continue to seek clarity on the practical implementation of legal provisions in most data protection laws across the continent. This is the continuation of a trend from 2024 where DPAs have been identifying gaps in the implementation of the laws and proposing regulations and guidelines in data-intensive sectors such as education, marketing, and finance.”
She adds that, in parallel, DPAs are dealing with an increasing number of complaints: “The rise of complaints has been due to heightened awareness of data subject rights and DPAs eager to push for compliance with national data protection regimes. The move towards enforcing compliance has even seen DPAs initiate assessments on their own volition, such as South Africa’s Information Regulator leading to enforcement notices and penalties.”
Secondary or implementing regulations are also expected to drive the agenda in Latin America, with a priority on “protecting children’s data, data subject rights, and processing of personal data in the context of AI,” points out Maria Badillo. She specifically notes that “active DPAs in the region, such as those from Brazil and Argentina, have identified AI regulation, exercise of data subject rights, and processing of children’s data among the priority areas for developing secondary regulations and guidance in 2025.”
Even the EU will have implementation fever this year – which is to be expected after intense lawmaking of everything digital and data during the first von der Leyen Commission. “In 2025, we should see a policy shift, prioritizing the application and implementation of existing frameworks, like the EU AI Act, the DSA, the DMA, and so forth, rather than proposals of new legislation,” points out Andreea Serban, who also notes recent messaging in Brussels signaling a decreased focus on regulation, especially in the aftermath of the Draghi report.
This is indeed how the Brussels agenda reads, but it shouldn’t be a surprise if new legislation, like the Digital Fairness Act, will make its way as an official proposal as soon as this year. And with other files like the CSAR still on the legislative train, or the constant “hide and seek” with the ePrivacy Regulation, the Brussels legislation machine might slow down, but it will not halt.
6. Bigger public policy debates will end up shaping global privacy: from “Innovation v. Regulation,” to checks and balances over government access to data
The “Innovation v. Regulation” dichotomy has been omnipresent in the European public debate since the publication of the Draghi report last year, even as some are positing this is a false choice (see Anu Bradford or Max von Thun).
“With a new European Commission taking the reins in Brussels, and with political tides changing across the EU, the innovation versus regulation debate will continue to polarize the digital policy community. Repercussions will be felt in discussions regarding not only the application and enforcement of the DSA and the DMA but also for data protection law as we await new GDPR enforcement rules,” explains Bianca-Ioana Marcu. However, she suggests that this debate might be louder than having effects in practice, as Brussels will move ahead with its regulatory agenda of the new Commission. It is clear, though, that Brussels may experience a “shift towards promoting EU competitiveness,” as Andreea framed it, and that this will impact, even if incrementally, all the “digital agenda” files.
While most of the attention in India might be focused on the DPDPA Implementing Rules, promoting the country’s competitiveness is a bigger goal for many, which could result in regulatory changes supporting it. Bilal signals that there are interesting data-sharing initiatives coming up at a sectoral level. “For instance, MeitY plans to launch an IndiaAI datasets platform to provide high-quality datasets (non-personal data) for AI developers and researchers. Similar initiatives are underway in sectors such as healthcare, e-commerce, and agriculture,” he says. These initiatives are quite similar to the EU Data Spaces, which are also expected to advance. “It will be fascinating to see how these initiatives align with the DPDPA, and how this shapes the definition of ‘non-personal data’ in India,” adds Bilal.
One last bigger public policy debate that may impact concrete data protection this year remains the checks and balances over government access to personal data. For instance, Rivki, based in our Tel Aviv office, highlights that this year she expects the privacy community to confront the long-term privacy consequences of the exceptional measures taken by the government during the war, such as storage of fingerprints in databases or authorization of intrusion into security cameras without consent. The privacy community will likely be focused to “ensure that any measures implemented during this period do not persist or become the new standard for privacy,” she says.
Government access to data shapes up to also be top of mind in policy debates in India, with Bilal noting that “on a broader scale, constitutional challenges related to government exemptions under the DPDPA may surface in the Supreme Court once the implementing rules are officially notified.”
7. A dark horse prediction and further reading
Before ending the round-up of issues to follow in 2025 in Global Privacy, I will make my dark horse prediction: The reopening of the GDPR might appear more convincingly on the regulatory agenda this year, once the procedural reform is done. What seemed almost sacrilegious a couple of years ago will now look more likely, especially in the light of DPAs becoming active in enforcing the GDPR on AI systems, and eventual hiccups of non-DPA enforcers applying the digital strategy package at the intersection with GDPR provisions.
Finally, for a good understanding of what the year might bring to US policymaking, check out this analysis by Jules Polonetsky, FPF CEO, for TechPolicy Press, “2025 May be the Year of AI Legislation: Will we see Consensus Rules or a Patchwork?,” as well as FPF Senior Director for U.S. Legislation Keir Lamont’sblog, “Five Big Questions (and Zero Predictions) for the US State Privacy Landscape in 2025.”
Twelve Privacy Investments for Your Company for a Stronger 2025
FPF has put together a list of Twelve Privacy Investments for Your Company for a Stronger 2025 that reflects on new perspectives on the work that privacy teams do at their organizations. We hope there is something here that’s useful where you work, and we’d love to hear other ideas and feedback.
Privacy Investments for Your Company for a Stronger 2025
Re-review your privacy notice and other disclosures to ensure you are covering any new data collection or uses planned in 2025 – including secondary uses of data – that may be going on. This has been a theme of FTC actions in 2024 and a measure that would enhance transparency as suggested by the EDPB in the latest Opinion on data protection and AI models. Since new uses of data for AI have been prompting consumer alarm, allow time for explanation, education and communication to support user understanding of the value proposition. Consider opt-out options for new uses and opt-in for any significant changes or uses of sensitive data.
Take steps to minimize your processing of precise location data or sensitive data. Explore using less precise alternatives, ensuring limited retention and effective de-identification techniques, or uses of other kinds of data that have less risk of creating sensitive inferences.
Take a good look at vendor management. Don’t just rely on contractual constraints. If there are no technical monitoring or other controls in place, get a plan for some in product roadmaps.
Deepen your relationships with various business teams (sales and marketing, product teams, etc.) so you know what they’re planning and can help develop a forward-looking compliance strategy.
Help FPF gather information about the operational implications of new or prospective laws so we can effectively explain data uses and tech to policymakers to help them craft policy and guidance that strikes the right balance for accountable data use.
While comprehensive federal privacy legislation may not be imminent, the states and the attorneys general are still pretty concerned about privacy, as are governments around the world. Deepen your connections with the AG offices and understand their perspectives. Meet key local legislators and build relationships by supporting their interest in being educated about emerging technologies and their impact.
Although the outcomes of court cases have been unclear, it is clear that protections for users under 18 will continue to be a focus of legislative activity and enforcement. Consider options that can provide for more limited uses of teen data.
Take special care with data that may implicate personal health information and prepare to be vigilant in the case that law enforcement comes knocking for information about a user that could reveal their reproductive health status. We recommend our Health and Wellness Policy Brief.
Map your international data flows and track any instances where internal processes or third-party relationships could put data within reach of one of the U.S. government’s “countries of concern.” Diversify your data transfers tools with an eye on the global landscape, as cross-border data flows restrictions are increasingly expanding beyond the EU-US dynamic.
If you are doing business in India, make sure to have good data governance and data inventories in place for your operations. Major changes are coming, with the implementation date of the DPDPA in sight after the draft implementing rules were published at the very beginning of this year. Keep a close track on India’s DPDPA Rules and stay sufficiently informed to provide feedback during the public consultation exercise.
Align your teams on how your company will use AI tools internally to automated work flows and make all of this work easier, including applying AI tools to making privacy compliance easier, like handling data subject requests or assessing whether your policies could be made easier to read and access. FPF’s new report may help.
Tidy up your clean room practices. You may view your partners as trusted, but the FTC may consider them potential attackers from a de-identification point of view. Ensure technical controls are credible.