FPF Celebrates Safer Internet Day with Newly Released Encryption Infographic
Future of Privacy Forum (FPF) is thrilled to celebrate Safer Internet Day 2025 with the release of a new infographic, “Encryption Keeps Young People Safe.” Safer Internet Day is an annual event and part of a larger global mission to create a safer online environment, especially for young people. FPF’s new infographic explains how encryption technology plays a crucial role in ensuring data privacy and online safety for a new generation of teens and kids. FPF will host leading experts at a virtual event on Feb. 11 at 10 am ET to discuss the state of encryption technology and policy.
Data encryption is central to online security, privacy, and safety, and of particular importance for particularly vulnerable groups, such as young people. Gen Z and Gen Alpha have lived their entire lives in the age of the commercial internet, social media, electronic records, and internet-connected devices. They have grown up in a world where everything from insulin pumps to cars are internet-connected. Encryption is the best protection to ensure that personal communications, transactions and devices are safe and secure. The 2025 infographic illustrates encryption’s role in protecting data in places young people frequent, such as sports parks, shopping centers, and health clinics.
Encryption is often used to secure or authenticate sensitive documents. Encryption applies a mathematical formula, which obfuscates plaintext information and transforms the plaintext into unreadable ciphertext. Each use of encryption generates a long number that is the mathematical solution to the formula and can unscramble the protected sensitive information. If a private key is not kept secret, anyone with the key can access the private data or impersonate the authenticated person or organization.
This infographic is the latest in FPF’s longstanding work on encryption, which includes a 2020 infographic explaining how encryption more broadly protects enterprises, individuals, and governments—and what may happen when data and devices fail to use strong encryption and are compromised by bad actors. The infographic series advances FPF’s mission of promoting data privacy for every user by showcasing the vital role encryption plays in ensuring online safety, and the detrimental effects of an online world without its protections.
FPF will host a virtual event at 10 am ET today, featuring a Keynote address from Patricia Kosseim, Ontario Information and Privacy Commissioner. There will also be a panel of experts to dive into how encryption protects young people not just online, but in the physical world as well, by preventing malicious actors from gaining access to the devices and spaces they rely on for health, education, convenience, and more. Register now to join the event!
If you’re interested in learning more about encryption or other issues driving the future of privacy, sign up for our monthly briefing, check out one of our upcoming events, or follow us on X, LinkedIn, or Instagram.
Minding Mindful Machines: AI Agents and Data Protection Considerations
Thank you for the contributions of Rob van Eijk, Marlene Smith, and Katy Wills
We are now in 2025, the year of AI agents. In the last few weeks, leading large language model (LLM) developers (including OpenAI, Google, Anthropic) have released early versions of technologies described as “AI agents.” Unlike earlier automated systems and even LLMs, these systems go beyond previous technology by having autonomy over how to achieve complex, multi-step tasks, such as navigating on a user’s web browser to take actions on their behalf. This could enable a wide range of useful or time-saving tasks, from making restaurant reservations and resolving customer service issues to coding complex systems. However, AI agents also raise greater and novel data protection risks related to the collection and processing of personal data. Their technical characteristics could also present challenges, such as those around safety testing and human oversight, for organizations seeking to develop or deploy AI agents.
This analysis unpacks the defining characteristics of the newest AI agents and identifies some of the data protection considerations that practitioners should be mindful of when designing and deploying these systems. Specifically:
While agents are not new, emerging definitions across industries describe them as AI systems that are capable of completing more complex, multi-step tasks, and exhibit greater autonomy over how to achieve these goals, such as shopping online and making hotel reservations.
Advanced AI agents raise many of the same data protection questions raised by LLMs, such as challenges related to thecollection and processing of personal data for model training, operationalizing data subject rights, and ensuring adequate explainability.
In addition, the unique design elements and characteristics of the latest agents may exacerbate or raise novel data protection compliance challenges around the collection and disclosure of personal data, security vulnerabilities, the accuracy of outputs, barriers to alignment, and explainability and human oversight.
What are AI Agents?
The concept of “AI Agents” or “Agentic AI” arose as early as the 1950s and has many meanings in technical and policy literature. In the broadest sense, for example, it can include systems that rely on fixed rules and logic to produce consistent and predictable outcomes on a person’s behalf, such as email auto-replies or privacy preferences.
Advances in AI research, particularly around machine and deep learning techniques and the advent of LLMs, have enabled organizations to develop agents that can tackle novel use cases, such as purchasing retail goods and recommending and executing transactions. From finance to hospitality, these technologies could help individuals, businesses, and governments save time they would otherwise dedicate to completing tedious or monotonous tasks.
Companies, civil society, and academia have defined the latest iteration of AI agents, examples of which are provided in the table below:
“[A]n entity that senses percepts (sound, text, image, pressure etc.) using sensors and responds (using effectors) to its environment. AI agents generally have the autonomy (defined as the ability to operate independently and make decisions without constant human intervention) and authority (defined as the granted permissions and access rights to perform specific actions within defined boundaries) to take actions to achieve a set of specified goals, thereby modifying their environment.”
“AI agents [are] systems capable of pursuing complex goals with limited supervision,” having “greater autonomy, access to external tools or services, and an increased ability to reliably adapt, plan, and act open-endedly over long time-horizons to achieve goals.”
“Agents,” Sept. 2024, Julia Wiesinger, Patrick Marlow, and Vladimir Vuskovic, Google
“[A] Generative AI agent can be defined as an application that attempts to achieve a goal by observing the world and acting upon it using the tools that it has at its disposal. Agents are autonomous and can act independently of human intervention, especially when provided with proper goals or objectives they are meant to achieve. Agentscan also be proactive in their approach to reaching their goals. Even in the absence ofexplicit instruction sets from a human, an agent can reason about what it should do next to achieve its ultimate goal.”
Defining long-term planning agents as “an algorithm designed to produce plans, and to prefer plan A to plan B, when it expects that plan A is more conducive to a given goal over a long time horizon.”
“An artificial intelligence (AI) agent refers to a system or program that is capable of autonomously performing tasks on behalf of a user or another system by designing its workflow and utilizing available tools.
Table 1. Definitions of “AI agents”
These definitions highlight common characteristics of new AI agents, including:
Autonomy and adaptability: Users generally provide an agent with the task they want it to achieve, but neither they nor the agent’s designers specify how to accomplish the task, leaving those decisions to the agent. For example, upon being instructed by a business to project the sales revenue of its flagship product for the next six months, the agent may decide that it needs sales figures from the last two years and use certain tools (e.g., a text retriever) to obtain these details. If it cannot find these figures or if they contain errors, it may determine that the next step is to seek information from other documentation. Agentic systems may incorporate human review and approval over some or all decisions.
These characteristics enable advanced agents to achieve goals that are beyond the capabilities of other AI models and systems. However, they also raise questions for practitioners about the data protection issues organizations may encounter when developing or deploying these technologies.
Emerging Privacy and Data Protection Issues with Agentic AI
While the latest AI agents may raise similar risks to consequential decision-making and LLMs, they can also exacerbate or pose novel privacy and data protection considerations. The economic and social impact of AI agents is a topic of heated debate and significant financial investment, but there has been less attention on the potential impact of agents on privacy and data protection. In order to effectuate tasks and decision making with autonomy, especially for consumer-facing tools and services, AI agents will need access to data and systems. In fact, much like human assistants, AI agents may be at their most valuable when they are able to assist with tasks that involve highly sensitive data (e.g., managing a person’s email, calendar, or financial portfolio, or assisting with healthcare decision-making).
As a result, many of the same risks relating to consequential decision-making and LLMs (or to machine learning generally) are likely to be present in the context of agents with greater autonomy and access to data. For example, like some LLMs, some AI agents transmit data to the cloud due to the computing requirements of the most powerful models, which may expose the data to unauthorized third parties (e.g., the recent Data protection impact assessment on the processing of personal data with Microsoft 365 Copilot for Education). As with chatbots that use LLMs, AI agents with anthropomorphic qualities may be able to steer individuals towards or away from conducting certain actions against the user’s best interest. Other examples of cross-cutting data protection issues include challenges related to having a lawful basis for model training, operationalizing data subject rights, and ensuring adequate explainability. These legal and policy issues for LLMs, which are the subject of ongoing debate and legal guidance, are only heightened in the context of agentic systems with enhanced capabilities.
In addition, more recent AI agents may present some novel privacy implications or exacerbate data protection issues that go beyond those associated with LLMs.
Data collection and disclosure considerations:The latest AI agents may need to capture data about a person and their environment, including sensitive information, in order to power different use cases. As with LLMs, the collection of personal data by agents will often trigger the need for having a lawful ground in place for such processing. When the personal data collected is sensitive, additional requirements for lawfully processing them often apply too. While current LLM-based systems may train and operate using personal data, they lack the tools (e.g., application programming interfaces, data stores, and extensions) to access external systems and data. The latest AI agents may be equipped with these tools, which could enable them to obtain real-time information about individuals. For example, some agents may take screenshots of a user’s browser window in order to populate a virtual shopping cart, from which intimate details about a person’s life could be inferred. As the number of individuals using AI agents and its use cases grow, so too could AI agents’ access to personal data. For example, AI agents may collect many types of granular telemetry data as part of their operations (e.g., user interaction data, action logs, and performance metrics). Increasingly complex agents may collect large quantities of telemetry information, which may qualify as personal data under data privacy legal regimes.
Security vulnerabilities: Advanced AI agents’ design features and characteristics may make them susceptible to new kinds of security threats. Adversarial attacks on LLMs, such as the use of prompt injection attacks to get these models to reveal sensitive information (e.g., credit card information), can impact AI agents too. Besides causing an agent to reveal sensitive information without permission, prompt injection attacks can also override the system developer’s safety instructions. While prompt injection is not a threat unique to the latest AI agents, new kinds of injection attacks could take advantage of the way agents work to perpetuate harm, such as installing malware or redirecting them to deceptive websites.
Accuracy of outputs: Hallucinations, compounding errors, and unpredictable behavior may impact the accuracy of an agents’ outputs. LLM hallucinations—the making up of factually untrue information that looks correct—may affect the accuracy of an agent’s outputs. These hallucinations are closely tied to the “temperature” parameter that controls randomness in the model’s attention mechanism: higher temperatures increase creativity and the risk of hallucinations, while lower temperatures reduce hallucinations but may limit the agent’s adaptability. However, errors that affect agent outputs may have different implications for individuals, such as misrepresenting a user’s characteristics and preferences when it fills out a consequential form. In addition to hallucinations, the latest AI agents may experience compounding errors, which could occur while the systems perform a sequence of actions to complete a task (e.g., managing a customer’s account). Compounding errors is the phenomenon where the agent’s accuracy decreases the more steps a task takes. For example, an AI agent creating a travel experience may experience an error while making a one-day hotel booking, which cascades into misaligned restaurant reservations and museum tickets. This holds true even when the model’s accuracy is high. Some AI agents may act in unpredictable ways due to dynamic operational environments and agents’ non-deterministic nature—producing probabilistic outcomes, adapting to new situations, learning from data, and exhibiting complex decision-making—leading to malfunctions that affect output accuracy. These accuracy issues may be challenging to redress through risk management testing and assessments and exacerbated when different AI agents interact with each other.
Barriers to “alignment”: Some AI agents may pursue tasks in ways that conflict with human interests and values, including data protection considerations. AI alignment refers to designing AI models and systems to pursue a designer’s goals, such as prioritizing human well-being and conforming to ethical values. Misalignment problems are not new to AI, but continued technological advances with agents may make it challenging for organizations to achieve alignment through safeguards and safety testing. LLMs can fake alignment by strategically mimicking training objectives to avoid undergoing behavioral modifications. These challenges have data protection implications for the latest AI agents. For example, an agent may decide that it needs to access or share sensitive personal data in order to complete a task. Such behavior could implicate an individual’s data protection interest in having control over their data when personal data is processed during deployment. Practitioners must be mindful of the need for safeguards to constrain this behavior, although research into model alignment has focused more on safety issues rather than privacy.
Explainability and human oversight challenges:Explainability barriers arise when users cannot understand an agent’s decisions, even if these decisions are correct. Users and developers may encounter difficulties in understanding how some AI agents reach decisions due to their complex processes. The black box problem, or the challenge of understanding how an AI model or system makes decisions, is not unique to agents. However, the speed and complexity of AI agents’ decision-making processes may create heightened roadblocks to realizing meaningful explainability and human oversight. AI agents utilizing language models can provide some of their reasoning in natural language, but these “chain-of-thought” insights are becoming more complicated and are not always indicative of the agent’s actual reasoning. These challenges may make it more difficult to reliably interrogate agents’ decision-making processes and manage risks.
Looking Ahead
Recent advances in AI agents could expand the utility of these technologies across the private and public sectors, but they also raise many data protection considerations. While practitioners may be aware of some of these considerations due to the relationship between LLMs and the latest AI agents, the unique design elements and characteristics of these agents may exacerbate or raise new compliance challenges. For example, an agent may manage privacy settings (e.g., accepting cookies so that it can continue working on a task) as part of its operations, although companies can establish safeguards to address this risk. In closing, practitioners should remain abreast of technological advances that expand AI agents’ capabilities, use cases, and contexts where they can operate, as these may raise novel data protection issues.
This year’s Winning Privacy Papers to be Honored at the Future of Privacy Forum’s 15th Annual Privacy Papers for Policymakers Event
The Future of Privacy Forum’s 15th Annual Privacy Papers for Policymakers Award Recognizes Influential Privacy Research
The PPPM Awards recognize leading U.S. and international privacy scholarship that is relevant to policymakers in the U.S. Congress, federal agencies, and international data protection authorities. Six winning papers, two honorable mentions, one student submission, and a student honorable mention were selected by a diverse group of leading academics, advocates, and industry privacy professionals from FPF’s Advisory Board.
Authors of the papers will have the opportunity to showcase their work at the Privacy Papers for Policymakers ceremony on March 12, in conversations with discussants, including James Cooper, Professor of Law, Director, Program on Economics & Privacy, Antonin Scalia Law School, George Mason University, Jennifer Huddleston, Senior Fellow in Technology Policy, Cato Institute, and Brenda Leong, Director, AI Division, ZwillGen.
“Data protection and artificial intelligence regulations are increasingly at the forefront of global policy conversations,” said FPF CEO Jules Polonetsky. “And it’s important to recognize the academic research that explores the nuances surrounding data privacy, data protection, and artificial intelligence issues. Our award winners have explored these complex areas ― to all of our benefits.”
FPF’s 2025 Privacy Papers for Policymakers Award winners are:
Privacy laws are traditionally associated with democracy. Yet autocracies increasingly have them. Why do governments that repress their citizens also protect their privacy? This Article answers this question through a study of China. China is a leading autocracy and the architect of a massive surveillance state. But China is also a major player in data protection, having enacted and enforced a number of laws on information privacy. To explain how this came to be, the Article first discusses several top-down objectives often said to motivate China’s privacy laws: advancing its digital economy, expanding its global influence, and protecting its national security. Although each has been a factor in China’s turn to privacy law, even together, they tell only a partial story. Through privacy law, China’s leaders have sought to interpose themselves as benevolent guardians of privacy rights against other intrusive actors—individuals, firms, and even state agencies and local governments. This Article adds to our understanding of privacy law, complicates the relationship between privacy and democracy, and points toward a general theory of authoritarian privacy.
The Great Scrape: The Clash between Scraping And Privacy by Daniel J. Solove, George Washington University Law School and Woodrow Hartzog, Boston University School of Law and Stanford Law School Center for Internet and Society
Artificial intelligence (AI) systems depend on massive quantities of data, often gathered by “scraping” – the automated extraction of large amounts of data from the internet. A great deal of scraped data is about people. This personal data provides the grist for AI tools such as facial recognition, deep fakes, and generative AI. Although scraping enables web searching, archival, and meaningful scientific research, scraping for AI can also be objectionable or even harmful to individuals and society. Organizations are scraping at an escalating pace and scale, even though many privacy laws are seemingly incongruous with the practice. In this Article, we contend that scraping must undergo a serious reckoning with privacy law. Scraping has evaded a reckoning with privacy law largely because scrapers act as if all publicly available data were free for the taking. But the public availability of scraped data shouldn’t give scrapers a free pass. Privacy law regularly protects publicly available data, and privacy principles are implicated even when personal data is accessible to others. This Article explores the fundamental tension between scraping and privacy law.
Debates in AI ethics often hinge on comparisons between AI and humans: which is more beneficial, which is more harmful, which is more biased, the human or the machine? These questions, however, are a red herring. They ignore what is most interesting and important about AI ethics: AI is a mirror. If a person standing in front of a mirror asked you, “Who is more beautiful, me or the person in the mirror?” the question would seem ridiculous. Sure, depending on the angle, lighting, and personal preferences of the beholder, the person or their reflection might appear more beautiful, but the question is moot. AI reflects patterns in our society, just and unjust, and the worldviews of its human creators, fair or biased. The question then is not which is fairer, the human or the machine, but what can we learn from this reflection of our society and how can we make AI fairer? This essay discusses the challenges to developing fairer AI, and how they stem from this reflective property.
On paper, the Federal Trade Commission’s consumer protection authority seems straightforward: the agency is empowered to investigate and prevent unfair or deceptive acts or practices. This flexible and capacious authority, coupled with the agency’s jurisdiction over the entire economy, has allowed the FTC to respond to privacy challenges both online and offline. The contemporary question is whether the FTC can draw on this same authority to curtail the data-driven harms of commercial surveillance or emerging technologies like artificial intelligence. This Essay contends that the legal answer is yes and argues that the key determinants of whether an agency like the Federal Trade Commission will be able to confront emerging digital technologies are social, institutional, and political. Specifically, it proposes that the FTC’s privacy enforcement occurs within an “Overton Window of Enforcement Possibility.”
Anonymity is an important principle online. However, malicious actors have long used misleading identities to conduct fraud, spread disinformation, and carry out other deceptive schemes. With the advent of increasingly capable AI, bad actors can amplify the potential scale and effectiveness of their operations, intensifying the challenge of balancing anonymity and trustworthiness online. In this paper, we analyze the value of a new tool to address this challenge: “personhood credentials” (PHCs), digital credentials that empower users to demonstrate that they are real people—not AIs—to online services, without disclosing any personal information. After surveying the benefits of personhood credentials, we also examine deployment risks and design challenges. We conclude with actionable next steps for policymakers, technologists, and standards bodies to consider in consultation with the public.
Governments and policymakers increasingly expect practitioners developing and using AI systems in both consumer and public sector settings to proactively identify and address bias or discrimination that those AI systems may reflect or amplify. Central to this effort is the complex and sensitive task of obtaining demographic data to measure fairness and bias within and surrounding these systems. This report provides methodologies, guidance, and case studies for those undertaking fairness and equity assessments — from approaches that involve more direct access to data to ones that don’t expand data collection. Practitioners are guided through the first phases of demographic measurement efforts, including determining the relevant lens of analysis, selecting what demographic characteristics to consider, and navigating how to hone in on relevant sub-communities. The report then delves into several approaches to uncover demographic patterns.
FPF also selected a paper for the Student Paper Award: Data Subjects’ Reactions to Exercising Their Right of Accessby Arthur Borem, Elleen Pan, Olufunmilola Obielodan, Aurelie Roubinowitz, Luca Dovichi, and Blase Ur at the University of Chicago; and Michelle L. Mazurek from the University of Maryland. A Student Paper Honorable Mention went to Artificial Intelligence is like a Perpetual Stew by Nathan Reitinger, University of Maryland – Department of Computer Science.
In reviewing the submissions, winning papers were awarded based on the strength of their research and proposed policy solutions for policymakers and regulators in the U.S. and abroad.
The Privacy Papers for Policymakers Award event will be held on March 12, 2025, at FPF’s offices in Washington, D.C. The event is free and registration is open to the public.
###
About Future of Privacy Forum (FPF)
The Future of Privacy Forum (FPF) is a global non-profit organization that brings together academics, civil society, government officials, and industry to evaluate the societal, policy, and legal implications of data use, identify the risks, and develop appropriate protections.
FPF believes technology and data can benefit society and improve lives if the right laws, policies, and rules are in place. FPF has offices in Washington D.C., Brussels, and Singapore. Learn more at fpf.org.
5 Ways to Be a Top Dog in Data Privacy
Data Privacy Day, or Data Protection Day in Europe, is recognized annually on January 28 to mark the anniversary of Convention 108, the first binding international treaty to protect personal data. To raise awareness for the day and promote best practices for data privacy, we’ve partnered with Snap to create a Data Privacy Day Snapchat Lens that lets you choose what type of privacy pup best reflects your personality. Check it out by scanning the Snapchat code!
Once you’ve determined which privacy pup you are, learn more about protecting your privacy with these 5 quick, easy steps.
1. Share your Information with Websites and Apps you Trust
Today, almost everything we do online involves companies collecting personal information about us. When we’re subscribing to marketing emails, making online purchases, filling out surveys, or even applying for jobs online, websites are collecting more information than ever before, and we’ve become accustomed to sharing personal information daily. However, it’s important to be cautious and trust a website or app before sharing any personal information with it.
There are a few ways to evaluate a website or app before sharing personal information. The first is to check the website’s url and domain name. Confirm that both the name of the website is spelled properly, and the domain name ending in .org, .com, .edu, or .gov, which are typically (but not always) more credible. Next, you can look for clear information about the leaders of the organization and their contact information on the website. If that information isn’t available, or is difficult to find, be cautious because you may not know who will be responsible for your personal information. Lastly, take a few minutes to evaluate a company’s privacy policy. The policy should clearly state the company’s full name, explain how they will be using your information, and may include information about the security measures in place. Many states also require companies to let you submit a data access request, and it’s helpful to check that the company is complying with their state law and displaying that information.
2. Update your passwords and multi-factor authentication regularly
Password re-use is one of the top ways that unwanted eyes can get into your accounts: once one service where you used a password is breached, criminals will likely try the same username and password combination on other services just to see if it works. To get a sense of the scale of the risk, you check your info on web service “Have I Been Pwned” (available at haveibeenpwned.com), which allows you to enter your email address and see what data breaches that email has been included in.
Because of the risks involved in recycling passwords, using unique passwords is an essential step for keeping personal information private. You can also consider utilizing a password manager. Password managers save passwords as you create and log in to your accounts, often alerting you of duplicates and suggesting the creation of a stronger password. And no, the name of your dog is not a strong password.
For example, if you use an Apple product when signing up for new accounts and services, you can allow your iPhone, Mac, or iPad to generate strong passwords and safely store them in iCloud Keychain for later access. Some of the best third-party password managers can be found here.
When possible, you should also utilize multi-factor authentication along with a password. This extra step ensures that simply inputting a compromised password is not enough to provide access to your account without an extra step, typically the connection of a device like a yubikey or submission of a numeric code sent to a phone number, e-mail address, or authentication application on your phone. While some forms of multi-factor authentication may be more protective and more resilient than others, any choice will significantly increase the security in comparison to a password alone. You can see how easy it is to set up multi-factor authentication on Snapchat using their easy-to-understand articles, available online.
3. Respect other peoples’ privacy
It’s important to be mindful about the information you share and see on social media. Consider the reach of your own posts, and avoid sharing anything you wouldn’t want to be saved or widely shared, whether it’s about you or someone else. Many social media sites like Instagram, Facebook, and Snapchat allow you to share images and chat with a closed group or limited number of friends, and it’s important to honor when someone chooses to keep information non-public when they share it in closed or private settings. Don’t screenshot or reshare private stories or messages from others.
4. Review all social media settings
Many social media sites include options on how to tailor your privacy settings to limit how data is collected or used. Snap provides privacy options that control who can contact you and many other options. Start with the Snapchat Privacy Center to review your settings. You can find those choices here.
Snap also provides options for you to view any data they have collected about you, including account information and your search history. Downloading your data allows you to view what information has been collected and modify your settings accordingly.
Instagram allows you to manage various privacy settings, including who has access to your posts, who can comment on or like your posts, and manage what happens to posts after you delete them. You can view and change your settingshere.
TikTok allows you to decide between public and private accounts, allows you to change your personalized ad settings, and more. You can check your settingshere.
X allows you to manage what information you allow other people on the platform to see and lets you choose your ad preferences. Check your settings here.
Facebook provides a range of privacy settings that can be found here.
In addition, you can check the privacy and security settings for other popular applications such as Reddit and Pinteresthere. Be sure to also check your privacy settings if you have a profile on a popular dating app such as Bumble, Hinge, or Tinder.
What other social media apps do you use often? Check to see which settings they provide!
5. Use incognito settings to keep personal information about you hidden
Many browsers and apps allow you to turn on a setting that lets you continue to use the service without sharing as much personal information as you normally would.
On Chrome, you can browse the web more privately using incognito mode. To activate, open Chrome, under “More,” click “New Incognito Window.”
Using Safari, you can choose “private browsing” by opening Safari, clicking “File” and then “New Private Window.” If you have the app, you can choose to always browse privately by clicking “Settings” and then for the option “Safari opens with” pop-up menu, choose “a new private window.”
Mozilla also has options for using Firefox in “private browsing mode.” Click Firefox’s menu button, and then click “New private window.” You can also choose to always be in private browsing mode by choosing “Use custom settings for history” from the Firefox’s menu and checking the “Always use private browsing mode” setting.
Browsers like DuckDuckGo and Brave also default to private browsing mode. You can read more about DuckDuckGo’s anonymous browsing settings here, and Brave’s privacy protections here.
Using Snapchat, you can turn on Ghost Mode. While using it, your location won’t be visitable to anyone, including friends you may have previously shared your location with on Snapchat’s Snap Map. To turn it on, open the Map, tap the ⚙️ button at the top of the map screen, toggle Ghost Mode to on, and select how long you’d like to enable Ghost Mode.
If you’re interested in learning more about one of the topics discussed here or other issues driving the future of privacy, sign up for our monthly briefing, check out one of our upcoming events, or follow us on X, LinkedIn, or Instagram.
FPF brings together some of the top minds in privacy to discuss how we can all benefit from the insights gained from data while respecting the individual right to privacy.
What to Expect in Global Privacy in 2025
Next year, in 2026, we will celebrate a decade after the adoption of the GDPR, a law with an unprecedented regulatory impact around the world, from California to Brazil, across the African continent, to India, to China, and everywhere in between. The field of data protection and privacy has become undeniably global, with GDPR-inspired laws (from a lesser to a bigger degree) adopted or updated in many jurisdictions around the world throughout the past years. This could not have happened in a more transformative decade for technologies relying on data, with AI decidedly getting out of its winter, and “connected-everything,” from cars to eyewear, increasingly shaping our surroundings.
While jurisdictions around the world were catching up with the GDPR or gearing their own approach to data protection legislation, the EU leaped in the past five years towards comprehensive (and sometimes incomprehensible) regulation of multiple dimensions of the digital economy: AI itself, online platforms through intermediary liability, content moderation and managing systemic risks on very large online platforms and search engines, online advertising in electoral campaigns, digital gatekeepers and competition, data sharing and connected devices, data altruism and even algorithms used in the gig economy.
Against this backdrop, I asked my colleagues in FPF’s offices around the world, who passionately monitor, understand, and explain legislative, regulatory, and enforcement developments across regions, what we should expect in 2025 in Global Privacy. From data-powered technological shifts and their impact on human autonomy, to enforcement and sectoral implementation of general data protection laws adopted in the past years, to AI regulation, cross-border data transfers, and the clash of online safety and children’s privacy, this is what we think you should keep on your radar:
1. AI becoming ubiquitous will put self-determination and control in the center of global privacy debates
“Expect AI to become ubiquitous in everything we do online,” signals Dr. Rob van Eijk, FPF Managing Director for Europe. This will not only bring excitement for tech enthusiasts but also a host of challenges, heightened by the expected increase in consumers using AI agents. “The first challenge is maintaining personal autonomy in the face of technological development, particularly regarding AI,” weighs in Rivki Dvash, Senior Fellow with ITPI – FPF Israel.
Rivki foresees two prominent dimensions of this topic: first, at the ethical level, and second, at the regulatory level, particularly concerned “with the limits of the legitimacy of the use of AI while trying to contour the uniqueness of a person over a machineand the desire to preserve personal autonomy in a space of choice.” “What does it mean to be a human in an Agentic AI future?” is a question that Rob says will ignite a lot of thinking in the policy world in 2025. This makes me think of an older paper from Prof. Mireille Hildebrandt, “Privacy as Protection of the Incomputable Self: From Agnostic to Agonistic Machine Learning” (2019), where she described a framework that could “provide the best means to achieve effective protection against overdetermination of individuals by machine inferences.”
I expect the idea of “control” over one’s persona and personal information in the world of Generative and Agentic AI to increasingly permeate and fuel regulatory debates. In its much-expected Opinion on AI systems and data protection law published over the Holidays, the European Data Protection Board (EDPB) identified “the interest in self-determination and retaining control over one’s own personal data” as chief among individuals’ interests that must be taken into account and balanced, both when personal data is gathered for the development of AI models and with regards to personal data processed once the model is deployed.
Putting self-determination and control at the center of AI governance will not be just academic. For instance, the EDPB asked for an “unconditional opt-out from the outset,” “a discretionary right to object before the processing takes place” for developing and deploying AI systems, “beyond the conditions of Article 21 GDPR,” in order for legitimate interests to be considered as a valid lawful ground legitimizing consentless processing of personal data for AI models.
Rob adds that in 2025, we will see users “becoming increasingly reliant on AI companions for decision-making, from small choices like what to watch on streaming services to larger life decisions.” He highlights what will be one of the key privacy and data protection implications of all this: “AI companions will get unprecedented access to sensitive personal data, from financial transactions to private conversations and daily routines.” Protecting sensitive data in this context, especially with inferences broadly recognized as being covered by such enhanced safeguards under data protection law regimes, will be a key challenge that will keep privacy experts busy this year.
But the ideas of “control,” “self-determination,” and “autonomy” in relation to one’s own personal data are particularly fragile when it comes to non-users or bystanders whose data is collected through another person’s use of a service or device. This is one of the big issues that Lee Matheson, FPF Deputy Director for Global Privacy, sees as defining an enforcement push from Data Protection Authorities (DPAs) from Canada to Europe this year, particularly as it relates to Augmented Reality and connected devices: “It’s a cross-cutting technology that implicates lawful bases for collection/processing, AI and automated decision-making (particularly facial recognition), secondary uses, and data transfers (as unlike smartphones, activity is less likely to be kept on-device). I think a particular focus could be on how to vindicate the rights of non-user data subjects whose information is captured by these kinds of devices.”
2. Three different speeds for AI legislation: Moderation in APAC, Implementation in Europe, Acceleration in Latin America
AI governance and data protection are closely linked, as shown above, which makes AI legislation a particularly poignant topic to follow. “Whether through hard or soft law approaches, preventing significant fragmentation of AI rules globally will be high on the agenda,” observes Bianca-Ioana Marcu, FPF Deputy Director for Global Privacy. Bianca has been closely following initiatives of international organizations and networks in the AI governance space throughout the last year, like the efforts of the UN, the OECD, or the G7 in this space, and she believes that in 2025, “international fora and the principles and guidelines agreed upon within such groups will act as the driving force behind AI standard-setting.” Bianca adds that we might see efforts towards “harmonizing regional data protection rules in the interests of supporting the governance and availability of AI training data.” I can see this happening, for instance, across economic regions in Africa, or even at the ASEAN level.
As for legislative efforts around the world targeting AI, the team identifies three different speeds. In the Asia-Pacific (APAC) region, Josh Lee Kok Thong, FPF Managing Director for APAC, foresees a “possible cooling down” of the race to adopt AI laws and other regulatory efforts. “There will be signs of slight regulatory fatigue in AI governance and regulatory initiatives in APAC. This is especially so among the more mature jurisdictions, such as Japan, Singapore, China, and Australia. Rather than developing new headline regulatory or governance initiatives, efforts are likely to focus on the development of tools for evaluation and content provenance,” he says. Josh notes that jurisdictions across APAC will be closely watching how the implementation of the EU AI Act unfolds, as well as the US regulatory stance towards AI under President Trump’s administration before deciding what steps to take.
In contrast, Latin America will likely move full speed ahead toward AI legislation. Maria Badillo, Policy Counsel for Global Privacy, explains that “this year will mark significant progress on initiatives to govern and regulate AI across multiple countries in Latin America. Brazil has taken a leading role and is getting closer to adopting a comprehensive law in 2025 after the Senate’s recent approval of the AI bill. Other countries like Chile, Colombia, and Argentina have introduced similar frameworks.” Maria says that this will happen mainly under the influence of the EU AI Act, but also from Brazil’s AI bill.
When it comes to AI legislation, the EU is catching its breath this year, focusing on the implementation of the EU AI Act, which was adopted last year and whose application starts rolling out in a month. Necessary Codes of Conduct – like the one dedicated to general purpose AI, implementing acts, and specific standards are expected to flow within the next 18 months or so. But this year, we will certainly see the first signs of whether this new law will successfully achieve its goals. A good indicator will be observing in practice the intricate web of authorities tasked by the EU AI Act with oversight, implementation, and enforcement of the law. “The lack of a one-stop-shop mechanism and the presence of several authorities in the same jurisdiction will be a first test of the efficiency of the AI Act and the authorities’ ability to coordinate,” highlights Vincenzo Tiani, Senior Policy Counsel in FPF’s Brussels office.
Meanwhile, it is expected that DPAs will gain a more prominent role in enforcing the law on matters at the intersection of the GDPR with the various new EU acts regulating the digital space, including the EU AI Act. “DPAs will be increasingly called to step up and drive enforcement actions on a broad number of issues also falling under other EU regulatory acts, but which involve the processing of personal data and the GDPR,” says Andreea Serban, FPF Policy Analyst in Brussels. This will be particularly evident regarding AI systems, after a first infringement decision in a series of complaints surrounding ChatGPT was issued by the Italian DPA, the Garante, at the end of 2024.
The space in AI governance that the GDPR occupies will visibly expand this year, including into issues where copyright is considered central. Vincenzo explains that “the licenses provided by newspapers to providers of LLMs, at least so far, do not cover the protection of personal data contained therein.” The Italian DPA has already raised the flag on this issue.
Countervailing some of the biggest risks of Generative AI beyond the processing of personal data will keep regulators across Europe busy, be they DPAs, the European Commission’s AI Office, or other national EU AI Act implementers. Dr. Desara Dushi, Policy Counsel in our Brussels office, anticipates “a sharp focus on controlling the use of synthetic data that fuels harmful content, with the rise of advanced emotional chatbots and the proliferation of deepfakes.” This could happen through “more robust and specific guidelines targeting generative AI’s risks.”
3. International Data Transfers will come back on top of the Global Privacy agenda
As I anticipated last year in my 2024 predictions, international data transfers started intertwining with the broader geopolitical goals of countries caught in the AI race. This trend will become even more visible in 2025, when we expect that issues related to international data transfers will come back to the top of the Global Privacy agenda, fueled this time not only by the geopolitics of AI development, but also by the broader dynamic between a new European Commission in Brussels and a new administration in Washington DC.
“I think transatlantic data transfers issues will be brought back to center stage in the dynamics of EU’s implementation of digital regulations like the DSA and the DMA on one side, and the priorities of the new administration in the US on the other side,” foresees Lee Matheson, who is based in our Washington DC office and who closely follows international data transfers. But, this time around, the pressure on the continuity of data flows between the US and the EU might first come from the US side.
Lee thinks we should follow closely what happens with Executive Order (E.O.) 14117 “Preventing Access to Americans’ Bulk Sensitive Personal Data and United States Government-Related Data by Countries of Concern,” an instrument adopted last year which bans transfers of bulk sensitive data of Americans outside of the U.S. in specific circumstances and only towards designated countries of concern (currently China, Iran, Russia, Venezuela, Cuba and North Korea). The Executive Order could be left as is, amended, repealed, or replaced by the new administration in Washington. But an interesting point Lee raises is that “E.O. 14117 and its associated DOJ Rules, in particular, provide a framework that could be extended to additional jurisdictions.”
On the other hand, the General Court of the CJEU started early this year with a decision that recognized plaintiffs can obtain compensation for non-material damage if their personal data have been transferred unlawfully, in a case involving transfers made by the European Commission to the U.S. before the Data Privacy Framework became effective. This clarification made by the Court could increase the appetite for challenging the lawfulness of international data transfers. In part due to pressure on more traditional data transfer mechanisms, Lee thinks “the world will see alternative systems for international data transfers, such as the Global Cross Border Privacy Rules system, become substantially more prominent.”
Indeed, transatlantic data flows will only be one of many cross-border data flow stories to follow. “We may well see continuing fragmentation of the cross-border data transfer landscape globally and in APAC into clusters of likeminded jurisdictions, ranging from those like Singapore and Japan that are working to promote trusted data flows (especially through initiatives like the Global CBPRs) to those like Indonesia, India, and Vietnam that have recently renewed their interest in adopting data localization measures,” adds Dominic Paulger, FPF Deputy Director for APAC, from our Singapore office. He also thinks that geopolitical and regulatory trends in the US and the EU will affect dynamics in APAC. “While there will be tension between data localization requirements in some jurisdictions, navigating the right balance will be crucial in shaping both regulatory strategies and business practices across the region in 2025,” concludes Sakshi Shivhare, Policy Associate for FPF APAC.
4. Convergence of youth privacy and online safety will take the spotlight around the world
Convergence of children’s and teen’s privacy and online safety issues into new legislative action, regulatory initiatives, or public policy measures is being emphatically highlighted as a top issue to watch in 2025 by my colleagues across APAC, India, EU, and, to some extent, Latin America.
Dominic explains that jurisdictions in APAC are increasingly incorporating online safety provisions into data protection laws, with some focusing on age verification or age-appropriate design requirements. This highlights tensions between real concerns about young people’s online safety and the substantial privacy risks that are posed by age assurance technologies and related mandates. Experts have raised the need for more cross-cutting conversations to identify and address privacy and security risks created by regulatory efforts. He expects the focus on youth safety to continue throughout 2025, “especially following Australia’s recent ban on social media use for under-16s.” This approach has been criticized by some youth safety and privacy experts while being lauded by others. Several jurisdictions, including Singapore, are considering emulating this model, and many more will be watching to see how it plays out.
“The dialogue around online youth safety will likely intensify in the EU as well, with a notable focus on children’s overall well-being and how that intersects with youth privacy rights,” foresees Desara, who comes to FPF’s Brussels office with extensive research and policy work in this space. “The narrative may broaden to encompass a more holistic approach to child protection, leading toward ‘child rights by design’ requirements,” she adds.
The Child Sexual Abuse Regulation (CSAR) proposal in the EU will continue to be the subject of fierce debate in 2025. The CSAR debate has been characterized by proponents noting the measure’s noble goals and critics characterizing the proposal as technically unworkable and certain to undermine core privacy and security measures. Desara concludes: “With early insights emerging from the UK’s Online Safety Act, the ongoing intersection of privacy and youth safety promises to be a defining issue in the year ahead.”
5. We have a new law, now what? Implementation and groundwork for enforcement will be central in APAC, LatAm, Africa, and EU
Several jurisdictions across all regions will focus on starting the implementation of recently adopted data protection laws. Perhaps this is most visible in the APAC region, which “is seeing a significant maturation of data protection frameworks,” as Sakshi Shivhare notes. Examples include “the promulgation of India’s DPDPA Rules, the phased implementation of Malaysia’s PDPA amendments, the much-awaited finalization of implementing regulations for Indonesia’s PDP Law, and the implementation of Australia’s first tranche of Privacy Act amendments,” explains Josh Lee.
This year, significant attention will be paid to India’s DPDPA Implementing Rules. “With the draft rules now released, attention will shift to public consultations and how the government addresses feedback,” notes Bilal Mohamed, FPF Policy Analyst based in New Delhi. He points out that some of the key concerns discussed so far relate to “the possible reintroduction of data localization norms, (Rules 12(4) and 14) and the practical concerns with the implementation of Verifiable Parental Consent,” also adding to two of the trends we identified above related to international data transfers and children’s privacy and online safety. “Together, these shifts suggest that 2025 will be pivotal for creating a more cohesive, though not necessarily uniform, privacy landscape across APAC,” concludes Sakshi.
Jurisdictions across Africa will face similar challenges this year. Mercy King’ori, FPF Policy Manager for Africa, based in Nairobi, thinks we should expect “more sectoral regulations as controllers and processors continue to seek clarity on the practical implementation of legal provisions in most data protection laws across the continent. This is the continuation of a trend from 2024 where DPAs have been identifying gaps in the implementation of the laws and proposing regulations and guidelines in data-intensive sectors such as education, marketing, and finance.”
She adds that, in parallel, DPAs are dealing with an increasing number of complaints: “The rise of complaints has been due to heightened awareness of data subject rights and DPAs eager to push for compliance with national data protection regimes. The move towards enforcing compliance has even seen DPAs initiate assessments on their own volition, such as South Africa’s Information Regulator leading to enforcement notices and penalties.”
Secondary or implementing regulations are also expected to drive the agenda in Latin America, with a priority on “protecting children’s data, data subject rights, and processing of personal data in the context of AI,” points out Maria Badillo. She specifically notes that “active DPAs in the region, such as those from Brazil and Argentina, have identified AI regulation, exercise of data subject rights, and processing of children’s data among the priority areas for developing secondary regulations and guidance in 2025.”
Even the EU will have implementation fever this year – which is to be expected after intense lawmaking of everything digital and data during the first von der Leyen Commission. “In 2025, we should see a policy shift, prioritizing the application and implementation of existing frameworks, like the EU AI Act, the DSA, the DMA, and so forth, rather than proposals of new legislation,” points out Andreea Serban, who also notes recent messaging in Brussels signaling a decreased focus on regulation, especially in the aftermath of the Draghi report.
This is indeed how the Brussels agenda reads, but it shouldn’t be a surprise if new legislation, like the Digital Fairness Act, will make its way as an official proposal as soon as this year. And with other files like the CSAR still on the legislative train, or the constant “hide and seek” with the ePrivacy Regulation, the Brussels legislation machine might slow down, but it will not halt.
6. Bigger public policy debates will end up shaping global privacy: from “Innovation v. Regulation,” to checks and balances over government access to data
The “Innovation v. Regulation” dichotomy has been omnipresent in the European public debate since the publication of the Draghi report last year, even as some are positing this is a false choice (see Anu Bradford or Max von Thun).
“With a new European Commission taking the reins in Brussels, and with political tides changing across the EU, the innovation versus regulation debate will continue to polarize the digital policy community. Repercussions will be felt in discussions regarding not only the application and enforcement of the DSA and the DMA but also for data protection law as we await new GDPR enforcement rules,” explains Bianca-Ioana Marcu. However, she suggests that this debate might be louder than having effects in practice, as Brussels will move ahead with its regulatory agenda of the new Commission. It is clear, though, that Brussels may experience a “shift towards promoting EU competitiveness,” as Andreea framed it, and that this will impact, even if incrementally, all the “digital agenda” files.
While most of the attention in India might be focused on the DPDPA Implementing Rules, promoting the country’s competitiveness is a bigger goal for many, which could result in regulatory changes supporting it. Bilal signals that there are interesting data-sharing initiatives coming up at a sectoral level. “For instance, MeitY plans to launch an IndiaAI datasets platform to provide high-quality datasets (non-personal data) for AI developers and researchers. Similar initiatives are underway in sectors such as healthcare, e-commerce, and agriculture,” he says. These initiatives are quite similar to the EU Data Spaces, which are also expected to advance. “It will be fascinating to see how these initiatives align with the DPDPA, and how this shapes the definition of ‘non-personal data’ in India,” adds Bilal.
One last bigger public policy debate that may impact concrete data protection this year remains the checks and balances over government access to personal data. For instance, Rivki, based in our Tel Aviv office, highlights that this year she expects the privacy community to confront the long-term privacy consequences of the exceptional measures taken by the government during the war, such as storage of fingerprints in databases or authorization of intrusion into security cameras without consent. The privacy community will likely be focused to “ensure that any measures implemented during this period do not persist or become the new standard for privacy,” she says.
Government access to data shapes up to also be top of mind in policy debates in India, with Bilal noting that “on a broader scale, constitutional challenges related to government exemptions under the DPDPA may surface in the Supreme Court once the implementing rules are officially notified.”
7. A dark horse prediction and further reading
Before ending the round-up of issues to follow in 2025 in Global Privacy, I will make my dark horse prediction: The reopening of the GDPR might appear more convincingly on the regulatory agenda this year, once the procedural reform is done. What seemed almost sacrilegious a couple of years ago will now look more likely, especially in the light of DPAs becoming active in enforcing the GDPR on AI systems, and eventual hiccups of non-DPA enforcers applying the digital strategy package at the intersection with GDPR provisions.
Finally, for a good understanding of what the year might bring to US policymaking, check out this analysis by Jules Polonetsky, FPF CEO, for TechPolicy Press, “2025 May be the Year of AI Legislation: Will we see Consensus Rules or a Patchwork?,” as well as FPF Senior Director for U.S. Legislation Keir Lamont’sblog, “Five Big Questions (and Zero Predictions) for the US State Privacy Landscape in 2025.”
Twelve Privacy Investments for Your Company for a Stronger 2025
FPF has put together a list of Twelve Privacy Investments for Your Company for a Stronger 2025 that reflects on new perspectives on the work that privacy teams do at their organizations. We hope there is something here that’s useful where you work, and we’d love to hear other ideas and feedback.
Privacy Investments for Your Company for a Stronger 2025
Re-review your privacy notice and other disclosures to ensure you are covering any new data collection or uses planned in 2025 – including secondary uses of data – that may be going on. This has been a theme of FTC actions in 2024 and a measure that would enhance transparency as suggested by the EDPB in the latest Opinion on data protection and AI models. Since new uses of data for AI have been prompting consumer alarm, allow time for explanation, education and communication to support user understanding of the value proposition. Consider opt-out options for new uses and opt-in for any significant changes or uses of sensitive data.
Take steps to minimize your processing of precise location data or sensitive data. Explore using less precise alternatives, ensuring limited retention and effective de-identification techniques, or uses of other kinds of data that have less risk of creating sensitive inferences.
Take a good look at vendor management. Don’t just rely on contractual constraints. If there are no technical monitoring or other controls in place, get a plan for some in product roadmaps.
Deepen your relationships with various business teams (sales and marketing, product teams, etc.) so you know what they’re planning and can help develop a forward-looking compliance strategy.
Help FPF gather information about the operational implications of new or prospective laws so we can effectively explain data uses and tech to policymakers to help them craft policy and guidance that strikes the right balance for accountable data use.
While comprehensive federal privacy legislation may not be imminent, the states and the attorneys general are still pretty concerned about privacy, as are governments around the world. Deepen your connections with the AG offices and understand their perspectives. Meet key local legislators and build relationships by supporting their interest in being educated about emerging technologies and their impact.
Although the outcomes of court cases have been unclear, it is clear that protections for users under 18 will continue to be a focus of legislative activity and enforcement. Consider options that can provide for more limited uses of teen data.
Take special care with data that may implicate personal health information and prepare to be vigilant in the case that law enforcement comes knocking for information about a user that could reveal their reproductive health status. We recommend our Health and Wellness Policy Brief.
Map your international data flows and track any instances where internal processes or third-party relationships could put data within reach of one of the U.S. government’s “countries of concern.” Diversify your data transfers tools with an eye on the global landscape, as cross-border data flows restrictions are increasingly expanding beyond the EU-US dynamic.
If you are doing business in India, make sure to have good data governance and data inventories in place for your operations. Major changes are coming, with the implementation date of the DPDPA in sight after the draft implementing rules were published at the very beginning of this year. Keep a close track on India’s DPDPA Rules and stay sufficiently informed to provide feedback during the public consultation exercise.
Align your teams on how your company will use AI tools internally to automated work flows and make all of this work easier, including applying AI tools to making privacy compliance easier, like handling data subject requests or assessing whether your policies could be made easier to read and access. FPF’s new report may help.
Tidy up your clean room practices. You may view your partners as trusted, but the FTC may consider them potential attackers from a de-identification point of view. Ensure technical controls are credible.
CEO Jules Polonetsky: 2025 May be the Year of AI Legislation: Will We See Consensus Rules or a Patchwork?
In 2024, lawmakers across the United States introduced more than 700 AI-related bills, and 2025 is off to an even quicker start, with more than 40 proposals on dockets in the first days of the new year. In Washington D.C., a post-election reshuffle presents unique opportunities to address AI issues on a national level, with one party controlling the White House and both houses of Congress. But, while Congress has shown strong interest in AI generally, the 119th Congress seems more likely to prioritize other tech issues, such as online speech and child safety, over regulating the consumer protection aspects of AI.
With contributions from Judy Wang, Communications Intern
2024 was a landmark year for the Future of Privacy Forum, as we continued to grow our privacy leadership through research and analysis, domestic and global meetings, expert testimony, and more – all while commemorating our 15th anniversary.
Expanding our AI Footprint
While 2023 was the year of AI, 2024 was the year of navigating how AI was used in practice and its influence across policy and emerging technologies. FPF further expanded its AI with the launch of FPF’s Center for Artificial Intelligence.
The FPF Center for AI supports FPF’s role as the leading pragmatic and trusted voice for those who seek impartial, practical analysis of the latest challenges for AI-related regulation, compliance, and responsible use.
Earlier this month, the Center officially launched its first report, “AI Governance Behind The Scenes: Emerging Practicers For AI Impact Assessments,” which examines the key considerations, emerging practices, and challenges that arise in the evaluations companies use to identify and address potential risks associated with AI models and systems.
Check out some other highlights of FPF’s AI work this year:
Detailed the complex policy, legal, and technical challenges posed by California’s AB 1008.
Produced a new report on confidential computing and how it differs from other PETs, as well as an in-depth analysis of its sectoral applications and policy considerations.
Presented the Government of Singapore with the inaugural Global Responsible AI Leadership Award for the country’s pragmatic work in establishing frameworks for AI regulation and governance. The award also granted privacy experts Jim Halpert and Patrice Ettinger its Career Achievement Award and Excellence in Career Award.
Updated our Generative AI internal compliance document with new content addressing organizations’ ongoing responsibilities, specific concerns (e.g., high-risk uses), and lessons taken from recent regulatory enforcement related to these technologies.
With the enactment of the Digital Personal Data Protection Act (DPDPA), provided five ways the DPDPA could shape the development of AI in India.
Highlighted the African Union AI Continental Strategy and how it centers AI governance as a foundational aspect for the successful development and deployment of AI in the continent.
Published a Two–Page Fact Sheet overview of The Council of Europe’s (CoE) Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law (Framework Convention on AI).
Summarized key elements of the Colorado AI Act and identified significant observations about the law.
Bringing Our Expertise Across the Globe
2024 continued to be pivotal for our global experts, as they followed privacy developments across the Asia Pacific, Europe, Latin America, and Africa. We also participated in key events in Brussels, South Korea, France, Tokyo, and Tel Aviv.
Europe
FPF brought together European data protection experts through high-level convenings, blogs, and reports. We developed key takeaways from the Commission’s second Report on the GDPR, with an overview and analysis of the findings from various stakeholders, including DPAs, and a new key resources page covering all aspects of the EU AI Act. At CPDP.ai, a multi-stakeholder comparative panel, we explored what we can learn from regional and international approaches to AI regulation and how these may facilitate a more global, interoperable approach to AI laws. Finally, we held our 8th Annual Brussels Symposium in collaboration with the Brussels Privacy Hub of Vrije Universiteit Brussel (VUB), where lively in-person discussions took place covering this year’s topic, “Integrating the AI Act in the EU Data Governance Ecosystem: Bridging Regulatory Regimes.”
The Asia-Pacific
FPF’s APAC office entered its fourth year of continued growth and became a main component of our global research. We provided a comprehensive analysis of strategy documents and key regulatory actions of the DPAs in 10 jurisdictions, published or developed in 2023 and 2024, setting out regulatory priorities for the following years. This includes Australia, China, Hong Kong, the Special Administrative Region of China (SAR), Japan, Malaysia, New Zealand, the Philippines, Singapore, South Korea, and Thailand.
In July, FPF participated in Personal Data Protection Week 2024 (PDP Week), an event organized and hosted by the Personal Data Protection Commission of Singapore, examining emerging technologies, including generative AI, India’s landmark data protection legislation, and PETs. Our second annual Japan Privacy Symposium in conjunction with the 62nd Asia-Pacific Privacy Authorities (APPA) Forum, was a big success. In cooperation with the Personal Information Protection Commission of Japan (PPC), the Japan DPO Association, and S&K Brussels LPC, this year’s Symposium featured a keynote speech from Commissioner OHSHIMA Shuhei, which focused on emerging data protection and privacy trends in Japan.
We dissected “neurorights,” a set of proposed rights that specifically protect mental freedom and privacy, which have captured the interest of many governments, scholars, and advocates, which is very apparent in Latin America. FPF looked into several countries that are actively seeking to enshrine these rights in law, including Chile, Mexico, and Brazil.
The African Continent
We gave an overview of harmonization efforts in regional and continental data protection policies in Africa and the role of Africa’s 8 Regional Economic Communities (RECs) and submitted comments to the Nigeria Data Protection Commission (NDPC) on the proposed General Application and Implementation Directive (GAID).
Federal and State U.S. Legislation
FPF played a critical role in informing both federal and state government entities on protecting data privacy interests.
We provided recommendations and filed comments with the following:
U.S. Department of Transportationin response to their request for information on opportunities and challenges of AI transportation and again in response to the National Highway Traffic Safety Administration (NHTSA) and the DOT Advanced Notice of Proposed Rulemaking regarding advanced impaired driving prevention technology.
Federal Trade Commission (FTC)in response to its request for comment on the Children’s Online Privacy Protection Act (COPPA) proposed rule, again in response to the FTC’s Supplemental Notice of Proposed Rulemaking as well as in response to its request for comment on the Children’s Online Privacy Protection Act (COPPA) proposed rule.
Office of Management and Budget (OMB)regarding the agency’s Request for Information on how privacy impact assessments (PIAs) may mitigate privacy risks exacerbated by AI and other advances in technology and again to Request for Information (RFI) regarding responsible procurement of AI in government.
Department of Justice (DOJ) regarding the Advance Notice of Proposed Rulemaking on Access to Americans’ Bulk Sensitive Personal Data and Government-Related Data by Countries of Concern (ANPRM).
Bureau of Industry and Security (BIS) and the United States Department of Commerce’s (DOC) Advanced Notice of Proposed Rulemaking (ANPRM)in response to securing the information and communications technology and services supply chain of connected vehicles.
California Civil Rights Councilin response to their proposed modifications to the state Fair Employment and Housing Act (FEHA) regarding automated-decision systems (ADS) and again regarding their Proposed Modifications to the Employment Regulations Regarding Automated-Decision Systems.
Federal Communications Commission (FCC)in response to the FCC’s Notice of Proposed Rulemaking (NPRM) on the use of artificial intelligence (AI) to generate content for political advertisements and again in response to the Notice of Inquiry (NOI) on technologies that can alert consumers that they may be interacting with an AI-generated call based on real-time phone call content analysis.
New York State Senateto inform forthcoming rulemaking for the implementation of a pair of bills aimed at creating heightened protections for children and teens online.
D.C. Council Committee on Healthgave feedback on the role of consent in the Consumer Health Information Privacy Protection Act of 2024 (“CHIPPA”).
This year also marked the 14th annual Privacy Paper for Privacymakers Award on research for policymakers in the U.S. Congress, U.S. federal agencies, and international data protection authorities. The event was kicked off at Capitol Hill, featuring an opening keynote by U.S. Senator Peter Welch (D-VT). FPF honored winners of internationally focused papers in a virtual conversation the following week.
Youth & Education
In 2024, federal and state policymakers continued to work on legislation that protects children online, including the Kids Online Safety and Privacy Act (KOSPA) and the California Age-Appropriate Design Code Act (AADC). FPF’s work includes a breakdown of bills related to children’s online safety and a checklist designed for K-12 schools to help vet generative AI tools.
FPF published a blog in August that contextualized the Kids Online Safety and Privacy Act (KOSPA), which includes two bills that gained significant traction in the Senate in recent years: the Kids Online Safety Act (KOSA) and Children and Teens Online Privacy Protection Act (“COPPA 2.0”).
In July, we explored how the California Age-Appropriate Design Code Act (AADC) catalyzed conversations in America around protecting kids and teens online. We also analyzed the implications of the CA AADC and the evolving landscape of children’s online privacy.
As children spend more time online, lawmakers have continued introducing legislation to enhance the privacy and safety of kids’ and teens’ online experiences beyond the Children’s Online Privacy Protection Act (COPPA) framework. FPF analyzed the status quo of knowledge standards under COPPA and provided key observations on the current knowledge standards in various state privacy laws.
We also released a checklist and accompanying policy brief designed specifically for K-12 schools to help them vet generative AI tools for compliance with student privacy laws, outlining key considerations when incorporating generative AI into a school or district’s edtech vetting checklist.
With young people adopting immersive technologies like extended reality (XR) and virtual world applications, companies have expanded their presence in digital spaces, launching brand experiences, advertisements, and digital products. FPF analyzed recent regulatory and self-regulatory actions related to youth privacy in immersive spaces while also pulling out key lessons for organizations building spaces in virtual worlds.
Diving Deeper into Privacy Enhancing Technologies (PETs) Research and Large Language Models (LLMs)
2024 also marked further exploration into Privacy Enhancing Technologies (PETs) with FPF’s establishment of the PETs Research Coordination Network (RCN) and the creation of the PETs Repository. Additionally, we further explored large language models (LLMs) and whether or not they contained personal information.
In February, the National Science Foundation (NSF) and the Department of Energy (DOE) awarded FPF grants to support its establishment of a Research Coordination Network (RCN) for Privacy-Preserving Data and Analytics. FPF’s work will support developing and deploying Privacy Enhancing Technologies (PETs) for socially beneficial data sharing and analytics.
In July, FPF also launched the Privacy-Enhancing Technologies (PETs) Research Coordination Network (RCN), bringing together a group of cross-sector and multidisciplinary experts dedicated to exploring PETs’ potential in AI and emerging technologies and stewarding their adoption and scalability. Building on these initiatives and other efforts, FPF launched the PETs Repository, a webpage that consolidates available resources and developments around the development and deployment of PETs.
FPF further delved into LLMs to explore if they contain personal data. If they do, what requirements must companies follow for processing personal data for training AI models? Recent analysis focused on Brazil’s Autoridade Nacional de Proteçao de Dados Pessoais (ANPD) and issuing a preliminary decision on the legal basis for processing personal data in LLMs. We also wrote a blog on California’s recently passed Assembly Bill 1008 applying CCPA privacy rights to LLMs and whether personal data exists in an AI model. An online discussion in a LinkedIn Live featuring FPF experts also delved into LLMs and personal data.
Facilitating Privacy Thought Leadership Home and Abroad
To celebrate the milestone of 15 years, FPF convened leading data protection regulators and FPF members at our 15th Anniversary Spring Social. The event also marked the transition of FPF Board Chairman Christoper Wolf, recognizing his founding role at FPF and many years of leadership. We welcomed our new Board Chair, Alan Raul.
High-level engagement from the year included:
Our first DC Privacy Forum: AI Forward, accompanied by the launch of FPF’s new Center for Artificial Intelligence.
The Research Coordination Network (RCN) for Privacy-Preserving Data Sharing and Analytics was launched with a virtual kick-off and White House Roundtable events. The Virtual Kick-off event featured over 40 global experts who helped shape the RCN’s work for the next three years. Hosted by the White House Office of Science and Technology Policy, we began a collaborative effort to advance PETs and their use in developing more ethical, fair, and representative AI.
The above is only a partial list of FPF initiatives from the year but highlights some of our major achievements. We thank all those who contributed, participated, advised and supported. Continue to follow FPF’s work by subscribing to our monthly briefing and following us on LinkedIn, Twitter/X, and Instagram. On behalf of the FPF team, we wish you a very Happy New Year and look forward to what’s to come in 2025!
OAIC’s Dual AI Guidelines Set New Standards for Privacy Protection in Australia
On 21 October 2024, the Office of the Australian Privacy Commissioner (OAIC) released two sets of guidelines (collectively, “Guidelines”), one for developing and training generative AI systems and the other one for deploying commercially available “AI products”. This marks a shift in OAIC’s regulatory approach from enforcement-focused oversight to proactive guidance.
The Guidelines establish rigorous requirements under the Privacy Act and its 13 Australian Privacy Principles (APPs), particularly emphasizing accuracy, transparency, and heightened scrutiny of data collection and secondary use. Notably, the Guidelines detail conditions that must be met for lawfully collecting personal information publicly available online for purposes of training generative AI, including through a detailed definition of what “fair” collection means.
This regulatory development aligns with Australia’s broader approach to AI governance, which prioritizes technology-neutral existing laws and voluntary frameworks while reserving mandatory regulations for high-risk applications. However, it may signal increased regulatory scrutiny of AI systems processing personal information going forward.
This blog post summarizes the key aspects of these Guidelines, their relationship to Australia’s existing privacy law, and their implications for organizations developing or deploying AI systems in Australia.
Background: AI Regulation in Australia and the Role of OAIC
Australia, like many jurisdictions globally, is currently in the process of developing its approach to AI regulation. Following a public consultation on “Safe and Responsible AI in Australia” in 2023, the Australian Government issued an “Interim Response” outlining an approach that seeks to regulate AI primarily through existing, technology-neutral laws and regulations, prioritizing voluntary frameworks and soft law mechanisms, and potentially reserving future mandatory regulations for high-risk areas. This stands in contrast to the European Union’s AI Act, which introduces a comprehensive regulatory framework covering a broader range of AI systems.
While the Australian Government has been giving shape to the country’s overall approach to AI regulation, several Australian regulators, as part of the Digital Platform Regulators (DP-REG) Forum, have been closely following developments in AI technology, co-authoring working papers on large language models (2023) and more recently, multimodal foundation models (2024).
The OAIC issued its first ever guidance on complying with the Privacy Act in the context of AI in a DP-REG working paper on multimodal foundation models released in September 2024. It followed up the next month with two sets of more detailed guidelines that provide practical advice for organizations on complying with the Privacy Act and the APPs in two important contexts:
The “Guidance on Developing and Training Generative AI Models” (AI Development Guidelines) targets developers andfocuses specifically on privacy considerations that may arise from training generative AI models on datasets containing personal information. It identifies obligations regarding the collection and processing of such datasets and highlights specific challenges that may arise from practices like data scraping and obtaining datasets from third parties.
The “Guidance on Privacy and the Use of Commercially Available AI Products” (AI Product Guidelines) is directed at organizations deploying commercially available AI systems that process personal information, in order to offer products or services internally or externally. It also covers the use of freely accessible AI products, such as AI chatbots.
Both Guidelines are complementary, acknowledging and referring to each other, while addressing distinct phases in the AI lifecycle and different stakeholders within the broader AI ecosystem. However, they are not intended to be comprehensive. Instead, they aim to highlight the key privacy considerations that may arise under the Privacy Act when developing or deploying generative AI systems.
The Guidelines Recognize Both AI’s Benefits and Significant Privacy Risks
Both Guidelines acknowledge AI’s potential to benefit the Australian economy through improved efficiency and enhanced services. However, they also emphasise that AI technologies’ data-driven nature creates substantial privacy risks that must be managed carefully. Key risks highlighted include:
Loss of control for individuals over how their personal information is used in AI training datasets.
Bias and discrimination: Inherent biases in training data can be amplified, leading to discriminatory outcomes.
Inaccuracies: Outputs of AI systems may be inaccurate and are not always easily explainable, impacting trust and decision-making.
Re-identification: Aggregation of data from multiple sources increases the risk of individual re-identification.
Potential for misuse: Generative AI in particular can be misused for malicious purposes, including disinformation, fraud, and creation of harmful content.
Data breaches: Vast datasets used in training increase the risk and potential impact of data breaches.
To address these risks, both Guidelines emphasize that it is important for organizations to adopt a “Privacy by Design” approach when developing or deploying AI, and conducting Privacy Impact Assessments to identify and mitigate potential privacy impacts throughout the AI product lifecycle.
The Guidelines Establish Rigorous Accuracy Requirements
Organizations are required under APP 10 to take reasonable steps to ensure personal information is accurate, up-to-date, and complete when collected, and also relevant when used or disclosed.
Both Guidelines emphasize that the accuracy obligation in APP 10 is vital to avoid the risks that may arise when AI systems handle inaccurate personal information, which range from incorrect or unfair decisions, to reputational or even psychological harm.
For AI systems, identifying “reasonable steps” under APP 10 requires organizations to consider:
the sensitivity of the personal information being processed;
the organization’s size, resources, and expertise – factors which affect their capacity to implement accuracy measures; and
the potential consequences of inaccurate processing for individuals, as higher risks of harm necessitate more robust safeguards.
The Guidelines emphasize that generative AI models in particular present distinct challenges under APP 10 because they are trained on massive internet-sourced datasets that may contain inaccuracies, biases, and outdated information which can be perpetuated in their outputs. The probabilistic nature of these models also makes them prone to generating plausible but factually incorrect information, and their accuracy can deteriorate over time as they encounter new data or their training data becomes outdated.
To address these challenges, the Guidelines recommend that organizations should implement comprehensive measures, including thorough testing with diverse datasets, robust data quality management, human oversight of AI outputs, and regular monitoring and auditing. The key theme is that organizations must take proactive steps to ensure accuracy throughout the AI system’s lifecycle, with the stringency of measures proportional to the system’s intended use and potential risks.
The Guidelines Make Transparency a Core Obligation Throughout the AI System Lifecycle
The OAIC’s guidelines also establish transparency as a fundamental obligation throughout the lifecycle of an AI system. Notably, however, the guidelines see transparency as an obligation that operates on multiple levels.
The transparency obligation is rooted in APP 1, which requires organizations to manage personal information openly and transparently (including by publishing a privacy policy), and APP 5, which requires organizations to notify individuals about how their personal information is collected, used, and disclosed.
The Guidelines emphasize that in an AI context, privacy policies must provide clear explanations of how AI systems process personal information and make decisions. When AI systems collect or generate personal information, organizations must give timely and specific notifications that provide individuals genuine insight into how their information is processed and empower them to understand AI-related decisions that affect them.
To support this transparency framework, organizations must invest in comprehensive staff training to ensure employees understand both the technical aspects and privacy implications of their AI systems, enabling them to serve as knowledgeable intermediaries between complex AI technologies and affected individuals. This human oversight is to be complemented by regular audits and monitoring, which help organizations maintain visibility into their AI systems’ performance, address privacy issues proactively, and generate the information needed to maintain meaningful transparency with individuals.
The Guidelines Place Heightened Scrutiny on Data Collection and Secondary Use
The Guidelines underscore the need for heightened scrutiny on data collection practices under APP 3 and the secondary use of personal information under APP 6 in the AI context. The Guidelines also emphasize that organizations may face distinct challenges across different collection methods.
With regard to challenges in data collection methods, the AI Developer Guidelines highlight that the collection of training datasets that may contain personal information through web scraping – defined as “the automated extraction of data from the web” – raises several concerns under APP 3.
Notably, the Guidelines caution that developers should not automatically assume that information posted publicly can be used to train AI models. Rather, developers must ensure that they comply with APP 3 by demonstrating that:
It would be unreasonable or impracticable to collect the personal information directly from the individuals concerned;
The collection of personal information through web scraping is lawful and fair. Noting that collection of personal information via web scraping is often done without the direct knowledge of data subjects, the Guidelines identify 6 factors to consider in determining whether such collection is fair:
Individuals’ reasonable expectations;
The sensitivity of the information;
The intended purpose of the collection, including the intended operation of the AI model;
The risk of harm to individuals;
Whether the individuals concerned intentionally made the information public; and
The steps the developer will take to prevent privacy impacts, including deletion, de-identification, and mechanisms to increase individuals’ control over how their information is processed; and
Insofar as the dataset contains “sensitive information” (as defined under Australia’s Privacy Act), individuals have provided express consent for this information to be used to train an AI model.
The Guidelines therefore do not prohibit the collection of training data through web scraping, but they lay out detailed requirements that must be fulfilled to lawfully do so. Notably, the Guidelines define what “fair” collection of personal data through web scraping requires, bringing forward several dimensions to consider, from individuals’ perception of the collection and attitude when making the information public, to intrinsic characteristics of the information collected, to extrinsic assessments of risks of harm, to technical and organizational measures that are privacy-enhancing. The Guidelines acknowledge that organizations may face significant challenges in meeting many of these requirements.
Further, the Guidelines note that many of the above considerations under APP 3 also apply to third-party datasets.The Guidelines therefore recommend that organizations seeking to rely on such datasets conduct thorough due diligence regarding data provenance and the original circumstances in which the information was collected.
By contrast, when organizations seek to use their existing datasets to train AI models, the main consideration under the Guidelines is complying with APP 6, whichgoverns secondary use of personal information.This principle requires organizations to either obtain informed consent or carefully evaluate whether AI training aligns with individuals’ reasonable expectations based on the original collection purpose.
Throughout all methods, organizations must adhere to the principle of data minimization, limiting collection of personal information to what is strictly necessary, and must also consider techniques like de-identification or the use of synthetic data to further reduce risks to individuals.
The AI Product Guidelines Require Organizations to Pay Attention to Privacy Throughout the Deployment Lifecycle
The AI Product Guidelines advocates for a “privacy by design” approach that integrates privacy considerations throughout the AI product lifecycle.
They specifically call on organizations to conduct thorough due diligence before adopting AI products. Recommended steps include assessing the appropriateness of these products for their intended use, evaluating the quality of training data, understanding security risks, and analyzing data flows to identify parties that can access inputted information.
In the deployment and use phase, organizations must exercise strict caution when inputting personal information into AI systems, particularly systems that are provided to the public for free, such as AI chatbots. They emphasize the need to comply with APP 6 for any secondary use of personal information, minimizing data input, and maintaining transparency with individuals about how their information will be used.
While the AI Product Guidelines primarily focus on APPs 1, 3, 5, 6, and 10, they also emphasize that several other APPs may play crucial roles, depending on how the AI product is being used. These APPs include:
APP 8, which governs cross-border data transfers when AI systems process information on overseas servers;
APP 11, which requires reasonable security measures to protect personal information in AI systems from unauthorized access and misuse; and
APPs 12 and 13, which ensure individuals can access and correct their personal information, respectively.
Looking Ahead: The Guidelines Signal Increased Privacy Scrutiny for AI
The OAIC’s guidelines represent a significant step in regulating AI use in Australia that not only aligns with broader Australian government initiatives, such as the Voluntary AI Safety Standard, but also reflects a broader global trend of data protection authorities issuing rules and guidance on AI governance through existing privacy laws.
The OAIC’s guidelines establish a foundation for privacy-protective AI development and deployment, but organizations must remain vigilant as both the technology and regulatory requirements continue to develop. The release of the Guidelines may hint at increased regulatory scrutiny of AI systems that process personal information, meaning that organizations that develop or deploy such systems will need to carefully consider their obligations under the Privacy Act and implement appropriate safeguards.
Insights from the Second Japan Privacy Symposium: Global Data Protection Authorities Discuss Their 2025 Priorities, from AI, to Cross-Regulatory Collaboration
The Future of Privacy Forum (FPF) hosted the Second Japan Privacy Symposium (Symposium) in Tokyo on November 15, 2024. The Symposium brought together leading data protection authorities (DPAs) from around the world to discuss pressing issues in privacy and data governance. The Symposium featured in-depth discussions on international collaboration, artificial intelligence (AI) governance, and the evolving landscape of data protection laws.
The Symposium kickstarted the Personal Information Protection Commission of Japan’s (PPC) Japan Privacy Week, and was an official side-event of the 62nd Asia-Pacific Privacy Authorities (APPA) Forum (APPA 62). FPF is grateful for the collaboration and support from the PPC, the Japan DPO Association, and S&K Brussels LPC.
In this blog post, we share some of the key takeaways from the Symposium.
Japan Privacy Symposium features global privacy regulators in Tokyo
The Symposium welcomed an esteemed line-up of speakers. Commissioner Shuhei Oshima from the PPC delivered the opening keynote. In his keynote, Commissioner Oshima shared about the PPC’s regulatory priorities for 2025. These included cross-border data transfers and the Data Free Flow with Trust initiative, as well as further collaboration with the G7 DPAs and bilaterally with various international regulators.
Following the keynote, Gabriela Zanfir-Fortuna, Vice-President for Global Privacy at FPF moderated a panel on the regulatory strategies of APAC and global DPAs in 2024 and beyond. Gabriela was joined by Philippe Dufresne, Privacy Commissioner of Canada, Office of the Privacy Commissioner of Canada, Ashkan Soltani, Executive Director of the California Privacy Protection Agency (CPPA), Dr. Nazri Kama, Commissioner, Personal Data Protection Commissioner’s Office of Malaysia (PDPD), Thienchai Na Nakorn, Chairman, Personal Data Protection Committee of Thailand (PDPC), and Josh Lee Kok Thong, Managing Director for Asia-Pacific at FPF.
Regulators in APAC have some common priorities, such as cybersecurity and cross-border data transfers
The panel kicked off with highlights from a recent report published by FPF’s APAC office, “Regulatory Strategies of Data Protection Authorities in the Asia-Pacific Region: 2024, and Beyond”, presented by Josh. In line with similar FPF work focusing on the EU, Latin America and Africa, the report provides a comprehensive analysis of strategy documents and key regulatory actions of DPAs in 10 major jurisdictions in Asia-Pacific, as well as an overview of key trends in the region.
There are three top common priorities for APAC’s major DPAs:
First, cybersecurity and data breach responses, with 90% of the DPAs included in the Report prioritising this. However, jurisdictions are at various stages of implementing measures in these areas, while enforcement approaches also differ significantly.
Second, cross-border data transfers, which are a priority for 80% of APAC DPAs. Jurisdictions are similarly taking a diversity of approaches, from taking a leading role in international initiatives, such as the Global Cross-Border Privacy Rules (CBPR) System (for instance, Japan and Singapore), to promoting the use of standardized contractual clauses (for instance, China, Japan and Singapore).
Third, AI governance, with 70% of regulators prioritising this. Some have developed comprehensive policy frameworks and regulations for AI, while others have focused on issuing guidelines or addressing AI within existing regulatory structures.
Cross-regulatory and cross-border collaboration is a shared priority for regulators in APAC and beyond
During the panel discussion, one top regulatory priority surfaced was on cross-border collaboration. Commissioner Dufresne emphasized the importance of international cooperation in addressing privacy challenges. “At the OPC, we will continue to be focused on topics such as international collaboration,” he noted. Commissioner Dufresne discussed the OPC’s efforts to collaborate with domestic and international partners, including other regulators in fields such as competition, copyright, broadcasting, telecommunications, cybersecurity, and national security. “Data protection is key to so many of those things,” Commissioner Dufresne said. “It touches other regulators, so working very closely is something we’ve been discussing, including at the G7.”
Expanding regional and international collaboration was similarly a key priority for Malaysia. Commissioner Nazri noted that Malaysia’s PDPD had visited fellow regulators in the UK, EU, Japan, South Korea and Singapore. The PDPD had also just joined the APPA Forum, as well as the APEC Cross-Border Privacy Enforcement Arrangement (CPEA). Going forward, Commissioner Nazri noted that the PDPD would be “moving towards” applying for the Global Cross-Border Privacy Rules (CBPR) certification system. The PDPD is also taking steps towards meeting the EU’s adequacy requirements, with Commissioner Nazri expressing hope that Malaysia would attain EU adequacy “in the next two years.”
Similarly, Chairman Thienchai from Thailand’s PDPC noted that it had sent delegations to attend Global CBPR workshops, and that the PDPC could also be applying to be a member of the Global CBPR system soon.
Regulators are balancing between AI innovation and risk, while managing an ever-growing pool of AI-related issues
AI remains a top concern for regulators worldwide. Commissioner Dufresne stated that ensuring the protection of privacy in the context of emerging and changing technology is a key priority for the OPC. “Certainly, generative AI and other emerging technologies like quantum computing and neurorights are changing the landscape,” he said. “We need to use innovation to protect data.”
He emphasized the importance of leveraging technology to protect privacy, noting that AI can be used as a tool against threats like deepfakes. The OPC is also looking to work with cross-regulatory partners to address issues such as synthetic media. “We’re looking to work with cross-regulatory partners in identifying specific areas and seeing what are the common areas or perhaps different areas of privacy and competition with a specific topic like synthetic media,” he explained.
California’s CPPA has also been at the forefront of rule-making and enforcement actions pertaining to AI and automated decision-making challenges. In this regard, Director Soltani observed that “there is no AI without PI (personal information).” The CPPA has thus had to develop deep expertise in AI while acting as California’s privacy regulator. Besides focusing on rule-making, the CPPA has been conducting enforcement sweeps in various sectors, starting with the connected vehicle sector.
The task of applying data protection laws to AI and issuing relevant industry guidance is also one that Thailand’s PDPC is working on. Chairman Thienchai noted that the PDPC had “established a working group study” on how AI is impacting the protection mechanism under Thailand’s Personal Data Protection Act, with results expected in the first quarter of 2025. Thailand’s PDPC is also working to issue guidelines on the intersection of AI and the PDPA. The guidelines could state, for instance, that in using personal data to train AI systems, developers have to do so on an anonymised basis.
Regulators continue to work on implementing updates to data protection laws to deal with new and emerging challenges
A third theme that emerged from the panel discussion was how regulators were planning to continue working on updates to their data protection laws – and implementing them – in 2025.
For California’s CPPA, Director Soltani highlighted that his agency was deeply engaged in rule-making, especially in these areas: (a) cybersecurity, where companies in California will be required to perform and submit cybersecurity assessments to the CPPA; (b) data protection impact assessments or risk assessments, where companies will be required to perform such assessments including where they deploy AI tools; and (c) automated decision-making technologies and AI. Director Soltani also highlighted the ongoing work of implementing aspects of the California Consumer Privacy Act (CCPA). For instance, with the CPPA’s Data Broker Registry, the CPPA is working on setting up a one-stop shop by January 2026, where Californians will “have the ability to go to one place and request that all of their data be deleted from all of these companies.”
For Malaysia, Commissioner Nazri provided an update on recent amendments to Malaysia’s Personal Data Protection Act (PDPA) that were passed in late-2024. “The amendment was presented to our national parliament in July this year and was officially approved on July 31,” he noted. Commissioner Nazri noted several key changes to the PDPA, including:
A requirement to appoint a Data Protection Officer (DPO);
A mandatory data breach notification system;
Introducing responsibilities for data processors;
Introducing data portability rights;
Revising conditions for cross-border data transfers; and
Increasing penalties for non-compliance.
Commissioner Nazri also noted that the PDPD would be issuing 19 new documents in tranches throughout 2025. Specifically, these were nine pieces of subsidiary legislation, two circulars (or Commissioner’s Orders), seven guidelines, and one standard. Commissioner Nazri further shared that work was ongoing to re-formulate the PDPD into an independent Commissioner’s Office.
For Thailand, while Thailand had passed its PDPA in 2021, Chairman Thienchai noted that Thailand’s PDPA contained a review requirement to update the law if necessary. Chairman Thienchai thus noted that the PDPC would be working in 2025 to introduce a proposal to amend the PDPA “to catch up with the global community.” Further, Chairman Thienchai acknowledged challenges with data breaches, especially in the public sector, and emphasized the need for coordination among agencies. “We have to coordinate with other agencies to improve the enforcement mechanism in the PDPA,” he said.
Finally, the PDPC is prioritizing cross-border data transfers. “We issued some subordinate laws related to cross-border transfers and we adopted ASEAN Model Contractual Clauses (MCCs) and also EU Standard Contractual Clauses (SCCs) in our subordinate laws,” Chairman Thienchai explained, concluding with an update that his office is “promoting ASEAN MCCs with the Thai Chamber of Commerce.”
Conclusion
The second edition of the Japan Privacy Symposium showcased the shared challenges and priorities among global data protection authorities. From AI governance to cross-regulatory collaboration and legal reforms, the Symposium highlighted the need for continued dialogue, cooperation, and information-sharing.
Following the Symposium, FPF was also honored and privileged to have been invited to participate in speaking opportunities during the closed and public sessions of APPA 62. In particular, Gabriela moderated a session on AI governance and regulation, while Josh spoke on a panel on balancing innovation and data protection.
FPF remains committed to facilitating these important conversations and advancing the discourse on privacy and emerging technologies globally.