6 Privacy Tips for the Generative AI Era

Data Privacy Day, or Data Protection Day in Europe, is recognized annually on January 28 to mark the anniversary of Convention 108, the first binding international treaty to protect personal data. The Council of Europe initiated the day in 2006, with the first official celebration held on January 28, 2007, making this year the 19th anniversary of that first celebration. Companies and organizations around the world often devote time to internal privacy training during this week, working to improve awareness of key data protection issues for their staff.

It’s also a good time for all of us to think about our own sharing of personal data. Nowadays, one of the most important decisions we need to make about our data is when and how we use AI-powered services. To raise awareness, we’ve partnered with Snap to create a Data Privacy Day Snapchat Lens. Check it out by scanning the Snapchat code and learn more below about privacy tips for generative AI! 


  1. Know When You’re Using Generative AI

As a first step, it’s important to know what generative AI is and when you’re using it. Generative AI is a type of artificial intelligence that creates original text, images, audio, and code in response to input. In addition to visiting dedicated generative AI platforms (such as ChatGPT), you may find that many companies’ existing products now also include generative AI capabilities. For example, a search in Google now provides answers powered by Google’s generative AI, Gemini. Other examples include Snap’s AI Lenses and AI Snaps in creative tools; Adobe’s Acrobat and Express, which are now powered by Firefly, Adobe’s generative AI; and X’s Grok, which assists users and answers questions.

One of the best ways to identify when you’re using generative AI is to look for a symbol or disclaimer. Many organizations provide clues like symbols; a range of companies, including Snap and GitHub, often use a sparkle or star icon to denote generative AI features. You might also notice labels like “AI-generated” or “Experimental” alongside results from some companies, including Meta.

  2. Think Carefully Before You Share Sensitive or Private Information

While this is a general rule of thumb for interacting with any product, it’s especially important when using generative AI because most generative AI systems use the data that users provide (such as conversation text or images) to allow their models to continuously learn and improve. Your prompts, generated images, and other data can improve the technology for all users, but this also means that if you share sensitive or private information, it could potentially be surfaced in connection with training and developing the model.

Be especially careful when uploading files, images, or screenshots to generative AI tools. Documents, photos, or screenshots can include more information than you realize, such as metadata, background details, or information about third parties. Before uploading, consider redacting, cropping, or otherwise limiting files to include only the information necessary for your task.

Some companies promise not to use your data for training, often if you are using the paid version of their service. Others provide an option to opt out of having your data used for training, or offer versions with special protections. For example, ChatGPT’s new health service supports the upload of health records with additional privacy and security commitments, but you need to make sure you are using the specific Health tab that is being rolled out to users.

  3. Manage Your AI’s Memory

Many generative AI tools now feature a memory function that allows them to remember details about you over time, providing more tailored responses. While this can be helpful for maintaining context in long-term projects, such as remembering your writing style, professional background, or specific project goals, it also creates a digital record of your preferences and behaviors. A recent FPF report explores these different kinds of personalization. 

Fortunately, you typically have the power to control what generative AI platforms remember. Most have settings to view, edit, or delete specific memories or to turn the feature off entirely. For instance, in ChatGPT, you can manage these details under Settings > Personalization, and Gemini allows you to toggle off “Your past chats” within its activity settings to prevent long-term tracking. Meta also provides options for deleting all chats and images from the Meta AI app. Another option is to use “Temporary” or “Incognito” modes, so you can use the service without generative AI compiling data attributed to your profile.

In addition to managing memory features, it’s also helpful to understand how long generative AI services keep your data. Some platforms store conversations, images, or files for only a short time, while others may keep them longer unless you choose to delete them. Taking a moment to review retention timelines can give you a clearer picture of how long your information sticks around, and help you decide what you’re comfortable sharing.

  4. Define Boundaries for Agentic AI

Agentic AI, a form of generative AI that can complete tasks for users with greater autonomy, is becoming increasingly popular. For example, companies like Perplexity, OpenAI, and Amazon have unveiled agentic systems that can make purchases for consumers. While these systems can take on more tasks, they still require users to review purchases before they are final. As a best practice, you should look over the purchase to check that it aligns with your expectations (e.g., ordering 1 pair of socks and not 10). It is also important to keep in mind that since agentic systems can pull information from third-party sources, there is a risk that the system will rely on inaccurate information about a product during a purchase (e.g., that an item is in stock).

As agentic systems become more embedded in our lives, you should also be mindful about how much information you share with them. Consumers are already disclosing sensitive details about themselves to more basic chatbots, which businesses, the government, and other third parties may want to access. When interacting with agentic systems, keep this in mind and pay attention to what you disclose about yourself and others. You may similarly want to consider what type of access to provide to the agentic AI product, relying on the principle of least privilege: only providing the minimum access needed for your use. For example, if an agentic system is going to manage your calendar, think through options for narrowing the access so that your entire calendar is not shared and other apps connected to your calendar, like your email, are not exposed unless necessary.
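The principle of least privilege described above can be pictured as a simple scope-granting check. This is a minimal sketch, assuming a hypothetical agent framework; none of the scope names or classes here come from a real product.

```python
# Illustrative sketch of least-privilege access for an AI agent.
# All names (scopes, AgentGrant) are hypothetical, not a real agent API.

ALL_SCOPES = {
    "calendar:read",   # view calendar events
    "calendar:write",  # create or modify events
    "email:read",      # read messages
    "email:send",      # send messages on your behalf
}

class AgentGrant:
    """Tracks which scopes an agent has been allowed to use."""
    def __init__(self, granted):
        unknown = set(granted) - ALL_SCOPES
        if unknown:
            raise ValueError(f"unknown scopes: {unknown}")
        self.granted = set(granted)

    def can(self, scope):
        # The agent may act only within explicitly granted scopes.
        return scope in self.granted

# An agent that only manages your calendar should not get email access.
grant = AgentGrant({"calendar:read", "calendar:write"})
assert grant.can("calendar:write")
assert not grant.can("email:send")  # denied: not needed for the task
```

The key idea is that access is denied by default; anything not explicitly granted for the task at hand is off-limits.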

  5. Review How Generative AI Products Handle Privacy and Safety

It’s important to regularly review the privacy and security practices of any company with which you share information, and this applies similarly to companies offering generative AI products. This can include checking what data is collected and how, as well as how that information is used and stored. 

Snap has a Snapchat Privacy Center where you can review your settings. You can find those choices here.

ChatGPT’s privacy controls are available in the ChatGPT display, and OpenAI has a Data Controls FAQ that outlines where to find the settings and what options are available.

Gemini has the Gemini Privacy Hub, as well as an area to read about and configure your settings for Gemini Apps, which includes options for turning your Gemini history off. 

Claude has a Privacy Settings & Controls page that outlines how long they store your data, how you can delete it, and more. 

Copilot provides an array of options for reviewing and updating your privacy settings, including how to delete specific memories and how your data is used. These settings are available on Microsoft’s website, here. Microsoft also provides a detailed Privacy FAQ page.

Keep in mind that generative AI products change quickly, and new features may introduce new data uses, defaults, or controls. Periodically revisiting privacy and safety settings can help ensure your preferences continue to reflect how the product works today, rather than how it worked when you first configured it.

  6. Explore and Have Fun!

LLMs can often provide useful data protection advice, so ask them questions about AI and privacy. Just be sure to double-check sources and accuracy, especially for important topics!

Data Privacy Day is a reminder that privacy is a shared responsibility. By bringing together FPF’s expertise in privacy research and policy with Snap’s commitment to building products with privacy and safety in mind, this collaboration aims to help people better understand how AI works and how to use it thoughtfully.

FPF Releases Updated Infographic on Age Assurance Technologies, Emerging Standards, and Risk Management

The Future of Privacy Forum is releasing an updated version of its Age Assurance: Technologies and Tradeoffs infographic, reflecting how rapidly the technical and policy landscape has evolved over the past year. As lawmakers, platforms, and regulators increasingly converge on age assurance as a governance tool, the updated infographic sharpens the focus on proportionality, privacy risk, and real-world deployment challenges.

What’s New

The updated infographic introduces several key changes that reflect the current state of age assurance technology and policy:

A Fourth Category: Inference. The original infographic outlined three approaches to age assurance: declaration, estimation, and verification. This update adds a fourth category—inference—which draws reasonable conclusions about a user’s age range based on behavioral signals, account characteristics, or financial transactions. For example, an email address linked to workplace applications, a mortgage lender, or a 401(k) provider, combined with login patterns during business hours, may support an inference that the user is an adult.

Relatedly, the updated version intentionally downplays age declaration as a standalone solution. While declaration remains useful for low-risk contexts and as an entry point in layered systems, experience and enforcement history continue to show that it is easily bypassed and insufficient where legal or safety obligations attach to age thresholds. The infographic now situates declaration primarily as an initial step within a waterfall or layered approach, rather than as a meaningful assurance mechanism on its own.

The update also highlights several new and emerging risks associated with modern age assurance systems. If not addressed properly, these systems can create risks including loss of anonymity through linkage, increased breach impact from improperly secured retained assurance data, secondary use of assurance data, and circumvention risks such as presentation attacks or shared-device misuse.

In parallel, the infographic expands its coverage of risk management tools that can mitigate these concerns when age assurance is warranted. These include tokenization and zero-knowledge proofs to limit data disclosure, on-device processing and immediate deletion of source data, separation of processing across third parties, user-binding through passkeys or liveness detection, and emerging standards such as ISO/IEC 27566 and IEEE 2089.1. The emphasis is not on eliminating risk—which is rarely possible—but on aligning technical controls with the specific harms a service is attempting to address.
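To make the data-minimization idea behind tokenization concrete, here is a toy sketch: an issuer signs a token that carries only an age claim, so the relying service never sees the underlying birthdate or ID document. This is an illustrative HMAC construction only, not a production credential format and not drawn from ISO/IEC 27566 or IEEE 2089.1.

```python
# Toy sketch of tokenization for age assurance: the issuer signs a token
# asserting only a boolean age claim ("over_18"), so the service that
# verifies it learns nothing about the underlying birthdate or documents.
import base64
import hashlib
import hmac
import json

ISSUER_KEY = b"demo-secret-key"  # assumed shared secret, for the sketch only

def issue_age_token(over_18: bool) -> str:
    claim = json.dumps({"over_18": over_18}).encode()
    sig = hmac.new(ISSUER_KEY, claim, hashlib.sha256).digest()
    return base64.b64encode(claim).decode() + "." + base64.b64encode(sig).decode()

def verify_age_token(token: str):
    claim_b64, sig_b64 = token.split(".")
    claim = base64.b64decode(claim_b64)
    expected = hmac.new(ISSUER_KEY, claim, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, base64.b64decode(sig_b64)):
        return None  # signature invalid: reject the token
    return json.loads(claim)

token = issue_age_token(True)
assert verify_age_token(token) == {"over_18": True}  # only the claim, no birthdate
```

A real deployment would add expiry, user-binding (e.g., to a passkey), and a proper credential format; the point here is only that the token discloses a claim, not the source data.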

As with prior versions, the updated infographic reinforces a core message: there is no one-size-fits-all age assurance solution. Effective approaches are risk-based, use-case-specific, and privacy-preserving by design, balancing assurance goals against the rights and expectations of users. By clarifying the role of inference, contextualizing declaration, and surfacing both new risks and mitigation strategies, this update aims to support more informed decision-making across policy, product, and engineering teams.

Emerging Age Assurance Concepts. The field has advanced considerably, and the updated infographic now includes a dedicated section on emerging concepts, including Age Signals and Age Tokens, User-Binding, Zero-Knowledge Proofs (ZKP), Double-Blind Models, and One-Time vs. Reusable Credentials.

Updated Risks and Risk Management Approaches. The infographic now presents a more comprehensive view of the risks and challenges associated with age assurance—including excessive data collection and retention, secondary data use, lack of interoperability, false positives and negatives, data breaches, and user acceptance challenges. Correspondingly, the risk management section highlights both established and emerging mitigations: on-device processing, tokenization and zero knowledge proofs, anti-circumvention measures (such as Presentation Attack Detection), standards (ISO/IEC 27566-1, IEEE 2089.1), and certification and auditing.

Practical Example: The updated infographic includes a detailed use case following “Miles,” a 16-year-old accessing an online gaming service. The scenario illustrates how multiple age assurance methods can work together in a layered “waterfall” approach—starting with low-assurance age declaration for basic access, escalating to facial age estimation for age-restricted features, and offering authoritative inference or parental consent as inclusive fallbacks when estimation results are inconclusive and formal ID is not available. The example also demonstrates token binding with passkeys, ensuring that even if Miles shares his phone with a younger friend, the age credential cannot be accessed without the correct PIN, pattern, or biometric.
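The layered “waterfall” logic in the Miles example can be sketched as a short decision function. The method names, thresholds, and escalation rules below are illustrative assumptions, not drawn from any real age assurance system.

```python
# Sketch of a layered "waterfall" age assurance flow. All thresholds and
# fallbacks here are illustrative, not from a real deployment.

def waterfall_age_check(declared_age, estimated_age=None, inferred_adult=False,
                        parental_consent=False, required_age=18):
    """Escalate through assurance layers until one yields a decision."""
    # Layer 1: self-declaration gates only low-risk, basic access.
    if declared_age is not None and declared_age < required_age:
        return "deny"  # user self-reports as under the threshold
    # Layer 2: facial age estimation for age-restricted features,
    # with a buffer zone where the estimate is treated as inconclusive.
    if estimated_age is not None:
        if estimated_age >= required_age + 3:
            return "allow"  # clearly above threshold
        if estimated_age <= required_age - 3:
            return "deny"   # clearly below threshold
    # Layer 3: inconclusive estimate -> inclusive fallbacks.
    if inferred_adult:       # e.g., authoritative inference from a trusted source
        return "allow"
    if parental_consent:
        return "allow"
    return "escalate"        # ask for another assurance method

# A 16-year-old passes a 13+ gate once estimation clearly confirms it.
assert waterfall_age_check(16, estimated_age=17, required_age=13) == "allow"
# An inconclusive estimate near an 18+ threshold escalates to fallbacks.
assert waterfall_age_check(21, estimated_age=18) == "escalate"
```

The buffer zone around the threshold is what makes the flow “inclusive”: borderline estimates trigger a fallback rather than an outright denial.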

Future of Privacy Forum to Honor Top Scholarship at Annual Privacy Papers for Policymakers Event

Washington D.C. — (January 26th, 2026) — Today, the Future of Privacy Forum (FPF) — a global non-profit that advances principled and pragmatic data protection, AI, and digital governance practices — announced the winners of its 16th annual Privacy Papers for Policymakers (PPPM) Awards.

The PPPM Awards recognize leading research and analytical scholarship in privacy relevant to policymakers in the U.S. and internationally. The award highlights important work that analyzes current and emerging privacy and AI issues and proposes achievable short-term solutions or means of analysis that have the potential to lead to real-world policy solutions. Seven winning papers, two honorable mentions, and one student submission were selected by a group of FPF staff members and advisors based on originality, applicability to policymaking, and overall quality of writing.

Winning authors will have the opportunity to present their work at virtual webinars scheduled for March 4, 2026, and March 11, 2026.

“As artificial intelligence and data protection increasingly shape global policy discussions, high-quality academic research is more important than ever,” says FPF CEO Jules Polonetsky. “This year’s award recipients offer the kind of careful analysis and independent thinking policymakers rely on to address complex issues in the digital environment. We are pleased to recognize scholars whose work helps ensure that technological innovation develops in ways that remain grounded in privacy and responsible data governance.”

FPF’s 2026 Privacy Papers for Policymakers Award winners are:

In addition to the winning papers, FPF awarded two papers as Honorable Mentions: Brokering Safety by Chinmayi Sharma, Fordham University School of Law; Thomas Kadri, University of Georgia School of Law; and Sam Adler, Fordham University, School of Law; and Focusing Privacy Law by Paul Ohm, Georgetown University Law Center. 

FPF also selected a paper for the Student Paper Award: Decoding Consent Managers under the Digital Personal Data Protection Act, 2023: Empowerment Architecture, Business Models and Incentive Alignment by Aditya Sushant Jain of O.P. Jindal Global University – Jindal Global Law School. 

Winning papers were selected based on the strength of their research and their proposed policy solutions for policymakers and regulators in the U.S. and abroad.

The 2026 Privacy Papers for Policymakers Awards will take place over two virtual events on March 4 and 11. Attendance is free, and registration is open to the public. Find more information to register for the March 4 webinar here, and the March 11 webinar here.

FPF Releases an Updated Issue Brief on Vietnam’s Law on Protection of Personal Data and the Law on Data

This Issue Brief has been updated to reflect the latest changes introduced by Decree 356/2025, the implementing decree to Vietnam’s Personal Data Protection Law, which was enacted on 31 December 2025.

Vietnam is undergoing a sweeping transformation of its data protection and governance framework. Over the past two years, the country has accelerated its efforts to modernize its regulatory architecture for data, culminating in the passage of two landmark pieces of legislation in 2025: the Law on Personal Data Protection (Law No. 91/2025/QH15) (PDP Law), which elevates the Vietnamese data protection framework from an executive act to a legislative act, while preserving many of the existing provisions, and the Law on Data (Law No. 60/2025/QH15) (Data Law). Notably, the PDP Law is expected to come into effect on January 1st, 2026.

The Data Law is Vietnam’s first comprehensive framework for the governance of digital data (both personal and non-personal), and applies to all Vietnamese agencies, organizations, and individuals, as well as foreign agencies, organizations, and individuals that are either in Vietnam or directly participating in, or related to, digital data activities in Vietnam. The Data Law became effective in July 2025. Together, these two laws mark a significant legislative shift in how Vietnam approaches data regulation, addressing overlapping domains of data protection, data governance, and emerging technologies.

This Issue Brief analyzes the two laws, which together define a new, comprehensive regime for data protection and data governance in Vietnam. The key takeaways from this joint analysis show that:

This Issue Brief has three objectives. First, it summarizes key changes between the PDP Law and Vietnam’s existing data protection regime, and draws a comparison between the PDP Law and the EU’s General Data Protection Regulation (GDPR) (Section 1). Second, it analyzes the interplay between the Data Law and the PDP Law (Section 2). We then provide key takeaways for organizations as they navigate the implementation of these laws (Section 3). 

You can also view the previous version of the Issue Brief here.

Innovation and Data Privacy Are Not Natural Enemies: Insights from Korea’s Experience 

The following is a guest post to the FPF blog authored by Dr. Haksoo Ko, Professor at Seoul National University School of Law, FPF Senior Fellow and former Chairperson of South Korea’s Personal Information Protection Commission. The guest post reflects the opinion of the author only and does not necessarily reflect the position or views of FPF and our stakeholder communities. FPF provides this platform to foster diverse perspectives and informed discussion.

1. Introduction: From “trade-off” rhetoric to mechanism design

I served as Chairman of South Korea’s Personal Information Protection Commission (PIPC) between 2022 and 2025. Nearly every day I felt that I was at the intersection of privacy enforcement, artificial intelligence policy, and innovation strategy. I was asked, repeatedly, whether I was a genuine data protectionist or whether I was fully supportive of unhindered data use for innovation. The question reflects a familiar assumption: that there is a dichotomy between robust privacy protection on one hand and rapid AI/data innovation on the other, and that a country must choose between the two.

This analysis draws on the policy-and-practice vantage point that I gained to argue that innovation and privacy are compatible when institutions establish suitable mechanisms that reduce legal uncertainty, while maintaining constructive engagement and dialogue. 

Korea’s recent experience suggests that the “innovation vs. privacy” framing is analytically under-specified. The binding constraint is often not privacy protection as such, but uncertainty as to whether lawful pathways exist for novel data uses. In AI systems, this uncertainty is heightened by the intricate nature of their pipelines. Factors such as large-scale data processing, extensive use of unstructured data, composite modeling approaches, and subsequent fine-tuning or other modifications all contribute to this complexity. The main practical issue is less about choosing among lofty values; it is more about operationalizing workable mechanisms and managing risks under circumstances of rapid technological transformation.

Since 2023, Korea’s trajectory can be read as a pragmatic move toward mechanisms of compatibility—institutional levers that lower the transaction costs of innovative undertakings while preserving proper privacy guardrails. These levers include structured pre-deployment engagement, controlled experimentation environments, risk assessment frameworks that can be translated into repeatable workflows, and a maturing approach to privacy-enhancing technologies (PETs) governance.

Conceptually, the approach aligns with the idea of cooperative regulation: regulators offer clearer pathways and procedural predictability for innovative undertakings, while also deepening their understanding of the technological underpinnings of these new undertakings. 

This article distills the mechanisms Korea has attempted in an effort to operationalize compatibility of privacy protection with the AI-and-data economy. The emphasis is pragmatic: to identify which institutional levers reduce legal and regulatory uncertainty without eroding accountability, and how those levers map to the AI lifecycle.

2. Korea’s baseline architecture of privacy protection

2.1 General statutory backbone and regulatory capacity

Korea maintains an extensive legal framework for data privacy, primarily governed by the Personal Information Protection Act (PIPA), and further reinforced through specific guidance and strong institutional capacity of the PIPC. The PIPA supplies durable principles and enforceable obligations, while guidance and engagement tools translate those principles and statutory obligations into implementable controls in emerging contexts such as generative AI.

The PIPA embeds familiar principles into statutory obligations: purpose limitation, data minimization, transparency, and various data subject rights. In AI settings, the central challenge has been their application: how to interpret these obligations in the context of, e.g., model training and fine-tuning, RAG (retrieval augmented generation), automated decision-making, and AI’s extension into physical AI and various other domains.

2.2 Principle-based approach combined with risk-based operationalization

Korea’s move is not “light-touch privacy,” but a principle-based approach combined with risk-based operationalization. The PIPC concluded that, given the uncertain and hard-to-predict nature of technological developments surrounding AI, adopting a principle-based approach was inevitable: an alternative like a rule-based approach would result in undue rigidity and stifle innovative energy in this fledgling field. At the same time, the PIPC recognized that a major drawback of a principle-based approach could be the lack of specificity and that it was imperative to issue sufficient guidance to show how principles are interpreted and applied in practice. Accordingly, the PIPC embarked on a journey of publishing a series of guidelines on AI.

In formulating and issuing these guidelines, an emphasis was consistently placed on the significance of implementing and operationalizing risk-based approaches. Emphasizing risk-based operationalization has several noteworthy implications. First, risk is a constant feature of new technologies, and pursuing zero risk is not realistic. As such, the focus was directed towards minimizing relevant risks, instead of seeking their complete elimination. Second, as technologies evolve, the resulting risk profile would also change continuously. Thus, putting in place procedures for periodic risk assessment would be crucial so that a proper mechanism for risk management could be at play. Third, a ‘one-size-fits-all’ approach would rarely be suitable, and multiple tailored solutions often need to be applied simultaneously. Furthermore, it is advisable to consider the overall risk profile of an AI system rather than concentrating on a few salient individual risks. This is akin to the Swiss cheese approach in cybersecurity: deploying multiple independent security measures at multiple layers on the assumption that every layer may have unknown vulnerabilities. 

Through a series of guidelines, the PIPC indicated that compliance can be achieved by implementing appropriate safety measures and guardrails that are proportionate to the risks at issue. Some of the guidelines that the PIPC issued include guidelines on pseudonymizing unstructured data, guidelines on utilizing synthetic data, guidelines on data-subject rights in automated decision-making, and guidelines on processing data captured by mobile hardware devices (such as automobiles and delivery robots).

3. Mechanisms of compatibility: What Korea has deployed

The PIPC devised and deployed multiple mechanisms to convert the “innovation vs. privacy” framework into a tractable governance program. They function as a portfolio: some instruments reduce uncertainty through ex ante engagement, while others enable innovative experimentation under structured constraints. Still others turn principles into repeatable compliance workflows. The PIPC aimed to offer organizations a set of options, acknowledging that, depending on the type of data and the purposes for which the data would be used, different data processing needs would arise. The PIPC recognized that tailored mechanisms would be necessary to address these diverse requirements effectively.

3.1 Case-by-case assessments to reduce uncertainty

AI services could reach the market before regulators can fully resolve novel interpretive questions. In some cases, regulators may commence investigations after new AI services have been launched. As such, businesses may have to accept that they may face regulatory scrutiny ex post. The uncertainty resulting from this unpredictability could make innovators hesitant to launch new services. Accordingly, the PIPC has implemented targeted engagement mechanisms designed to deliver timely and effective responses on an individual basis. For organizations, this would provide predictability in an expedited manner. The PIPC, on the other hand, through these mechanisms, would gain in-depth information about the intricate details and inner workings of new AI systems. By adopting this approach, the PIPC could develop the necessary expertise to make well-informed decisions that are consistent with current technological realities. The following provides an overview of several mechanisms that have been implemented.

(1) “Prior adequacy review”: Structured pre-deployment engagement

A “prior adequacy review” refers to a structured pre-deployment engagement pathway. The participating business would, on a voluntary basis, propose a data processing design and safeguard package in consideration of the risks involved; the PIPC would then evaluate the adequacy of the proposal against the identified risks; and, if deemed adequate, the PIPC would provide ex ante comfort that the proposed package aligns with the PIPC’s interpretation of the law.

The discipline is the trade: reduced uncertainty in exchange for concrete safeguards and future audits. Safeguard packages could include structured data sourcing and documentation, minimization and de-identification of data where feasible, strict access control, privacy testing and red-teaming for model outputs, input and output filtering for data privacy, and/or structured handling of data-subjects’ requests.

More than a dozen businesses have used this mechanism as they prepared to launch new services. One example is Meta’s launch of a service in Korea for screening and identifying fraudulent advertisements that use celebrities’ images without their authorization. While there was a concern about the legality of processing someone’s images without his or her consent, the issue was resolved, in part, by considering the technological aspect that can be called the “temporary embedding” of images.

(2) “No action letters” and conditional regulatory signaling

A “no action letter” is another form of regulatory signaling: under specified facts and conditions, the PIPC clarifies that it will not initiate an enforcement action. The overall process for a “no action letter” is much simpler than for a prior adequacy review. Its development was informed by the “no action letter” framework, which is widely used in the financial sector.

Where used, its value is to significantly reduce uncertainty in exchange for an articulated set of commitments. Although preparatory work had taken place earlier, the mechanism was officially implemented in November 2025. The first no action letter was issued in December 2025 for an international research project that used pseudonymized health data of deceased patients.

(3) “Preliminary fact-finding review” 

A “preliminary fact-finding review” serves as an expedited evaluative process particularly suited to rapidly evolving sectors. Its primary objective is to develop a comprehensive understanding of the operational dynamics within an emerging service category and to identify pertinent privacy concerns. Although this review may result in the issuance of a corrective recommendation, which is a form of administrative sanction, issuing such a recommendation is typically not a principal motivation for conducting a preliminary fact-finding review.

For organizations, the value of this review process lies in gaining directional clarity without having to worry about the possibility of immediate escalation into a formal investigative proceeding. For the PIPC, the value is an enlightened understanding of market practices, which in turn serves to inform guidance and targeted supervision. 

In early 2024, the PIPC conducted a comprehensive review of several prominent large language models, including those developed or deployed by OpenAI, Microsoft, Google, Meta, and Naver. The assessment focused on data processing practices across pre-training, training, and post-deployment phases. The PIPC issued several minor corrective recommendations. As a result of this review, the businesses obtained legal and regulatory clarity regarding their data processing practices associated with their large language models. 

3.2 Controlled experimentation environments: Providing “playgrounds” for R&D

A second group of mechanisms centers on establishing controlled experimental environments. For instance, in situations requiring direct access to raw data for research and development, policy priorities shift towards enabling experimentation while simultaneously reinforcing safeguards that address the corresponding heightened risks. The following is an overview of several specific mechanisms that were implemented in this regard.

(1) “Personal Data Innovation Zones” 

“Personal Data Innovation Zones” provide secure environments where vetted researchers and firms can work with high-quality data in a relatively flexible manner. The underlying idea is an appropriate risk-utility calculus. That is, once a secure data environment—an environment that is more secure than usual with strict technical and procedural controls—is established, research within such a secure environment can be conducted with more room for flexibility than usual. 

Within a Personal Data Innovation Zone, for instance, data can be used for an extended period (up to five years, with the possibility of renewal), data can be retrieved and reused rather than disposed of after one-time use, and adequacy review of pseudonymization can be conducted on sampled data instead of the entire dataset. So far, seven organizations, including Statistics Korea and the Korea National Cancer Center, have been designated as satisfying the conditions for establishing secure data environments.

(2) Regulatory sandboxes for personal data

Regulatory sandboxes for personal data permit time-limited experiments under specific conditions designed by regulators. Through this mechanism, approval may be granted to organizations that have implemented suitable safeguards. One example that has supported new technological development involves the use of unobfuscated original video data to develop algorithms for autonomous systems such as self-driving cars and delivery robots. Such development almost inevitably requires unobfuscated data, since obfuscating or otherwise de-identifying every instance of personal data appearing across large volumes of video footage would be exceedingly cumbersome. In the review process, conditions are imposed to safeguard the data properly, often emphasizing strict access controls and the management of data provenance.

(3) Pseudonymized data and synthetic data: From encouragement to proceduralization

The PIPC has also moved from generic endorsement of privacy-enhancing technologies (PETs) to procedural guidance. Pseudonymized data and synthetic data are the clearest examples. A phased process was developed—preparation, generation, safety/utility testing, expert or committee assessment, and controlled utilization—with an emphasis on risk evaluation.
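To make the concept concrete, below is a minimal, illustrative sketch of pseudonymization using keyed hashing (HMAC). This is not the PIPC's prescribed method, and the record fields and key-handling arrangement are invented for illustration; it simply shows the core trade-off that the phased process is meant to govern: pseudonyms stay linkable across datasets, but cannot be reversed without a separately held secret.

```python
import hmac
import hashlib

def pseudonymize(value: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed-hash pseudonym.

    HMAC is deterministic, so records for the same person remain
    linkable across pseudonymized datasets, but the mapping cannot
    be reversed without the secret key (held by a separate custodian).
    """
    return hmac.new(secret_key, value.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical record; in practice the key is managed under strict controls.
key = b"key-held-separately-from-the-research-dataset"
record = {"name": "Jane Doe", "diagnosis": "C34"}

pseudonymized = {
    "person_token": pseudonymize(record["name"], key),
    "diagnosis": record["diagnosis"],  # research-relevant field retained
}

# Same input and key always yield the same token, supporting linkage.
assert pseudonymize("Jane Doe", key) == pseudonymized["person_token"]
```

Note that the identifiability assessments described above exist precisely because such tokens, combined with other datasets, can sometimes be re-linked to individuals; the technique reduces risk but does not anonymize.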

Some organizations, in particular certain research hospitals, established data review boards (DRBs), although doing so was not a statutory requirement. A DRB’s role would include, among other things, evaluating the suitability of using pseudonymized data, assessing the identifiability of personal data in a dataset derived from multiple pseudonymized datasets, and assessing identifiability risks arising from synthetic data.

4. Institutional design features that make the mechanisms credible

4.1 Building credibility and maintaining active channels of engagement

Compatibility is not achieved by guidance alone; pro-innovation tools require institutional credibility. From the perspective of businesses, communicating with regulators can readily trigger anxiety: businesses may worry that information they share could invite unwanted scrutiny. Given this anxiety, regulators need to be proactive and send a consistent, coherent signal that information gathered through these mechanisms will not be used against participating businesses. Maintaining sustained and reliable communication channels is critical.

4.2 Expertise and professionalism as regulatory infrastructure

Case-by-case reviews, sandboxes, and risk models are only credible if the regulator has expertise in data engineering, AI system design, security, and privacy risk measurement—alongside legal and administrative capacity. To be effective, principle-based regulation requires sophisticated interpretive capability.

5. Implications: Why compatibility is plausible

Korea’s experience shows that the “innovation vs. privacy” framing is analytically under-specified. At an operational level, greater challenges tend to occur at the intersection of uncertainty, engagement, and institutional capacity. When legal and regulatory interpretations are vague and enforcement is unpredictable, innovators may perceive privacy as a barrier. When safeguards are demanded but not operationalized, privacy advocates may perceive innovation policy as de facto deregulation.

Korea’s mechanisms have attempted to resolve new challenges by translating principles into implementable controls, creating structured engagement and experimentation pathways. Privacy law does not inherently block innovation; poorly engineered compliance pathways do.

6. Conclusion

Korea’s experience supports a disciplined proposition: innovation and data privacy are compatible when compatibility is properly designed and executed. Compatibility does not come from declaring a balance; it comes from mechanisms that reduce uncertainty for innovators while increasing the credibility of the adopted safeguards for data subjects.

Korea’s toolkit—a principle-based approach combined with risk-based operationalization, structured risk management frameworks, active engagement channels, and credibility supported by professionalism and expertise—offers privacy professionals and policymakers a practical reference point for governance in the AI era.

The RAISE Act vs. SB 53: A Tale of Two Frontier AI Laws

What the enactment of New York’s RAISE Act reveals compared to California’s SB 53, the nation’s first frontier AI law

On December 19, New York Governor Hochul (D) signed the Responsible AI Safety and Education (RAISE) Act, ending months of uncertainty after the bill passed the legislature in June and making New York the second state to enact a statute specifically focused on frontier artificial intelligence (AI) safety and transparency.1 Sponsored by Assemblymember Bores (D) and Senator Gounardes (D), the law closely follows California’s enactment of SB 53 in late September, requiring advanced AI developers to publish governance frameworks and transparency reports, and establishing mechanisms for reporting critical safety incidents. As they moved through their respective legislatures, the RAISE Act and SB 53 shared a focus on transparency and catastrophic risk mitigation but diverged in scope, structure, and enforcement, raising concerns about a compliance patchwork for nationally operating developers.

The New York Governor’s chapter amendments ultimately narrowed those differences, revising the final version of the RAISE Act to more closely align with California’s SB 53, with conforming changes expected to be formally adopted by the Legislature in January. Even so, the two laws are not identical, and the remaining distinctions may be notable for frontier developers navigating compliance in both the Golden State and the Empire State.

Understanding the RAISE Act, and how it aligns with and diverges from California’s SB 53, offers a useful lens into how states are approaching frontier AI safety and transparency and where policymaking may be headed in 2026.

At a high level, the two statutes now share largely identical scope and core requirements. Still, several distinctions remain, including:

RAISE Act: Scope and Requirements

Despite these distinctions, the RAISE Act largely mirrors California’s SB 53 in how it defines covered models, developers, and risks, resulting in a substantially similar compliance scope across the two states. The sections below summarize the RAISE Act’s scope and key requirements.

Scope:

The law regulates frontier developers, defined as entities that “trained or initiated the training” of high-compute frontier models, that is, foundation models trained with more than 10^26 computational operations. It separately defines large frontier developers as those with annual gross revenues above $500 million, targeting the heaviest compliance obligations at the largest AI companies.
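As a rough illustration of how the two-tier scope described above operates, the sketch below encodes the compute and revenue thresholds. It deliberately simplifies the statutory definitions, which contain qualifications not modeled here, and the function and constant names are ours, not the statute's.

```python
# Illustrative only: a simplified encoding of the RAISE Act's two tiers.
# The statutory definitions include qualifications not modeled here.

FRONTIER_COMPUTE_THRESHOLD = 10**26    # training operations
LARGE_DEVELOPER_REVENUE = 500_000_000  # annual gross revenue, USD

def classify_developer(training_ops: int, annual_revenue: int) -> str:
    """Return which tier of obligations (if any) would apply."""
    if training_ops <= FRONTIER_COMPUTE_THRESHOLD:
        return "not covered"
    if annual_revenue > LARGE_DEVELOPER_REVENUE:
        return "large frontier developer"  # baseline plus additional duties
    return "frontier developer"            # baseline duties only
```

The two-step structure mirrors the statute's logic: the compute threshold determines whether an entity is covered at all, and the revenue benchmark then determines whether the additional duties for large frontier developers attach.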

Like California SB 53, the RAISE Act is focused on preventing catastrophic risk, defined as a foreseeable and material risk that a frontier model could:

Requirements:

The RAISE Act establishes multiple compliance requirements, with certain requirements applying to all frontier developers and additional duties reserved for large frontier developers.

Enforcement: The RAISE Act authorizes the Attorney General to bring civil actions for violations, with penalties up to $1 million for a first violation and up to $3 million for subsequent violations, scaled to the severity of the offense. The statute expressly does not create a private right of action. Unlike California’s SB 53, it also clarifies that a large frontier developer may assert that alleged harm or damage was caused by another person, entity, or contributing factor.

Before the Amendments: How the RAISE Act Changed

Before Governor Hochul’s chapter amendments, the RAISE Act would have diverged much more sharply from California’s SB 53. The earlier iteration of the bill that passed out of the Legislature took a more expansive approach, including higher penalties and stricter liability thresholds, raising the prospect of meaningfully different compliance regimes on opposite coasts.

Most notably, the original RAISE Act applied only to “large developers,” defined by annual compute spending above $100 million, rather than distinguishing between frontier developers and large frontier developers as SB 53 does. That threshold would have captured a different (and potentially broader) set of companies than the enacted framework, which now relies on a $500 million revenue benchmark aligned with California’s approach. The bill also originally framed its focus around “critical harm,” rather than the “catastrophic risk” standard now shared with California’s SB 53, and paired that definition with heightened liability requirements, including that harm be a probable consequence, that the developer’s conduct be a substantial factor, and that the harm could not have been reasonably prevented. Those qualifiers were ultimately removed in favor of the “catastrophic risk” standard used in SB 53, including the same 50-person harm threshold.

The RAISE Act’s requirements evolved as well. Earlier versions lacked both the transparency report obligation (now shared with SB 53) and the frontier developer disclosure program (a new New York-specific addition). While the original RAISE Act did include an obligation to maintain a “safety and security protocol,” that requirement was less prescriptive about governance and mitigation practices than the now enacted “Frontier AI Framework.” 

Perhaps the most significant change was the removal of a deployment prohibition. As passed by the Legislature, the RAISE Act would have barred deployment of models posing an unreasonable risk of critical harm, a restriction not found in SB 53. Chapter amendments left the final law focused on transparency and reporting, rather than direct deployment restrictions. Penalties were similarly scaled back, falling from a maximum of $10 million for a first violation and $30 million for subsequent violations to $1 million and $3 million, respectively.

Looking Ahead: What Comes Next in 2026?

With chapter amendments expected to be formally adopted in the coming weeks, the RAISE Act will take effect after California’s SB 53, which became operative on January 1, 2026. As a result, SB 53 will be the first real test of how a frontier AI statute operates in practice, with New York following shortly thereafter.

That rollout comes amid renewed uncertainty over the balance between state and federal AI policymaking. A recent White House executive order, Ensuring a National Policy Framework for Artificial Intelligence, seeks to apply federal pressure against state AI laws deemed excessive, including through an AI Litigation Task Force and funding restrictions tied to state enforcement of certain AI laws. While the practical impact of the EO remains unclear, it adds complexity for states and developers preparing for compliance.

Both SB 53 and the RAISE Act include severability clauses, which preserve the remainder of each statute if individual provisions are invalidated. While standard in complex legislation, those clauses may become more consequential if either law is drawn into these broader federal-state tensions. At the same time, the EO directs the Administration to engage Congress on a federal AI framework, raising the possibility that SB 53 and the RAISE Act could serve as reference points for future federal legislation. With other states, including Michigan, already introducing similar bills, it should become clearer in 2026 whether SB 53 and the RAISE Act function as models for broader adoption or face legal challenge.

  1. Passed by the Legislature as A 6453A and to be enacted through chapter amendments reflected in A 9449.

FPF Year in Review 2025

Co-authored by FPF Communications Intern Celeste Valentino with contributions from FPF Global Communications Manager Joana Bala

This year, FPF continued to broaden its footprint across priority areas of data governance, further expanding activities across a range of cross-sector topics, including AI, Youth, Conflict of Laws, AgeTech (seniors), and Cybersecurity. We have engaged extensively at the local and national levels in the United States and are increasingly active in every major global region.

Highlights from FPF work in 2025

2025 saw the release of a range of FPF reports and issue briefs highlighting top data protection and AI developments. A few highlights follow, showing the breadth of our coverage.


The State of State AI: Legislative Approaches to AI in 2025

FPF tracked and analyzed 210 bills in 42 states, highlighting five key takeaways: (1) states shifted from broad frameworks to narrower, transparency-driven approaches; (2) three main approaches to private-sector AI regulation emerged: use- or context-based, tech-specific, and liability/accountability; (3) the most commonly enacted frameworks focus on healthcare, chatbots, and innovation safeguards; (4) policymakers signaled an interest in balancing consumer protection with AI growth; and (5) definitional uncertainty, agentic AI, and algorithmic pricing are likely to be key topics in 2026. Learn more in a LinkedIn Live event with the report’s authors here.


FPF Unveils Paper on State Data Minimization Trends

Several states have enacted “substantive” data minimization rules that aim to place default restrictions on the purposes for which personal data can be collected, used, or shared. What questions do these rules raise, and how might policymakers construct them in a forward-looking manner? FPF covers lawmakers’ turn towards substantive data minimization and addresses the relevant challenges and questions they pose. Watch a LinkedIn Live here on the topic.


Concepts in AI Governance: Personality vs. Personalization

The Concepts in AI Governance: Personality vs. Personalization issue brief explores the specific use cases of personalization and personality in AI, identifying their concrete risks to individuals and interactions with U.S. law, and proposes steps that organizations can take to manage these risks. Read Part 1 (exploring concepts), Part 2 (concrete uses and risks), Part 3 (intersection with U.S. law), and Part 4 (Responsible Design and Risk Management).


Consent for Processing Personal Data in the Age of AI: Key Updates Across Asia-Pacific

From India’s DPDPA to Vietnam’s new Decree and Indonesia’s PDPL, the Asia-Pacific region is undergoing a shift in its data protection law landscape. This issue brief provides an updated view of evolving consent requirements and alternative legal bases for data processing across key APAC jurisdictions. The brief also explores how the rise of AI is impacting shifts in lawmaking and policymaking across the region regarding lawful grounds for processing personal data. Watch the LinkedIn Live panel discussion on key legislative developments in APAC since 2022.


Brazil’s Digital ECA: New Paradigm of Safety & Privacy for Minors Online

This Issue Brief analyzes Brazil’s recently enacted children’s online safety law, summarizing its key provisions and how they interact with existing principles and obligations under the country’s general data protection law (LGPD). It provides insight into an emerging paradigm of protection for minors in online environments through an innovative and strengthened institutional framework, focusing on how it will align with and reinforce data protection and privacy safeguards for minors in Brazil and beyond.


Cross-Border Data Flows in Africa: Examining Policy Approaches and Pathways to Regulatory Interoperability

As digital trade accelerates, countries across Africa are adopting varied approaches to data transfers—some incorporating data localization measures, others prioritizing open data flows.

FPF examines the current regulatory landscape and offers a structured analysis of regional efforts, legal frameworks, and opportunities for interoperability, including a comparative annex covering Kenya, Nigeria, South Africa, Rwanda, and the Ivory Coast.

FPF Filings and Comments
Throughout the year, FPF provided expertise through filings and comments to government agencies on proposed rules, regulations, and policy changes in the U.S. and abroad. 

FPF provided recommendations and filed comments with:

The FPF Center for Artificial Intelligence

This year, the FPF Center for Artificial Intelligence expanded its resources, releasing insightful blogs, comprehensive issue briefs, detailed infographics, and a flagship report on issues related to AI agents, assessment, and risk, as well as key concepts in AI governance.

In addition, the Center for AI hosted two events, convening top scholars specializing in complex technical questions that impact law and policy: 

Check out some other highlights of FPF’s AI work this year:

Global 

In 2025, FPF’s global work focused on how jurisdictions worldwide are adapting privacy and data protection frameworks to keep pace with AI and shifting geopolitical and regulatory landscapes. From children’s privacy and online safety to cross-border data flows and emerging AI governance frameworks, FPF’s teams engaged across regions to provide thought leadership, practical guidance, and stakeholder engagement, helping governments, organizations, and practitioners navigate complex developments while balancing innovation with fundamental rights.

In APAC, FPF analyzed South Korea’s AI Framework Act and Japan’s AI Promotion Act, highlighting differing approaches to innovation, risk management, and oversight. A comparative overview of the EU, South Korean, and Japanese frameworks provided practical insights into global AI policy trends. The evolution of consent was also a key focus. Our experts examined Vietnam’s rapidly evolving data framework, analyzing the newly adopted Personal Data Protection Law and Law on Data and their implications for a comprehensive approach to data protection and governance. From Japan to New Zealand, the team engaged on timely issues and contributed to major regional forums, demonstrating leadership in advancing privacy and AI governance across the region.

In India, FPF engaged with key stakeholders and conducted peer-to-peer sessions on the Digital Personal Data Protection (DPDP) rules. Notably, FPF’s analysis of the DPDPA and generative AI systems helped inform India’s newly released AI Governance Guidelines, demonstrating the local impact of FPF’s resources.

In Latin America, FPF tracked developments such as Chile’s new data protection law and Brazil’s children’s privacy legislation. FPF also participated in regional events on age verification for minors, discussing technologies like facial recognition and emerging legal trends in the region. We also examined how data protection authorities are responding to AI, reviewing developments across Latin America and Europe.

In Africa, FPF examined cross-border data flows and regulatory interoperability, emphasizing regional coordination for responsible data transfers. This year, we launched the Africa Council Membership, a dedicated platform for companies operating on the continent. FPF also hosted its first in-person side event in Africa at the 2025 NADPA Convening in Abuja, Nigeria, centered on “Securing Safe and Trustworthy Cross-Border Data Flows in Africa.” The positive feedback from the session underscored the value of convening stakeholders around Africa’s evolving data protection landscape.

FPF’s flagship European event, the Brussels Privacy Symposium, co-organized with the Brussels Privacy Hub, brought together stakeholders to examine the GDPR’s role in the EU’s evolving digital framework. In partnership with OneTrust, FPF also published an updated Conformity Assessment under the EU AI Act: A Step-by-Step Guide and infographic, providing a roadmap for organizations to assess high-risk AI systems and meet accountability requirements. FPF closely followed the European Commission’s Digital Omnibus proposals, offering exclusive member analysis and public insights, including a rapid first-reaction LinkedIn Live discussion.

State and Federal U.S. Legislation

In 2025, FPF continued to track and analyze critical legislation in the privacy landscape from AI chatbots to neural data across various states in the U.S.

We unpacked the new wave of state chatbot legislation, focusing specifically on California’s SB 243, which made California the first state to pass legislation governing companion chatbots with protections explicitly tailored to minors, and Utah’s SB 332, SB 226, and HB 452. Utah proved to be an early mover in state AI legislation as lawmakers enacted three generative AI bills, amending Utah’s 2024 Artificial Intelligence Policy Act (AIPA) and establishing new regulations for mental health chatbots.

FPF compared California’s SB 53, the Transparency in Frontier Artificial Intelligence Act (TFAIA), which made California the first state to enact a statute specifically targeting frontier AI safety and transparency, to the New York RAISE Act, anticipating where U.S. policy on frontier model safety may be headed.

We also looked at how existing state privacy laws, such as the Montana Consumer Data Privacy Act (MCDPA), were amended to create new protections for minors, and examined how SB 1295 will amend the Connecticut Data Privacy Act (CTDPA), expanding its scope, adding a new consumer right, heightening the already strong protections for minors, and more.

Data-driven pricing also became a critical topic as states across the U.S. introduced new legislation to regulate how companies use algorithms and personal data to set consumer prices; these modern pricing models can personalize pricing at scale over time and are under increasing scrutiny. FPF looked at how legislation varies from state to state, the potential consequences of that legislation, and the future of enforcement against these practices.

We explored “neural data”, or information about people’s central and/or peripheral nervous system activity. As of July 2025, four states have passed laws that seek to regulate “neural data.” FPF detailed in a blog why, given the nature of “neural data,” it is challenging to get the definition just right for the sake of regulation.

Building on last year’s “Anatomy of State Comprehensive Privacy Law,” our recent report breaks down the critical commonalities and differences in the laws’ components that collectively constitute the “anatomy” of a state comprehensive privacy law.

Also this year, FPF hosted its 15th Annual Privacy Papers for Policymakers Award, recognizing cutting-edge privacy scholarship, bringing together brilliant minds at a critical time for data privacy amid the rise of AI. We listened to insightful discussions between our awardees and an exceptional lineup of privacy academics and industry leaders, while connecting with our awardees through a networking session with privacy professionals, policymakers and others.

U.S. Policy

AgeTech

FPF was awarded a grant from the Alfred P. Sloan Foundation to lead the two-year research project, “Aging at Home: Caregiving, Privacy, and Technology,” in partnership with the University of Arizona’s Eller College of Management. FPF launched the project in April, setting out to explore the complex intersection of privacy, economics, and the use of emerging technologies designed to support aging populations (“AgeTech”). In July, we released our first blog as part of the project, posing five essential privacy questions for older adults and caregivers to consider when utilizing tech to support aging populations.

During the holiday season, FPF also highlighted three types of AI-enabled AgeTech and the privacy and data protection considerations to navigate when giving gifts to older individuals and caregivers.

Youth Privacy

The start of 2025 was marked by significant policy activity at both the federal and state levels, focusing on legislative proposals aimed at strengthening online safeguards for minors. 

FPF kicked off the year by releasing a redline comparison of the Federal Trade Commission’s notice of proposed changes to the Children’s Online Privacy Protection Act (COPPA) Rule. Later in the spring, an amendment to the COPPA Rule was reintroduced in the Senate and FPF completed a second redline, comparing the newly proposed COPPA 2.0 bill to the original COPPA Rule. 

Towards the end of the year, the U.S. House Energy & Commerce Committee introduced a comprehensive bill package to advance child online privacy and safety, including its own version of COPPA 2.0, marking the latest step toward modernizing the nearly 30-year-old Children’s Online Privacy Protection Act.

FPF analyzed how the new House proposal compares to long-standing Senate efforts, what’s changing, and what it means for families, platforms, and policymakers navigating today’s digital landscape.

States across the U.S. also took action, introducing legislation to enhance the privacy and safety of kids’ and teens’ online experiences. Using the federal COPPA framework as a guide, FPF analyzed Arkansas’s proposed “Arkansas Children and Teens’ Online Privacy Protection Act”, describing how the bill establishes new privacy protections for teens aged 13 to 16. Other states, such as Vermont and Nebraska, took a different approach, opting to pass Age-Appropriate Design Code Acts (AADCs). FPF discussed how these new bills take two very different approaches to a common goal, crafting a design code that can withstand First Amendment scrutiny. 

We utilized infographics to visually illustrate complex issues related to technology and children’s online experiences. In celebration of Safer Internet Day 2025, we released an infographic explaining how encryption technology plays a crucial role in ensuring data privacy and online safety for a new generation of teens and children. We also illustrated the Spectrum of Artificial Intelligence, exploring the wide range of current use cases for Artificial Intelligence (AI) in education and future possibilities and constraints. Finally, we released an infographic and readiness checklist that details the various types of deepfakes and the varied risks and considerations posed by each in a school setting, ranging from the potential for fabricated phone calls and voice messages impersonating teachers to the sharing of forged, non-consensual intimate imagery (NCII).

As agencies face increasing pressure to leverage sensitive student and institutional data for analysis and research, Privacy Enhancing Technologies (PETs) offer a potential solution: they are designed to protect data privacy while maintaining the utility of analytical results. FPF released a landscape report on the adoption of PETs by State Education Agencies (SEAs).

Data Sharing for Research Tracker
In March, we celebrated Open Data Day by launching the Data Sharing for Research Tracker, a growing list of organizations that make data available for researchers. The tracker helps researchers locate data for secondary analysis, and helps organizations raise awareness of their data-sharing programs and benchmark them against what other organizations offer.

Foundation Support

FPF’s funding spans every industry sector and includes competitive project grants from the U.S. National Science Foundation and leading private foundations. We work to support ethical access to data by researchers and responsible uses of technology in K-12 education, and we seek to advance the use of Privacy Enhancing Technologies in the private and public sectors.

FPF Membership 

FPF Membership provides the leading community for privacy professionals to meet, network, and engage in discussions on top issues in the privacy landscape. 

The Privacy Executives Network (PEN) Summit

We held our 2nd annual PEN Summit in Berkeley, California, which showcased the power of quality peer-to-peer conversations, focusing on the most pressing global privacy and AI issues. The event opened with the latest from CPPA Executive Director Tom Kemp, followed by dynamic peer-to-peer roundtables, and closed with a lively half-day privacy simulation in which participants were challenged to pool their knowledge and identify potential solutions to a scenario that privacy executives may face in their careers.

New Trainings for FPF Members

FPF Membership expanded its benefits with complimentary trainings for all members. FPF members are able to attend live virtual trainings and access training recordings and presentation slides via the FPF Member Portal. We held our first course for members in late September on De-Identification, followed by a training on running a Responsible AI program. Stay tuned for more courses next year, and be sure to join the FPF Training community in the Member Portal to receive updates on future trainings and view existing training materials.

FPF convenes top privacy and data protection minds and can give your company access to our outstanding network through FPF membership. Learn more on how to become an FPF member.

Top-level FPF Convenings and Engagements from 2025

DC Privacy Forum: Governance for Digital Leadership and Innovation

This year, FPF hosted two major events gathering leading experts and policymakers for critical discussions on privacy, AI, and digital regulation. In D.C., FPF hosted our second annual DC Privacy Forum, convening a broad audience of key government, civil society, academic, and corporate privacy leaders to discuss AI policy, critical topics in privacy, and other priority issues for the new administration and policymakers.

Brussels Privacy Symposium: A Data Protection (R)evolution?

Our ninth edition of the Brussels Privacy Symposium focused on the impact of the European Commission’s competitiveness and simplification agenda on digital regulation, including data protection. This year’s event featured bold discussions on refining the GDPR, strengthening regulatory cooperation, and shaping the future of AI governance. Read the report here.  

FPF experts also took the stage across the globe: 

New initiatives and expanding FPF’s network:

Please continue to follow FPF’s work by subscribing to our monthly briefing and following us on LinkedIn, Twitter/X, and Instagram. On behalf of the FPF team, we wish you a very Happy New Year and look forward to 2026!

This material is based upon work supported by the Alfred P. Sloan Foundation under Grant No. G-2025-2519, Aging at Home: Caregiving, Privacy, and Technology.

FPF Releases Issue Brief on Vietnam’s Law on Protection of Personal Data and the Law on Data

Vietnam is undergoing a sweeping transformation of its data protection and governance framework. Over the past two years, the country has accelerated efforts to modernize its regulatory architecture for data, culminating in the passage of two landmark pieces of legislation in 2025: the Law on Personal Data Protection (Law No. 91/2025/QH15) (PDP Law), which elevates the Vietnamese data protection framework from an executive act to a legislative act while preserving many of the existing provisions, and the Law on Data (Law No. 60/2025/QH15) (Data Law). Notably, the PDP Law is expected to come into effect on January 1, 2026.

The Data Law is Vietnam’s first comprehensive framework for the governance of digital data (both personal and non-personal). It applies to all Vietnamese agencies, organizations, and individuals, as well as foreign agencies, organizations, and individuals that are either in Vietnam or directly participating in, or otherwise related to, digital data activities in Vietnam. The Data Law became effective in July 2025. Together, these two laws mark a significant legislative shift in how Vietnam approaches data regulation, addressing overlapping domains of data protection, data governance, and emerging technologies.

This Issue Brief analyzes the two laws, which together define a new, comprehensive regime for data protection and data governance in Vietnam. The key takeaways from this joint analysis show that:

This Issue Brief has three objectives. First, it summarizes key changes between the PDP Law and Vietnam’s existing data protection regime, and draws a comparison between the PDP Law and the EU’s General Data Protection Regulation (GDPR) (Section 1). Second, it analyzes the interplay between the Data Law and the PDP Law (Section 2). We then provide key takeaways for organizations as they navigate the implementation of these laws (Section 3). 

You can view the updated version of this Issue Brief here.

Five Big Questions (and Zero Predictions) for the U.S. Privacy and AI Landscape in 2026

Introduction

For better or worse, the U.S. is heading into 2026 under a familiar backdrop: no comprehensive federal privacy law, plenty of federal rumblings, and state legislators showing no signs of slowing down. What has changed is just how intertwined privacy, youth, and AI policy debates have become, whether the issue is sensitive data, data-driven pricing, or the increasingly spirited discussions around youth online safety. And with a new administration reshuffling federal priorities, the balance of power between Washington and the states may shift yet again.

In a landscape this fluid, it’s far too early to make predictions (and unwise to pretend otherwise). Instead, this post highlights five key questions that will influence how legislators and regulators navigate the evolving intersection of privacy and AI policy in the year ahead.

  1. No new comprehensive privacy laws in 2025: A portent of stability, or will 2026 increase legal fragmentation?

One of the major privacy storylines of 2025 is that no new state comprehensive privacy laws were enacted this year. Although that is a significant departure from the pace set in prior years, it is not due to an overall decrease in legislative activity on privacy and related issues. FPF’s U.S. Legislation team tracked hundreds of privacy bills, nine states amended their existing comprehensive privacy laws, and many more enacted notable sectoral laws dealing with artificial intelligence, health, and youth privacy and online safety. Nevertheless, the number of comprehensive privacy laws remains fixed for now at 19 (or 20, for those who count Florida). 

Reading between the lines, there are several things this could mean for 2026. Perhaps the lack of new laws this year was due more to chance than anything else, and next year will return to business as usual. After all, Alabama, Arkansas, Georgia, Massachusetts, Oklahoma, Pennsylvania, Vermont, and West Virginia all had bills reach a floor vote or cross over to the second chamber, and some of those bills have been carried over into the 2026 legislative session. Or perhaps this indicates that a critical mass of state laws has been reached and we should expect stability, at least in terms of which states do and do not have comprehensive privacy laws. 

A third possibility is that next year promises something different. Although the landscape has come to be dominated by the “Connecticut model” for privacy, a growing bloc of other New England states is experimenting with bolder, more restrictive frameworks. Vermont, Maine, and Massachusetts all have live bills going into the 2026 legislative session that would, if enacted, represent some of the strictest state privacy laws on the books, many drawing heavily from Maryland’s substantive data minimization requirements. Vermont’s proposal would also include a private right of action, and Massachusetts’ proposals, S.2619 and H.4746, would ban the sale of sensitive data and targeted advertising to minors. State privacy law is clearly at an inflection point, and what these states do in 2026, including whether they move in lockstep, could prove influential on the state privacy landscape. 

— Jordan Francis

  2. Are age signals the future of youth online protections in 2026?

As states have ramped up youth online privacy and safety legislation in recent years, a perennial question emerges each legislative session like clockwork: how can entities apply protections to minors if they don’t know who is a minor? Historically, some legislatures have tried to solve this riddle with knowledge standards that define when entities know, or should know, that a user is a minor, while others have tested age assurance requirements placed at the point of access to covered services. In 2025, however, that experimentation took a notable turn with the emergence of novel “age signals” frameworks. 

Unlike earlier models that focused on service-level age assurance, age signals frameworks seek to shift age determination responsibilities upstream in the technology stack, relying on app stores or operating system providers to generate and transmit age signals to developers. In 2025, lawmakers enacted two distinct versions of this approach: the App Store Accountability Act (ASAA) model in Utah, Texas, and Louisiana; and the California AB 1043 model. 

While both frameworks rely on age signaling concepts, they diverge significantly in scope and regulatory ambition. The ASAA model assigns app stores responsibility for age verification and parental consent, and requires them to send developers age signals that indicate (1) users’ ages and (2), for minors, whether parental consent has been obtained. These obligations introduce new and potentially significant technical challenges for companies, which must integrate age-signaling systems while reconciling these obligations with requirements under COPPA and state privacy laws. Meanwhile, Texas’s ASAA law is facing two First Amendment challenges in federal court, with plaintiffs seeking preliminary injunctions before the law’s January 1 effective date. 

California’s AB 1043 represents a different approach. The law requires operating system (OS) providers to collect age information during device setup and share this information with developers via the app store. This law does not require parental consent or additional substantive protections for minors; its sole purpose is to enable age data sharing to support compliance with laws like the CCPA and COPPA. The AB 1043 model, while still mandating novel age-signaling dynamics between operating system providers, app stores, and developers, could be simpler to implement, and it received notable support from industry stakeholders prior to enactment.

So what might one ponder, but not dare predict, about the future of age signals in 2026? Two developments bear watching. First, the highly anticipated decision on the plaintiffs’ request for an injunction against the Texas law may set the direction for how aggressively states replicate this model, though momentum may continue, particularly given federal interest reflected in the House Energy & Commerce Committee’s introduction of H.R. 3149 to nationalize the ASAA framework. Second, the California AB 1043 model, which has not yet been challenged in court, may gain traction in 2026 as a more constitutionally durable option. For states that already have robust protections for minors in existing privacy law, the AB 1043 model may be an attractive way to facilitate compliance with those obligations.

— Daniel Hales

  3. Is 2026 shaping up to be another “Year of the Chatbots,” or is a legislative plot twist on the horizon?

If 2025 taught us anything, it’s that chatbots have stepped out of the supporting cast and into the starring role in AI policy debates. This year marked the first time multiple states (including Utah, New York, California, and Maine) enacted laws that explicitly address AI chatbots. Much of that momentum followed a wave of high-profile incidents involving “companion chatbots,” systems designed to simulate emotional relationships. Several families alleged that these tools encouraged their children to self-harm, sparking litigation, congressional testimony, and inquiries from the Federal Trade Commission (FTC) and Congress, and carrying chatbots to the forefront of policymakers’ minds.

States responded quickly. California (SB 243) and New York (S-3008C) enacted disclosure-based laws requiring companion chatbot operators to maintain safety protocols and clearly tell users when they are interacting with AI, with California adding extra protections for minors. Importantly, neither state opted for a ban on chatbot use, focusing instead on transparency and notice rather than prohibition.

And the story isn’t slowing down in 2026. Several states have already pre-filed chatbot bills, most centering once again on youth safety and mental health. Some may build on California’s SB 243 with stronger youth-specific requirements or tighter ties to age assurance frameworks. Other states may broaden the conversation, looking at chatbot use among older adults or in education and employment, as well as diving deeper into questions of sensitive data.

The big question for the year ahead: Will policymakers stick with disclosure-first models, or pivot toward outright use restrictions on chatbots, especially for minors? Congress is now weighing in with three bipartisan proposals (the GUARD Act, the CHAT Act, and the SAFE Act), ranging from disclosure-forward approaches to full restrictions on minors’ access to companion chatbots. With public attention high and lawmakers increasingly interested in action, 2026 may be the year Congress steps in, potentially reshaping, or even preempting, state frameworks adopted in 2025.

— Justine Gluck

  4. Will health and location data continue to dominate conversations around sensitive data in 2026?

While 2025 did not produce the hoped-for holiday gift of compliance clarity for sensitive or health data, the year did supply flurries, storms, light dustings, and drifts of legislative and enforcement activity. In 2025, states focused heavily on health inferences, neural data, and location data, often targeting the sale and sharing of this information. 

For health, the proposed New York Health Information Privacy Act captured headlines and left us waiting. That bill (still active at the time of writing) broadly defined “regulated health information” to include data such as location and payment information. It included a “strictly necessary” standard for the use of regulated health information and unique, heightened consent requirements. Health data also remains a topic of interest at the federal level. Senator Cassidy (R-LA) recently introduced the Health Information Privacy Reform Act (HIPRA / S. 3097), which would expand federal health privacy protections to cover new technologies such as smartwatches and health apps. Enforcers, too, got in on the action. The California DOJ completed a settlement concerning the disclosure of consumers’ viewing history of web pages that can give rise to sensitive health inferences.

Location was another sensitive data category singled out by lawmakers and enforcers in 2025. In Oregon, HB 2008 amended the Oregon Consumer Privacy Act to ban the sale of precise location data (as well as the personal data of individuals under the age of 16). Colorado also amended its comprehensive privacy law to add precise location data (defined as within 1,850 feet) to the definition of sensitive data, subjecting it to opt-in consent requirements. Other states, such as California, Illinois, Massachusetts, and Rhode Island, also introduced laws restricting the collection and use of location data, often by requiring heightened consent for companies to sell or share such data (if not outright banning it). As with health data, enforcers were also looking at location data practices. In Texas, we saw the first lawsuit under a state comprehensive privacy law, and it focused on the collection and use of location data (namely, inadequate notice and failure to obtain consent). The FTC was likewise scrutinizing location data practices throughout the year.

Sensitive data—health, location, or otherwise—is unlikely to get less complex in 2026. New laws are being enacted and enforcement activity is heating up. The regulatory climate is shifting—freezing out old certainties and piling on high-risk categories like health inferences, location data, and neural data. In light of drifting definitions, fractal requirements, technologist-driven investigations, and slippery contours, robust data governance may offer an option to glissade through a changing landscape. Accurately mapping data flows and having ready documentation seems like essential equipment for unfavorable regulatory weather. 

— Jordan Wrigley, Beth Do & Jordan Francis

  5. Will a federal moratorium steer the AI policy conversation in 2026?

If there was one recurring plot point in 2025, it was the interest at the White House and among some congressional leaders in hitting the pause button on state AI regulation. The year opened with lawmakers attempting to tuck a 10-year moratorium on state AI laws into the “One Big Beautiful Bill,” a move that would have frozen enforcement of a wide swath of state frameworks. That effort fizzled due to pushback from a range of Republican and Democratic leaders, but the idea didn’t: similar language resurfaced during negotiations over the annual defense spending bill (NDAA). Ultimately, in December, President Trump signed an executive order, “Ensuring a National Policy Framework for Artificial Intelligence,” aimed at curbing state AI regulations deemed excessive through an AI Litigation Task Force and restrictions on funding for states enforcing AI laws that conflict with the principles outlined in the EO. This EO tees up a moment where states, agencies, and industry may soon be navigating not just compliance with new laws, but also federal challenges to how those laws operate (as well as federal challenges to the EO itself).  

A core challenge of the EO is the question of what, exactly, qualifies as an “AI law.” While standalone statutes such as Colorado’s AI Act (SB 205) are explicit targets of the EO’s efforts, many state measures are not written as AI-specific laws at all. Instead, they are embedded within broader privacy, safety, or consumer protection frameworks. Depending on how “AI law” is construed, a wide range of existing state requirements could fall within scope and potentially face challenge, including AI-related updates to existing civil rights or anti-discrimination statutes; privacy law provisions governing automated decisionmaking, profiling, and the use of personal data for AI training; and criminal statutes addressing deepfakes and non-consensual intimate images. 

Notably, however, the EO also identifies specific areas where future federal action would not preempt state laws, including child safety protections, AI compute and data-center infrastructure, state government procurement and use of AI, and (more open-endedly) “other topics as shall be determined.” That last carveout leaves plenty of room for interpretation and makes clear that the ultimate boundaries of federal preemption are still very much in flux. In practice, what ends up in or out of scope will hinge on how the EO’s text is interpreted and implemented. Technologies like chatbots highlight this ambiguity, as they can simultaneously trigger child safety regimes and AI governance requirements that the administration may seek to constrain.

That breadth raises another big question for 2026: As the federal government steps in to limit state AI activity, will a substantive federal framework emerge in its place? Federal action on AI has been limited so far, which means a pause on state laws could arrive without a national baseline to fill the gaps, a notable departure from traditional preemption, where federal standards typically replace state ones outright. At the same time, Section 8(a) of the EO signals the Administration’s commitment to work with Congress to develop a federal legislative framework, while the growing divergence in state approaches has created a compliance patchwork that organizations operating nationwide must navigate.

With this EO, the role of state versus federal law in technology policy is likely to be the defining issue of 2026, with the potential to reshape not only state AI laws but the broader architecture of U.S. privacy regulation.

— Tatiana Rice & Justine Gluck

Youth Privacy in Australia: Insights from National Policy Dialogues

Throughout the fall of 2024, the Future of Privacy Forum (FPF), in partnership with the Australian Academic and Research Network (AARNet) and the Australian Strategic Policy Institute (ASPI), convened a series of three expert panel discussions across Australia exploring the intersection of privacy, security, and online safety for young people. This event series built on the success of a one-day event FPF hosted in fall 2023 on privacy, safety, and security in connection with the industry standards promulgated by the Office of the eSafety Commissioner (eSafety).

These discussions took place in Sydney, Melbourne, and Canberra, and brought together leading academics, government representatives, industry voices, and civil society organizations. The discussions provide insight into the Australian approach to improving online experiences for young people through law and regulation, policy, and education. By bringing together experts across disciplines, the event series aimed to bridge divides between privacy, security, and safety conversations, and surface key tensions and opportunities for future work. This report summarizes key themes that emerged across these conversations for policymakers to consider as they develop forward-looking policies that support young people’s wellbeing and rights online.