FPF Releases Issue Brief on Vietnam’s Law on Protection of Personal Data and the Law on Data
Vietnam is undergoing a sweeping transformation of its data protection and governance framework. Over the past two years, the country has accelerated efforts to modernize its regulatory architecture for data, culminating in the passage of two landmark pieces of legislation in 2025: the Law on Personal Data Protection (Law No. 91/2025/QH15) (PDP Law), which elevates the Vietnamese data protection framework from an executive act to a legislative act while preserving many existing provisions, and the Law on Data (Law No. 60/2025/QH15) (Data Law). Notably, the PDP Law is expected to come into effect on January 1, 2026.
The Data Law is Vietnam’s first comprehensive framework for the governance of digital data (both personal and non-personal). It applies to all Vietnamese agencies, organizations, and individuals, as well as to foreign agencies, organizations, and individuals that are either in Vietnam or directly participate in, or are related to, digital data activities in Vietnam. The Data Law took effect in July 2025. Together, these two laws mark a significant legislative shift in how Vietnam approaches data regulation, addressing overlapping domains of data protection, data governance, and emerging technologies.
This Issue Brief analyzes the two laws, which together define a new, comprehensive regime for data protection and data governance in Vietnam. The key takeaways from this joint analysis show that:
The new PDP Law elevates and enhances data protection in Vietnam by preserving much of the existing regime while introducing important refinements, such as a distinctive approach to defining “basic” and “sensitive” personal data and a more nuanced cross-border data transfer regime with new exceptions, even if that regime still revolves around Transfer Impact Assessments (TIAs).
However, the PDP Law continues to adopt a consent-focused regime, even as it provides clearer conditions for what constitutes valid consent.
The PDP Law outlines enhanced sector-specific obligations for high-risk processing activities, such as employment and recruitment, healthcare, banking, finance, advertising and social networking platforms.
The intersection of the PDP Law and the Data Law creates compliance implications for organizations navigating cross-border data transfers, as the present regulatory regime doubles down on the state-supervised model for such transfers.
Finally, risk and impact assessments are emerging as a central, albeit uncertain, aspect of the new regime.
This Issue Brief has three objectives. First, it summarizes key changes between the PDP Law and Vietnam’s existing data protection regime, and draws a comparison between the PDP Law and the EU’s General Data Protection Regulation (GDPR) (Section 1). Second, it analyzes the interplay between the Data Law and the PDP Law (Section 2). Third, it provides key takeaways for organizations as they navigate the implementation of these laws (Section 3).
You can view the updated version of this Issue Brief here.
Five Big Questions (and Zero Predictions) for the U.S. Privacy and AI Landscape in 2026
Introduction
For better or worse, the U.S. is heading into 2026 under a familiar backdrop: no comprehensive federal privacy law, plenty of federal rumblings, and state legislators showing no signs of slowing down. What has changed is just how intertwined privacy, youth, and AI policy debates have become, whether the issue is sensitive data, data-driven pricing, or the increasingly spirited discussions around youth online safety. And with a new administration reshuffling federal priorities, the balance of power between Washington and the states may shift yet again.
In a landscape this fluid, it’s far too early to make predictions (and unwise to pretend otherwise). Instead, this post highlights five key questions that will influence how legislators and regulators navigate the evolving intersection of privacy and AI policy in the year ahead.
1. No new comprehensive privacy laws in 2025: A portent of stability, or will 2026 increase legal fragmentation?
One of the major privacy storylines of 2025 is that no new state comprehensive privacy laws were enacted this year. Although that is a significant departure from the pace set in prior years, it is not due to an overall decrease in legislative activity on privacy and related issues. FPF’s U.S. Legislation team tracked hundreds of privacy bills, nine states amended their existing comprehensive privacy laws, and many more enacted notable sectoral laws dealing with artificial intelligence, health, and youth privacy and online safety. Nevertheless, the number of comprehensive privacy laws remains fixed for now at 19 (or 20, for those who count Florida).
Reading between the lines, there are several things this could mean for 2026. Perhaps the lack of new laws this year was more due to chance than anything else, and next year will return to business as usual. After all, Alabama, Arkansas, Georgia, Massachusetts, Oklahoma, Pennsylvania, Vermont, and West Virginia all had bills reach a floor vote or cross over to the other chamber, and some of those bills have been carried over into the 2026 legislative session. Or perhaps this indicates that the map of state comprehensive privacy laws has reached a saturation point and we should expect stability, at least in terms of which states do and do not have comprehensive privacy laws.
A third possibility is that next year promises something different. Although the landscape has come to be dominated by the “Connecticut model” for privacy, a growing bloc of other New England states is experimenting with bolder, more restrictive frameworks. Vermont, Maine, and Massachusetts all have live bills going into the 2026 legislative session that would, if enacted, represent some of the strictest state privacy laws on the books, many drawing heavily from Maryland’s substantive data minimization requirements. Vermont’s proposal would also include a private right of action, and Massachusetts’ proposals, S.2619 and H.4746, would ban the sale of sensitive data and targeted advertising to minors. State privacy law is clearly at an inflection point, and what these states do in 2026, including whether they move in lock-step, could prove influential on the broader state privacy landscape.
— Jordan Francis
2. Are age signals the future of youth online protections in 2026?
As states have ramped up youth online privacy and safety legislation in recent years, a perennial question emerges each legislative session like clockwork: how can entities apply protections to minors if they don’t know who is a minor? Historically, some legislatures have tried to solve this riddle with different approaches to knowledge standards that define when entities know, or should know, whether a user is a minor, while others have tested age assurance requirements placed at the point of access to covered services. In 2025, however, that experimentation took a notable turn with the emergence of novel “age signals” frameworks.
Unlike earlier models that focused on service-level age assurance, age signals frameworks seek to shift age determination responsibilities upstream in the technology stack, relying on app stores or operating system providers to generate and transmit age signals to developers. In 2025, lawmakers enacted two distinct versions of this approach: the App Store Accountability Act (ASAA) model in Utah, Texas, and Louisiana; and the California AB 1043 model.
While both frameworks rely on age signaling concepts, they diverge significantly in scope and regulatory ambition. The ASAA model assigns app stores responsibility for age verification and parental consent, and requires them to send developers age signals that indicate (1) users’ ages and (2), for minors, whether parental consent has been obtained. These obligations introduce new and potentially significant technical challenges for companies, which must integrate age-signaling systems while reconciling these obligations with requirements under COPPA and state privacy laws. Meanwhile, Texas’s ASAA law is facing two First Amendment challenges in federal court, with plaintiffs seeking preliminary injunctions before the law’s January 1 effective date.
California’s AB 1043 represents a different approach. The law requires operating system (OS) providers to collect age information during device setup and share this information with developers via the app store. This law does not require parental consent or additional substantive protections for minors; its sole purpose is to enable age data sharing to support compliance with laws like the CCPA and COPPA. The AB 1043 model, while still mandating novel age-signaling dynamics between operating system providers, app stores, and developers, could be simpler to implement and received notable support from industry stakeholders prior to enactment.
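To make the data flow concrete, the snippet below is a purely hypothetical sketch of what an age signal passed from an OS provider or app store to a developer might contain under the two models. Neither the ASAA statutes nor AB 1043 prescribes a technical payload or API, so every field name here is an illustrative assumption rather than statutory or platform-defined text.

```typescript
// Hypothetical illustration only: neither the ASAA statutes nor California AB 1043
// defines a payload format or API; all names below are assumptions for discussion.

/** Age brackets a platform might report instead of an exact birthdate. */
type AgeCategory = "under13" | "13to15" | "16to17" | "adult";

/** What an age signal sent to a developer might contain under the two 2025 models. */
interface AgeSignal {
  userAgeCategory: AgeCategory;        // both models: the user's age or age range
  parentalConsentObtained?: boolean;   // ASAA model: verified parental consent status for minors
  source: "os-provider" | "app-store"; // AB 1043 routes OS-collected age data via the app store
  issuedAt: string;                    // e.g., an ISO 8601 timestamp of when the signal was generated
}

// Example: a developer gating minor-specific protections on the received signal.
function requiresMinorProtections(signal: AgeSignal): boolean {
  return signal.userAgeCategory !== "adult";
}
```

Whatever format platforms ultimately adopt, developers would still need to reconcile these signals with their existing COPPA and state privacy law consent flows.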
So what might one ponder—but not dare predict—about the future of age signals in 2026? Two developments bear watching. First, the highly anticipated decision on the plaintiffs’ request for an injunction against the Texas law may set the direction for how aggressively states will replicate this model—though momentum may continue, particularly given federal interest reflected in the House Energy & Commerce Committee’s introduction of H.R. 3149 to nationalize the ASAA framework. Second, the California AB 1043 model, which has not yet been challenged in court, may gain traction in 2026 as a more constitutionally durable option. For states that already have robust protections for minors in existing privacy law, the AB 1043 model may serve as an attractive vehicle for facilitating compliance with those obligations.
– Daniel Hales
3. Is 2026 shaping up to be another “Year of the Chatbots,” or is a legislative plot twist on the horizon?
If 2025 taught us anything, it’s that chatbots have stepped out of the supporting cast and into the starring role in AI policy debates. This year marked the first time multiple states (including Utah, New York, California, and Maine) enacted laws that explicitly address AI chatbots. Much of that momentum followed a wave of high-profile incidents involving “companion chatbots,” systems designed to simulate emotional relationships. Several families alleged that these tools encouraged their children to self-harm, sparking litigation, congressional testimony, and inquiries from both the Federal Trade Commission (FTC) and Congress, and carrying chatbots to the forefront of policymakers’ minds.
States responded quickly. California (SB 243) and New York (S-3008C) enacted disclosure-based laws requiring companion chatbot operators to maintain safety protocols and clearly tell users when they are interacting with AI, with California adding extra protections for minors. Importantly, neither state opted for a ban on chatbot use, focusing instead on transparency and notice rather than prohibition.
And the story isn’t slowing down in 2026. Several states have already pre-filed chatbot bills, most centering once again on youth safety and mental health. Some may build on California’s SB 243 with stronger youth-specific requirements or tighter ties to age assurance frameworks. Other states may broaden the conversation, looking at chatbot use among older adults, in education, or in employment, as well as diving deeper into questions of sensitive data.
The big question for the year ahead: Will policymakers stick with disclosure-first models, or pivot toward outright use restrictions on chatbots, especially for minors? Congress is now weighing in with three bipartisan proposals (the GUARD Act, the CHAT Act, and the SAFE Act), ranging from disclosure-forward approaches to full restrictions on minors’ access to companion chatbots. With public attention high and lawmakers increasingly interested in action, 2026 may be the year Congress steps in, potentially reshaping, or even preempting, state frameworks adopted in 2025.
– Justine Gluck
4. Will health and location data continue to dominate conversations around sensitive data in 2026?
While 2025 did not produce the hoped-for holiday gift of compliance clarity for sensitive or health data, the year did supply flurries, storms, light dustings, and drifts of legislative and enforcement activity. In 2025, states focused heavily on health inferences, neural data, and location data, often targeting the sale and sharing of this information.
For health, the proposed New York Health Information Privacy Act captured headlines and left us waiting. That bill (still active at the time of writing) broadly defined “regulated health information” to include data such as location and payment information. It included a “strictly necessary” standard for the use of regulated health information and unique, heightened consent requirements. Health data also remains a topic of interest at the federal level. Senator Cassidy (R-LA) recently introduced the Health Information Privacy Reform Act (HIPRA / S. 3097), which would expand federal health privacy protections to include new technologies such as smartwatches and health apps. Enforcers, too, got in on the action. The California DOJ completed a settlement concerning the disclosure of consumers’ viewing history of web pages that can create sensitive health inferences.
Location was another sensitive data category singled out by lawmakers and enforcers in 2025. In Oregon, HB 2008 amended the Oregon Consumer Privacy Act to ban the sale of precise location data (as well as the personal data of individuals under the age of 16). Colorado also amended its comprehensive privacy law to add precise location data (defined as within 1,850 feet) to the definition of sensitive data, subjecting it to opt-in consent requirements. Other states, such as California, Illinois, Massachusetts, and Rhode Island, also introduced laws restricting the collection and use of location data, often by requiring heightened consent for companies to sell or share such data (if not outright banning it). As with health data, enforcers were also looking at location data practices. In Texas, we saw the first lawsuit under a state comprehensive privacy law, and it focused on the collection and use of location data (namely, inadequate notice and failure to obtain consent). The FTC was likewise looking at location data practices throughout the year.
Sensitive data—health, location, or otherwise—is unlikely to get less complex in 2026. New laws are being enacted and enforcement activity is heating up. The regulatory climate is shifting—freezing out old certainties and piling on high-risk categories like health inferences, location data, and neural data. In light of drifting definitions, fractal requirements, technologist-driven investigations, and slippery contours, robust data governance may offer an option to glissade through a changing landscape. Accurately mapping data flows and having ready documentation seems like essential equipment for unfavorable regulatory weather.
— Jordan Wrigley, Beth Do & Jordan Francis
5. Will a federal moratorium steer the AI policy conversation in 2026?
If there’s been one recurring plot point in 2025, it was the interest at the White House and among some congressional leaders in hitting the pause button on state AI regulation. The year opened with lawmakers attempting to tuck a 10-year moratorium on state AI laws into the “One Big Beautiful Bill,” a move that would have frozen enforcement of a wide swath of state frameworks. That effort fizzled due to pushback from a range of Republican and Democratic leaders, but the idea didn’t: similar language resurfaced during negotiations over the annual defense spending bill (NDAA). Ultimately, in December, President Trump signed an executive order, “Ensuring a National Policy Framework for Artificial Intelligence,” which aims to curb state AI regulations deemed excessive through an AI Litigation Task Force and restrictions on funding for states that enforce AI laws conflicting with the principles outlined in the EO. This EO tees up a moment where states, agencies, and industry may soon be navigating not just compliance with new laws, but also federal challenges to how those laws operate (as well as federal challenges to the EO itself).
A core challenge of the EO is the question of what, exactly, qualifies as an “AI law.” While standalone statutes such as Colorado’s AI Act (SB 205) are explicit targets of the EO’s efforts, many state measures are not written as AI-specific laws at all. Instead, they are embedded within broader privacy, safety, or consumer protection frameworks. Depending on how “AI law” is construed, a wide range of existing state requirements could fall within scope and potentially face challenge, including AI-related updates to existing civil rights or anti-discrimination statutes; privacy law provisions governing automated decision-making, profiling, and the use of personal data for AI training; and criminal statutes addressing deepfakes and non-consensual intimate images.
Notably, however, the EO also identifies specific areas where future federal action would not preempt state laws, including child safety protections, AI compute and data-center infrastructure, state government procurement and use of AI, and (more open-endedly) “other topics as shall be determined.” That last carveout leaves plenty of room for interpretation and makes clear that the ultimate boundaries of federal preemption are still very much in flux. In practice, what ends up in or out of scope will hinge on how the EO’s text is interpreted and implemented. Technologies like chatbots highlight this ambiguity, as they can simultaneously trigger child safety regimes and AI governance requirements that the administration may seek to constrain.
That breadth raises another big question for 2026: As the federal government steps in to limit state AI activity, will a substantive federal framework emerge in its place? Federal action on AI has been limited so far, which means a pause on state laws could arrive without a national baseline to fill the gaps, a notable departure from traditional preemption, where federal standards typically replace state ones outright. At the same time, Section 8(a) of the EO signals the Administration’s commitment to work with Congress to develop a federal legislative framework, while the growing divergence in state approaches has created a compliance patchwork that organizations operating nationwide must navigate.
With this EO, the role of state versus federal law in technology policy is likely to be the defining issue of 2026, with the potential to reshape not only state AI laws but the broader architecture of U.S. privacy regulation.
— Tatiana Rice & Justine Gluck
Youth Privacy in Australia: Insights from National Policy Dialogues
Throughout the fall of 2024, the Future of Privacy Forum (FPF), in partnership with the Australian Academic and Research Network (AARNet) and Australian Strategic Policy Institute (ASPI), convened a series of three expert panel discussions across Australia exploring the intersection of privacy, security, and online safety for young people. This event series built on the success of a fall 2023 one-day event that FPF hosted on privacy, safety, and security regarding industry standards promulgated by the Office of the eSafety Commissioner (eSafety).
These discussions took place in Sydney, Melbourne, and Canberra, and brought together leading academics, government representatives, industry voices, and civil society organizations. The discussions provided insight into the Australian approach to improving online experiences for young people through law and regulation, policy, and education. By bringing together experts across disciplines, the event series aimed to bridge divides between privacy, security, and safety conversations, and surface key tensions and opportunities for future work. This report summarizes key themes that emerged across these conversations for policymakers to consider as they develop forward-looking policies that support young people’s wellbeing and rights online.
FPF Releases Updated Report on the State Comprehensive Privacy Law Landscape
The state privacy landscape continues to evolve year-to-year. Although no new comprehensive privacy laws were enacted in 2025, nine states amended their existing laws and regulators increased enforcement activity, providing further clarity (and new questions) about the meaning of these laws. Today, FPF is releasing its second annual report on the state comprehensive privacy law landscape—Anatomy of a State Comprehensive Privacy Law: Charting The Legislative Landscape.
The updated version of this report builds on last year’s work and incorporates developments from the 2025 legislative session. Between 2018 and 2024, nineteen U.S. states enacted comprehensive consumer privacy laws. As the final state legislatures close for the year, 2025 looks poised to break that trend and see no new laws enacted. Nevertheless, nine U.S. states—California, Colorado, Connecticut, Kentucky, Montana, Oregon, Texas, Utah, and Virginia—passed amendments to existing laws this year. This report summarizes the legislative landscape. The core components that comprise the “anatomy” of a comprehensive privacy law include:
Definitions of covered entities (controllers and processors) and covered data (personal data and sensitive data);
Individual rights of access, correction, portability, deletion, and both opt-in and opt-out requirements for certain uses of personal data;
Business obligations such as transparency, data minimization, and data security; and
Enforcement by the attorney general.
The report concludes with an overview of ongoing legislative trends:
Changes to applicability thresholds;
Expanding scope of sensitive data;
Emergence of substantive data minimization requirements;
Heightened protections for adolescents’ personal data, consumer health data, biometrics, and location data;
New individual rights, like contesting adverse profiling decisions; and
A slowdown of legislative activity in 2025.
This report highlights the strong commonalities and the nuanced differences between the various state laws, showing how they can exist within a common, partially interoperable framework while still creating compliance challenges for companies within their overlapping ambits. Until a federal privacy law materializes, this ever-changing state landscape will continue to evolve as lawmakers iterate upon existing frameworks and add novel obligations, rights, and exceptions in response to changing societal, technological, and economic trends.
Future of Privacy Forum Appoints Matthew Reisman as Vice President of U.S. Policy
Washington, D.C. — (December 9, 2025) — The Future of Privacy Forum (FPF), a global non-profit focused on data protection, AI, and data governance, has appointed Matthew Reisman as Vice President, U.S. Policy.
Reisman brings extensive experience in privacy policy, data protection, and AI governance to FPF. He most recently served as a Director of Privacy and Data Policy at the Centre for Information Policy Leadership (CIPL), where he led research, public engagement, and programming on topics including accountable development and deployment of AI, privacy and data protection policy, cross-border data flows, organizational governance of data, and privacy-enhancing technologies (PETs). Prior to joining CIPL, he was a Director of Global Privacy Policy at Microsoft, where he helped shape the company’s approach to privacy and data policy, including its intersections with security, digital safety, trade, data governance, cross-border data flows, and emerging technologies such as AI and 5G. His work included close collaboration with Microsoft field teams and engagement with policymakers and regulators across Asia-Pacific, Latin America, the Middle East, Africa, and Europe.
“Matthew is joining FPF with a rare combination of policy expertise, practical experience, and a clear commitment to thoughtful privacy leadership,” said Jules Polonetsky, CEO of FPF. “He understands our mission, our community, and the complexities of data governance, which makes him an outstanding fit for this role. We’re delighted to have him on board.”
In his role as Vice President, U.S. Policy, Reisman will oversee FPF’s U.S. policy work, including legislative and regulatory engagement, research, and initiatives addressing emerging data protection, AI, and technology challenges. He will also lead FPF’s experts across youth privacy, data governance, health, and other portfolios to advance key FPF projects and priorities.
“FPF has long been a leader for thoughtful, pragmatic privacy and data policy and analysis,” said Reisman. “I’m honored to join the team and excited to help advance FPF’s mission of shaping smart policy that safeguards individuals and supports innovation.”
FPF welcomes Reisman at a critical time for data governance, as Congress, federal agencies, and states increase their focus on artificial intelligence, children’s privacy, data security, and privacy legislation. FPF’s U.S. policy team recently published an analysis of the package of youth privacy and online safety bills introduced in the U.S. House in November here and a landscape analysis of state chatbot legislation here.
To learn more about the Future of Privacy Forum, visit fpf.org.
###
Brussels Privacy Symposium 2025 Report – A Data Protection (R)evolution?
This year’s Brussels Privacy Symposium, held on 14 October 2025, brought together stakeholders from across Europe and beyond for a conversation about the GDPR’s role within the EU’s evolving digital framework. Co-organized by the Future of Privacy Forum and the Brussels Privacy Hub of the Vrije Universiteit Brussel, the ninth edition convened experts from academia, data protection authorities, EU institutions, industry, and civil society to discuss Europe’s shifting regulatory landscape under the umbrella title of A Data Protection (R)evolution?
The opening keynote, delivered by Ana Gallego (Director General, DG JUST, European Commission), explored how the GDPR continues to anchor the EU’s digital rulebook, even as the European Commission pursues targeted simplification measures, and how the GDPR interacts with other legislative instruments such as the DSA, DGA, and the AI Act, framing them not as overlapping frameworks but as complementary pillars that reinforce the EU’s evolving digital framework.
Across the three expert panels, the guest speakers underlined a shift from rewriting the GDPR to refining its implementation through targeted adjustments, stronger regulatory cooperation, and clarified guidance on issues such as legitimate interests for AI training and the CJEU decision on pseudonymization. The final panel placed Data Protection Authorities at the center of Europe’s future in AI governance, reinforcing GDPR safeguards and guiding AI Act harmonization.
A series of lightning talks looked at the challenges posed by large language models and automated decision-making, emphasizing the need for lifecycle-based risk management and robust oversight. In a guest speaker talk, Professor Norman Sadeh addressed the growing role of AI agents and the need for interoperable standards and protocols to support user autonomy in increasingly automated environments.
European Data Protection Supervisor Wojciech Wiewiórowski and Professor Gianclaudio Malgieri closed the ninth edition of the Symposium with a dialogue reflecting on the need to safeguard fundamental rights amid ongoing calls for simplification.
In the Report of the Brussels Privacy Symposium 2025, readers will find insights from these discussions, along with additional highlights from the panels, workshops, and lightning talks that delved into the broader EU digital architecture.
FPF releases Issue Brief on Brazil’s Digital ECA: new paradigm of safety & privacy for minors online
This Issue Brief analyzes Brazil’s recently enacted children’s online safety law, summarizing its key provisions and how they interact with existing principles and obligations under the country’s general data protection law (LGPD). It provides insight into an emerging paradigm of protection for minors in online environments through an innovative and strengthened institutional framework, focusing on how it will align with and reinforce data protection and privacy safeguards for minors in Brazil and beyond.
This Issue Brief summarizes the Digital ECA’s most relevant provisions, including:
Broad extraterritorial scope: the law applies to all information technology products and services aimed at or likely to be accessed by minors, with extraterritorial application.
“Likelihood of access” of a technology service or product as a novel standard, composed of three elements: attractiveness, ease of use, and potential risks to minors.
Provisions governed by the principle of the “best interest of the child,” requiring providers to prioritize the rights, interests, and safety of minors from the design stage and throughout their operations.
Online safety by design and by default, mandating providers to adopt protective measures by design and monitor them throughout the operation of the service or product, including age verification mechanisms and parental supervision tools.
Age rating as a novelty, requiring providers to maintain age rating policies and continuously assess their content against those ratings.
Enforcement of the law is assigned to the ANPD, which was transformed into a regulatory agency with increased and strengthened powers to monitor compliance with the law, in addition to its responsibilities under the data protection law.
Significant sanctions under the Digital ECA, which can range from warnings and fines up to 10% of a company’s revenue to the permanent suspension of activities in Brazil.
What’s New in COPPA 2.0? A Summary of the Proposed Changes
On November 25, the U.S. House Energy and Commerce Committee introduced a comprehensive bill package to advance child online privacy and safety, which included its own version of the Children and Teens’ Online Privacy Protection Act (“COPPA 2.0”) to modernize COPPA. First enacted in 1998, the Children’s Online Privacy Protection Act (COPPA) is a federal law that provides important online protections for children’s data. Now that the law is nearly 30 years old, many advocates, stakeholders, and Congressional lawmakers are pushing to amend COPPA to ensure its data protections reflect and befit the online environments youth experience today.
The new House version of COPPA 2.0, introduced by Reps. Tim Walberg (R-MI) and Laurel Lee (R-FL), would amend the law by adding new definitions, revising the knowledge standard, augmenting core requirements, and adding new substantive provisions. Although the new COPPA 2.0 introduction marks meaningful progress in the House, it is not the first attempt to update COPPA. The Senate has pursued COPPA reforms since as early as 2021, and Senators Markey (D-MA) and Cassidy (R-LA) most recently reintroduced their version of this framework in March 2025, one that is distinguishable from the new House version in several meaningful ways. Note: For more information on the exact deviations between the current Senate and House versions of COPPA 2.0, click the button below for a redline comparison of these two proposals.
Putting all of this dynamic COPPA 2.0 legislative activity into focus, this blog post summarizes notable changes to COPPA under the House proposal and notes key points of divergence from the long-standing Senate framework. In sum, a few key takeaways include:
An evolving scope: proposed changes to raise the age threshold to include protections for teens, implement a two-tiered knowledge standard with a constructive knowledge component for large social media companies, and codify an expanded definition of personal information would significantly broaden the statute’s scope.
New takes on substantive obligations and rights: alongside augmenting several existing obligations, the House proposal would introduce significant new provisions, including a direct ban on targeted advertising, expanded data minimization standards, and new limits on international data transfers without consent.
Significant preemption language: the proposed language would broadly preempt state laws that relate to provisions of COPPA as amended by this legislation.
Scope and Definitions
While there are many technical amendments proposed in the House COPPA 2.0 legislation to clarify existing provisions in COPPA, there are four key additions and modifications in the bill that significantly alter its scope and application. First, the bill expands protections to teens. While current COPPA protections only cover children under the age of 13, COPPA 2.0 would expand protections to include teens under the age of 17.
Second, the bill would revise the definition of “personal information” to match the expanded interpretation established through FTC regulations, which includes subcategories such as geolocation data, biometric identifiers, and persistent identifiers (e.g. IP address and cookies), among others. The proposed definitions for these categories largely follow the COPPA rule definitions, except for a notable difference to the definition of biometric identifiers.
Specifically, COPPA 2.0 includes a broader definition of biometric identifiers by removing the requirement, included in the COPPA Rule definition, that processed characteristics “can be used” for individual identification. Therefore, under the new text, any processing of an individual’s biological or behavioral traits, such as fingerprints, voiceprints, retinal scans, facial templates, DNA, and gait, would qualify as a biometric identifier, even if the information is not capable of or intended for identifying an individual. The broader definition of biometric identifiers embraced by the House may have noteworthy implications for state privacy laws, which typically limit definitions of biometric information to data “that is used” to identify an individual. In contrast to the House approach, the Senate proposal for COPPA 2.0 adopts a definition of biometric identifiers that is limited to characteristics “that are used” to identify an individual.
Third, COPPA 2.0 would formally codify the long-standing school consent exception used in COPPA compliance and FTC guidance for over a decade. As a result, operators acting under an agreement with educational agencies or institutions would be exempted from the law’s parental consent requirements with respect to students, though notably, the proffered definition of “educational agency or institution” would only capture public schools, not private schools and institutions.
Lastly, one of the most significant proposed modifications to COPPA’s scope involves the knowledge standard. Currently, COPPA requires operators to comply with the law’s obligations when they have actual knowledge that they are collecting the personal information of children under 13 or when they operate a website or online service that is directed towards children. The House version of COPPA 2.0 would establish a two-tiered standard that largely maintains the actual knowledge threshold for operators, except for “high-impact social media companies,” which would be subject to an actual knowledge or willful disregard standard. The House’s use of an actual knowledge or willful disregard standard for large social media companies tracks with the emerging trend in some state privacy laws that provide heightened online protections for youth, which have more broadly employed the actual knowledge or willful disregard standard. In contrast, the Senate COPPA 2.0 proposal includes a novel and untested “actual knowledge or knowledge fairly implied on the basis of objective circumstances” standard.
Substantive Obligations and Rights
The House version of COPPA 2.0 would both augment existing COPPA protections and add in new substantive obligations and provisions significant for compliance. Notable amendments proposed in this new legislation to augment COPPA protections include:
Prohibition on targeted advertising: COPPA 2.0 would outright ban targeted advertising practices, referred to as “individual-specific advertising,” with no consent exceptions. This ban on targeted advertising does not include search advertising, contextual advertising, or ad attribution.
Opt-in consent for Teens: Importantly, to balance considerations around teen autonomy, covered operators would need to obtain opt-in consent from teens aged 13-16 for PI collection and processing, but parental consent would still be required for children under the age of 13. The adoption of an opt-in consent model for teens largely aligns with state comprehensive privacy law approaches to teen consent.
New data minimization principles for PI collection: COPPA 2.0 would maintain the COPPA rule’s data retention limit, requiring operators to retain personal information collected from a child “for only as long as is reasonably necessary to fulfill the specific purpose(s) for which the information was collected.” However, COPPA 2.0 would mandate a different data minimization standard for the collection of child PI, requiring that operators limit collection of a child’s or teen’s PI to what is “consistent with the context of a particular transaction or service or the relationship of the child or teen with the operator, including any collection necessary to fulfill a transaction or provide a product or service requested by a child or teen.”
Expanding data access and deletion rights: Existing rights of parental review under COPPA are limited to a parent’s ability to request information on the types of child PI collected by an operator and obtain a copy of that PI, and to withdraw consent for further collection, use, and maintenance of collected data. COPPA 2.0 would bolster data access rights by providing parents and teens with the right to access, correct, or delete PI collected by covered operators upon request. The expansion of data rights proposed by COPPA 2.0 more closely aligns with data subject rights observed in state comprehensive privacy laws.
In addition to amendments that bolster existing COPPA protections, several amendments also add in notable substantive provisions:
Exploring the feasibility of a common consent mechanism: COPPA 2.0 would direct the FTC to study the feasibility of allowing operators to use a common verifiable consent mechanism to fulfill the statute’s consent obligations, which would allow a single operator to obtain consent from a parent or teen on behalf of multiple operators providing “joint or related services.” At a time when additional layers of parental consent for child data collection are required by COPPA regulations and applicable state laws, such a mechanism, if feasible, could help alleviate some of the frictions experienced under existing requirements.
International data transfer restrictions: COPPA 2.0 would make it illegal for an operator, without providing notice to a child’s parent or to a teen, to store the personal information of a child or teen in a covered nation (North Korea, China, Russia, or Iran), transfer such information to a covered nation, or provide a covered nation with access to such information. In contrast, the Senate’s COPPA 2.0 proposal does not include these same international data transfer restrictions.
Preemption: COPPA 2.0 proposes significant preemption language that would nullify any state laws or provisions that “relate to” the provisions and protections under the Act. Such far-reaching preemption language could impact many state privacy and online safety laws that have been enacted in the last few years. Comparatively, the House proposal takes a much broader approach to preemption than the Senate framework, which largely maintains COPPA’s existing preemption language.
Looking Ahead
Enacting COPPA 2.0 would expand online privacy protections for children and teens, and the fact that both chambers have introduced proposals underscores the growing legislative momentum to enshrine stronger youth privacy protections at the federal level. And yet, despite the Congressional motivation to advance legislation on youth privacy and safety this session, it is notable that the House version of COPPA 2.0 does not have the same bipartisan support as its Senate counterpart. What the lack of bipartisan support will mean for the future of the House’s COPPA 2.0 proposal remains subject to speculation. However, FPF will continue to monitor the development of COPPA 2.0 legislation alongside the progression of other bills included in the robust House Energy & Commerce youth online privacy and safety legislative package.
FPF Holiday Gift Guide for AI-Enabled, Privacy-Forward AgeTech
On Cyber Monday, giving supportive technology to an older loved one or caregiver is a great option. Finding the perfect holiday gift for an older adult who values their independence can be a challenge. This year, it might be worth exploring the exciting world of AI-enabled AgeTech. It’s not only about gadgets; it’s also about giving the gift of autonomy, safety, and a little bit of futuristic fun. Here are three types of AI-enabled AgeTech to consider, along with information to help pick the right privacy fit for the older adult and/or caregiver in your life this holiday season.
Mobility and Movement AgeTech
This category is all about keeping the adventure going, whether it’s a trip across town or safely navigating the living room. These gifts use AI-driven features to support physical activity and reduce the worry of falls or isolation. Think of them as the ultimate support tech for staying active.
For those who need a little help around the house, AI-powered home assistant robots can fetch items on command. For AI-driven transportation, AI is used to find the best route to a place, match riders with the best-suited drivers, and interpret voice commands. In wearables, AI continuously analyzes data like gait and heart rate to learn baseline health patterns, allowing it to detect a potential fall or an abnormal health trend and generate emergency alerts. In future AI-powered home assistant robots, AI might be the brain behind the natural language processing that understands commands and the mapping needed to safely find its way around and pick up tissues, medication bottles, or other small items.
Gift Guide: Mobility and Movement AgeTech
These capabilities rely on personal data to understand where a person is on a map or in their house. These devices collect location or spatial data to help individuals know where they are or let others know where they are. Sometimes these data are collected in real time as the person moves, for features like “trip share” that help others know where they are. Many mobility AgeTech gifts may also collect body movement data, such as steps, balance, gait patterns, and alerts for inactivity or potential falls.
For any AgeTech gift receiver, be clear that, in some cases, AI may need to continuously analyze personal habit and location data to learn baseline patterns, be useful, and even reduce bias. They must consent to this ongoing collection and processing of data if they or their caregivers want to use certain features. Given the sensitivity of these data, how private information might be shared with others, and how the company might share the data under its policies, especially if the AI integrates with other services, should be transparent and easy to understand.
State Consumer AI and Data Privacy Laws
Location data is protected under several state privacy laws, including when it is collected by AI. In states that have enacted privacy laws, AgeTech devices or apps may request special consent to collect or process “sensitive” data, including location data, for certain features or functions. Currently, 20 U.S. states have enacted a consumer data privacy law. These laws generally provide consumers with rights to access, delete, and correct their personal data, and provide special protections for sensitive data, such as biometric identifiers, precise geolocation, and certain financial identifiers.
In 2025, state legislatures passed a number of AI-focused bills that covered issues such as chatbots, deepfakes, and more. These existing and proposed regulations may have impacts on AgeTech design and practices, as they determine safeguards and accountability mechanisms that developers must incorporate to ensure AgeTech tools remain compliant and safe for older adults.
Connection and Companionship AgeTech
These gifts leverage AI to offer sympathetic companionship, reduce the complexities of care coordination, and foster connection between individuals and their communities of care. They are specifically engineered to bridge the distance in modern caregiving, providing an essential safeguard against social isolation and loneliness.
Gift givers will find a mix of helpful tools here, like typical AI-driven calendar apps repurposed as digital care hubs for all family members to coordinate; simplified communication devices (tablets with custom, easy interfaces) paired with a friendly AI helper for calls and reminders; and even AI animatronic pets that respond to touch and voice, offering therapeutic benefits without the chores associated with a real pet.
Gift Guide: Connection and Companionship AgeTech
These devices may log personal routines, capturing medication times, appointments, and daily habits. They may also collect voice interactions with AI helpers, as well as data related to mood, emotions, pain, or cognition, and certain body-related data (including neural data), gathered through check-ins.
Gift-givers should discuss potential gifts with older adults and consider the privacy of others who might be unintentionally surveilled, such as friends, workers, or bystanders. Since caregivers often view shared calendars and activity logs, ensure access controls are distinct, role-based, and align with the older adult’s preferences. The older adult should control data access (e.g., medical routines vs. social events). Be transparent about whether AI companions record conversations or check-in responses, and how that sensitive personal data is stored and analyzed.
Health and Wellness Data Protections
The Health Insurance Portability and Accountability Act (HIPAA) does not protect all health data. HIPAA generally applies to health care professionals and to plans providing payment and insurance for health care services. So an AI companion health device provided as part of treatment by your doctor will be covered by HIPAA, but one sold directly to a consumer is less likely to be protected by HIPAA. However, some states have recently passed laws providing consumers with certain rights over general health information that is not protected by HIPAA.
Daily Life and Task Support AgeTech
This tech category covers life’s essentials through AI-driven automation, focusing on managing finances, medication, and overall health with intelligent, passive monitoring. It’s about creating powerful, often invisible, digital safeguards, ranging from financial safeguard tools to connected health devices integrated into AI-driven homes, that offer profound peace of mind by anticipating and flagging risks.
This is where gift givers can find presents that help protect older adults and caregivers from increasingly common AI-driven scams by leveraging machine learning to identify and flag suspicious activity targeting older populations. Look for AI financial tools that watch bank accounts for potential fraud or unusual activity, connected home devices that passively check for health changes, and AI-driven pill dispensers that light up and accurately sort medication.
Gift Guide: Daily Life and Task Support AgeTech
The sensitive data collected by these devices can be a major target of scammers and other bad actors. Some tools may collect transaction history and alerts for “unusual” spending to help reduce scam risks. Other AgeTech may log medication “adherence” (timestamps of dispensed doses) or need an older adult’s medical history to work well. In newer systems with advanced identification technologies, it could also include biometric data such as fingerprints or face scans used to ensure safe access to individual accounts.
Gift-givers need to consider how the AI determines something is “unusual” to avoid unnecessary worry from false alarms in banking or health. For devices like AI-driven pill dispensers, also ask what happens to the device’s functionality if the subscription is canceled. For passive monitoring devices, ensure meaningful consent; the older adult must have an explicit, ongoing understanding of, and consent to, the continuous collection of highly sensitive data throughout daily life, from bathroom trips to spending habits.
Financial Data Protections
Those giving gifts of this kind may want to consult with a trusted financial institution or professional before purchasing. If a financial-monitoring tool is provided by a non-bank (such as a consumer-facing fintech app), consumer financial protections may not apply, even if the data is still highly sensitive. State privacy laws and FTC authority may offer protections that vary in scope.
AI-enabled AgeTech Gift Checklist
We suggest evaluating AgeTech products through a practical lens:
What data is collected and how? For example: voice recordings via microphone in an AI-enabled companion bot.
Who will manage access, data privacy, and consents for the device or account? The older adult? Caregivers? Both?
What protections apply? For example: HIPAA, state AI laws, or state privacy laws.
How does the AI ensure safety and reliability? For example: fall detection accuracy, avoiding false alarms for “unusual” activity.
Since multiple state and federal laws create a mix of protections, gift givers need to take an extra step to understand those protections and choose the best balance of safety, privacy, and support to go with their AgeTech present. A national privacy law could address the inconsistency and gaps, but does not seem to be on the agenda in Congress.
U.S. demographics point to an increasingly aged population, and AI-enabled AgeTech has shown promise in supporting the independence of older adults. Gift-givers have an opportunity to offer tools that support independence and strengthen autonomy, especially as AI continues to be adapted to older adults’ specific needs and preferences. In a recent national poll by the University of Michigan, 96% of older adults who used AI-powered home security devices and systems and 80% who used AI-powered voice assistants in the past year said these devices help them live independently and safely in their home.
Whether the device helps someone move confidently, stay socially engaged, or manage essential tasks, each category relies on sensitive personal data that must be handled thoughtfully. By thinking through how these technologies work, what information they collect, and the rights and safeguards that protect that data, you can ensure your presents are empowering and future-thinking.
Happy Holidays from the Future of Privacy Forum!
GPA 2025: AI development and human oversight of decisions involving AI systems were this year’s focus for global privacy regulators
The 47th Global Privacy Assembly (GPA), an annual gathering of the world’s privacy and data protection authorities, took place between September 15 and 19, 2025, hosted by South Korea’s Personal Information Protection Commission in Seoul. Over 140 authorities from more than 90 countries are members of the GPA, and its annual conferences serve as an excellent bellwether for the priorities of the global data protection and privacy regulatory community, providing the gathered authorities an opportunity to share policy updates and priorities, collaborate on global standards, and adopt joint resolutions on the most critical issues in data protection.
This year, the GPA adopted three resolutions after completing its five-day agenda, which included two closed-session days for members and observers only.
The first key takeaway from the results of GPA’s Closed Session is a substantial difference in the scope of the resolutions relative to prior years. In contrast to the five resolutions adopted in 2024 or the seven adopted in 2023, which covered a wide variety of data protection topics from surveillance to the use of health data for scientific research, the 2025 resolutions are much more narrowly tailored and primarily focused on AI, with a pinch of digital literacy. Taken together with the meeting’s content and agenda, these resolutions provide insight into the current priorities of the global privacy regulatory community – and perhaps unsurprisingly, reflect a much-narrowed focus on AI issues compared to previous years.
Across all three resolutions adopted in 2025, a few core issues become apparent:
First, regulators are continuing to promote shared conceptual frameworks for data protection regulation, with a particular focus on raising awareness of privacy and data protection issues throughout the world.
Second, regulators are starting to zoom into specific issues related to AI and personal data processing, departing from the general, broad approach shown so far: training and fine-tuning of AI models and meaningful human oversight over individual decisions involving AI were the two concrete topics subject to convergence of regulatory perspectives this year.
Third, a risk-based consensus for evaluating AI seems to be holding, with all three resolutions framing discussions of AI policy in the context of risk, and discussing the specific problem of bias in the context of AI-related data processing.
Fourth, there remains great interest in mutual cooperation through the GPA or other international fora; all three of the 2025 resolutions explicitly promote this goal.
Finally, it is also interesting to explore which topics the Assembly did not address. A deeper dive into each resolution is illustrative of some of the shared goals of the global privacy regulatory community, particularly in an age where major tech policymakers in the U.S., the European Union, and around the world are overwhelmingly focused on AI. It should be noted that the three resolutions passed quasi-unanimously, with only one abstention among GPA members noted in the public documents (the U.S. Federal Trade Commission).
Resolution on the collection, use and disclosure of personal data to pre-train, train and fine-tune AI models
The first resolution, covering the collection, use, and disclosure of personal data to pre-train, train, and fine-tune AI models, was sponsored by the Office of the Australian Information Commissioner and co-sponsored by 15 other GPA member authorities. The GPA resolved on four specific points after articulating a larger set of underlying concerns, specifically that:
The collection, use and disclosure of personal data for the pre-training, training, and fine-tuning of AI models is within the scope of data protection and privacy principles.
The members of the GPA will promote these privacy principles and engage with other policy makers and international bodies (specifically naming the OECD, Council of Europe, and the UN) to raise awareness and educate AI developers and deployers.
The members of the GPA will coordinate enforcement efforts on generative AI technologies in particular to ensure a “consistent standard of data protection and privacy” is applied.
The members of the GPA will commit to sharing developments on education, compliance and enforcement on generative AI technologies to foster the coherence of regulatory proposals.
The specific resolved steps indicate a particular focus on generative AI technologies, and a recognition that, in order to be effective, regulatory standards will likely need to be consistent across international boundaries. Three of the four steps also emphasize cooperation among international privacy enforcement authorities, although notably this resolution does not include any specific proposals for directly adopting shared terminology.
The broader document relies on a rights-based understanding of data protection rights and notes several times that the untrammeled collection and use of personal data in the development of AI technologies may imperil the fundamental right to privacy, but casts the development of AI technologies in a rights-consistent manner as “ensur[ing] their trustworthiness and facilitat[ing] their adoption.” The resolution repeatedly emphasizes that all stages of the algorithmic lifecycle are important in the context of processing personal data.
The resolution also provides eight familiar data protection principles that are reminiscent of the OECD’s data protection principles and the Fair Information Practice Principles that preceded them. Under this resolution, personal data should only be used throughout the AI lifecycle when its use comports with: a lawful and fair basis for processing; purpose specification and use limitation; data minimization; transparency; accuracy; data security; accountability and privacy by design; and the rights of data subjects.
The resolution does characterize some of these principles in ways specific to the training of AI models – critically noting that:
Related to the first principle of lawfulness, “the public availability of [personal] data does not automatically imply a lawful basis for its processing, which must always be assessed in light of the data subject’s reasonable expectation of privacy.”
Regarding the third principle of data minimisation, “consideration should be given to whether the AI model can be trained without the collection or use of personal data.”
Concerning the fifth principle, accuracy, that developers should “undertake appropriate testing to ensure a high degree of accuracy in [a] model’s outputs.”
A component of the sixth principle, data security, is an obligation on entities developing or deploying AI systems to put in place “effective safeguards to prevent and detect attempts to extract or reconstruct personal data from trained AI models.”
This articulation of traditional data protection principles demonstrates how the global data protection community is working through the ways in which existing principles-based data privacy frameworks will specifically apply to AI and other emerging technologies.
Resolution on meaningful human oversight of decisions involving AI systems
The second resolution of 2025, submitted by the Office of the Privacy Commissioner of Canada and joined by thirteen co-sponsors, focused on how members could synchronize their approaches to “meaningful human oversight” of AI decision-making. After explanatory text, the Assembly resolved four specific points:
GPA Members should promote a common understanding of the notion of meaningful human oversight of decisions, which includes the considerations set out in [the second] resolution.
GPA Members should encourage the designation of overseers with “necessary competence, training, resources, and awareness of contextual information and specific information regarding AI systems as a means of meaningful oversight.”
The Assembly should use the GPA Ethics and Data Protection in Artificial Intelligence Working Group to share knowledge and best practices that support practical implementation of “meaningful human oversight” in members’ respective jurisdictions.
The Assembly should continue to promote the development of technologies or processes that advance explainability for AI systems.
This resolution, topically much more narrowly focused than the first one analyzed above, is based on the contention that AI systems’ decision-making processes may have “significant adverse effects on individuals’ rights and freedoms” if there is no “meaningful human oversight” of system decision-making and thus no effective recourse for an impacted individual to challenge such a decision. This is a notable premise, as only this resolution (of the three) also acknowledges that “some privacy and data protection laws” establish a right not to be subject to automated decision-making along the lines of Article 22 GDPR.
Ahead of the specifically resolved points, the second resolution appears to identify the potential for “timely human review” of automated decisions that “may significantly affect individuals’ fundamental rights and freedoms” as the critical threshold for ensuring that automated decision-making and AI technologies do not erode data protection rights. Another critical piece is the distinction the Assembly draws between “human oversight,” which may occur throughout the decision-making process, and “human review,” which may occur exclusively after the fact; the GPA explicitly identifies “human review” as only one activity within the broader concept of “oversight.”
Most critically, the GPA identifies specific considerations in evaluating whether a human oversight system is “meaningful”:
Agency – essentially, whether the overseer has effective control to make decisions and act independently.
Clarity of [overseer] role – preemptively setting forth what the overseer does with AI decisions – whether they are to accept, reject, or modify them – and how they are to consider AI system outputs.
Knowledge and expertise – ensuring that overseers have appropriate knowledge and training to evaluate an AI system’s decision, including awareness of specific circumstances where a system’s outputs may require additional scrutiny.
Resources – ensuring overseers have sufficient resources to oversee a decision.
Timing and effectiveness – ensuring oversight is appropriately integrated into decision-making processes such that overseers may “agree with, contest, or mitigate the potential impacts of the AI system’s decision.”
Evaluation and Accountability – ensuring overseers are evaluated on the basis of whether oversight was performed, rather than the outcome of the oversight decision.
The resolution also considers tools that organizations can use to ensure that “meaningful oversight” is actually occurring, including:
Clarifying the “intention” and value of oversight
Training
Designing the oversight process
Escalation
Documentation
Assessments
Evaluation and testing of the process
Evaluation of outcomes
Overall, the resolution notes that human oversight mechanisms are the responsibility of developers and deployers, and are critical in mitigating the risk to fundamental rights and freedoms posed by potential bias in algorithmic decision-making, specifically noting the risks of self-reinforcing bias based on training data or the improper weighting of past decisions as threats that meaningful oversight processes can counteract.
Resolution on Digital Education, Privacy and Personal Data Protection for Responsible Inclusive Digital Citizenship
The third and final resolution of 2025 was submitted by the Institute for Transparency, Access to Public Information and Protection of Personal Data of the State of Mexico and Municipalities (Infoem), a new body that has replaced Mexico’s former GPA representative, the National Institute for Transparency, Access to Information and Personal Data Protection (INAI). This resolution was joined by only seven co-sponsors, and reflected the GPA’s commitment to developing privacy in the digital education space and promoting “inclusive digital citizenship.” Here, the GPA resolved five particular points, each accompanied by a number of recommendations for GPA Members:
GPA Members should promote privacy and technology ethics as cross-cutting issues across the full spectrum of education, from early childhood to university.
States and authorities should ensure education related to digital privacy promotes lawfulness and diversity for all, particularly children and vulnerable communities.
GPA Members should promote the “understanding, exercise, and defense of personal data rights” as well as consideration of ongoing issues around the use of emerging technologies.
GPA Members should work to strengthen regulatory frameworks, align strategies with international human rights and data protection instruments, and actively engage in international cooperation networks alongside other international bodies related to data protection and education.
GPA Members should promote a “culture of privacy” relying on awareness-raising, continuous training, and capacity building.
The resolution also evidences the 2025 Assembly’s specific concerns relating to generative AI, including a statement “reaffirming that … generative artificial intelligence, pose[s] specific risks to vulnerable groups and must be addressed using an approach based on ethics and privacy by design” and recommending under the resolved points that GPA members “[p]romote the creation and inclusion of educational content that allows for understanding and exercising rights related to personal data — such as access, rectification, erasure, objection, and portability, among others — as well as critical reflection on the responsible use of emerging technologies.”
Among its generalized resolved points, the Assembly critically recommends that GPA Members:
Promote the creation of a base or certification on data protection for educational institutions that integrate best practices in data protection and digital citizenship, in collaboration with networks such as the GPA or the Ibero-American Data Protection Network (RIPD).
Promote participation in international networks that foster cooperation on data protection in education, with the aim of sharing experiences, methodologies, and common frameworks for action – again referencing the GPA working group on Digital Education and the Ibero-American Data Protection Network specifically.
Finally, the third resolution also includes an optional “Glossary” that offers definitions for some of the terminology it uses. Although the glossary does not seek to define “artificial intelligence,” “personal data,” or, indeed, “children,” it does offer definitions for both “digital citizenship” – “the ability to participate actively, ethically, and responsibly in digital environments, exercising rights and fulfilling duties, with special attention to the protection of privacy and personal data” – and “age assurance” – “a mechanism or procedure for verifying or estimating the age of users in digital environments, in order to protect children from online risks.” Glossaries such as this one are useful in evaluating where areas of conceptual agreement in terminology (and thus, regulatory scope) are emerging among the global regulatory community.
Sandboxes and Simplification: not yet in focus
It is also worth noting a few specific areas that the GPA did not address in this year’s resolutions. As previously noted, the topical range of the resolutions was more targeted than in prior years. Within the narrowed focus on AI, the Assembly made no mention of regulatory sandboxes for AI governance, nor did it challenge or refer to the ongoing push for regulatory simplification, both topics increasingly common in discussions of AI regulation around the globe. One thing to watch at next year’s GPA will be how privacy regulators engage with these trends.
Concluding remarks
The resolutions adopted by the GPA in 2025 indicate an increasing focus and specialization of the world’s privacy regulators on AI issues, at least for the immediate future. In contrast to the multi-subject resolutions of previous years (some of which were also AI-related), this year’s GPA produced resolutions that were essentially concerned only with AI, although still approaching the new technology in the context of its impact on pre-existing data protection rights. Moving into 2026, it would be wise to observe whether the GPA (or other international cooperation bodies) pursues mutually consistent conceptual and enforcement frameworks, particularly concerning the definitions of AI systems and associated oversight mechanisms.