FPF Year in Review 2025
This year, FPF continued to broaden its footprint across priority areas of data governance, further expanding activities across a range of cross-sector topics, including AI, Youth, Conflict of Laws, AgeTech (seniors), and Cybersecurity. We engaged extensively at the local and national levels in the United States, and we are increasingly active in every major global region.
Highlights from FPF work in 2025
2025 saw the release of a range of FPF reports and issue briefs highlighting top data protection and AI developments. A few highlights follow, showing the breadth of our coverage.

The State of State AI: Legislative Approaches to AI in 2025
FPF tracked and analyzed 210 bills in 42 states, highlighting five key takeaways: (1) states shifted from broad frameworks to narrower, transparency-driven approaches; (2) three main approaches to private-sector AI regulation emerged: use- or context-based, tech-specific, and liability/accountability-based; (3) the most commonly enacted frameworks focus on healthcare, chatbots, and innovation safeguards; (4) policymakers signaled an interest in balancing consumer protection with AI growth; and (5) definitional uncertainty, agentic AI, and algorithmic pricing are likely to be key topics in 2026. Learn more in a LinkedIn Live event with the report’s authors here.

FPF Unveils Paper on State Data Minimization Trends
Several states have enacted “substantive” data minimization rules that aim to place default restrictions on the purposes for which personal data can be collected, used, or shared. What questions do these rules raise, and how might policymakers construct them in a forward-looking manner? FPF covers lawmakers’ turn towards substantive data minimization and addresses the relevant challenges and questions they pose. Watch a LinkedIn Live here on the topic.

Concepts in AI Governance: Personality vs. Personalization
The Concepts in AI Governance: Personality vs. Personalization issue brief explores the specific use cases of personalization and personality in AI, identifying their concrete risks to individuals and their interactions with U.S. law, and proposes steps that organizations can take to manage these risks. Read Part 1 (exploring concepts), Part 2 (concrete uses and risks), Part 3 (intersection with U.S. law), and Part 4 (Responsible Design and Risk Management).

Consent for Processing Personal Data in the Age of AI: Key Updates Across Asia-Pacific
From India’s DPDPA to Vietnam’s new Decree and Indonesia’s PDPL, the Asia-Pacific region is undergoing a shift in its data protection law landscape. This issue brief provides an updated view of evolving consent requirements and alternative legal bases for data processing across key APAC jurisdictions. The brief also explores how the rise of AI is impacting shifts in lawmaking and policymaking across the region regarding lawful grounds for processing personal data. Watch the LinkedIn Live panel discussion on key legislative developments in APAC since 2022.

Brazil’s Digital ECA: New Paradigm of Safety & Privacy for Minors Online
This Issue Brief analyzes Brazil’s recently enacted children’s online safety law, summarizing its key provisions and how they interact with existing principles and obligations under the country’s general data protection law (LGPD). It provides insight into an emerging paradigm of protection for minors in online environments through an innovative and strengthened institutional framework, focusing on how it will align with and reinforce data protection and privacy safeguards for minors in Brazil and beyond.

As digital trade accelerates, countries across Africa are adopting varied approaches to data transfers—some incorporating data localization measures, others prioritizing open data flows. FPF examines the current regulatory landscape and offers a structured analysis of regional efforts, legal frameworks, and opportunities for interoperability, including a comparative annex covering Kenya, Nigeria, South Africa, Rwanda, and the Ivory Coast.
FPF Filings and Comments
Throughout the year, FPF provided expertise through filings and comments to government agencies on proposed rules, regulations, and policy changes in the U.S. and abroad.
FPF provided recommendations and filed comments with:
- California Privacy Protection Agency (CPPA) concerning draft regulations governing cybersecurity audits, risk assessments, automated decision-making technology (ADMT) access, and opt-out rights under the California Consumer Privacy Act.
- Colorado Attorney General regarding draft regulations for implementing the heightened minor protections within the Colorado Privacy Act (“CPA”).
- The Consumer Financial Protection Bureau (CFPB) on its Advance Notice of Proposed Rulemaking (ANPR) for the Personal Financial Data Rights Reconsideration, exploring certain significant components of the final rule with a view to improving the regulation for consumers and industry.
- New York Office of the Governor to highlight certain ambiguities in the proposed New York Health Information Privacy Act.
- New Jersey Division of Consumer Affairs on implementing the New Jersey Data Privacy Act (“NJDPA”).
- India’s Ministry of Electronics and Information Technology (MeitY) on the draft Digital Personal Data Protection Rules.
- Kenya’s Office of the Data Protection Commissioner (ODPC) on the Draft Data Sharing Code.
- The White House Office of Science and Technology Policy (OSTP), providing feedback on how the U.S. federal government’s proposed Artificial Intelligence Action Plan could include provisions to protect consumer privacy.
The FPF Center for Artificial Intelligence
This year, the FPF Center for Artificial Intelligence expanded its resources, releasing insightful blogs, comprehensive issue briefs, detailed infographics, and a flagship report on issues related to AI agents, assessment, and risk, as well as key concepts in AI governance.
In addition, the Center for AI hosted two events, convening top scholars specializing in complex technical questions that impact law and policy:
- July’s Technologist Roundtable covered AI Unlearning and Technical Guardrails. Welcoming a range of academic and technical experts, as well as data protection regulators from around the world, the team explored the extent to which information can be “removed” or “forgotten” from an LLM or similar generative AI model, or from an overall generative AI system.
- This fall, the Center hosted the “Responsible AI Management & CRAIG Webinar,” exploring the role of industry-academic research in promoting responsible AI management. Guests joined from the recently established Center on Responsible AI and Governance (CRAIG), the first industry-university cooperative research center supported by the National Science Foundation to focus exclusively on responsible AI, a collaboration among Ohio State University, Baylor University, Northeastern University, and Rutgers University.
Check out some other highlights of FPF’s AI work this year:
- Defined the key distinction between two trends emerging in hyperpersonalizing conversational AI technologies through a four-part blog series and accompanying issue brief discussing “personalization” and “personality.”
- Produced a comprehensive brief breaking down the key technologies, business practices, and policy implications of Data-Driven Pricing.
- Released a report examining the considerations, emerging practices, and challenges that organizations face in attempting to harness AI’s potential while mitigating potential harms.
- Discussed the landscape of U.S. states working to enact regulations concerning “neural data” or “neurotechnology data,” meaning information about people’s thoughts and mental activity.
- Provided an overview of emerging trends in key proposals to regulate AI across the Latin American region.
- Explored the concept of “regulatory sandboxes” in a blog covering key characteristics, justifications, and policy considerations for the development of frameworks that offer participating organizations the opportunity to experiment with emerging technologies within a controlled environment.
- Released an updated guide on Conformity Assessments under the EU AI Act, providing a step-by-step roadmap for organizations that are seeking to understand whether they must conduct a Conformity Assessment.
- Highlighted the wide range of current use cases for AI in education, as well as future possibilities and constraints, through a new infographic.
- Argued that data protection legislation offers a powerful tool for regulating AI.
- Summarized the 47th Global Privacy Assembly (GPA), specifically the three resolutions adopted during its five-day agenda, with regulators focusing on how AI shapes and interacts with personal data and individual rights.
Global
In 2025, FPF’s global work focused on how jurisdictions worldwide are adapting privacy and data protection frameworks to keep pace with AI and shifting geopolitical and regulatory landscapes. From children’s privacy and online safety to cross-border data flows and emerging AI governance frameworks, FPF’s teams engaged across regions to provide thought leadership, practical guidance, and stakeholder engagement, helping governments, organizations, and practitioners navigate complex developments while balancing innovation with fundamental rights.
In APAC, FPF analyzed South Korea’s AI Framework Act and Japan’s AI Promotion Act, highlighting differing approaches to innovation, risk management, and oversight. A comparative overview of the EU, South Korean, and Japanese frameworks provided practical insights into global AI policy trends. The evolution of consent was also a key focus. Our experts examined Vietnam’s rapidly evolving data framework, analyzing the newly adopted Personal Data Protection Law and Law on Data and their implications for a comprehensive approach to data protection and governance. From Japan to New Zealand, the team engaged on timely issues and contributed to major regional forums, demonstrating leadership in advancing privacy and AI governance across the region.
In India, FPF engaged with key stakeholders and conducted peer-to-peer sessions on the Digital Personal Data Protection (DPDP) rules. Notably, FPF’s analysis of the DPDPA and generative AI systems helped inform India’s newly released AI Governance Guidelines, demonstrating the local impact of FPF’s resources.
In Latin America, FPF tracked developments such as Chile’s new data protection law and Brazil’s children’s privacy legislation. FPF also participated in regional events on age verification for minors, discussing technologies like facial recognition and emerging legal trends in the region. We also examined how data protection authorities are responding to AI, reviewing developments across Latin America and Europe.
In Africa, FPF examined cross-border data flows and regulatory interoperability, emphasizing regional coordination for responsible data transfers. This year, we launched the Africa Council Membership, a dedicated platform for companies operating in the continent. FPF also hosted its first in-person side event in Africa at the 2025 NADPA Convening in Abuja, Nigeria, centered on “Securing Safe and Trustworthy Cross-Border Data Flows in Africa.” The positive feedback from the session underscored the value of convening stakeholders around Africa’s evolving data protection landscape.
FPF’s flagship European event, the Brussels Privacy Symposium, co-organized with the Brussels Privacy Hub, brought together stakeholders to examine the GDPR’s role in the EU’s evolving digital framework. In partnership with OneTrust, FPF also published an updated Conformity Assessment under the EU AI Act: A Step-by-Step Guide and infographic, providing a roadmap for organizations to assess high-risk AI systems and meet accountability requirements. FPF closely followed the European Commission’s Digital Omnibus proposals, offering exclusive member analysis and public insights, including a rapid first-reaction discussion that became one of its most engaging LinkedIn posts.
State and Federal U.S. Legislation
In 2025, FPF continued to track and analyze critical privacy legislation, from AI chatbots to neural data, across various states in the U.S.
We unpacked the new wave of state chatbot legislation, focusing on California’s SB 243, which made California the first state to pass legislation governing companion chatbots with protections explicitly tailored to minors, and Utah’s SB 332, SB 226, and HB 452, three generative AI bills that amend Utah’s 2024 Artificial Intelligence Policy Act (AIPA) and establish new regulations for mental health chatbots, confirming the state’s position as an early mover in state AI legislation.
FPF compared California’s SB 53, the Transparency in Frontier Artificial Intelligence Act (TFAIA), with New York’s RAISE Act to anticipate where U.S. policy on frontier model safety may be headed; SB 53 was signed into law, making California the first state to enact a statute specifically targeting frontier AI safety and transparency.
We also looked at how amendments to existing state privacy laws, such as the Montana Consumer Data Privacy Act (MCDPA), created new protections for minors, and examined how SB 1295 amends the Connecticut Data Privacy Act (CTDPA), expanding its scope, adding a new consumer right, heightening the already strong protections for minors, and more.
Data-driven pricing also became a critical topic as states across the U.S. introduced new legislation to regulate how companies use algorithms and personal data to set consumer prices; because these modern pricing models can personalize prices over time and at scale, they are under increasing scrutiny. FPF looked at how the legislation varies from state to state, its potential consequences, and the future of enforcement against these practices.
We explored “neural data”, or information about people’s central and/or peripheral nervous system activity. As of July 2025, four states have passed laws that seek to regulate “neural data.” FPF detailed in a blog why, given the nature of “neural data,” it is challenging to get the definition just right for the sake of regulation.
Building on last year’s “Anatomy of State Comprehensive Privacy Law,” our recent report breaks down the critical commonalities and differences in the laws’ components that collectively constitute the “anatomy” of a state comprehensive privacy law.
Also this year, FPF hosted its 15th Annual Privacy Papers for Policymakers Award, recognizing cutting-edge privacy scholarship, bringing together brilliant minds at a critical time for data privacy amid the rise of AI. We listened to insightful discussions between our awardees and an exceptional lineup of privacy academics and industry leaders, while connecting with our awardees through a networking session with privacy professionals, policymakers and others.
U.S. Policy
AgeTech
FPF was awarded a grant from the Alfred P. Sloan Foundation to lead the two-year research project, “Aging at Home: Caregiving, Privacy, and Technology,” in partnership with the University of Arizona’s Eller College of Management. FPF launched the project in April, setting out to explore the complex intersection of privacy, economics, and the use of emerging technologies designed to support aging populations (“AgeTech”). In July, we released our first blog as part of the project, posing five essential privacy questions for older adults and caregivers to consider when utilizing tech to support aging populations.
During the holiday season, FPF also outlined three types of AI-enabled AgeTech and the privacy and data protection considerations to navigate when gift-giving to older individuals and caregivers.
Youth Privacy
The start of 2025 was marked by significant policy activity at both the federal and state levels, focusing on legislative proposals aimed at strengthening online safeguards for minors.
FPF kicked off the year by releasing a redline comparison of the Federal Trade Commission’s notice of proposed changes to the Children’s Online Privacy Protection Act (COPPA) Rule. Later in the spring, the COPPA 2.0 bill was reintroduced in the Senate, and FPF completed a second redline comparing the newly proposed bill to the original COPPA Rule.
Towards the end of the year, the U.S. House Energy & Commerce Committee introduced a comprehensive bill package to advance child online privacy and safety, including its own version of COPPA 2.0, marking the latest step toward modernizing the nearly 30-year-old Children’s Online Privacy Protection Act.
FPF analyzed how the new House proposal compares to long-standing Senate efforts, what’s changing, and what it means for families, platforms, and policymakers navigating today’s digital landscape.
States across the U.S. also took action, introducing legislation to enhance the privacy and safety of kids’ and teens’ online experiences. Using the federal COPPA framework as a guide, FPF analyzed Arkansas’s proposed “Arkansas Children and Teens’ Online Privacy Protection Act”, describing how the bill establishes new privacy protections for teens aged 13 to 16. Other states, such as Vermont and Nebraska, took a different approach, opting to pass Age-Appropriate Design Code Acts (AADCs). FPF discussed how these new bills take two very different approaches to a common goal, crafting a design code that can withstand First Amendment scrutiny.
We used infographics to illustrate complex issues related to technology and children’s online experiences. In celebration of Safer Internet Day 2025, we released an infographic explaining how encryption technology plays a crucial role in ensuring data privacy and online safety for a new generation of teens and children. We also illustrated the Spectrum of Artificial Intelligence, exploring the wide range of current use cases for AI in education and future possibilities and constraints. Finally, we released an infographic and readiness checklist detailing the various types of deepfakes and the varied risks and considerations posed by each in a school setting, ranging from the potential for fabricated phone calls and voice messages impersonating teachers to the sharing of forged, non-consensual intimate imagery (NCII).
As agencies face increasing pressure to leverage sensitive student and institutional data for analysis and research, Privacy Enhancing Technologies (PETs) offer a potential solution: advanced techniques designed to protect data privacy while maintaining the utility of analytical results. FPF released a landscape report on the adoption of PETs by State Education Agencies (SEAs).
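To make the idea concrete, here is a minimal sketch of one common PET, differential privacy, which releases an aggregate statistic with calibrated noise so that no individual record can be singled out. This is an illustrative example only, not drawn from the FPF report; the function name and parameters are hypothetical.

```python
import numpy as np

def dp_count(records, epsilon=1.0):
    """Return a differentially private count of records.

    Adds Laplace noise scaled to 1/epsilon (the sensitivity of a count
    is 1), so the released number stays useful in aggregate while
    masking whether any single record is present.
    """
    true_count = len(records)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: an education agency releasing an approximate program
# enrollment count without exposing any individual student record.
students = ["s1", "s2", "s3", "s4", "s5"]
print(round(dp_count(students, epsilon=0.5)))
```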
Data Sharing for Research Tracker
In March, we celebrated Open Data Day by launching the Data Sharing for Research Tracker, a growing list of organizations that make data available to researchers. The tracker helps researchers locate data for secondary analysis and helps organizations raise awareness of their data-sharing programs and benchmark them against what other organizations offer.
Foundation Support
FPF’s funding spans every industry sector and includes competitive funded projects from the U.S. National Science Foundation and leading private foundations. We work to support ethical access to data by researchers and responsible uses of technology in K-12 education, and we seek to advance the use of Privacy Enhancing Technologies in the private and public sectors.
FPF Membership
FPF Membership provides the leading community for privacy professionals to meet, network, and engage in discussions on top issues in the privacy landscape.
The Privacy Executives Network (PEN) Summit
We held our 2nd annual PEN Summit in Berkeley, California, which showcased the power of quality peer-to-peer conversations focused on the most pressing global privacy and AI issues. The event opened with the latest from CPPA Executive Director Tom Kemp, continued with dynamic peer-to-peer roundtables, and closed with a lively half-day privacy simulation in which participants were challenged to pool their knowledge and identify potential solutions to a scenario that privacy executives may face in their careers.
New Trainings for FPF Members
FPF Membership expanded its benefits with complimentary trainings for all members. FPF members can attend live virtual trainings and access training recordings and presentation slides via the FPF Member Portal. We held our first course for members in late September on De-Identification, followed by a training on running a Responsible AI program. Stay tuned for more courses next year, and be sure to join the FPF Training community in the Member Portal to receive updates on future trainings and view existing training materials.
FPF convenes top privacy and data protection minds and can give your company access to our outstanding network through FPF membership. Learn more about how to become an FPF member.
Top-level FPF Convenings and Engagements from 2025
FPF’s DC Privacy Forum: Governance for Digital Leadership and Innovation
This year, FPF hosted two major events gathering leading experts and policymakers for critical discussions on privacy, AI, and digital regulation. In D.C., FPF hosted our second annual DC Privacy Forum, convening a broad audience of key government, civil society, academic, and corporate privacy leaders to discuss AI policy, critical topics in privacy, and other priority issues for the new administration and policymakers.
Brussels Privacy Symposium
Our ninth edition of the Brussels Privacy Symposium focused on the impact of the European Commission’s competitiveness and simplification agenda on digital regulation, including data protection. This year’s event featured bold discussions on refining the GDPR, strengthening regulatory cooperation, and shaping the future of AI governance. Read the report here.
FPF experts also took the stage across the globe:
- FPF’s APAC team participated in several events during the Personal Data Protection Commission Singapore’s (PDPC) Personal Data Protection Week, both moderating and speaking on panels focused on the latest developments in data protection and use of data tech.
- In South Korea, FPF leadership and experts attended the 47th Global Privacy Assembly, the leading international forum that brings together data protection and privacy authorities from around the world.
- In Ghana, we joined leaders at the Africa AI Stakeholder Meeting to discuss AI Governance, data protection & infrastructure for AI in Africa.
- FPF leadership spoke at an official side event of the AI Action Summit in Paris, discussing the role of Privacy-Enhancing Technologies (PETs) in data sharing for AI development.
- FPF joined Turkey’s first privacy summit, the Istanbul Privacy Summit, contributing to a panel on “Regulating Intelligence” that explored approaches to responsible AI from the EU to the Middle East.
- FPF spoke at the OECD high-level symposium “Smart Rules, Stronger Business,” discussing how governments can design and implement agile regulatory frameworks that foster innovation while effectively managing risks.
- In Washington, we kicked off the IAPP’s Global Privacy Summit (GPS) with our annual Spring Social, a night full of great company, engaging discussions, and new connections. At GPS, our team hosted and spoke on several panels on topics ranging from understanding U.S. state and global privacy governance to the future of technological innovation, policy, and professions.
New initiatives and expanding FPF’s network:
- FPF restarted its Privacy Book Club series with a special conversation with Professor Simon Chesterman, author of Artifice – a novel of AI. They discussed how speculative fiction can illuminate real-world challenges in AI governance, privacy, and trust, and what policymakers, technologists, and the public can learn from imagining possible futures. Watch the LinkedIn Live discussion, check out previous book club chats, and sign up for our newsletter to receive updates. Also check out FPF CEO Jules Polonetsky’s weekly LinkedIn Live series.
- FPF was pleased to announce the election of Anne Bradley, Peter Lefkowitz, Nuala O’Connor, and Harriet Pearson to its Board of Directors. Julie Brill, Jocelyn Aqua, Haksoo Ko, Yeong Zee Kin, and Ann Waldo also joined FPF as senior fellows.
- We honored Julie Brill at our 2nd Annual PEN Summit with a Lifetime Achievement Award, which recognizes Brill’s decades of leadership and profound impact on the fields of consumer protection, data protection, and digital trust in her public and private sector roles.
Please continue to follow FPF’s work by subscribing to our monthly briefing and following us on LinkedIn, Twitter/X, and Instagram. On behalf of the FPF team, we wish you a very Happy New Year and look forward to 2026!
This material is based upon work supported by the Alfred P. Sloan Foundation under Grant No. G-2025-2519, Aging at Home: Caregiving, Privacy, and Technology.