We are pleased to introduce FPF’s 16th annual Privacy Papers for Policymakers. Each year, we invite privacy scholars and authors to submit scholarship for consideration by a committee of reviewers and judges from the FPF Advisory Board. The selected papers are those judged to contain practical analyses of emerging issues that policymakers in Congress, in federal agencies, at the state level, and internationally will find useful.
We thank the scholars, advocates, and Advisory Board members who engage with us to explore the future of privacy.
Awarded Papers
AI Agents and Memory: Privacy and Power in the Model Context Protocol (MCP) Era
New America
Available here: https://www.newamerica.org/oti/briefs/ai-agents-and-memory/
Executive Summary
This policy brief examines how AI agents — autonomous systems that use the Model Context Protocol (MCP) to connect across applications — will reshape the boundaries of privacy, security, and governance. By standardizing how AI systems access external tools and retain memory across sessions, MCP allows personal data, context, and identity to move fluidly across digital environments. This interoperability delivers powerful opportunities, but also undermines traditional privacy safeguards built around app-specific data silos. Frameworks such as COPPA, GDPR, and the FTC Act rely on principles of consent, purpose limitation, and minimization — assumptions that break down when agents autonomously share and retain information across multiple services.
This article argues that policymakers and privacy regulators must treat orchestration protocols like MCP as emerging data infrastructure and develop governance standards that ensure privacy resilience at the systems level. Key priorities include interoperable memory dashboards, scoped permissions, cryptographically signed audit trails, and strong portability and deletion rights to prevent user data from becoming locked within proprietary ecosystems. By updating privacy law and technical standards to address persistent, cross-context memory, regulators can ensure that AI agents remain transparent, accountable, and aligned with user consent in a rapidly converging digital ecosystem.
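To make two of those priorities concrete, here is a minimal Python sketch of scoped per-tool permissions paired with a cryptographically signed audit trail. The brief does not prescribe an implementation; the scope table, the HMAC scheme, and every name below (SCOPES, signed_audit_entry, and so on) are illustrative assumptions.

```python
# Illustrative sketch only: scoped permissions plus a tamper-evident audit log
# for an MCP-style agent. All names and the HMAC scheme are assumptions, not
# part of the Model Context Protocol specification.
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical per-tool permission scopes granted to the agent.
SCOPES = {"calendar": {"read"}, "email": {"read", "send"}}

SIGNING_KEY = b"replace-with-a-managed-secret"  # assumed to be held by the orchestrator

def authorize(tool: str, action: str) -> bool:
    """Check a requested agent action against its scoped permissions."""
    return action in SCOPES.get(tool, set())

def signed_audit_entry(tool: str, action: str, detail: str) -> dict:
    """Record an agent action, then attach an HMAC so later edits are detectable."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "action": action,
        "detail": detail,
        "allowed": authorize(tool, action),
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return entry

print(signed_audit_entry("email", "send", "drafted reply to meeting invite"))
```

A verifier holding the same key can recompute the HMAC over the entry (minus its signature) to detect tampering, which is the property the brief's call for signed audit trails is after.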
Authors
Prem Trivedi, Open Technology Institute (New America)
Matt Steinberg, Knight-Georgetown Institute
AI and Doctrinal Collapse
GW Law
Available here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5384965
Executive Summary
Artificial intelligence runs on data. But the two legal regimes that govern data—information privacy law and copyright law—are under pressure. Formally, each regime demands different things. Functionally, the boundaries between them are blurring, and their distinct rules and logics are becoming illegible.
This Article identifies this phenomenon, which I call “inter-regime doctrinal collapse,” and exposes the individual and institutional consequences. Through analysis of pending litigation, discovery disputes, and licensing agreements, this Article exposes two dominant exploitation tactics enabled by collapse: Companies “buy” data through business-to-business deals that sidestep individual privacy interests, or “ask” users for broad consent through privacy policies and terms of service that leverage notice-and-choice frameworks. Left unchecked, the data acquisition status quo favors established corporate players and impedes law’s ability to constrain the arbitrary exercise of private power.
Doctrinal collapse poses a fundamental challenge to the rule of law. When a leading AI developer can simultaneously argue that data is public enough to scrape—defusing privacy and copyright controversies—and private enough to keep secret—avoiding disclosure or oversight of its training data—something has gone seriously awry with how law constrains power. To manage these costs and preserve space for salutary innovation, we need a law of collapse. This Article offers institutional responses, drawn from conflict of laws and legal pluralism, to create one.
Author
Alicia Solow-Niederman, George Washington University Law School
AI as Normal Technology
Knight First Amendment Institute at Columbia University
Available here: https://knightcolumbia.org/content/ai-as-normal-technology
Executive Summary
We articulate a vision of Artificial Intelligence (AI) as normal technology. To view AI as normal is not to understate its impact—even transformative, general-purpose technologies such as electricity and the internet are “normal” in this conception. This frame stands in contrast to both utopian and dystopian visions that treat AI as a separate species or a highly autonomous, potentially superintelligent entity. The statement “AI is normal technology” is three things: a description of current AI, a prediction about its foreseeable future, and, most importantly, a prescription for how society should treat it. It rejects technological determinism and emphasizes the role of institutions in shaping AI’s trajectory, guided by continuity between the past and the future.
Authors
Arvind Narayanan, Princeton University
Sayash Kapoor, Ph.D. candidate at Princeton University’s Center for Information Technology Policy
Beyond Algorithmic Disgorgement: Remedying Algorithmic Harms
GW Law
Available here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5212510
Executive Summary
AI regulations are popping up around the world, and they mostly involve ex-ante risk assessment and mitigation of those risks. But even with careful risk assessment, harms inevitably occur. This raises the question of algorithmic remedies: what to do once algorithmic harms occur, especially when traditional remedies are ineffective. What makes a particular algorithmic remedy appropriate for a given algorithmic harm?
I explore this question through a case study of a prominent algorithmic remedy: algorithmic disgorgement—destruction of models tainted by illegality. Since the FTC first used it in 2019, it has garnered significant attention, and other enforcers and litigants around the country and the world have started to invoke it. Alongside its increasing popularity came a significant expansion in scope. Initially, the FTC invoked it in cases where data was allegedly collected unlawfully and ordered deletion of models created using such data. The remedy’s scope has since expanded; regulators and litigants now invoke it against AI whose use, not creation, causes harm. It has become a remedy many turn to for all things algorithmic.
Author
Christina Lee, George Washington University Law School
Can Consumers Protect Themselves Against Privacy Dark Patterns?
Northwestern Pritzker School of Law
Available here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5084827
Executive Summary
Dark patterns have emerged in the last few years as a major target of legislators and regulators. Dark patterns are online interfaces that manipulate, confuse, or trick consumers into purchasing goods or services that they do not want, or into surrendering personal information that they would prefer to keep private. As new laws and regulations to restrict dark patterns have emerged, skeptics have countered that motivated consumers can and will protect themselves against these manipulative interfaces, making government intervention unnecessary. This debate occurs alongside active legislative and regulatory discussion about whether to prohibit dark patterns in newly enacted comprehensive consumer privacy laws.
Our interdisciplinary paper provides experimental evidence showing that consumer self-help is unlikely to fix the dark patterns problem. Several common dark patterns (obstruction, interface interference, preselection, and confusion), which we integrated into the privacy settings for a video-streaming website, remained strikingly effective at manipulating consumers into surrendering private information even when consumers were charged with maximizing their privacy protections and understood that objective. We also provide the first published evidence of the independent potency of “nagging” dark patterns, which pester consumers into agreeing to an undesirable term. These findings strengthen the case for legislation and regulation to address dark patterns.
Our paper also highlights the broad popularity of a feature of the recent California Consumer Privacy Act (CCPA), which gives consumers the ability to opt out of the sale or sharing of their personal information with third parties. As long as consumers see the Do Not Sell option, a super-majority of them will exercise their rights, and a substantial minority will even overcome dark patterns in order to do so.
Authors
Matthew B. Kugler, Northwestern University – Pritzker School of Law
Lior Strahilevitz, University of Chicago Law School
Marshini Chetty, University of Chicago
Chirag Mahapatra, Harvard University – Harvard Kennedy School (HKS)
Yaretzi Ulloa, Yale University
De-identification Guidelines for Structured Data
Information and Privacy Commissioner of Ontario
Available here: https://www.ipc.on.ca/en/resources/de-identification-guidelines-structured-data
Executive Summary
De-identification is becoming an essential tool for enabling responsible data access and reuse. The Information and Privacy Commissioner of Ontario’s De-identification Guidelines for Structured Data (“the Guidelines”) provide organizations with comprehensive, operational, and clear techniques for applying de-identification effectively while minimizing the risks of re-identification. These Guidelines support the responsible use of data for the public good by balancing data utility with data privacy.
Focusing on structured data, identity disclosure, and model-based re-identification risk assessment, the Guidelines respond to today’s privacy challenges and international regulatory developments, building on contemporary best practices and real-world experiences of de-identifying structured data. Developed through extensive consultations with interested parties, the Guidelines fill key gaps, promote good de-identification practices, and operationalize the principles in the ISO/IEC 27559 standard on de-identification.
The Guidelines explain how modern de-identification methods and strategies can reduce re-identification risk to a very low level across different data release contexts, including both non-public and public data sharing. Designed to support practitioners, the Guidelines include clear explanations of core de-identification terminology, data transformation techniques, and ways of measuring a dataset’s vulnerability to identity disclosure. Concrete step-by-step processes for de-identifying structured data, practical checklists, and case studies further support implementation in practice.
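As one illustration of how identity-disclosure vulnerability can be measured in structured data, the sketch below computes equivalence-class sizes over quasi-identifiers (the k-anonymity measure). This is a common baseline rather than the Guidelines' prescribed model-based risk assessment, and the sample records and field names are assumptions.

```python
# A minimal sketch of one common identity-disclosure metric for structured data:
# the size of each equivalence class on the quasi-identifiers (k-anonymity).
# The sample records are invented; the IPC Guidelines describe a richer,
# model-based risk assessment that this example does not reproduce.
from collections import Counter

records = [
    {"age_band": "30-39", "postal_prefix": "M5V", "sex": "F"},
    {"age_band": "30-39", "postal_prefix": "M5V", "sex": "F"},
    {"age_band": "40-49", "postal_prefix": "K1A", "sex": "M"},
]

QUASI_IDENTIFIERS = ("age_band", "postal_prefix", "sex")

def equivalence_classes(rows):
    """Group records by their combination of quasi-identifier values."""
    return Counter(tuple(r[q] for q in QUASI_IDENTIFIERS) for r in rows)

classes = equivalence_classes(records)
k = min(classes.values())  # smallest class size: the dataset is k-anonymous
max_risk = 1 / k           # naive upper bound on re-identification probability
print(f"k = {k}, worst-case re-identification risk = {max_risk:.2f}")
```

Here the third record is unique on its quasi-identifiers, so k = 1 and the worst-case risk is 1.0, the kind of vulnerability that transformation techniques such as generalization and suppression are meant to reduce.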
While intended for organizations subject to Ontario’s privacy laws, the Guidelines may serve as a valuable model for organizations in other jurisdictions. By adopting the Guidelines, organizations can better demonstrate their commitment to privacy, transparency, accountability, and maintaining public trust in their use of data, while enabling data use and data-driven innovation.
How the Legal Basis for AI Training Is Framed in Data Protection Guidelines
International Data Privacy Law
Available here: https://academic.oup.com/idpl/advance-article/doi/10.1093/idpl/ipaf032/8471305
Executive Summary
This paper investigates how the legal basis for AI training is framed within data protection guidelines and regulatory interventions, drawing on a comparative analysis of approaches taken by authorities across multiple jurisdictions.
Focusing on the EU’s General Data Protection Regulation (GDPR) and analogous data protection frameworks globally, the study systematically maps guidance, statements, and actions to identify areas of convergence and divergence in the conceptualization and operationalization of lawful grounds for personal data processing—particularly legitimate interest and consent—in the context of AI model development.
The analysis reveals a trend toward convergence on legitimate interest as the predominant legal basis for AI training. However, this convergence is largely superficial, as guidelines rarely resolve deeper procedural and substantive ambiguities, and enforcement interventions often default to minimal safeguards. This disconnect between regulatory rhetoric and practical compliance leaves significant gaps in protection and operational clarity for data controllers, calling into question the reliability and legitimacy of the existing framework for lawful AI training. The paper warns that, without clearer operational standards and more coherent cross-border enforcement, legal bases such as legitimate interest risk serving as little more than formalities.
Reflecting on these findings, the paper explores the prospects and limitations of achieving greater alignment or harmonization at the global level. Specifically, it reflects on pathways for global AI governance, emphasizing that progress would benefit from distinguishing issues amenable to international convergence from those that require context-sensitive, locally adaptive solutions. It considers how regulators, practitioners, civil society activists, and scholars might leverage these insights to prioritize evidence-based avenues rather than seeking uniformity at the conceptual level for its own sake, with the ultimate goal of advancing both principled and practical frameworks for lawful AI training across diverse legal landscapes.
Authors
Wenlong Li, Research Professor, Guanghua Law School, Zhejiang University
Yueming Zhang, Research Affiliate, Law and Technology Research Group, Faculty of Law & Criminology, Ghent University
Qingqing Zheng, Undergraduate Student, School of Law, Shandong University
Aolan Li, Doctoral Candidate, Center for Commercial Law Studies, Department of Law, Queen Mary University of London
Honorable Mentions
Brokering Safety
SSRN
Available here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5143114
Executive Summary
For victims of abuse, safety means hiding. Not just hiding themselves, but also hiding their contact details, their address, their workplace, their roommates, and any other information that could enable their abuser to target them. Yet today, no number of name changes and relocations can prevent data brokers from sharing a victim’s personal information online. Thanks to brokers, abusers can find what they need with a single search, a few clicks, and a few dollars. For many victims, then, the best hope for safety lies in obscurity—that is, making themselves and their information harder to find.
This Article exposes privacy law’s complicity in this phenomenon of “brokered abuse.” Today, victims seeking obscurity can ask data brokers to remove their online information. But a web of privacy laws props up a fragmented and opaque system that forces victims to navigate potentially hundreds of distinct opt-out processes, wait months for their information to be removed, and then repeat this process continuously to ensure their information doesn’t resurface. The status quo compels victims to manage their own privacy, placing the burden of maintaining obscurity on already-overburdened shoulders.
In response, this Article pitches a new regulatory regime premised on a transformative reallocation of responsibility. In short, it proposes a techno-legal system that would enable victims to obscure their information across all data brokers with a single request, redistributing the burden away from victims and onto brokers. Such a system is justified, feasible, and constitutional—despite what brokers might say. The industry is eager to assert that it has a First Amendment right to exploit people’s data, but this Article develops a trio of arguments to confront this controversial claim of corporate power. By blending theory, policy, and technical design, this Article charts a path toward meaningful privacy protections for victims and, ultimately, a more empathetic legal landscape for those most at risk.
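A toy sketch of the Article's "single request" idea appears below: one obscurity request fanned out to every broker in a shared registry, moving the operational burden from the victim to the system. The registry, request fields, and handler interface are hypothetical; the Article proposes a techno-legal and institutional design, not a reference implementation.

```python
# A toy model of the proposed single-request regime: a victim files one
# obscurity request and the system fans it out to every registered broker.
# Everything here (field names, registry, handler shape) is an assumption.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class ObscurityRequest:
    victim_token: str   # pseudonymous identifier, so the request itself leaks nothing
    scope: str          # e.g., "remove-and-suppress" (illustrative value)

# In the proposed regime, a regulator (not the victim) would maintain this registry.
BROKER_REGISTRY: Dict[str, Callable[[ObscurityRequest], str]] = {}

def register_broker(name: str, handler: Callable[[ObscurityRequest], str]) -> None:
    BROKER_REGISTRY[name] = handler

def submit_once(request: ObscurityRequest) -> Dict[str, str]:
    """Fan a single request out to all brokers, shifting the burden off the victim."""
    return {name: handler(request) for name, handler in BROKER_REGISTRY.items()}

register_broker("broker-a", lambda req: "suppressed")
register_broker("broker-b", lambda req: "suppressed")
print(submit_once(ObscurityRequest(victim_token="tok-42", scope="remove-and-suppress")))
```

The design point this illustrates is the reallocation of responsibility: the victim's effort is constant (one request) regardless of how many brokers exist, instead of scaling with the hundreds of distinct opt-out processes the status quo requires.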
Authors
Chinmayi Sharma, Fordham University School of Law
Thomas Kadri, University of Georgia School of Law
Sam Adler, Fordham University School of Law
Focusing Privacy Law
Berkeley Technology Law Journal
Available here: https://btlj.org/wp-content/uploads/2025/09/40-2_Ohm.pdf
Executive Summary
If the United States ever enacts an omnibus privacy law akin to the European Union’s General Data Protection Regulation, we might declare “mission accomplished.” We should not be so quick to celebrate, as this alone would not solve enough of our manifest privacy problems. Given the dysfunctions of our technological, political, economic, and social institutions, any omnibus law this country would enact is likely to be watered down, managerial, and incomplete.
While we continue to pursue an elusive omnibus ideal, we should at the same time focus on enacting new and better focused privacy laws, such as laws that govern narrowly drawn categories of sensitive information or specific uses of information. Enacting focused privacy laws will harness three great themes in how scholars conceptualize privacy, by centering harms, rights, and context. The great mistake of the omnibus era has been thinking we could write one law to properly account for information’s contextual variation. It is better to enact rules and enforcement strategies tailored for each context.
Focused privacy laws also play to the strengths and address the weaknesses of our present-day, complex mix of governance institutions. Narrower laws place the onus on Congress to define rules clearly and in detail, freeing up agencies to prioritize enforcement. This will strengthen privacy protection in the face of recent attacks on agency authority and resources. State legislatures can enact focused privacy laws to serve as laboratories of privacy law experimentation, meaning any federal omnibus law should not preempt states from enacting them. Public choice theory suggests that debates over focused laws may inspire less corporate lobbying and other interest group politics that have watered down omnibus proposals.
Finally, focused privacy laws can be used to help us reshape and redesign our broken information economy. Laws that apply special rules for specific uses or categories of information compel firms to reshape their products, services, org charts, and internal operations around values-centric lines. These laws thus give people outside the firm a say in the design of these vital pieces of information infrastructure and can empower those inside the company who focus beyond the bottom line.
Author
Paul Ohm, Georgetown University Law Center
Student Paper Award
Decoding Consent Managers under the Digital Personal Data Protection Act, 2023: Empowerment Architecture, Business Models and Incentive Alignment
SSRN
Available here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5291354
Executive Summary
The Digital Personal Data Protection Act, 2023 (DPDP Act) marks a transformative shift in India’s data governance landscape by anchoring personal data processing in user consent. Within this framework, consent managers emerge as pivotal intermediaries that address structural issues in data sharing, reduce consent fatigue, and enable data portability. While earlier models—such as the Justice Srikrishna Committee’s ‘dashboard model’—envisioned them as passive record-keepers, the Data Empowerment and Protection Architecture (DEPA) significantly expands their role.
Under DEPA, consent managers actively enable secure, interoperable data flows between data fiduciaries, breaking monopolistic silos and catalysing competition. However, their viability depends on overcoming critical economic and operational hurdles—notably, the absence of clear revenue models, fiduciary incentives, and the ongoing challenge of standardisation.
This paper critically evaluates the evolving consent manager ecosystem, examining how tools like personalised consent dashboards, behavioural nudges, and privacy-by-design protocols can empower data principals. It explores mechanisms—such as reciprocity, operational efficiencies, and exemptions for inferred data—that may nudge large fiduciaries toward voluntary participation.
Ultimately, consent managers are situated within India’s broader Digital Public Infrastructure (DPI). Their effective deployment could establish a global benchmark for decentralised, citizen-centric data governance. Realising this potential will require regulatory clarity, unified technical standards, and scalable market-based solutions. Done right, consent managers could become the linchpin of India’s digital economy—balancing individual autonomy with innovation in an increasingly data-driven world.
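For illustration only, the sketch below models the kind of simplified consent record a consent manager might maintain for a data principal. DEPA and the DPDP Act rules define their own artefact formats; every field and name here is a hypothetical stand-in, not the official schema.

```python
# A simplified, hypothetical consent record of the kind a consent manager might
# log for a data principal. This is NOT the DEPA consent artefact or any
# official DPDP Act schema; all fields are illustrative assumptions.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timedelta, timezone
import json
import uuid

@dataclass
class ConsentRecord:
    data_principal: str    # the individual granting consent
    data_fiduciary: str    # the entity requesting the data
    purpose: str           # the stated purpose of processing
    categories: list       # data categories covered by the grant
    expires_at: str        # consent lapses instead of persisting indefinitely
    consent_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    revoked: bool = False  # the principal can withdraw consent at any time

grant = ConsentRecord(
    data_principal="principal-123",
    data_fiduciary="bank-abc",
    purpose="loan underwriting",
    categories=["account-statements"],
    expires_at=(datetime.now(timezone.utc) + timedelta(days=30)).isoformat(),
)
print(json.dumps(asdict(grant), indent=2))
```

A record like this is what a personalized consent dashboard would surface to the data principal: who holds a grant, for what purpose, over which categories, and until when, with revocation as a first-class operation.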
Author
Aditya Sushant Jain, O.P. Jindal Global University – Jindal Global Law School
To learn more about FPF’s Privacy Papers for Policymakers and submission guidelines, please visit https://fpf.org/privacy-papers-for-policy-makers/