Future of Privacy Forum to Honor Top Scholarship at Annual Privacy Papers for Policymakers Event
Washington, D.C. — January 26, 2026 — Today, the Future of Privacy Forum (FPF) — a global non-profit that advances principled and pragmatic data protection, AI, and digital governance practices — announced the winners of its 16th annual Privacy Papers for Policymakers (PPPM) Awards.
The PPPM Awards recognize leading research and analytical scholarship in privacy relevant to policymakers in the U.S. and internationally. The awards highlight important work that analyzes current and emerging privacy and AI issues and proposes achievable short-term solutions or means of analysis with the potential to lead to real-world policy solutions. Seven winning papers, two honorable mentions, and one student submission were chosen by a group of FPF staff members and advisors based on originality, applicability to policymaking, and overall quality of writing.
Winning authors will have the opportunity to present their work at virtual webinars scheduled for March 4, 2026, and March 11, 2026.
“As artificial intelligence and data protection increasingly shape global policy discussions, high-quality academic research is more important than ever,” said FPF CEO Jules Polonetsky. “This year’s award recipients offer the kind of careful analysis and independent thinking policymakers rely on to address complex issues in the digital environment. We are pleased to recognize scholars whose work helps ensure that technological innovation develops in ways that remain grounded in privacy and responsible data governance.”
FPF’s 2026 Privacy Papers for Policymakers Award winners are:
- AI as Normal Technology by Arvind Narayanan and Sayash Kapoor
- The authors describe the statement “AI is normal technology” as three things: a description of current AI, a prediction about the foreseeable future of AI, and a prescription for how we should treat it. Viewing AI as a human-controlled tool rather than an autonomous, superintelligent entity, they argue that remaining in control of it requires neither drastic policy interventions nor technical breakthroughs. By drawing parallels to historical innovations like electricity, the authors contend that AI’s societal impact will be gradual, institutional, and manageable through resilient policy.
- AI and Doctrinal Collapse by Alicia Solow-Niederman
- Artificial intelligence runs on data. But the two legal regimes that govern data—information privacy law and copyright law—are under pressure. This Article identifies this phenomenon, which the author calls “inter-regime doctrinal collapse,” and exposes the individual and institutional consequences. Through analysis of pending litigation, discovery disputes, and licensing agreements, this Article highlights two dominant exploitation tactics enabled by collapse: Companies “buy” data through business-to-business deals that sidestep individual privacy interests, or “ask” users for broad consent through privacy policies and terms of service that leverage notice-and-choice frameworks. Left unchecked, the data acquisition status quo favors established corporate players and impedes the law’s ability to constrain the arbitrary exercise of private power.
- AI Agents and Memory: Privacy and Power in the Model Context Protocol (MCP) Era by Matt Steinberg and Prem M. Trivedi
- The shift from chatbots to autonomous agents is underway in both business and consumer applications. These systems are built on a new technical standard called the Model Context Protocol (MCP), which enables AI systems to connect to external tools such as calendars, email, and file storage. By standardizing these connections, MCP enables memory and context to move seamlessly from one app to another. This brief examines how agents and MCP work, where risks emerge, and why current guardrails fall short. It proposes targeted interventions to ensure that these systems remain understandable, accountable, and aligned with the people they serve. Finally, it examines how existing privacy frameworks align with this new architecture and identifies areas where new interpretations or protections may be needed.
- Beyond Algorithmic Disgorgement: Remedying Algorithmic Harms by Christina Lee
- As global AI regulations shift from risk assessment to addressing realized harms, “algorithmic disgorgement”—the mandated destruction of models—has emerged as a primary remedy. While this tool has expanded from punishing illegal data collection to addressing harmful model usage, a critical examination reveals that it is often a poor fit for the complexities of the modern algorithmic supply chain. Because AI systems involve interconnected data flows and multiple stakeholders, disgorgement frequently penalizes innocent parties without effectively burdening the blameworthy. To ensure more equitable outcomes, the author argues that appropriate algorithmic remedies must be responsive to the specific harm and account for their full impact across the entire supply chain. This analysis highlights the pressing need for a more comprehensive and nuanced toolkit of legal remedies, one that extends beyond simple model destruction.
- Can Consumers Protect Themselves Against Privacy Dark Patterns? by Matthew B. Kugler, Lior Strahilevitz, Marshini Chetty, Chirag Mahapatra, and Yaretzi Ulloa
- As dark patterns have become a primary target for global regulators, skeptics argue that government intervention is unnecessary because motivated consumers can protect themselves. This interdisciplinary study challenges that assumption, providing experimental evidence that manipulative interfaces, including obstruction, preselection, and “nagging,” remain strikingly effective even when users are actively trying to maximize their privacy. The Article argues that although a supermajority of consumers will exercise opt-out rights when clearly presented with a “Do Not Sell” option, the overall persistence of these patterns suggests that consumer self-help alone is insufficient. Consequently, the paper concludes that robust legislation and regulation, such as the California Consumer Privacy Act (CCPA), are crucial in countering digital manipulation.
- How the Legal Basis for AI Training Is Framed in Data Protection Guidelines by Wenlong Li, Yueming Zhang, Qingqing Zheng, and Aolan Li (link forthcoming)
- This paper investigates how the legal basis for AI training is framed within data protection guidelines and regulatory interventions, drawing on a comparative analysis of approaches taken by authorities across multiple jurisdictions. Focusing on the EU’s General Data Protection Regulation (GDPR) and analogous data protection frameworks globally, the study systematically maps guidance, statements, and actions to identify areas of convergence and divergence in the conceptualisation and operationalisation of lawful grounds for personal data processing—particularly legitimate interest and consent—in the context of AI model development. The analysis reveals a trend toward converging on the recognition of legitimate interest as the predominant legal basis for AI training. However, this convergence is largely superficial, as guidelines rarely resolve deeper procedural and substantive ambiguities, and enforcement interventions often default to minimal safeguards. This disconnect between regulatory rhetoric and practical compliance leaves significant gaps in protection and operational clarity for data controllers, raising questions about the reliability and legitimacy of the existing framework for lawful AI training. It warns that, without clearer operational standards and more coherent cross-border enforcement, there is a risk that legal bases such as legitimate interest will serve as little more than formalities.
- De-Identification Guidelines for Structured Data by the Information and Privacy Commissioner of Ontario
- As the demand for government-held data increases, institutions require effective processes and techniques for removing personal information. An important tool in this regard is de-identification. These guidelines introduce institutions to the basic concepts and techniques of de-identification. They outline the key issues to consider when de-identifying personal information in the form of structured data and provide a step-by-step process that institutions can follow to remove personal information from datasets. This update of the IPC’s globally recognized guidelines, originally released in 2016, provides practical steps to help organizations maximize the benefits of data while protecting privacy.
In addition to the winning papers, FPF recognized two papers as Honorable Mentions: Brokering Safety by Chinmayi Sharma, Fordham University School of Law; Thomas Kadri, University of Georgia School of Law; and Sam Adler, Fordham University School of Law; and Focusing Privacy Law by Paul Ohm, Georgetown University Law Center.
FPF also selected a paper for the Student Paper Award: Decoding Consent Managers under the Digital Personal Data Protection Act, 2023: Empowerment Architecture, Business Models and Incentive Alignment by Aditya Sushant Jain of O.P. Jindal Global University – Jindal Global Law School.
Winning papers were selected based on the strength of their research and of their proposed policy solutions for policymakers and regulators in the U.S. and abroad.
The 2026 Privacy Papers for Policymakers Awards will take place over two virtual events on March 4 and 11. Attendance is free, and registration is open to the public. Find more information and register for the March 4 webinar here, and for the March 11 webinar here.