15th Annual Privacy Papers for Policymakers

FREE (In-Person Only) March 12, 2025 @ 5:30 pm ET

Overview

FPF is excited to announce the 15th Annual Privacy Papers for Policymakers winners and in-person award ceremony! The award recognizes leading privacy scholarship that is relevant to policymakers in the U.S. Congress, at U.S. federal agencies, and at international data protection authorities.

About the Privacy Papers for Policymakers Award

The selected papers highlight important work that analyzes current and emerging privacy issues and proposes achievable short-term solutions or new means of analysis that could lead to real-world policy solutions.

From the many nominated papers, the winners were selected by a diverse team of academics, advocates, and industry privacy professionals from FPF’s Advisory Board. The papers were ultimately chosen because they contain solutions that are relevant for policymakers in the U.S. and abroad. To learn more about the submission and review process, read our Call for Nominations.

About the Privacy Papers for Policymakers Event

The winning authors will join FPF to present their work at an in-person-only event with policymakers from around the world, academics, and industry privacy professionals. We will announce our keynote speaker shortly.

The event will be held on Wednesday, March 12, 2025, at FPF Headquarters, 1350 I Street NW, Washington, D.C. 20005. This event is free and open to the general public. Register for this event by clicking here!

To learn more about the 14th Annual Privacy Papers for Policymakers, click here.

About the Winning Papers

The winners of the 15th Annual Privacy Papers for Policymakers Award are listed below. To learn more about the papers, judges, and authors, check back for our 2024 PPPM Digest, coming soon.

Authoritarian Privacy, by Mark Jia

Mirror, Mirror, on the Wall, Who’s the Fairest of Them All?, by Alice Xiang

Navigating Demographic Measurement for Fairness and Equity, by Miranda Bogen

Personhood credentials: Artificial intelligence and the value of privacy-preserving tools to distinguish who is real online, by Steven Adler and Zoë Hitzig

The Great Scrape: The Clash Between Scraping and Privacy, by Daniel J. Solove and Woodrow Hartzog

The Overton Window and Privacy Enforcement, by Alicia Solow-Niederman

Agenda

5:30 pm – 5:40 pm ET

Welcome Remarks

Daniel Hales, Policy Fellow, Future of Privacy Forum

5:40 pm – 6:00 pm ET

Opening Keynote Address

TBA

6:00 pm – 6:15 pm ET

Authoritarian Privacy

Privacy laws are traditionally associated with democracy. Yet autocracies increasingly have them. Why do governments that repress their citizens also protect their privacy? This Article answers this question through a study of China. China is a leading autocracy and the architect of a massive surveillance state. But China is also a major player in data protection, having enacted and enforced a number of laws on information privacy. To explain how this came to be, the Article first discusses several top-down objectives often said to motivate China’s privacy laws: advancing its digital economy, expanding its global influence, and protecting its national security. Although each has been a factor in China’s turn to privacy law, even together, they tell only a partial story.

Presenting Author

  • Mark Jia, Georgetown Law

Discussant

  • TBA

6:15 pm – 6:30 pm ET

Mirror, Mirror, on the Wall, Who’s the Fairest of Them All?

Debates in AI ethics often hinge on comparisons between AI and humans: which is more beneficial, which is more harmful, which is more biased, the human or the machine? These questions, however, are a red herring. They ignore what is most interesting and important about AI ethics: AI is a mirror. If a person standing in front of a mirror asked you, “Who is more beautiful, me or the person in the mirror?” the question would seem ridiculous. Sure, depending on the angle, lighting, and personal preferences of the beholder, the person or their reflection might appear more beautiful, but the question is moot. AI reflects patterns in our society, just and unjust, and the worldviews of its human creators, fair or biased. The question then is not which is fairer, the human or the machine, but what can we learn from this reflection of our society and how can we make AI fairer? This essay discusses the challenges to developing fairer AI, and how they stem from this reflective property.

Presenting Author

  • Alice Xiang, Sony

Discussant

  • TBA

6:30 pm – 6:45 pm ET

Navigating Demographic Measurement for Fairness and Equity

Governments and policymakers increasingly expect practitioners developing and using AI systems in both consumer and public sector settings to proactively identify and address bias or discrimination that those AI systems may reflect or amplify. Central to this effort is the complex and sensitive task of obtaining demographic data to measure fairness and bias within and surrounding these systems. This report provides methodologies, guidance, and case studies for those undertaking fairness and equity assessments — from approaches that involve more direct access to data to ones that don’t expand data collection. Practitioners are guided through the first phases of demographic measurement efforts, including determining the relevant lens of analysis, selecting what demographic characteristics to consider, and navigating how to home in on relevant sub-communities. The report then explores a variety of approaches to uncover demographic patterns and responsibly handle demographic data. While there is no one-size-fits-all solution, the report makes clear that the lack of obvious access to raw demographic data should not be considered an insurmountable barrier to assessing AI systems for fairness, nor should it provide a blanket justification for widespread or incautious data collection efforts.

Presenting Author

  • Miranda Bogen, Center for Democracy & Technology

Discussant

  • TBA

6:45 pm – 7:00 pm ET

Personhood credentials: Artificial intelligence and the value of privacy-preserving tools to distinguish who is real online

Anonymity is an important principle online. However, malicious actors have long used misleading identities to conduct fraud, spread disinformation, and carry out other deceptive schemes. With the advent of increasingly capable AI, bad actors can amplify the potential scale and effectiveness of their operations, intensifying the challenge of balancing anonymity and trustworthiness online. In this paper, we analyze the value of a new tool to address this challenge: “personhood credentials” (PHCs), digital credentials that empower users to demonstrate that they are real people—not AIs—to online services, without disclosing any personal information. Such credentials can be issued by a range of trusted institutions—governments or otherwise. A PHC system, according to our definition, could be local or global, and does not need to be biometrics-based. Two trends in AI contribute to the urgency of the challenge: AI’s increasing indistinguishability from people online (i.e., lifelike content and avatars, agentic activity), and AI’s increasing scalability (i.e., cost-effectiveness, accessibility). Drawing on a long history of research into anonymous credentials and “proof-of-personhood” systems, personhood credentials give people a way to signal their trustworthiness on online platforms, and offer service providers new tools for reducing misuse by bad actors. By contrast, existing countermeasures to automated deception—such as CAPTCHAs—are inadequate against sophisticated AI, while stringent identity verification solutions are insufficiently private for many use cases. After surveying the benefits of personhood credentials, we also examine deployment risks and design challenges. We conclude with actionable next steps for policymakers, technologists, and standards bodies to consider in consultation with the public.

Presenting Co-Authors

  • Zoë Hitzig, OpenAI
  • Shrey Jain, Microsoft

Discussant

  • Brenda Leong, ZwillGen

7:00 pm – 7:15 pm ET

The Great Scrape: The Clash Between Scraping and Privacy

Artificial intelligence (AI) systems depend on massive quantities of data, often gathered by “scraping” – the automated extraction of large amounts of data from the internet. A great deal of scraped data is about people. This personal data provides the grist for AI tools such as facial recognition, deep fakes, and generative AI. Although scraping enables web searching, archival, and meaningful scientific research, scraping for AI can also be objectionable or even harmful to individuals and society.

Presenting Author

  • Daniel Solove, The George Washington University Law School

Discussant

  • Jennifer Huddleston, Cato Institute

7:15 pm – 7:30 pm ET

The Overton Window and Privacy Enforcement

On paper, the FTC’s consumer protection authority seems straightforward: the agency is to investigate and prevent unfair or deceptive acts or practices. The contemporary question is whether the FTC can draw on this authority to curtail data-driven harms.

This Essay contends that the legal answer is yes and argues that the key determinants of whether the FTC will be able to confront emerging digital technologies are social, institutional, and political. It proposes that the FTC’s privacy enforcement occurs within an “Overton Window of Enforcement Possibility.” Picture the FTC Act’s legal standards as setting forth a range of lawful enforcement behavior for the agency. Within this lawful space, just as a politician’s “Overton Window of Political Possibility” will not include every possible policy option, the FTC’s Window will not include every possible enforcement option. Rather, the space within which the agency might operate will be sharply informed by four critical forces: social norms; institutional norms within the agency; the courts; and Congress.

This approach highlights how an agency’s enforcement actions do not occur in a rigidly fixed domain; rather, they unfold within a dynamic space that is subject to forces both inside and outside of the agency. Understanding enforcement as a process in this way reveals a sobering lesson for proposed federal legislation that seeks to endow new or existing agencies with new regulatory authority: Without a sufficiently large Window within which an agency can operate, theoretical grants of power will have little real-world impact. But this framing can be empowering. It reveals strategies for administrative officials who seek to exercise their enforcement authority, such as attempting to ground more progressive or novel actions in topics with thick social consensus. And it creates space for policy interventions to account for institutional design and the practical realities that an agency confronts over time.

Presenting Author

  • Alicia Solow-Niederman, The George Washington University Law School

Discussant

  • James Cooper, Antonin Scalia Law School, George Mason University

7:30 pm – 7:35 pm ET

Closing Remarks

John Verdi, Senior Vice President for Policy, Future of Privacy Forum

7:35 pm – 8:30 pm ET

Food & Wine Reception

Speakers

Daniel Hales

Policy Fellow for Youth & Education Privacy, FPF

Daniel Hales is a Policy Fellow with the Youth and Education Privacy Team. His work contributes to FPF’s ongoing projects relating to education technology, legal research, legislative analysis, and student & youth privacy.

Prior to joining FPF, Daniel worked as a privacy analyst at a leadership development and human resources consulting firm, where he contributed to a cross-functional team tasked with operationalizing privacy practices across the organization. Before entering the privacy field, Daniel worked as a legislative aide for a member of the Virginia House of Delegates.

Daniel earned his Juris Doctor from the University of Richmond School of Law, where he focused on Compliance, Privacy Law, and Technology Regulation. During his J.D. program, Daniel was awarded the CALI Excellence for the Future Award in Information Privacy Law. He earned his Bachelor’s Degree in Political Science from Virginia Commonwealth University.

John Verdi

Senior Vice President for Policy, FPF

John Verdi is Senior Vice President for Policy at the Future of Privacy Forum (FPF). John supervises FPF’s policy portfolio, which advances FPF’s agenda on a broad range of issues, including: Artificial Intelligence & Machine Learning; Algorithmic Decision-Making; Ethics; Connected Cars; Smart Communities; Student Privacy; Health; the Internet of Things; Wearable Technologies; De-Identification; and Drones.

John previously served as Director of Privacy Initiatives at the National Telecommunications and Information Administration (NTIA), where he crafted policy recommendations for the U.S. Department of Commerce and President Obama regarding technology, trust, and innovation. John led NTIA’s privacy multistakeholder process, which established best practices regarding unmanned aircraft systems, facial recognition technology, and mobile apps. Prior to NTIA, he was General Counsel for the Electronic Privacy Information Center (EPIC), where he oversaw EPIC’s litigation program. John earned his J.D. from Harvard Law School and his B.A. in Philosophy, Politics, and Law from SUNY-Binghamton.

Location

Future of Privacy Forum Headquarters - 1350 I Street NW, Washington, D.C. 20005