Winning Privacy Papers to Be Honored at the Future of Privacy Forum’s 15th Annual Privacy Papers for Policymakers Event
The Future of Privacy Forum’s 15th Annual Privacy Papers for Policymakers Award Recognizes Influential Privacy Research
February 3, 2025 — Today, the Future of Privacy Forum (FPF) — a global non-profit focused on data protection and headquartered in Washington, D.C. — announced the winners of its 15th annual Privacy Papers for Policymakers (PPPM) Awards.
The PPPM Awards recognize leading U.S. and international privacy scholarship relevant to policymakers in the U.S. Congress, at federal agencies, and at international data protection authorities. Six winning papers, two honorable mentions, one winning student paper, and a student honorable mention were selected by a diverse group of leading academics, advocates, and industry privacy professionals from FPF’s Advisory Board.
Authors of the papers will have the opportunity to showcase their work at the Privacy Papers for Policymakers ceremony on March 12, in conversations with discussants including James Cooper, Professor of Law and Director of the Program on Economics & Privacy, Antonin Scalia Law School, George Mason University; Jennifer Huddleston, Senior Fellow in Technology Policy, Cato Institute; and Brenda Leong, Director, AI Division, ZwillGen.
“Data protection and artificial intelligence regulations are increasingly at the forefront of global policy conversations,” said FPF CEO Jules Polonetsky. “And it’s important to recognize the academic research that explores the nuances surrounding data privacy, data protection, and artificial intelligence issues. Our award winners have explored these complex areas, to the benefit of us all.”
FPF’s 2025 Privacy Papers for Policymakers Award winners are:
- Authoritarian Privacy by Mark Jia, Georgetown University Law Center
- Privacy laws are traditionally associated with democracy. Yet autocracies increasingly have them. Why do governments that repress their citizens also protect their privacy? This Article answers this question through a study of China. China is a leading autocracy and the architect of a massive surveillance state. But China is also a major player in data protection, having enacted and enforced a number of laws on information privacy. To explain how this came to be, the Article first discusses several top-down objectives often said to motivate China’s privacy laws: advancing its digital economy, expanding its global influence, and protecting its national security. Although each has been a factor in China’s turn to privacy law, even together, they tell only a partial story. Through privacy law, China’s leaders have sought to interpose themselves as benevolent guardians of privacy rights against other intrusive actors—individuals, firms, and even state agencies and local governments. This Article adds to our understanding of privacy law, complicates the relationship between privacy and democracy, and points toward a general theory of authoritarian privacy.
- The Great Scrape: The Clash Between Scraping and Privacy by Daniel J. Solove, George Washington University Law School, and Woodrow Hartzog, Boston University School of Law and Stanford Law School Center for Internet and Society
- Artificial intelligence (AI) systems depend on massive quantities of data, often gathered by “scraping” – the automated extraction of large amounts of data from the internet. A great deal of scraped data is about people. This personal data provides the grist for AI tools such as facial recognition, deep fakes, and generative AI. Although scraping enables web searching, archival, and meaningful scientific research, scraping for AI can also be objectionable or even harmful to individuals and society. Organizations are scraping at an escalating pace and scale, even though many privacy laws are seemingly incongruous with the practice. In this Article, we contend that scraping must undergo a serious reckoning with privacy law. Scraping has evaded a reckoning with privacy law largely because scrapers act as if all publicly available data were free for the taking. But the public availability of scraped data shouldn’t give scrapers a free pass. Privacy law regularly protects publicly available data, and privacy principles are implicated even when personal data is accessible to others. This Article explores the fundamental tension between scraping and privacy law.
- Mirror, Mirror, on the Wall, Who’s the Fairest of Them All? by Alice Xiang, Global Head of AI Ethics, Sony Group Corporation and Lead Research Scientist, Sony AI
- Debates in AI ethics often hinge on comparisons between AI and humans: which is more beneficial, which is more harmful, which is more biased, the human or the machine? These questions, however, are a red herring. They ignore what is most interesting and important about AI ethics: AI is a mirror. If a person standing in front of a mirror asked you, “Who is more beautiful, me or the person in the mirror?” the question would seem ridiculous. Sure, depending on the angle, lighting, and personal preferences of the beholder, the person or their reflection might appear more beautiful, but the question is moot. AI reflects patterns in our society, just and unjust, and the worldviews of its human creators, fair or biased. The question then is not which is fairer, the human or the machine, but what can we learn from this reflection of our society and how can we make AI fairer? This essay discusses the challenges of developing fairer AI and how they stem from this reflective property.
- The Overton Window and Privacy Enforcement by Alicia Solow-Niederman, George Washington University Law School
- On paper, the Federal Trade Commission’s consumer protection authority seems straightforward: the agency is empowered to investigate and prevent unfair or deceptive acts or practices. This flexible and capacious authority, coupled with the agency’s jurisdiction over the entire economy, has allowed the FTC to respond to privacy challenges both online and offline. The contemporary question is whether the FTC can draw on this same authority to curtail the data-driven harms of commercial surveillance or emerging technologies like artificial intelligence. This Essay contends that the legal answer is yes and argues that the key determinants of whether an agency like the Federal Trade Commission will be able to confront emerging digital technologies are social, institutional, and political. Specifically, it proposes that the FTC’s privacy enforcement occurs within an “Overton Window of Enforcement Possibility.”
- Personhood Credentials: Artificial intelligence and the value of privacy-preserving tools to distinguish who is real online by Steven Adler et al.
- Anonymity is an important principle online. However, malicious actors have long used misleading identities to conduct fraud, spread disinformation, and carry out other deceptive schemes. With the advent of increasingly capable AI, bad actors can amplify the potential scale and effectiveness of their operations, intensifying the challenge of balancing anonymity and trustworthiness online. In this paper, we analyze the value of a new tool to address this challenge: “personhood credentials” (PHCs), digital credentials that empower users to demonstrate that they are real people—not AIs—to online services, without disclosing any personal information. After surveying the benefits of personhood credentials, we also examine deployment risks and design challenges. We conclude with actionable next steps for policymakers, technologists, and standards bodies to consider in consultation with the public.
- Navigating Demographic Measurement for Fairness and Equity by Miranda Bogen, Director, CDT AI Governance Lab
- Governments and policymakers increasingly expect practitioners developing and using AI systems in both consumer and public sector settings to proactively identify and address bias or discrimination that those AI systems may reflect or amplify. Central to this effort is the complex and sensitive task of obtaining demographic data to measure fairness and bias within and surrounding these systems. This report provides methodologies, guidance, and case studies for those undertaking fairness and equity assessments, from approaches that involve more direct access to data to ones that don’t expand data collection. Practitioners are guided through the first phases of demographic measurement efforts, including determining the relevant lens of analysis, selecting which demographic characteristics to consider, and navigating how to home in on relevant sub-communities. The report then delves into several approaches to uncover demographic patterns.
In addition to the winning papers, FPF selected two papers for Honorable Mentions: The Law of AI for Good by Orly Lobel, University of San Diego School of Law; and Aligning Algorithmic Risk Assessment Values with Criminal Justice Values by Dennis D. Hirsch, Angie Westover-Munoz, Christopher B. Yaluma, and Jared Ott from The Ohio State University – Moritz College of Law.
FPF also selected a paper for the Student Paper Award: Data Subjects’ Reactions to Exercising Their Right of Access by Arthur Borem, Elleen Pan, Olufunmilola Obielodan, Aurelie Roubinowitz, Luca Dovichi, and Blase Ur at the University of Chicago; and Michelle L. Mazurek from the University of Maryland. A Student Paper Honorable Mention went to Artificial Intelligence is like a Perpetual Stew by Nathan Reitinger, University of Maryland – Department of Computer Science.
Winning papers were selected based on the strength of their research and of the policy solutions they propose for policymakers and regulators in the U.S. and abroad.
The Privacy Papers for Policymakers Award event will be held on March 12, 2025, at FPF’s offices in Washington, D.C. The event is free and registration is open to the public.
###
About Future of Privacy Forum (FPF)
The Future of Privacy Forum (FPF) is a global non-profit organization that brings together academics, civil society, government officials, and industry to evaluate the societal, policy, and legal implications of data use, identify the risks, and develop appropriate protections.
FPF believes technology and data can benefit society and improve lives if the right laws, policies, and rules are in place. FPF has offices in Washington, D.C., Brussels, and Singapore. Learn more at fpf.org.