14th Annual Privacy Papers for Policymakers

FREE (In-Person Only) February 27, 2024 @ 5:00pm ET

Overview

FPF is excited to announce the 14th Annual Privacy Papers for Policymakers winners and in-person award ceremony! The award recognizes leading privacy scholarship that is relevant to policymakers in the U.S. Congress, at U.S. federal agencies, and at international data protection authorities.

Washington, D.C. – U.S. Capitol Visitor’s Center, Room SVC 201-00

About the Privacy Papers for Policymakers Award

The selected papers highlight important work that analyzes current and emerging privacy issues and proposes achievable short-term solutions or new means of analysis that could lead to real-world policy solutions.

From the many nominated papers, the winning papers were selected by a diverse team of academics, advocates, and industry privacy professionals from FPF’s Advisory Board. This year’s papers explore less discriminatory algorithms, equity-aware data privacy methods, AI audits, regulating sensitive data based on harm and risk, algorithmic predictions, and privacy risks from large language model inference. The winning papers were ultimately selected because they contain solutions that are relevant for policymakers in the U.S. and abroad. To learn more about the submission and review process, read our Call for Nominations.

About the Privacy Papers for Policymakers Event

The winning authors will join FPF on Capitol Hill to present their work at an in-person-only event with policymakers from around the world, academics, and industry privacy professionals. We are pleased to announce that Senator Peter Welch will deliver this year’s opening keynote remarks.

The event will be held on February 27, 2024, at the U.S. Capitol Visitor’s Center, Room SVC 201-00. The event is free and open to the general public; however, attendees who have not registered in advance will not be granted entry to the event room by the Senate Appointments Desk. Register for this event by clicking here!

Thank you to Honorary Co-Hosts Congresswoman Diana DeGette, Co-Chair of the Congressional Privacy Caucus, and Senator Ed Markey.

To learn more about the 13th Annual Privacy Papers for Policymakers, click here.

About the Winning Papers

The winners of the 14th Annual Privacy Papers for Policymakers Award are listed below. To learn more about the papers, judges, and authors, download the 2023 PPPM Digest.  

Agenda

5:30 pm – 5:40 pm ET

Welcome Remarks

Jordan Francis, Elise Berkower Memorial Fellow, Future of Privacy Forum

John Verdi, Senior Vice President for Policy, Future of Privacy Forum

5:40 pm – 6:00 pm ET

Opening Keynote Address

U.S. Senator Peter Welch (D-VT)

6:00 pm – 6:15 pm ET

Less Discriminatory Algorithms

Entities that use algorithmic systems in traditional civil rights domains like housing, employment, and credit should have a duty to search for and implement less discriminatory algorithms (LDAs). Why? Work in computer science has established that, contrary to conventional wisdom, for a given prediction problem there are almost always multiple possible models with equivalent performance, a phenomenon termed model multiplicity. Critically for our purposes, different models of equivalent performance can produce different predictions for the same individual and, in aggregate, exhibit different levels of impact across demographic groups. As a result, when an algorithmic system displays a disparate impact, model multiplicity suggests that developers may be able to discover an alternative model that performs equally well but has less discriminatory impact. Without dedicated exploration, however, it is unlikely that developers will discover potential LDAs.
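The following hypothetical Python sketch illustrates the model multiplicity idea in miniature; it is not drawn from the paper, and the function names, tolerance, disparity metric, and synthetic data are all assumptions made for illustration. It fits several candidate models and, among those with near-identical accuracy, keeps the one with the smallest gap in positive-prediction rates between demographic groups:

    # Illustrative sketch only; not the authors' method.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def selection_rate_gap(y_pred, group):
        # Absolute difference in positive-prediction rates between two groups.
        return abs(y_pred[group == 1].mean() - y_pred[group == 0].mean())

    def search_for_lda(X, y, group, candidates, accuracy_tolerance=0.01):
        # Fit every candidate; among those within `accuracy_tolerance` of the
        # best accuracy, return the model with the smallest disparity.
        scored = []
        for model in candidates:
            pred = model.fit(X, y).predict(X)
            scored.append(((pred == y).mean(), selection_rate_gap(pred, group), model))
        best_accuracy = max(acc for acc, _, _ in scored)
        eligible = [s for s in scored if s[0] >= best_accuracy - accuracy_tolerance]
        return min(eligible, key=lambda s: s[1])[2]

    # Toy data: the label is correlated with a protected attribute.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 5))
    group = rng.integers(0, 2, size=500)
    y = (X[:, 0] + 0.5 * group + rng.normal(size=500) > 0).astype(int)

    # Candidate models of comparable accuracy that differ only in regularization.
    candidates = [LogisticRegression(C=c, max_iter=1000) for c in (0.01, 0.1, 1.0, 10.0)]
    least_disparate = search_for_lda(X, y, group, candidates)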

Model multiplicity has profound ramifications for the legal response to discriminatory algorithms. Under disparate impact doctrine, it makes little sense to say that a given algorithmic system used by an employer, creditor, or housing provider is either “justified” or “necessary” if an equally accurate model that exhibits less disparate effect is available and possible to discover with reasonable effort. As a result, the law should place a duty of a reasonable search for LDAs on entities that develop and deploy predictive models in covered civil rights domains. The law should recognize this duty in at least two specific ways. First, under disparate impact doctrine, a defendant’s burden of justifying a model with discriminatory effects should be recognized to include showing that it made a reasonable search for LDAs before implementing the model. Second, new regulatory frameworks for the governance of algorithms should include a requirement that entities search for and implement LDAs as part of the model building process.

Presenting Co-Authors

  • Emily Black, Columbia University
  • Logan Koepke, Upturn

Discussant

  • Michael Akinwumi, National Fair Housing Alliance

6:15 pm – 6:30 pm ET

Do No Harm Guide: Applying Equity Awareness in Data Privacy Methods

Researchers and organizations can increase privacy in datasets through methods such as aggregating, suppressing, or substituting random values. But these means of protecting individuals’ information do not always equally affect the groups of people represented in the data. A published dataset might ensure the privacy of people who make up the majority of the dataset but fail to ensure the privacy of those in smaller groups. Or, after undergoing alterations, the data may be more useful for learning about some groups than others. How entities protect data can have varying effects on marginalized and underrepresented groups of people.
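The following hypothetical Python sketch is illustrative only and is not drawn from the guide; the group names, counts, noise scale, and suppression threshold are assumptions. It shows how a single disclosure-protection step, adding the same amount of noise to every group’s count and suppressing small cells, can leave large groups nearly untouched while distorting or erasing the smallest groups:

    # Illustrative sketch only; hypothetical counts and thresholds.
    import numpy as np

    rng = np.random.default_rng(42)
    group_counts = {"Group A": 12_000, "Group B": 800, "Group C": 35}

    noise_scale = 20.0  # the same Laplace noise scale is applied to every cell
    for name, count in group_counts.items():
        noisy = count + rng.laplace(scale=noise_scale)
        relative_error = abs(noisy - count) / count
        suppressed = count < 100  # a typical-style small-cell suppression rule
        print(f"{name}: true={count}, noisy={noisy:.0f}, "
              f"relative error={relative_error:.1%}, suppressed={suppressed}")

For the large group, the relative error is negligible; for the smallest group, the same protection step produces a large relative error and removes the group from the published table altogether.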

To understand the current state of ideas, we completed a literature review of equity-focused work in statistical data privacy (SDP) and conducted interviews with nine experts on privacy-preserving methods and data sharing. These experts include researchers and practitioners from academia, government, and industry sectors with diverse technical backgrounds. We offer an illustrative example to highlight potential disparities that can result from applying SDP methods. We develop an equitable data privacy workflow that privacy practitioners and decision makers can utilize to explicitly make equity part of the standard data privacy process.

Presenting Co-Authors

  • Claire McKay Bowen, Urban Institute
  • Joshua Snoke, RAND Corporation

Discussant

  • Miranda Bogen, Center for Democracy & Technology

6:30 pm – 6:45 pm ET

AI Audits: Who, When, How…Or Even If?

Artificial intelligence (AI) tools are increasingly being integrated into decision-making processes in high-risk settings, including employment, credit, health care, housing, and law enforcement. Given the harms that poorly designed systems can lead to, including matters of life and death, there is a growing sense that crafting policies for using AI responsibly must necessarily include, at a minimum, assurances about the technical accuracy and reliability of the model design.

Because AI auditing is still in its early stages, many questions remain about how best to conduct audits. While many people are optimistic that valid and effective best-practice standards and procedures will emerge, some civil rights advocates are skeptical of both the concept and the practical use of AI audits. These critics are reasonably concerned about audit-washing: bad actors gaming loopholes and ambiguities in audit requirements to demonstrate compliance without actually providing meaningful reviews.

This chapter aims to explain why AI audits often are regarded as essential tools within an overall responsible governance system and how they are evolving toward accepted standards and best practices. We will focus most of our analysis on these explanations, including recommendations for conducting high-quality AI audits. Nevertheless, we will also articulate the core ideas of the skeptical civil rights position. This intellectually and politically sound view should be taken seriously by the AI community. To be well-informed about AI audits is to comprehend their positive prospects and be prepared to address their most serious challenges.

Presenting Co-Authors

  • Brenda Leong, Luminos.Law
  • Albert Fox Cahn, Surveillance Technology Oversight Project (S.T.O.P.)

Discussant

  • Edgar Rivas, Office of U.S. Senator John Hickenlooper (D-CO)

6:45 pm – 6:55 pm ET

Data Is What Data Does: Regulating Based on Harm and Risk Instead of Sensitive Data

Heightened protection for sensitive data is trendy in privacy laws. Originating in EU data protection law, sensitive data singles out certain categories of personal data for extra protection. Commonly recognized special categories include racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, health, sexual orientation and sex life, and biometric and genetic data.

Although heightened protection for sensitive data appropriately recognizes that not all situations involving personal data should be protected uniformly, the sensitive data approach is a dead end. The sensitive data categories are arbitrary and lack any coherent theory for identifying them. The borderlines of many categories are so blurry that they are useless. Moreover, it is easy to use nonsensitive data as a proxy for certain types of sensitive data.

With Big Data and powerful machine learning algorithms, most nonsensitive data give rise to inferences about sensitive data. In many privacy laws, data giving rise to inferences about sensitive data is also protected as sensitive data. Arguably, then, nearly all personal data can be sensitive, and the sensitive data categories can swallow up everything. As a result, most organizations are currently processing a vast amount of data in violation of the laws.
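The following hypothetical Python sketch is illustrative only and is not drawn from the Article; the features, data, and model are synthetic assumptions. It shows the proxy and inference problem described above: a simple model trained only on “nonsensitive” features can recover a sensitive attribute well above chance.

    # Illustrative sketch only; synthetic data standing in for real records.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(7)
    n = 2000
    sensitive = rng.integers(0, 2, size=n)  # hypothetical sensitive attribute
    # "Nonsensitive" features that merely correlate with the sensitive attribute.
    nonsensitive = np.column_stack([
        sensitive + rng.normal(scale=1.0, size=n),        # e.g., purchase pattern
        0.5 * sensitive + rng.normal(scale=1.0, size=n),  # e.g., location signal
        rng.normal(size=n),                               # unrelated noise
    ])

    X_train, X_test, y_train, y_test = train_test_split(
        nonsensitive, sensitive, random_state=0)
    clf = LogisticRegression().fit(X_train, y_train)
    print(f"Accuracy inferring the sensitive attribute: {clf.score(X_test, y_test):.0%}")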

This Article argues that the problems with sensitive data make the approach unworkable and counterproductive, and that they expose a deeper flaw at the root of many privacy laws. These laws make a fundamental conceptual mistake: they embrace the idea that the nature of personal data is a sufficiently useful focal point. But nothing meaningful for regulation can be determined solely by looking at the data itself. Data is what data does.

To be effective, privacy law must focus on harm and risk rather than on the nature of personal data. Privacy protections should be proportionate to the harm and risk involved with the data collection, use, and transfer.

Presenting Author

  • Daniel Solove, The George Washington University Law School

Discussant

  • Didier Barjon, Office of U.S. Senate Majority Leader Charles E. Schumer

6:55 pm – 7:05 pm ET

The Prediction Society: Algorithms and the Problems of Forecasting the Future

Today’s predictions are produced by machine learning algorithms that analyze massive quantities of data, and increasingly, important decisions about people are being made based on these predictions.

Algorithmic predictions are a type of inference. Many laws struggle to account for inferences, and even when they do, the laws lump all inferences together. But predictions are different from other inferences and raise several unique problems. (1) Algorithmic predictions create a fossilization problem because they reinforce patterns in past data and can further solidify bias and inequality from the past. (2) Algorithmic predictions often raise an unfalsifiability problem. Predictions involve an assertion about future events. Until these events happen, predictions remain unverifiable, resulting in an inability for individuals to challenge them as false. (3) Algorithmic predictions can involve a preemptive intervention problem, where decisions or interventions render it impossible to determine whether the predictions would have come true. (4) Algorithmic predictions can lead to a self-fulfilling prophecy problem where they actively shape the future they aim to forecast.

More broadly, the rise of algorithmic predictions raises an overarching concern: Algorithmic predictions not only forecast the future but also have the power to create and control it. The increasing pervasiveness of decisions based on algorithmic predictions is leading to a prediction society where individuals’ ability to author their own future is diminished while the organizations developing and using predictive systems are gaining greater power to shape the future.

Data protection and privacy laws do not adequately address these problems. Many laws lack a temporal dimension and do not distinguish between predictions about the future and inferences about the past or present. We argue that the use of algorithmic predictions is a distinct issue warranting different treatment from other types of inference.

Presenting Co-Authors

  • Daniel Solove, The George Washington University Law School
  • Hideyuki Matsumi, Vrije Universiteit Brussel

Discussant

  • Didier Barjon, Office of U.S. Senate Majority Leader Charles E. Schumer

7:05 pm – 7:20 pm ET

Beyond Memorization: Violating Privacy Via Inference with Large Language Models

Current privacy research on large language models (LLMs) primarily focuses on the issue of extracting memorized training data. At the same time, models’ inference capabilities have increased drastically. This raises the key question of whether current LLMs could violate individuals’ privacy by inferring personal attributes from text given at inference time.

In this work, we present the first comprehensive study on the capabilities of pretrained LLMs to infer personal attributes from text. We construct a dataset consisting of real Reddit profiles, and show that current LLMs can infer a wide range of personal attributes (e.g., location, income, sex), achieving up to 85% top-1 and 95% top-3 accuracy at a fraction of the cost (100×) and time (240×) required by humans. As people increasingly interact with LLM-powered chatbots across all aspects of life, we also explore the emerging threat of privacy-invasive chatbots trying to extract personal information through seemingly benign questions.

Finally, we show that common mitigations, i.e., text anonymization and model alignment, are currently ineffective at protecting user privacy against LLM inference. Our findings highlight that current LLMs can infer personal data at a previously unattainable scale. In the absence of working defenses, we advocate for a broader discussion around LLM privacy implications beyond memorization, striving for wider privacy protection.

Presenting Co-Authors

  • Robin Staab, ETH Zurich SRI Lab
  • Mislav Balunovic, ETH Zurich SRI Lab

Discussant

  • Alicia Solow-Niederman, George Washington University Law School

7:20 pm – 7:25 pm ET

Closing Remarks

7:30 pm – 8:30 pm ET

Food & Wine Reception

Speakers

Jordan Francis

Elise Berkower Memorial Fellow, FPF

Jordan Francis is the Elise Berkower Memorial Fellow at the Future of Privacy Forum (FPF). As a member of FPF’s U.S. Legislation team, Jordan supports research and independent analysis concerning federal, state, and local privacy laws and regulations.

Prior to FPF, Jordan worked as a legal research fellow with the Cordell Institute for Policy in Medicine & Law at Washington University in St. Louis. In that role, Jordan wrote scholarship, submitted comments on regulatory initiatives, and developed model legislation concerning the intersection of data privacy, digital trust, and loyalty.

Jordan earned his J.D. from the University of Minnesota Law School in 2022, completing the Intellectual Property and Technology Law Concentration. While at Minnesota Law, Jordan cofounded the Privacy, Cybersecurity, and Technology Law Association, served as a managing editor of the Minnesota Law Review, and was awarded an IAPP Westin Scholar Award. Jordan earned his B.S. from the University of Wisconsin-Madison, where he double-majored in mathematics and economics.

Alan Raul

Board Member, FPF

Alan Raul is the founder and leader of Sidley’s highly ranked Privacy, Data Security and Information Law practice. He represents companies on federal, state, and international privacy issues, including global data protection and compliance programs, data breaches, cybersecurity, consumer protection issues, and Internet law. Alan’s practice involves litigation and counseling regarding consumer class actions; FTC, State Attorney General, Department of Justice, and other government investigations; enforcement actions; and regulation. Alan also provides clients with perspective gained from extensive government service. He previously served as Vice Chairman of the White House Privacy and Civil Liberties Oversight Board, General Counsel of the Office of Management and Budget and of the U.S. Department of Agriculture, and Associate Counsel to the President.

Alan serves as a member of the Privacy, Intellectual Property, Technology, and Antitrust Litigation Advisory Committee of the National Chamber Litigation Center (affiliated with the U.S. Chamber of Commerce). Alan also serves on the American Bar Association’s Cybersecurity Legal Task Force by appointment of the ABA President, and the Practicing Law Institute’s (PLI) Privacy Law Advisors Group.

John Verdi

Senior Vice President for Policy, FPF

John Verdi is Senior Vice President for Policy at the Future of Privacy Forum (FPF). John supervises FPF’s policy portfolio, which advances FPF’s agenda on a broad range of issues, including: Artificial Intelligence & Machine Learning; Algorithmic Decision-Making; Ethics; Connected Cars; Smart Communities; Student Privacy; Health; the Internet of Things; Wearable Technologies; De-Identification; and Drones.

John previously served as Director of Privacy Initiatives at the National Telecommunications and Information Administration, where he crafted policy recommendations for the US Department of Commerce and President Obama regarding technology, trust, and innovation. John led NTIA’s privacy multistakeholder process, which established best practices regarding unmanned aircraft systems, facial recognition technology, and mobile apps. Prior to NTIA, he was General Counsel for the Electronic Privacy Information Center (EPIC), where he oversaw EPIC’s litigation program. John earned his J.D. from Harvard Law School and his B.A. in Philosophy, Politics, and Law from SUNY-Binghamton.

Peter Welch

U.S. Senator from Vermont

Senator Peter Welch has spent his life working to improve the lives of folks who too often get left behind. Peter was born in Springfield, Massachusetts to Edward and Mary Welch, a dentist and a homemaker. Peter observed from a young age that his father treated folks in the nearby jail the same way he treated the priests and nuns. His mother raised Peter and his siblings while helping keep the books of his father’s business.

Peter attended the College of the Holy Cross in Worcester, Massachusetts. Before he finished his degree, Peter left school in 1969 to hitchhike to Chicago and organize against housing discrimination. After returning to school and completing his degree, he received his law degree from the University of California, Berkeley, in 1973. After law school, Peter settled in White River Junction, Vermont, becoming one of the county’s first public defenders for low-income folks. He then founded a small law practice, winning high-profile cases to restore retirement benefits for workers, protect property rights for the elderly, and more.

He was first elected to the Vermont State Senate in 1980. Within five years, he was unanimously elected by his colleagues to lead the chamber, becoming the first Democrat in Vermont history to hold the position of President Pro Tempore.

In 2006, Peter ran for and won Vermont’s sole seat in the U.S. House of Representatives in a race that received national attention as the only contested congressional race in the country in which both candidates refused to air negative ads. Peter has worked throughout his career both to defend progressive values and to find bipartisan compromises. He approaches his work in the Capitol the same way Vermonters do back home, focusing on getting the job done rather than on getting the credit.

In the House, Peter worked across the aisle to lower costs for working families. He’s led on legislation to advance green technologies, bring down the cost of prescription drugs, expand broadband and telemedicine in rural America, and more.

After Senator Patrick Leahy decided in 2022 to retire following more than four decades of service, Peter ran for and won the election to become Vermont’s next Senator. In this new role, he has continued his work to lower costs for working families, combat the impacts of climate change, and invest in rural America.

Location

U.S. Capitol Visitor's Center, Room SVC 201-00