5:30 pm – 5:40 pm ET
|
Welcome Remarks
|
Daniel Hales, Policy Fellow, Future of Privacy Forum
|
5:40 pm – 6:00 pm ET
|
Opening Keynote Address
|
TBA
|
6:00 pm – 6:15 pm ET
|
Authoritarian Privacy
Privacy laws are traditionally associated with democracy. Yet autocracies increasingly have them. Why do governments that repress their citizens also protect their privacy? This Article answers this question through a study of China. China is a leading autocracy and the architect of a massive surveillance state. But China is also a major player in data protection, having enacted and enforced a number of laws on information privacy. To explain how this came to be, the Article first discusses several top-down objectives often said to motivate China’s privacy laws: advancing its digital economy, expanding its global influence, and protecting its national security. Although each has been a factor in China’s turn to privacy law, even together, they tell only a partial story.
|
Presenting Author
Discussant
|
6:15 pm – 6:30 pm ET
|
Mirror, Mirror, on the Wall, Who’s the Fairest of Them All?
Debates in AI ethics often hinge on comparisons between AI and humans: which is more beneficial, which is more harmful, which is more biased, the human or the machine? These questions, however, are a red herring. They ignore what is most interesting and important about AI ethics: AI is a mirror. If a person standing in front of a mirror asked you, “Who is more beautiful, me or the person in the mirror?” the question would seem ridiculous. Sure, depending on the angle, lighting, and personal preferences of the beholder, the person or their reflection might appear more beautiful, but the question is moot. AI reflects patterns in our society, just and unjust, and the worldviews of its human creators, fair or biased. The question then is not which is fairer, the human or the machine, but what can we learn from this reflection of our society and how can we make AI fairer? This essay discusses the challenges of developing fairer AI and how they stem from this reflective property.
|
Presenting Author
Discussant
|
6:30 pm – 6:45 pm ET
|
Navigating Demographic Measurement for Fairness and Equity
Governments and policymakers increasingly expect practitioners developing and using AI systems in both consumer and public sector settings to proactively identify and address bias or discrimination that those AI systems may reflect or amplify. Central to this effort is the complex and sensitive task of obtaining demographic data to measure fairness and bias within and surrounding these systems. This report provides methodologies, guidance, and case studies for those undertaking fairness and equity assessments — from approaches that involve more direct access to data to ones that do not expand data collection. Practitioners are guided through the first phases of demographic measurement efforts, including determining the relevant lens of analysis, selecting what demographic characteristics to consider, and navigating how to home in on relevant sub-communities. The report then explores a variety of approaches to uncover demographic patterns and responsibly handle demographic data. While there is no one-size-fits-all solution, the report makes clear that the lack of obvious access to raw demographic data should not be considered an insurmountable barrier to assessing AI systems for fairness, nor should it provide a blanket justification for widespread or incautious data collection efforts.
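As a concrete illustration of the measurement step, the sketch below computes per-group selection rates and a demographic parity gap once demographic labels are in hand. It is a minimal example under stated assumptions, not a method from the report; the function names, the toy data, and the choice of metric are all illustrative.

```python
# Minimal sketch: measuring demographic parity once a demographic label
# is available for each record. All names and data here are illustrative,
# not drawn from the report.
from collections import defaultdict

def selection_rates(outcomes, groups):
    """Return the positive-outcome rate per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for outcome, group in zip(outcomes, groups):
        totals[group] += 1
        positives[group] += outcome  # outcome is 0 or 1
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(outcomes, groups):
    """Largest difference in selection rates between any two groups."""
    rates = selection_rates(outcomes, groups)
    return max(rates.values()) - min(rates.values())

# Toy data: 1 = favorable decision; groups are coarse labels.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
groups   = ["a", "a", "a", "b", "b", "b", "b", "b"]
print(selection_rates(outcomes, groups))        # {'a': 0.667, 'b': 0.4}
print(demographic_parity_gap(outcomes, groups)) # 0.267
```

Even this toy example shows why the report's first phases matter: the result depends entirely on which characteristics are labeled and how sub-communities are grouped.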
|
Presenting Author
- Miranda Bogen, Center for Democracy & Technology
Discussant
|
6:45 pm – 7:00 pm ET
|
Personhood credentials: Artificial intelligence and the value of privacy-preserving tools to distinguish who is real online
Anonymity is an important principle online. However, malicious actors have long used misleading identities to conduct fraud, spread disinformation, and carry out other deceptive schemes. With the advent of increasingly capable AI, bad actors can amplify the potential scale and effectiveness of their operations, intensifying the challenge of balancing anonymity and trustworthiness online. In this paper, we analyze the value of a new tool to address this challenge: “personhood credentials” (PHCs), digital credentials that empower users to demonstrate that they are real people—not AIs—to online services, without disclosing any personal information. Such credentials can be issued by a range of trusted institutions—governments or otherwise. A PHC system, according to our definition, could be local or global, and does not need to be biometrics-based. Two trends in AI contribute to the urgency of the challenge: AI’s increasing indistinguishability from people online (i.e., lifelike content and avatars, agentic activity), and AI’s increasing scalability (i.e., cost-effectiveness, accessibility). Drawing on a long history of research into anonymous credentials and “proof-of-personhood” systems, personhood credentials give people a way to signal their trustworthiness on online platforms, and offer service providers new tools for reducing misuse by bad actors. By contrast, existing countermeasures to automated deception—such as CAPTCHAs—are inadequate against sophisticated AI, while stringent identity verification solutions are insufficiently private for many use cases. After surveying the benefits of personhood credentials, we also examine deployment risks and design challenges. We conclude with actionable next steps for policymakers, technologists, and standards bodies to consider in consultation with the public.
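The paper surveys a design space rather than prescribing a single scheme, but the anonymous-credential idea it draws on can be sketched with one classic building block: Chaum-style RSA blind signatures, where an issuer certifies a credential without being able to link it back to the issuance session. The flow below is an illustrative assumption, not the paper's protocol, and the key sizes are far too small for real use.

```python
# Toy sketch of the anonymous-credential idea behind PHCs, using Chaum-style
# RSA blind signatures. Parameters are deliberately tiny; this only shows how
# an issuer can certify "this holder is a person" without seeing the token.
import hashlib, math, secrets

# Issuer's toy RSA keypair -- a real issuer would use 2048+ bit keys.
p, q = 1009, 1013
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

def h(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# 1. Holder picks a secret credential token and blinds its hash.
token = secrets.token_bytes(16)
m = h(token)
while True:
    r = secrets.randbelow(n - 2) + 2
    if math.gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n      # the issuer sees only this value

# 2. Issuer verifies personhood out of band, then signs blindly.
blind_sig = pow(blinded, d, n)

# 3. Holder unblinds; (token, sig) is now unlinkable to the session.
sig = (blind_sig * pow(r, -1, n)) % n

# 4. Any online service can verify against the issuer's public key.
print("credential verifies:", pow(sig, e, n) == h(token))
```

The design choice this illustrates is the paper's central privacy claim: the verifying service learns only that some trusted institution vouched for a person, not who that person is.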
|
Presenting Co-Authors
- Zoë Hitzig, OpenAI
- Shrey Jain, Microsoft
Discussant
|
7:00 pm – 7:15 pm ET
|
The Great Scrape: The Clash Between Scraping and Privacy
Artificial intelligence (AI) systems depend on massive quantities of data, often gathered by “scraping” – the automated extraction of large amounts of data from the internet. A great deal of scraped data is about people. This personal data provides the grist for AI tools such as facial recognition, deep fakes, and generative AI. Although scraping enables web searching, archiving, and meaningful scientific research, scraping for AI can also be objectionable or even harmful to individuals and society.
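For readers unfamiliar with the mechanics, the sketch below shows the basic shape of an automated fetcher that at least consults robots.txt before extracting a page. The URL and user-agent string are placeholder assumptions; robots.txt is a voluntary convention rather than a privacy control, which is part of the clash the article examines.

```python
# Minimal sketch of the scraping mechanics at issue: fetch a page only if
# the site's robots.txt permits it. The URL and bot name are placeholders.
from urllib import request, robotparser
from urllib.parse import urljoin, urlparse

USER_AGENT = "example-research-bot/0.1"  # hypothetical identifier

def fetch_if_allowed(url: str) -> bytes | None:
    root = "{0.scheme}://{0.netloc}".format(urlparse(url))
    rp = robotparser.RobotFileParser()
    rp.set_url(urljoin(root, "/robots.txt"))
    rp.read()                          # download and parse robots.txt
    if not rp.can_fetch(USER_AGENT, url):
        return None                    # site disallows this path
    req = request.Request(url, headers={"User-Agent": USER_AGENT})
    with request.urlopen(req) as resp:
        return resp.read()             # raw HTML, ready for extraction

html = fetch_if_allowed("https://example.com/")
print("fetched" if html else "disallowed by robots.txt")
```

Nothing in this loop asks whether the page contains personal data, which is precisely the gap between scraping's technical norms and privacy law that the article explores.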
|
Presenting Author
- Daniel Solove, The George Washington University Law School
Discussant
- Jennifer Huddleston, Cato Institute
|
7:15 pm – 7:30 pm ET
|
The Overton Window and Privacy Enforcement
On paper, the FTC’s consumer protection authority seems straightforward: the agency is to investigate and prevent unfair or deceptive acts or practices. The contemporary question is whether the FTC can draw on this authority to curtail data-driven harms.
This Essay contends that the legal answer is yes and argues that the key determinants of whether the FTC will be able to confront emerging digital technologies are social, institutional, and political. It proposes that the FTC’s privacy enforcement occurs within an “Overton Window of Enforcement Possibility.” Picture the FTC Act’s legal standards as setting forth a range of lawful enforcement behavior for the agency. Within this lawful space, just as a politician’s “Overton Window of Political Possibility” will not include every possible policy option, the FTC’s Window will not include every possible enforcement option. Rather, the space within which the agency might operate will be sharply informed by four critical forces: social norms; institutional norms within the agency; the courts; and Congress.
This approach highlights how an agency’s enforcement actions do not occur in a rigidly fixed domain; rather, they unfold within a dynamic space that is subject to forces both inside and outside of the agency. Understanding enforcement as a process in this way reveals a sobering lesson for proposed federal legislation that seeks to endow new or existing agencies with new regulatory authority: Without a sufficiently large Window within which an agency can operate, theoretical grants of power will have little real-world impact. But this framing can be empowering. It reveals strategies for administrative officials who seek to exercise their enforcement authority, such as attempting to ground more progressive or novel actions in topics with thick social consensus. And it creates space for policy interventions to account for institutional design and the practical realities that an agency confronts over time.
|
Presenting Author
- Alicia Solow-Niederman, The George Washington University Law School
Discussant
- James Cooper, Antonin Scalia Law School, George Mason University
|
7:30 pm – 7:35 pm ET
|
Closing Remarks
|
John Verdi, Senior Vice President for Policy, Future of Privacy Forum
|
7:35 pm – 8:30 pm ET
|
Food & Wine Reception
|