Notes from FPF
On December 12, 2017, FPF announced the winners of our 8th Annual Privacy Papers for Policymakers (PPPM) Award. This Award recognizes leading privacy scholarship that is relevant to policymakers in the United States Congress, at U.S. federal agencies, and for data protection authorities abroad.
In this special issue of the Scholarship Reporter, you will find this year’s six winning papers. From the many nominated privacy-related papers published in the last year, these six were selected by Finalist Judges after first being highly rated by a diverse team of academics, advocates, and industry privacy professionals from FPF’s Advisory Board. Finalist Judges and Reviewers agreed that these papers demonstrate thoughtful analysis of emerging issues and propose new means of analysis that can lead to real-world policy impact, making them “must-read” privacy scholarship today.
This year’s winning papers grapple with a range of issues critical to regulators. They examine how regulators, markets, and society at large conceive of privacy in different ways, prompting the reader to think critically about assumed paradigms.
Striking a balance between foundational analysis and narrower proposals, “Artificial Intelligence Policy: A Primer and Roadmap” outlines the role of artificial intelligence in law and policy and how relevant governance strategies should develop. “Designing Against Discrimination in Online Markets” taxonomizes design and policy strategies for diminishing discriminatory mechanisms in online platforms, and “Health Information Equity” proposes mechanisms to prevent the disproportionate effects of secondary health data usage on vulnerable populations.
“The Public Information Fallacy” argues that the definition of “public information” is unsettled and hazy, meriting more rigorous analysis of the term in legal determinations and policy discourse. In “The Undue Influence of Surveillance Technology Companies on Policing,” the author argues that the role of surveillance technology companies must be curtailed to facilitate meaningful transparency about how that technology is used by police departments. “Transatlantic Data Privacy Law” calls for a coalesced understanding of how privacy is conceptualized between the rights-based model in Europe and the marketplace-based model in the United States.
As always, we would love to hear your feedback on this issue. You can email us at [email protected].
Artificial Intelligence Policy: A Primer and Roadmap
This paper provides a roadmap (not the road) to the major policy questions presented by AI today. The goal of the essay is to describe the challenge of AI in sufficient detail without prescribing policy outcomes. It discusses the contemporary policy environment around AI and the key challenges it presents, including justice and equity; use of force; safety and certification; privacy and power; and taxation and displacement of labor. As it relates to privacy in particular, the author posits that the acceleration of artificial intelligence, which is intimately tied to the availability of data, will play a significant role in this evolving conversation in at least two ways: (1) the problem of pattern recognition and (2) the problem of data parity.
Talk of artificial intelligence is everywhere. People marvel at the capacity of machines to translate any language and master any game. Others condemn the use of secret algorithms to sentence criminal defendants or recoil at the prospect of machines gunning for blue-, pink-, and white-collar jobs. Some worry aloud that artificial intelligence will be humankind’s “final invention.”
This essay, prepared in connection with UC Davis Law Review’s 50th anniversary symposium, explains why AI is suddenly on everyone’s mind and provides a roadmap to the major policy questions AI raises. The essay is designed to help policymakers, investors, technologists, scholars, and students understand the contemporary policy environment around AI at least well enough to initiate their own exploration.
Topics covered include: Justice and equity; Use of force; Safety and certification; Privacy (including data parity); and Taxation and displacement of labor. In addition to these topics, the essay will touch briefly on a selection of broader systemic questions: Institutional configuration and expertise; Investment and procurement; Removing hurdles to accountability; and Correcting mental models of AI.
Designing Against Discrimination in Online Markets
K. E. C. LEVY, S. BAROCAS
This article provides a conceptual framework for understanding how platforms’ design and policy choices introduce opportunities for users’ biases to affect how they treat one another. Through an empirical review of design-oriented interventions used by a range of platforms, and the synthesis of this review into a taxonomy of thematic categories, the authors hope to prompt greater reflection on the stakes of decisions platforms already make, guide platforms’ future decisions, and provide a basis for empirical work measuring the impacts of design decisions on discriminatory outcomes. Part I describes the empirical review of platforms and presents the strategies used to develop the taxonomy. Part II details the ten thematic categories that emerged from this review and discusses how platforms’ design interventions might mediate or exacerbate users’ biased behaviors. Part III describes the ethical dimensions of platforms’ design choices.
Platforms that connect users to one another have flourished online in domains as diverse as transportation, employment, dating, and housing. When users interact on these platforms, their behavior may be influenced by preexisting biases, including tendencies to discriminate along the lines of race, gender, and other protected characteristics. In aggregate, such user behavior may result in systematic inequities in the treatment of different groups. While there is uncertainty about whether platforms bear legal liability for the discriminatory conduct of their users, platforms necessarily exercise a great deal of control over how users’ encounters are structured—including who is matched with whom for various forms of exchange, what information users have about one another during their interactions, and how indicators of reliability and reputation are made salient, among many other features. Platforms cannot divest themselves of this power; even choices made without explicit regard for discrimination can affect how vulnerable users are to bias. This Article analyzes ten categories of design and policy choices through which platforms may make themselves more or less conducive to discrimination by users. In so doing, it offers a comprehensive account of the complex ways platforms’ design choices might perpetuate, exacerbate, or alleviate discrimination in the contemporary economy.
“Designing Against Discrimination in Online Markets” by K. E. C. Levy and S. Barocas, Berkeley Technology Law Journal, Vol. 32, 2018.
Health Information Equity
This paper posits that the ability to collect and aggregate data about patients — including physical conditions, genetic information, treatments, responses, and outcomes — is changing medical research today. The author states that the collection of such information raises serious ethical concerns because it imposes special burdens on specific patients whose records form the data pool for queries and analyses. This article argues that laws should distribute information burdens across society in a just manner. Part I lays out how new laws and policies are facilitating the disproportionate collection and public use of data. Part II details the kinds of burdens such practices can impose. Part III provides an ethical framework to assess these inequities. Part IV then shows what regulatory and statutory levers can be used to render secondary research more equitable. Finally, the author outlines a framework to reorganize privacy risk in ways that are ethical and just. Where bioethics has sought only to incorporate autonomy concerns in health data collection, this framework provides a guide for moving beyond autonomy to equity concerns.
In the last few years, the health information of numerous Americans has been collected and used for follow-on, secondary research. This research studies correlations between medical conditions, genetic or behavioral profiles, and treatments in order to customize medical care to specific individuals. Recent federal legislation and regulations make it easier to collect and use the data of the low-income, unwell, and elderly for this purpose, imposing disproportionate security and autonomy burdens on these individuals. Those who are well-off and pay out of pocket can effectively exempt their data from the publicly available information pot. This presents a problem that modern research ethics is not well equipped to address: where it considers equity at all, it emphasizes underinclusion and the disproportionate distribution of research benefits, rather than overinclusion and the disproportionate distribution of burdens.
I rely on basic intuitions of reciprocity and fair play as well as broader accounts of social and political equity to show that equity in burden distribution is a key aspect of the ethics of secondary research. To satisfy its demands, we can use three sets of regulatory and policy levers. First, information collection for public research should expand beyond groups having the lowest welfare. Next, data analyses and queries should draw on data pools more equitably. Finally, we must create an entity to coordinate these solutions using existing statutory authority if possible. Considering health information collection at a systematic level—rather than that of individual clinical encounters—gives us insight into the broader role that health information plays in forming personhood, citizenship, and community.
“Health Information Equity” by C. Konnoth, University of Pennsylvania Law Review.
The Public Information Fallacy
The goal of this article is to highlight the many possible meanings of “public” and to make the case for clarifying the concept in privacy law. The main thesis is that because there are so many different possible interpretations of “public information,” the concept cannot be used to justify data practices and surveillance without first articulating a more precise meaning that recognizes the values affected. The author believes the law of public information has failed to clarify whether the concept is a description, a designation, or just another way of saying something is “not private.” The article reviews the law and discourse of public information, surveys the law and literature to propose three different ways of conceptualizing “public information,” and finally makes the case for clarity.
The concept of privacy in “public” information or acts is a perennial topic for debate. It has given privacy law fits. People struggle to reconcile the notion of protecting information that has been made public with traditional accounts of privacy. As a result, successfully labeling information as public often results in a free pass for surveillance and personal data practices. It has also given birth to a significant and persistent misconception—that public information is an established and objective concept.
In this article, I argue that the “no privacy in public” justification is misguided because nobody knows what “public” even means. It has no set definition in law or policy. This means that appeals to the public nature of information and contexts in order to justify data and surveillance practices are often just guesswork. Is the criterion for determining publicness whether information was hypothetically accessible to anyone? Is public information anything that’s controlled, designated, or released by state actors? Or is what’s public simply everything that’s “not private”?
The main thesis of this article is that if the concept of “public” is going to shape people’s social and legal obligations, its meaning should not be assumed. Law and society must recognize that labeling something as public is both consequential and value-laden. To move forward, we should focus on the values we want to serve, the relationships and outcomes we want to foster, and the problems we want to avoid.
“The Public Information Fallacy” by W. Hartzog Northeastern University School of Law Research Paper No. 309-2017.
The Undue Influence of Surveillance Technology Companies on Policing
E. E. JOH
This essay identifies three recent examples in which surveillance technology companies have exercised undue influence over policing: stingray cellphone surveillance, body cameras, and big data programs. By “undue influence,” the author is referring to the commercial self-interest of surveillance technology vendors that overrides principles of accountability and transparency normally governing the police. The article goes on to examine the harms that ensue when this influence goes unchecked, and suggests some means by which oversight can be imposed on these relationships.
Conventional wisdom assumes that the police are in control of their investigative tools. But with surveillance technologies, this is not always the case. Increasingly, police departments are consumers of surveillance technologies that are created, sold, and controlled by private companies. These surveillance technology companies exercise an undue influence over the police today in ways that aren’t widely acknowledged, but that have enormous consequences for civil liberties and police oversight. Three seemingly unrelated examples — stingray cellphone surveillance, body cameras, and big data software—demonstrate varieties of this undue influence. The companies which provide these technologies act out of private self-interest, but their decisions have considerable public impact. The harms of this private influence include the distortion of Fourth Amendment law, the undermining of accountability by design, and the erosion of transparency norms. This Essay demonstrates the increasing degree to which surveillance technology vendors can guide, shape, and limit policing in ways that are not widely recognized. Any vision of increased police accountability today cannot be complete without consideration of the role surveillance technology companies play.
Transatlantic Data Privacy Law
P. M. SCHWARTZ, K.N. PEIFER
In this paper, the authors argue that because EU law strictly limits transfers of personal data to non-EU countries lacking sufficient privacy protections, bridging the transatlantic data divide is a matter of the greatest significance. On the horizon is a possible international policy solution built around “interoperable,” or shared, legal concepts, an approach promoted by President Barack Obama and the Federal Trade Commission (FTC). The extent of EU–U.S. data privacy interoperability, however, remains to be seen. In exploring this issue, this article analyzes the respective legal identities constructed around data privacy in the EU and the United States. It identifies profound differences in the two systems’ images of the individual as bearer of legal interests.
International flows of personal information are more significant than ever, but differences in transatlantic data privacy law imperil this data trade. The resulting policy debate has led the EU to set strict limits on transfers of personal data to any non-EU country—including the United States—that lacks sufficient privacy protections. Bridging the transatlantic data divide is therefore a matter of the greatest significance.
In exploring this issue, this Article analyzes the respective legal identities constructed around data privacy in the EU and the United States. It identifies profound differences in the two systems’ images of the individual as bearer of legal interests. The EU has created a privacy culture around “rights talk” that protects its “data subjects.” In the EU, moreover, rights talk forms a critical part of the postwar European project of creating the identity of a European citizen. In the United States, in contrast, the focus is on a “marketplace discourse” about personal information and the safeguarding of “privacy consumers.” In the United States, data privacy law focuses on protecting consumers in a data marketplace.
This Article uses its models of rights talk and marketplace discourse to analyze how the EU and United States protect their respective data subjects and privacy consumers. Although the differences are great, there is still a path forward. A new set of institutions and processes can play a central role in developing mutually acceptable standards of data privacy. The key documents in this regard are the General Data Protection Regulation, an EU-wide standard that becomes binding in 2018, and the Privacy Shield, an EU–U.S. treaty signed in 2016. These legal standards require regular interactions between the EU and United States and create numerous points for harmonization, coordination, and cooperation. The GDPR and Privacy Shield also establish new kinds of governmental networks to resolve conflicts. The future of international data privacy law rests on the development of new understandings of privacy within these innovative structures.
“Transatlantic Data Privacy Law” by P. M. Schwartz and K. N. Peifer, 106 Georgetown Law Journal 115 (2017), UC Berkeley Public Law Research Paper.