Privacy Papers 2019
The winners of the 2019 Privacy Papers for Policymakers (PPPM) Award are:
Antidiscriminatory Privacy
by Ignacio N. Cofone, McGill University Faculty of Law
Abstract
Law often blocks the flow of sensitive personal information to prevent discrimination. It does so, however, without a theory or framework to determine when doing so is warranted. As a result, these measures produce mixed results. This article offers a framework for determining, with a view to preventing discrimination, when personal information should flow and when it should not. It examines the relationship between precluded personal information, such as race, and the proxies for precluded information, such as names and zip codes. It proposes that the success of these measures depends on what types of proxies exist for the blocked information, and it explores the situations in which those proxies should also be blocked. This framework predicts the effectiveness of antidiscriminatory privacy rules and offers the potential of wider protection for minorities.
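The proxy dynamic at the center of this framework can be made concrete with a short synthetic sketch (ours, not the article's; the groups, zip codes, and rates below are fabricated for illustration): a decision rule that never reads the blocked attribute can still reproduce the disparity through a correlated proxy such as a zip code.

```python
# Synthetic illustration of the proxy problem: "race" is blocked from the
# decision rule, but "zip_code" correlates with it, so the disparity survives.
# All values are fabricated for illustration.
import random

random.seed(0)

def make_person():
    race = random.choice(["A", "B"])
    # Fabricated residential segregation: 90% of group A lives in zip 11111,
    # 90% of group B lives in zip 22222.
    if race == "A":
        zip_code = "11111" if random.random() < 0.9 else "22222"
    else:
        zip_code = "22222" if random.random() < 0.9 else "11111"
    return {"race": race, "zip_code": zip_code}

people = [make_person() for _ in range(10_000)]

def approve(person):
    # A "race-blind" rule that only looks at the proxy.
    return person["zip_code"] == "11111"

for group in ("A", "B"):
    members = [p for p in people if p["race"] == group]
    rate = sum(approve(p) for p in members) / len(members)
    print(f"approval rate for group {group}: {rate:.2f}")
```

Even though the rule never reads the blocked attribute, approval rates diverge sharply by group, which is why the article asks when proxies for precluded information should also be blocked.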
Privacy’s Constitutional Moment and the Limits of Data Protection
by Woodrow Hartzog, Northeastern University, School of Law and Khoury College of Computer Sciences and Neil M. Richards, Washington University, School of Law and the Cordell Institute for Policy in Medicine & Law
Abstract
America’s privacy bill has come due. Since the dawn of the Internet, Congress has repeatedly failed to build a robust identity for American privacy law. But now both California and the European Union have forced Congress’s hand by passing the California Consumer Privacy Act (CCPA) and the General Data Protection Regulation (GDPR). These data protection frameworks, structured around the Fair Information Practices (the “FIPs”), have industry and privacy advocates alike clamoring for a “U.S. GDPR.” States seem poised to blanket the country with FIP-based laws if Congress fails to act. The United States is thus in the midst of a “constitutional moment” for privacy, in which intense public deliberation and action may bring about constitutive and structural change. And the European data protection model of the GDPR is ascendant.
In this article we highlight the risks of U.S. lawmakers embracing a watered-down version of the European model as American privacy law enters its constitutional moment. European-style data protection rules have undeniable virtues, but they won’t be enough. The FIPs assume data processing is always a worthy goal, but even fairly processed data can lead to oppression and abuse. Data protection is also myopic because it ignores how industry’s appetite for data is wrecking our environment, our democracy, our attention spans, and our emotional health. Even if E.U.-style data protection were sufficient, the United States is too different from Europe to implement and enforce such a framework effectively on its European law terms. Any U.S. GDPR would in practice be what we call a “GDPR-Lite.”
Our argument is simple: In the United States, a data protection model cannot do it all for privacy, though if current trends continue, we will likely entrench it as though it can. Drawing from constitutional theory and the traditions of privacy regulation in the United States, we propose instead a “comprehensive approach” to privacy that is better focused on power asymmetries, corporate structures, and a broader vision of human well-being. Settling for an American GDPR-Lite would be a tragic ending to a real opportunity to tackle the critical problems of the information age. In this constitutional moment for privacy, we can and should demand more. This article offers a path forward to do just that.
Algorithmic Impact Assessments under the GDPR: Producing Multi-layered Explanations
by Margot E. Kaminski, University of Colorado Law and Gianclaudio Malgieri, Vrije Universiteit Brussel (VUB) – Faculty of Law
Abstract
Policy-makers, scholars, and commentators are increasingly concerned with the risks of using profiling algorithms and automated decision-making. The EU’s General Data Protection Regulation (GDPR) has tried to address these concerns through an array of regulatory tools. As one of us has argued, the GDPR combines individual rights with systemic governance, towards algorithmic accountability. The individual tools are largely geared towards individual “legibility”: making the decision-making system understandable to an individual invoking her rights. The systemic governance tools, instead, focus on bringing expertise and oversight into the system as a whole, and rely on the tactics of “collaborative governance,” that is, the use of public-private partnerships towards these goals. How these two approaches to transparency and accountability interact remains a largely unexplored question, with much of the legal literature focusing instead on whether there is an individual right to explanation.
The GDPR contains an array of systemic accountability tools. Of these tools, impact assessments (Art. 35) have recently received particular attention on both sides of the Atlantic, as a means of implementing algorithmic accountability at early stages of design, development, and training. The aim of this paper is to address how a Data Protection Impact Assessment (DPIA) links the two faces of the GDPR’s approach to algorithmic accountability: individual rights and systemic collaborative governance. We address the relationship between DPIAs and individual transparency rights. We propose, too, that impact assessments link the GDPR’s two methods of governing algorithmic decision-making by both providing systemic governance and serving as an important “suitable safeguard” (Art. 22) of individual rights.
After noting the potential shortcomings of DPIAs, this paper closes with a call — and some suggestions — for a Model Algorithmic Impact Assessment in the context of the GDPR. Our examination of DPIAs suggests that the current focus on the right to explanation is too narrow. We call, instead, for data controllers to consciously use the required DPIA process to produce what we call “multi-layered explanations” of algorithmic systems. This concept of multi-layered explanations not only more accurately describes what the GDPR is attempting to do, but also normatively better fills potential gaps between the GDPR’s two approaches to algorithmic accountability.
Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites
by Arunesh Mathur, Princeton University; Gunes Acar, Princeton University; Michael Friedman, Princeton University; Elena Lucherini, Princeton University; Jonathan Mayer, Princeton University; Marshini Chetty, University of Chicago; and Arvind Narayanan, Princeton University
Abstract
Dark patterns are user interface design choices that benefit an online service by coercing, steering, or deceiving users into making unintended and potentially harmful decisions. We present automated techniques that enable experts to identify dark patterns on a large set of websites. Using these techniques, we study shopping websites, which often use dark patterns to influence users into making more purchases or disclosing more information than they would otherwise. Analyzing ~53K product pages from ~11K shopping websites, we discover 1,818 dark pattern instances, together representing 15 types and 7 broader categories. We examine these dark patterns for deceptive practices, and find 183 websites that engage in such practices. We also uncover 22 third-party entities that offer dark patterns as a turnkey solution. Finally, we develop a taxonomy of dark pattern characteristics that describes the underlying influence of the dark patterns and their potential harm on user decision-making. Based on our findings, we make recommendations for stakeholders including researchers and regulators to study, mitigate, and minimize the use of these patterns.
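As a rough illustration of what automated, text-based flagging of dark patterns can look like (a simplified sketch, not the authors’ pipeline, which combined a browser-based crawl with clustering of similar page segments; the URL and phrase list below are assumptions for illustration):

```python
# Simplified sketch: scan a product page's text for phrasing typical of
# urgency and scarcity dark patterns. The phrase list is illustrative and
# much cruder than the taxonomy developed in the paper.
import re
import requests  # third-party HTTP client, assumed installed

DARK_PATTERN_HINTS = [
    r"only \d+ left in stock",             # low-stock (scarcity) message
    r"\d+ (?:people|others) are viewing",  # high-demand message
    r"(?:offer|sale|deal) ends in",        # countdown-timer copy
    r"\d+ sold in the last \d+ hours",     # activity notification
]

def flag_dark_pattern_text(url: str) -> list[str]:
    """Return page snippets matching any dark-pattern hint."""
    html = requests.get(url, timeout=10).text.lower()
    return [match.group(0)
            for pattern in DARK_PATTERN_HINTS
            for match in re.finditer(pattern, html)]

if __name__ == "__main__":
    # Hypothetical product page, for illustration only.
    for snippet in flag_dark_pattern_text("https://shop.example.com/product/123"):
        print("possible dark pattern:", snippet)
```

Matches like these would still need expert review, which is consistent with the paper’s framing of its techniques as tools that help experts identify dark patterns at scale rather than fully automated detectors.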
The Many Revolutions of Carpenter
by Paul Ohm, Georgetown University Law Center
Abstract
Carpenter v. United States, the 2018 Supreme Court opinion that requires the police to obtain a warrant to access an individual’s historical whereabouts from the records of a cell phone provider, is the most important Fourth Amendment opinion in decades. Although many have acknowledged some of the ways the opinion has changed the doctrine of Constitutional privacy, the importance of Carpenter has not yet been fully appreciated. Carpenter works many revolutions in the law, not only through its holding and new rule, but in more fundamental respects. The opinion reinvents the reasonable expectation of privacy test as it applies to large databases of information about individuals. It turns the third-party doctrine inside out, requiring judges to scrutinize the products of purely private decisions. In dicta, it announces a new rule of technological equivalence, which might end up covering more police activity than the core rule. Finally, it embraces technological exceptionalism as a centerpiece for the interpretation of the Fourth Amendment, rejecting backwards-looking interdisciplinary methods such as legal history or surveys of popular attitudes. Considering all of these revolutions, Carpenter is the most important Fourth Amendment decision since Katz v. United States, a case it might end up rivaling in influence.
The 2019 PPPM Honorable Mentions are:
- Can You Pay for Privacy? Consumer Expectations and the Behavior of Free and Paid Apps by Kenneth Bamberger, University of California, Berkeley – School of Law; Serge Egelman, University of California, Berkeley – Department of Electrical Engineering & Computer Sciences; Catherine Han, University of California, Berkeley; Amit Elazari Bar On, University of California, Berkeley; and Irwin Reyes, University of California, Berkeley
Abstract
“Paid” digital services have been touted as straightforward alternatives to the ostensibly “free” model, in which users actually face a high price in the form of personal data, with limited awareness of the real cost incurred and little ability to manage their privacy preferences. Yet the actual privacy behavior of paid services, and consumer expectations about that behavior, remain largely unknown.
This Article addresses that gap. It presents empirical data both comparing the true cost of “paid” services with that of their so-called “free” counterparts, and documenting consumer expectations about the relative behaviors of each.
We first present an empirical study that documents and compares the privacy behaviors of 5,877 Android apps that are offered both as free and paid versions. The sophisticated analysis tool we employed, AppCensus, allowed us to detect exactly which sensitive user data is accessed by each app and with whom it is shared. Our results show that paid apps often share the same implementation characteristics and resulting behaviors as their free counterparts. Thus, if users opt to pay for apps to avoid privacy costs, in many instances they do not receive the benefit of the bargain. Worse, we find that there are no obvious cues that consumers can use to determine when the paid version of a free app offers better privacy protections than its free counterpart.
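A toy illustration of the free-versus-paid comparison (the records below are fabricated and do not reflect AppCensus’s actual output format): for each app pair, check whether the paid version still sends the same sensitive data to the same third parties as its free counterpart.

```python
# Fabricated example: sensitive data types observed leaving each app version
# and the third-party domains they were sent to. Only the comparison logic
# is the point; the package names and domains are made up.
OBSERVED = {
    ("com.example.game", "free"): {
        "advertising_id": {"tracker-a.example", "tracker-b.example"},
        "location":       {"tracker-a.example"},
    },
    ("com.example.game", "paid"): {
        "advertising_id": {"tracker-a.example"},
    },
}

def paid_version_still_shares(package: str) -> dict[str, set[str]]:
    """Data types the paid version sends to the same recipients as the free one."""
    free = OBSERVED[(package, "free")]
    paid = OBSERVED[(package, "paid")]
    return {
        data_type: free_dests & paid.get(data_type, set())
        for data_type, free_dests in free.items()
        if free_dests & paid.get(data_type, set())
    }

print(paid_version_still_shares("com.example.game"))
# -> {'advertising_id': {'tracker-a.example'}}
```

When, as the study finds is common, the free and paid versions share the same code and third-party libraries, this intersection is non-empty and paying does not buy the expected privacy benefit.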
We complement this data with a second study, surveying 1,000 mobile app users about their perceptions of the privacy behaviors of paid and free app versions. Participants expected the free version to be more likely than the paid version to share their data with advertisers and law enforcement agencies, and to keep their data on the app’s servers when no longer needed for core app functionality. By contrast, they were more likely to expect the paid version to engage in privacy-protective practices, to be transparent about its data collection and sharing behaviors, and to offer more granular control over the collection of user data.
Together, these studies identify ways in which the actual behavior of apps fails to comport with users’ expectations, and the way that representations of an app as “paid” or “ad-free” can mislead users. They also raise questions about the salience of those expectations for consumer choices.
In light of this combined research, we then explore three sets of ramifications for policy and practice.
First, our finding that paid services often collect and sell data just as extensively as free ones challenges understandings about how the “pay for privacy” model operates in practice, its promise as a privacy-protective alternative, and the legality of paid app behavior.
Second, by providing empirical foundations for better understanding both corporate behavior and consumer expectations, our findings support research into how users’ beliefs about technology business models and developer behavior are actually shaped, and into how manipulable consumer decisions about privacy protection can be. In doing so, they undermine the legitimacy of legal regimes that rely on fictive user “consent” unmoored from knowledge of actual market behavior.
Third, our work demonstrates the importance of the kind of technical tools we use in our study — tools that offer transparency about app behaviors, empowering consumers and regulators. Our study demonstrates that, at least in the most dominant example of a free vs. paid market — mobile apps — there turns out to be no real privacy-protective option. Yet the failures of transparency and auditability of app behaviors deprive users, regulators, and law enforcement of any means to hold developers accountable, and remove privacy as a salient concern that could guide user behavior. Dynamic analysis of the type we performed can allow users to go online and test, in real time, an app’s privacy behavior, empowering them as advocates and informing their choices so that expectations better align with reality. The same tools, moreover, can equip regulators, law enforcement, consumer protection organizations, and private parties seeking to remedy undesirable or illegal privacy behavior.
- Data Protection, Artificial Intelligence and Cognitive Services: Is the General Data Protection Regulation (GDPR) ‘Artificial Intelligence-Proof’? by Lilian Mitrou, University of the Aegean – Department of Information and Communication Systems Engineering
Abstract
AI – in its interplay with Big Data, ambient intelligence, ubiquitous computing and cloud computing – augments the existing major shift, both qualitative and quantitative, in the processing of personal information. The questions that arise are of crucial importance both for the development of AI and for the efficiency of the data protection arsenal: Is the current legal framework AI-proof? Are the data protection and privacy rules and principles adequate to deal with the challenges of AI, or do we need to elaborate new principles to work alongside the advances of AI technology? Our research focuses on an assessment of the GDPR, which, however, does not specifically address AI, as the regulatory choice consisted more in what we perceive as “technology-independent legislation.”
The paper gives a critical overview and assessment of the provisions of the GDPR that are relevant to the AI environment, i.e., the scope of application, the legal grounds (with emphasis on consent), the reach and applicability of data protection principles, and the new (accountability) tools to enhance and ensure compliance.
- Usable and Useful Privacy Interfaces (book chapter to appear in An Introduction to Privacy for Technology Professionals, Second Edition) by Florian Schaub, University of Michigan School of Information and Lorrie Faith Cranor, Carnegie Mellon University
Abstract
The design of a system or technology, in particular its user experience design, affects and shapes how people interact with it. Privacy engineering and user experience design frequently intersect. Privacy laws and regulations require that data subjects are informed about a system’s data practices, asked for consent, provided with a mechanism to withdraw consent, and given access to their own data. To satisfy these requirements and address users’ privacy needs, most services offer some form of privacy notices, privacy controls, or privacy settings to users.
However, too often privacy notices are not readable, people do not understand what they consent to, and people are not aware of certain data practices or the privacy settings or controls available to them. The challenge is that an emphasis on meeting legal and regulatory obligations is not sufficient to create privacy interfaces that are usable and useful for users. Usable means that people can find, understand, and successfully use the privacy information and controls provided to them. Useful means that privacy information and controls align with users’ needs with respect to making privacy-related decisions and managing their privacy. This chapter provides insights into the reasons why it can be difficult to design privacy interfaces that are usable and useful. It further provides guidance and best practices for user-centric privacy design that meets both legal obligations and users’ needs. Designing effective privacy user experiences not only makes it easier for users to manage and control their privacy, but also benefits organizations by minimizing surprise for their users and facilitating user trust. A privacy notice or control is not just a compliance tool but rather an opportunity to engage with users about privacy, to explain the rationale behind practices that may seem invasive without proper context, to make users aware of potential privacy risks, and to communicate the measures and effort taken to mitigate those risks and protect users’ privacy.
Privacy laws, privacy technology, and privacy management are typically centered on information – how information is collected, processed, stored, transferred, how information can and must be protected, and how to ensure compliance and accountability. To be effective, designing privacy user experiences requires a shift in focus: while information and compliance are of course still relevant, user-centric privacy design focuses on people, their privacy needs, and their interaction with a system’s privacy interfaces.
Why is it important to pay attention to the usability of privacy interfaces? How do people make privacy decisions? What drives their privacy concerns and behavior? We answer these questions in this chapter and then provide an introduction to user experience design. We discuss common usability issues in privacy interfaces, and describe a set of privacy design principles and a user-centric process for designing usable and effective privacy interfaces, concluding with an overview of best practices.
The design of usable privacy notices and controls is not trivial, but we hope this chapter has motivated why it is important to invest the effort in getting the privacy user experience right – making sure that privacy information and controls are not only compliant with regulation but also address and align with users’ needs. Careful design of the privacy user experience can support users in developing an accurate and more complete understanding of a system and its data practices. Well-designed and user-tested privacy interfaces give responsible privacy professionals and technologists confidence that an indication of consent was indeed an informed and freely given expression by the user. Highlighting unexpected data practices and considering secondary and incidental users reduces surprise for users and helps prevent privacy harms, social media outcries, bad press, and fines from regulators. Importantly, a privacy interface is not just a compliance tool but rather an opportunity to engage with users about privacy, to explain the rationale behind practices that may seem invasive without proper context, to make users aware of potential privacy risks, and to communicate the measures and effort taken to mitigate those risks and protect users’ privacy.
The 2019 PPPM Student Paper Winner Is:
- Privacy Attitudes of Smart Speaker Users by Nathan Malkin, Joe Deatrick, Allen Tong, Primal Wijesekera, Serge Egelman, and David Wagner, University of California, Berkeley
Abstract
As devices with always-on microphones located in people’s homes, smart speakers have significant privacy implications. We surveyed smart speaker owners about their beliefs, attitudes, and concerns about the recordings that are made and shared by their devices. To ground participants’ responses in concrete interactions, rather than collecting their opinions abstractly, we framed our survey around randomly selected recordings of saved interactions with their devices. We surveyed 116 owners of Amazon and Google smart speakers and found that almost half did not know that their recordings were being permanently stored and that they could review them; only a quarter reported reviewing interactions, and very few had ever deleted any. While participants did not consider their own recordings especially sensitive, they were more protective of others’ recordings (such as children and guests) and were strongly opposed to use of their data by third parties or for advertising. They also considered permanent retention, the status quo, unsatisfactory. Based on our findings, we make recommendations for more agreeable data retention policies and future privacy controls.
Read more about the winners in the Future of Privacy Forum’s blog post.
For more information and to register, click here.