Privacy Papers 2018


The winners of the 2018 Privacy Papers for Policymakers (PPPM) Award are:

Shattering One-Way Mirrors. Data Subject Access Rights in Practice

by Jef Ausloos, Postdoctoral Researcher, University of Amsterdam’s Institute for Information Law; and Pierre Dewitte, Researcher, KU Leuven Centre for IT & IP Law

Abstract:

The right of access occupies a central role in EU data protection law’s arsenal of data subject empowerment measures. It can be seen as a necessary enabler for most other data subject rights, and it plays an important role in monitoring operations and (en)forcing compliance. Despite some high-profile revelations regarding unsavoury data processing practices over the past few years, access rights still appear to be underused and not properly accommodated. It is this last hypothesis in particular that we tried to investigate and substantiate through a legal empirical study. During the first half of 2017, around sixty information society service providers were contacted with data subject access requests. Ultimately, the study confirmed the general suspicion that access rights are by and large not adequately accommodated. The systematic approach did allow for a more granular identification of key issues and broader problematic trends. Notably, it uncovered an often-flagrant lack of awareness, organisation, motivation, and harmonisation. Despite the poor results of the empirical study, we still believe there is an important role for data subject empowerment tools in a hyper-complex, automated and ubiquitous data-processing ecosystem. Even if only used marginally, they provide a checks-and-balances infrastructure overseeing controllers’ processing operations, both individually and collectively. The empirical findings also allow us to identify concrete suggestions aimed at controllers, such as relatively easy fixes in privacy policies and access rights templates.


Sexual Privacy

by Danielle Keats Citron, Morton & Sophia Macht Professor of Law, University of Maryland Carey School of Law

Abstract:

Those who wish to control, expose, and damage the identities of individuals routinely do so by invading their privacy. People are secretly recorded in bedrooms and public bathrooms, and “up their skirts.” They are coerced into sharing nude photographs and filming sex acts under the threat of public disclosure of their nude images. People’s nude images are posted online without permission. Machine-learning technology is used to create digitally manipulated “deep fake” sex videos that swap people’s faces into pornography.

At the heart of these abuses is an invasion of sexual privacy—the behaviors and expectations that manage access to, and information about, the human body; intimate activities; and personal choices about the body and intimate information. More often, women, nonwhites, sexual minorities, and minors shoulder the abuse.

Sexual privacy is a distinct privacy interest that warrants recognition and protection. It serves as a cornerstone for sexual autonomy and consent. It is foundational to intimacy. Its denial results in the subordination of marginalized communities. Traditional privacy law’s efficacy, however, is eroding just as digital technologies magnify the scale and scope of the harm. This Article suggests an approach to sexual privacy that focuses on law and markets. Law should provide federal and state penalties for privacy invaders, remove the statutory immunity from liability for certain content platforms, and work in tandem with hate crime laws. Market efforts should be pursued if they enhance the overall privacy interests of all involved.


Slave to the Algorithm? Why a ‘Right to an Explanation’ is Probably not the Remedy you are Looking for

by Lilian Edwards, Professor of Law, Innovation and Society, Newcastle Law School; and Michael Veale, Researcher, Department of Science, Technology, Engineering & Public Policy at University College London

Abstract:

Algorithms, particularly machine learning (ML) algorithms, are increasingly important to individuals’ lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a “right to an explanation” has emerged as a compellingly attractive remedy since it intuitively promises to open the algorithmic “black box” to promote challenge, redress, and hopefully heightened accountability. Amidst the general furore over algorithmic bias we describe, any remedy in a storm has looked attractive.

However, we argue that a right to an explanation in the EU General Data Protection Regulation (GDPR) is unlikely to present a complete remedy to algorithmic harms, particularly in some of the core “algorithmic war stories” that have shaped recent attitudes in this domain. Firstly, the law is restrictive, unclear, or even paradoxical concerning when any explanation-related right can be triggered. Secondly, even where such a right is triggered, the legal conception of explanations as “meaningful information about the logic of processing” may not be satisfied by the kind of ML “explanations” computer scientists have developed, partially in response. ML explanations are restricted by the type of explanation sought, the dimensionality of the domain, and the type of user seeking an explanation. However, “subject-centric” explanations (SCEs), which focus on particular regions of a model around a query, show promise for interactive exploration, as do explanation systems based on learning a model from the outside rather than taking it apart (pedagogical versus decompositional explanations), which also sidestep developers’ worries about intellectual property or trade secret disclosure.
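
As a rough illustration of the distinction drawn above between decompositional and pedagogical approaches (our sketch, not the authors’ method), the Python snippet below treats a trained model purely as a black box, samples points around a single query, and fits a small decision tree to the black box’s predictions in that neighbourhood, i.e. a subject-centric, learn-from-the-outside explanation. The dataset, the model choices, and the subject_centric_explanation helper are illustrative assumptions only.

    # Illustrative sketch: a "pedagogical", subject-centric explanation.
    # The trained model is used only as a black box; its internals are never inspected.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
    black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    def subject_centric_explanation(model, query, n_samples=5000, scale=0.3):
        """Fit an interpretable surrogate to the model's predictions near `query`."""
        rng = np.random.default_rng(0)
        neighbourhood = query + rng.normal(0.0, scale, size=(n_samples, query.shape[0]))
        surrogate_labels = model.predict(neighbourhood)  # only the black box's outputs are used
        return DecisionTreeClassifier(max_depth=3).fit(neighbourhood, surrogate_labels)

    query = X[0]
    surrogate = subject_centric_explanation(black_box, query)
    print("Black-box prediction for the query:", black_box.predict(query.reshape(1, -1))[0])
    print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(6)]))

Because the surrogate is learned only from the model’s inputs and outputs, nothing about the underlying model’s structure, or any trade secrets embedded in it, needs to be disclosed.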

Based on our analysis, we fear that the search for a “right to an explanation” in the GDPR may be at best distracting, and at worst nurture a new kind of “transparency fallacy.” But all is not lost. We argue that other parts of the GDPR related (i) to the right to erasure (“right to be forgotten”) and the right to data portability; and (ii) to privacy by design, Data Protection Impact Assessments and certification and privacy seals, may have the seeds we can use to make algorithms more responsible, explicable, and human centered.


Privacy Localism

by Ira Rubinstein, Senior Fellow, Information Law Institute of the New York University School of Law

Abstract:

Privacy law scholarship often focuses on domain-specific federal privacy laws and state efforts to broaden them. This Article provides the first comprehensive analysis of privacy regulation at the local level (which it dubs “privacy localism”), using recently enacted privacy laws in Seattle and New York City as principal examples. It attributes the rise of privacy localism to a combination of federal and state legislative failures and three emerging urban trends: the role of local police in federal counter-terrorism efforts; smart city and open data initiatives; and demands for local police reform in the wake of widely reported abusive police practices.

Both Seattle and New York have enacted or proposed (1) a local surveillance ordinance regulating the purchase and use of surveillance equipment and technology by city departments (including the police) and (2) a law regulating city departments’ collection, use, disclosure and retention of personal data. In adopting these local laws, both cities have sought to fill two significant gaps in federal and state privacy laws: the public surveillance gap (which refers to the weak constitutional and statutory protections against government surveillance in public places) and the fair information practices gap (which refers to the inapplicability of the federal and state Privacy Acts to government records held by local government agencies).

Filling these gaps is a significant accomplishment and one that exhibits all of the values typically associated with federalism (diversity, participation, experimentation, responsiveness, and accountability). This Article distinguishes federalism and localism and shows why privacy localism should prevail against the threat of federal and (more importantly) state preemption. The Article concludes by suggesting that privacy localism has the potential to help shape emerging privacy norms for an increasingly urban future, inspire more robust regulation at the federal and state levels, and inject more democratic control into city deployments of privacy-invasive technologies.


Designing Without Privacy

by Ari Ezra Waldman, Professor of Law and Founding Director, Innovation Center for Law and Technology at New York Law School

Abstract:

In Privacy on the Ground, the law and information scholars Kenneth Bamberger and Deirdre Mulligan showed that empowered chief privacy officers (CPOs) are pushing their companies to take consumer privacy seriously by integrating privacy into the designs of new technologies. Their work was just the beginning of a larger research agenda. CPOs may set policies at the top, but they alone cannot embed robust privacy norms into the corporate ethos, practice, and routine. As such, if we want the mobile apps, websites, robots, and smart devices we use to respect our privacy, we need to institutionalize privacy throughout the corporations that make them. In particular, privacy must be a priority among those actually doing the work of design on the ground—namely, engineers, computer programmers, and other technologists.

This Article presents the initial findings from an ethnographic study of how, if at all, those designing technology products think about privacy, integrate privacy into their work, and consider user needs in the design process. It also looks at how attorneys at private firms draft privacy notices for their clients and interact with designers. Based on these findings, this Article suggests that Bamberger and Mulligan’s narrative is not yet fully realized. The account that emerges among some engineers and lawyers, in which privacy is narrow, limited, and barely factors into design, may help explain why so many products seem to ignore our privacy expectations. The Article then proposes a framework for understanding how factors both exogenous (theory and law) and endogenous (corporate structure and individual cognitive frames and experience) to the corporation prevent the CPOs’ robust privacy norms from diffusing throughout technology companies and the industry as a whole. This framework also helps suggest how specific reforms at every level—theory, law, organization, and individual experience—can incentivize companies to take privacy seriously, enhance organizational learning, and eliminate the cognitive biases that lead to discrimination in design.


The 2018 PPPM Honorable Mentions are:

  • Regulating Bot Speech, by Madeline Lamo, Law Clerk, United States Court of Federal Claims; and Ryan Calo, Lane Powell and D. Wayne Gittinger Associate Professor of Law, University of Washington School of Law

Abstract:

We live in a world of artificial speakers with real impact. Bots foment political strife, skew online discourse, and manipulate the marketplace. In response to concerns about the unique threats bots pose, legislators have begun to pass laws that require online bots to clearly indicate that they are not human. This work is the first to consider how such efforts to regulate bots might raise concerns about free speech and privacy.

While requiring a bot to self-disclose does not censor speech as such, it may nonetheless infringe upon the right to speak – including the right to speak anonymously – in the digital sphere. Specifically, complexities in the enforcement process threaten to unmask anonymous speakers, and requiring self-disclosure creates a scaffolding for censorship by private actors and other governments.

Ultimately, bots represent a diverse and emerging medium of speech. Their use for mischief should not overshadow their novel capacity to inform, entertain, and critique. We conclude by providing policymakers with a series of principles to bear in mind when regulating bots, so as not to inadvertently curtail an emerging form of expression or compromise anonymous speech.

  • The Intuitive Appeal of Explainable Machines, by Andrew D. Selbst, Postdoctoral Scholar, Data & Society Research Institute; and Solon Barocas, Assistant Professor, Department of Information Science at Cornell University

Abstract:

Algorithmic decision-making has become synonymous with inexplicable decision-making, but what makes algorithms so difficult to explain? This Article examines what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation. We show that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties.

Calls for explanation have treated these problems as one and the same, but disentangling the two reveals that they demand very different responses. Dealing with inscrutability requires providing a sensible description of the rules; addressing nonintuitiveness requires providing a satisfying explanation for why the rules are what they are. Existing laws like the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the General Data Protection Regulation (GDPR), as well as techniques within machine learning, are focused almost entirely on the problem of inscrutability. While such techniques could allow a machine learning system to comply with existing law, doing so may not help if the goal is to assess whether the basis for decision-making is normatively defensible.

In most cases, intuition serves as the unacknowledged bridge between a descriptive account and a normative evaluation. But because machine learning is often valued for its ability to uncover statistical relationships that defy intuition, relying on intuition is not a satisfying approach. This Article thus argues for other mechanisms for normative evaluation. To know why the rules are what they are, one must seek explanations of the process behind a model’s development, not just explanations of the model itself.


The 2018 PPPM Student Paper Winner is:

Diffusion of User Tracking Data in the Online Advertising Ecosystem

by Muhammad Ahmad Bashir and Christo Wilson, Northeastern University

Abstract:

There are two trends that are currently reshaping the online display advertising industry. First, the amount and precision of data that is being collected by Advertising and Analytics (A&A) companies about users as they browse the web is increasing. Second, there is a transition underway from “ad networks” to “ad exchanges”, where advertisers bid on “impressions” (empty advertising slots on websites) being sold in Real Time Bidding (RTB) auctions. The rise of RTB has forced A&A companies to collaborate with one another, in order to exchange data about users and facilitate bidding on impressions.

These trends have fundamental implications for users’ online privacy. It is no longer sufficient to view each A&A company, and the data it collects, in isolation. Instead, when a given user is observed by a single A&A company, that observation may be shared, in real time, with hundreds of other A&A companies within RTB auctions.
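
To make concrete why one observation can spread so widely, the toy sketch below (our illustration, not the paper’s system) models an ad exchange that broadcasts each bid request, user identifier included, to every registered A&A company and settles at the second-highest bid; every company that receives the request has observed the user, whether or not it wins. The class names, company names, and bid values are made up.

    # Toy RTB auction: the exchange broadcasts the bid request to all registered
    # bidders, so each of them observes the (user, site) pair even if it loses.
    import random

    class AdCompany:
        def __init__(self, name):
            self.name = name
            self.observations = set()

        def bid(self, user_id, site):
            return random.uniform(0.1, 2.0)  # stand-in for a real bidding model

    class Exchange:
        def __init__(self, bidders):
            self.bidders = bidders  # registered A&A companies

        def run_auction(self, user_id, site):
            bids = {}
            for bidder in self.bidders:
                bidder.observations.add((user_id, site))  # observation happens on receipt
                bids[bidder.name] = bidder.bid(user_id, site)
            ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
            winner = ranked[0][0]
            clearing_price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
            return winner, clearing_price  # second-price settlement

    companies = [AdCompany(f"aa_company_{i}") for i in range(5)]
    exchange = Exchange(companies)
    exchange.run_auction(user_id="user_42", site="news.example")
    print({c.name: len(c.observations) for c in companies})  # every bidder saw the user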

To understand the impact of RTB on users’ privacy, we propose a new model of the online advertising ecosystem called an Interaction Graph. This graph captures the business relationships between A&A companies, and allows us to model how tracking data is shared between companies. Using our Interaction Graph model, we simulate browsing behavior to understand how much of a typical web user’s browsing history can be tracked by A&A companies. We find that 52 A&A companies are each able to observe 91% of an average user’s browsing history, under modest assumptions about data sharing in RTB auctions. 636 A&A companies are able to observe at least 50% of an average user’s browsing history. Even under very strict simulation assumptions, the top 10 A&A companies still observe 89-99% of an average user’s browsing history.

Additionally, we investigate the effectiveness of several tracker-blocking strategies, including those implemented by popular privacy-enhancing browser extensions. We find that AdBlock Plus (the world’s most popular ad-blocking browser extension) is ineffective at protecting users’ privacy because major ad exchanges are whitelisted under the Acceptable Ads program. Of the extensions we evaluated, Disconnect blocks the most information flows to A&A companies. However, even with strong blocking, major A&A companies still observe 40-80% of an average user’s browsing history.
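
The coverage and blocking figures above come from simulations over the paper’s Interaction Graph; the sketch below is a heavily simplified stand-in for that idea, assuming a tiny made-up graph and a one-hop sharing rule: a company observes a page either by being embedded on it or by receiving the observation from a connected partner, and a blocking extension is modelled as removing companies before re-running the simulation. The graph, tracker placements, and the coverage helper are illustrative assumptions, not the paper’s data or methodology.

    # Simplified stand-in for coverage analysis on an "interaction graph":
    # nodes are A&A companies, edges are relationships over which an embedded
    # company shares an observation one hop (directed, for simplicity).
    from collections import defaultdict

    edges = {
        "exchange_a": {"dsp_1", "dsp_2", "dsp_3"},
        "exchange_b": {"dsp_2", "dsp_4"},
    }
    trackers_on_site = {  # which companies are directly embedded on each page
        "news.example": {"exchange_a"},
        "shop.example": {"exchange_b", "dsp_1"},
        "social.example": {"exchange_a", "exchange_b"},
    }
    browsing_history = ["news.example", "shop.example", "social.example"]

    def coverage(history, placements, graph, blocked=frozenset()):
        """Fraction of the history each unblocked company observes, with one-hop sharing."""
        seen = defaultdict(set)
        for site in history:
            for company in placements.get(site, set()) - blocked:
                seen[company].add(site)
                for partner in graph.get(company, set()) - blocked:
                    seen[partner].add(site)  # shared via the bid request
        return {c: len(sites) / len(history) for c, sites in seen.items()}

    print("No blocking:     ", coverage(browsing_history, trackers_on_site, edges))
    print("Block exchange_a:", coverage(browsing_history, trackers_on_site, edges,
                                        blocked={"exchange_a"}))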

Read more about the winners in the Future of Privacy Forum’s December 17, 2018 Press Release on the Annual Privacy Papers for Policymakers Award.
