Privacy Papers 2017

The winners of the 2017 Privacy Papers for Policymakers (PPPM) Award are:

Artificial Intelligence Policy: A Primer and Roadmap
by Ryan Calo, Associate Professor of Law, University of Washington
Abstract:
Talk of artificial intelligence is everywhere. People marvel at the capacity of machines to translate any language and master any game. Others condemn the use of secret algorithms to sentence criminal defendants or recoil at the prospect of machines gunning for blue-, pink-, and white-collar jobs. Some worry aloud that artificial intelligence will be humankind’s “final invention.”
This essay, prepared in connection with UC Davis Law Review’s 50th anniversary symposium, explains why AI is suddenly on everyone’s mind and provides a roadmap to the major policy questions AI raises. The essay is designed to help policymakers, investors, technologists, scholars, and students understand the contemporary policy environment around AI at least well enough to initiate their own exploration.
Topics covered include: Justice and equity; Use of force; Safety and certification; Privacy (including data parity); and Taxation and displacement of labor. In addition to these topics, the essay will touch briefly on a selection of broader systemic questions: Institutional configuration and expertise; Investment and procurement; Removing hurdles to accountability; and Correcting mental models of AI.


The Public Information Fallacy
by Woodrow Hartzog, Professor of Law and Computer Science, Northeastern University
Abstract:

The concept of privacy in “public” information or acts is a perennial topic for debate. It has given privacy law fits. People struggle to reconcile the notion of protecting information that has been made public with traditional accounts of privacy. As a result, successfully labeling information as public often results in a free pass for surveillance and personal data practices. It has also given birth to a significant and persistent misconception—that public information is an established and objective concept.
In this article, I argue that the “no privacy in public” justification is misguided because nobody knows what “public” even means. It has no set definition in law or policy. This means that appeals to the public nature of information and contexts in order to justify data and surveillance practices are often just guesswork. Is the criterion for determining publicness whether information was hypothetically accessible to anyone? Or is public information anything that’s controlled, designated, or released by state actors? Or maybe what’s public is simply everything that’s “not private”?
The main thesis of this article is that if the concept of “public” is going to shape people’s social and legal obligations, its meaning should not be assumed. Law and society must recognize that labeling something as public is both consequential and value-laden. To move forward, we should focus on the values we want to serve, the relationships and outcomes we want to foster, and the problems we want to avoid.

The Undue Influence of Surveillance Technology Companies on Policing
by Elizabeth E. Joh, Professor of Law, U.C. Davis School of Law
Abstract:

Conventional wisdom assumes that the police are in control of their investigative tools. But with surveillance technologies, this is not always the case. Increasingly, police departments are consumers of surveillance technologies that are created, sold, and controlled by private companies. These surveillance technology companies exercise an undue influence over the police today in ways that aren’t widely acknowledged, but that have enormous consequences for civil liberties and police oversight. Three seemingly unrelated examples—stingray cellphone surveillance, body cameras, and big data software—demonstrate varieties of this undue influence. The companies that provide these technologies act out of private self-interest, but their decisions have considerable public impact. The harms of this private influence include the distortion of Fourth Amendment law, the undermining of accountability by design, and the erosion of transparency norms. This Essay demonstrates the increasing degree to which surveillance technology vendors can guide, shape, and limit policing in ways that are not widely recognized. Any vision of increased police accountability today cannot be complete without consideration of the role surveillance technology companies play.

Health Information Equity 
by Craig Konnoth, Associate Professor of Law, Colorado Law, University of Colorado, Boulder
Abstract:
In the last few years, the health information of numerous Americans has been collected and used for follow-on, secondary research. This research studies correlations between medical conditions, genetic or behavioral profiles, and treatments, to customize medical care to specific individuals. Recent federal legislation and regulations make it easier to collect and use the data of the low-income, unwell, and elderly for this purpose. This would impose disproportionate security and autonomy burdens on these individuals. Those who are well-off and pay out of pocket could effectively exempt their data from the publicly available information pot. This presents a problem that modern research ethics is not well equipped to address. Where it considers equity at all, it emphasizes underinclusion and the disproportionate distribution of research benefits, rather than overinclusion and disproportionate distribution of burdens.
I rely on basic intuitions of reciprocity and fair play as well as broader accounts of social and political equity to show that equity in burden distribution is a key aspect of the ethics of secondary research. To satisfy its demands, we can use three sets of regulatory and policy levers. First, information collection for public research should expand beyond groups having the lowest welfare. Next, data analyses and queries should draw on data pools more equitably. Finally, we must create an entity to coordinate these solutions using existing statutory authority if possible. Considering health information collection at a systematic level—rather than that of individual clinical encounters—gives us insight into the broader role that health information plays in forming personhood, citizenship, and community.


Designing Against Discrimination in Online Markets
by Karen Levy, Assistant Professor, Department of Information Science, Cornell University; and Solon Barocas, Assistant Professor, Department of Information Science, Cornell University
Abstract:
Platforms that connect users to one another have flourished online in domains as diverse as transportation, employment, dating, and housing. When users interact on these platforms, their behavior may be influenced by preexisting biases, including tendencies to discriminate along the lines of race, gender, and other protected characteristics. In aggregate, such user behavior may result in systematic inequities in the treatment of different groups. While there is uncertainty about whether platforms bear legal liability for the discriminatory conduct of their users, platforms necessarily exercise a great deal of control over how users’ encounters are structured—including who is matched with whom for various forms of exchange, what information users have about one another during their interactions, and how indicators of reliability and reputation are made salient, among many other features. Platforms cannot divest themselves of this power; even choices made without explicit regard for discrimination can affect how vulnerable users are to bias. This Article analyzes ten categories of design and policy choices through which platforms may make themselves more or less conducive to discrimination by users. In so doing, it offers a comprehensive account of the complex ways platforms’ design choices might perpetuate, exacerbate, or alleviate discrimination in the contemporary economy.
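To make one of these design levers concrete, here is a minimal Python sketch (a hypothetical illustration, not the authors’ code; all names and fields are invented) of how a platform might surface reputation signals while withholding identity cues, such as a name or photo, until after a host commits to a booking:

```python
from dataclasses import dataclass

@dataclass
class GuestProfile:
    name: str
    photo_url: str
    review_count: int
    verified_id: bool

def render_for_host(profile: GuestProfile, booking_accepted: bool) -> dict:
    """Surface reputation signals up front; withhold identity cues
    (name, photo) that invite bias until after the host commits."""
    visible = {
        "review_count": profile.review_count,
        "verified_id": profile.verified_id,
    }
    if booking_accepted:
        visible["name"] = profile.name
        visible["photo_url"] = profile.photo_url
    return visible

guest = GuestProfile("J. Doe", "https://example.com/p.jpg", 12, True)
print(render_for_host(guest, booking_accepted=False))  # no identity cues yet
```

The point is structural: which fields are revealed, and when, is a choice the platform cannot avoid making, which is exactly why the Article treats design as a site of responsibility.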


Transatlantic Data Privacy Law
by Paul M. Schwartz, Jefferson E. Peyser Professor of Law, Berkeley Law School; and Karl-Nikolaus Peifer, Director of the Institute for Media Law and Communications Law of the University of Cologne and Director of the Institute for Broadcasting Law at the University of Cologne
Abstract:
International flows of personal information are more significant than ever, but differences in transatlantic data privacy law imperil this data trade. The resulting policy debate has led the EU to set strict limits on transfers of personal data to any non-EU country—including the United States—that lacks sufficient privacy protections. Bridging the transatlantic data divide is therefore a matter of the greatest significance.
In exploring this issue, this Article analyzes the respective legal identities constructed around data privacy in the EU and the United States. It identifies profound differences in the two systems’ images of the individual as bearer of legal interests. The EU has created a privacy culture around “rights talk” that protects its “data subjects.” In the EU, moreover, rights talk forms a critical part of the postwar European project of creating the identity of a European citizen. The United States, in contrast, relies on a “marketplace discourse” about personal information: its data privacy law focuses on safeguarding “privacy consumers” in a data marketplace.
This Article uses its models of rights talk and marketplace discourse to analyze how the EU and United States protect their respective data subjects and privacy consumers. Although the differences are great, there is still a path forward. A new set of institutions and processes can play a central role in developing mutually acceptable standards of data privacy. The key documents in this regard are the General Data Protection Regulation, an EU-wide standard that becomes binding in 2018, and the Privacy Shield, an EU–U.S. treaty signed in 2016. These legal standards require regular interactions between the EU and United States and create numerous points for harmonization, coordination, and cooperation. The GDPR and Privacy Shield also establish new kinds of governmental networks to resolve conflicts. The future of international data privacy law rests on the development of new understandings of privacy within these innovative structures.


The 2017 PPPM Honorable Mentions are:

Abstract:

‘The whole is more than the sum of its parts.’
This article applies lessons from the concept of ‘emergent properties’ in systems thinking to data privacy law. This concept, rooted in the Aristotelian dictum ‘the whole is more than the sum of its parts’, where the ‘whole’ represents the ‘emergent property’, allows systems engineers to look beyond the properties of individual components of a system and understand the system as a single complex. Applying this concept, the article argues that the current EU data privacy rules focus on individual processing activity based on a specific and legitimate purpose, with little or no attention to the totality of the processing activities – i.e. the whole – based on separate purposes. This implies that when an entity processes personal data for multiple purposes, each processing must comply with the data privacy principles separately, in light of the specific purpose and the relevant legal basis.
This (atomized) approach is premised on two underlying assumptions: (i) distinguishing among different processing activities and relating every piece of personal data to a particular processing is possible; and (ii) if each processing is compliant, the data privacy rights of individuals are not endangered.
However, these assumptions are untenable in an era where companies process personal data for a panoply of purposes, where almost all processing generates personal data and where data are combined across several processing activities. These practices blur the lines between different processing activities and complicate attributing every piece of data to a particular processing. Moreover, when entities engage in these practices, there are privacy interests independent of and/or in combination with the individual processing activities. Informed by the discussion about emergent property, the article calls for a holistic approach with enhanced responsibility for certain actors based on the totality of the processing activities and data aggregation practices.
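A minimal Python sketch of the aggregation problem the article describes (the datasets and field names here are invented for illustration): two datasets that are each collected for a narrow, legitimate purpose can, once joined, yield an emergent profile that neither processing activity would have produced alone.

```python
# Hypothetical illustration: two separately "compliant" datasets,
# each collected for its own narrow purpose.
billing = [  # purpose: invoicing
    {"user_id": 1, "postcode": "10115"},
    {"user_id": 2, "postcode": "80331"},
]
app_logs = [  # purpose: service improvement
    {"user_id": 1, "feature": "glucose_tracker", "opens_per_day": 14},
    {"user_id": 2, "feature": "podcast_player", "opens_per_day": 3},
]

# Joining across processing activities creates an emergent profile
# (for example, a likely health condition tied to a location) that no
# single purpose-limited processing would have produced on its own.
profiles = {}
for row in billing:
    profiles[row["user_id"]] = {"postcode": row["postcode"]}
for row in app_logs:
    profiles[row["user_id"]]["inferred_interest"] = row["feature"]

print(profiles)
# {1: {'postcode': '10115', 'inferred_interest': 'glucose_tracker'}, ...}
```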

Abstract:

This Article contends that current immigration- and security-related vetting protocols risk promulgating an algorithmically driven form of Jim Crow. What has been referred to as “extreme vetting” utilizes newly developed big data vetting methods and algorithm-dependent database screening tools deployed by the U.S. Department of Homeland Security. Under the “separate but equal” discrimination of a historic Jim Crow regime, state laws required mandatory separation and discrimination on the front end, while purportedly establishing equality on the back end. In contrast, an Algorithmic Jim Crow regime allows for “equal but separate” discrimination. Under Algorithmic Jim Crow, equal vetting and database screening of all citizens and noncitizens will make it appear that fairness and equality principles are preserved on the front end. Algorithmic Jim Crow, however, will enable discrimination on the back end in the form of designing, interpreting, and acting upon vetting and screening systems in ways that result in a disparate impact.
Currently, security-related vetting protocols often begin with an algorithm-anchored technique of biometric identification—for example, the collection and database screening of scanned fingerprints and irises, digital photographs for facial recognition technology, and DNA. Immigration reform efforts, however, call for the biometric data collection of the entire citizenry in the United States to enhance border security efforts and to increase the accuracy of the algorithmic screening process. Newly developed big data vetting tools fuse biometric data with biographic data and Internet/Social Media profiling to algorithmically assess risk. This Article concludes that those individuals and groups disparately impacted by mandatory vetting and screening protocols will largely fall within traditional classifications—race, color, ethnicity, national origin, gender, and religion. Disparate impact consequences may survive judicial review if based upon threat risk assessments, terroristic classifications, data screening results deemed suspect, and characteristics establishing anomalous data and perceived-foreignness or dangerousness data—non-protected categories that fall outside of the current equal protection framework. Thus, Algorithmic Jim Crow will require an evolution of equality law.
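The back-end disparity the Article describes can be made concrete with a small calculation. The Python sketch below uses hypothetical numbers and the “four-fifths rule,” a heuristic borrowed from US employment-discrimination practice rather than from this Article, to show how a screen applied equally to everyone can still clear groups at rates that would be flagged as disparate impact.

```python
# Hypothetical screening outcomes: everyone is vetted by the same
# algorithm ("equal" on the front end), but clearance rates differ.
outcomes = {
    # group: (cleared, screened)
    "group_a": (960, 1000),
    "group_b": (700, 1000),
}

rates = {g: cleared / screened for g, (cleared, screened) in outcomes.items()}
benchmark = max(rates.values())

for group, rate in rates.items():
    ratio = rate / benchmark
    # The four-fifths rule treats a selection rate below 80% of the
    # most favored group's rate as prima facie evidence of disparate impact.
    flag = "potential disparate impact" if ratio < 0.8 else "ok"
    print(f"{group}: clearance {rate:.0%}, ratio {ratio:.2f} -> {flag}")
```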

Abstract:

In July 2015, two researchers gained control of a Jeep Cherokee by hacking wirelessly into its dashboard connectivity system. The resulting recall of over 1.4 million Fiat Chrysler vehicles marked the first-ever security-related automobile recall. In its wake, other researchers demonstrated the capacity for remote takeovers of automobiles. By September, it became public that GM had initiated a quiet over-the-air (OTA) update program to fix security vulnerabilities in millions of its vehicles.
These incidents reveal the critical security issues of modern automobiles, so-called “connected cars,” and other Internet of Things (IoT) devices, and underscore the importance of regulatory structures that incentivize greater attention to security during production, and to the management of security vulnerabilities discovered after connected devices are in circulation. In particular, they highlight the importance of incentivizing the development of OTA update systems to deliver safety- and security-critical updates that patch vulnerabilities. OTA update systems are essential to IoT security and to the health and safety of the humans who rely on it.
Today’s connected cars can have more than 100 million lines of software code, and this code base is growing. This code plays a significant role in compliance with regulatory obligations, and a crucial role in automotive safety and security systems. Embedded sensors and algorithms trigger and modulate airbag deployment, seatbelt engagement, anti-skid systems, and anti-lock brakes; identify the size, weight, and position of occupants to inform airbag and seatbelt behavior; and inform parking-assistance systems, among others. Software’s role in automotive safety is growing, making the assumptions and calibrations of the code governing critical safety systems, as well as its security, increasingly important to saving lives. Addressing the vulnerabilities in automotive code — such as the ones exploited by the Jeep hackers — and specifically the capacity for remote exploits, is an essential element of the future of automotive safety and security.
The design of OTA update systems implicates crucial issues of governance, and the balance of a variety of values — both public and private. Developing systems intended to ensure automotive safety and security involves both choosing among competing visions of security, and determining how to protect other values in the process. The articulation of cybersecurity goals, and the way they are balanced against other values, must occur in a public participatory process beforehand that includes relevant public and private stakeholders.
This paper sets forth principles that should inform the agenda of regulatory agencies such as the National Highway Traffic Safety Administration (NHTSA) that play an essential role in ensuring that the IoT, and specifically the OTA update functionality it requires, responds to relevant cybersecurity and safety risks while attending to other public values. It explains the importance of OTA security and safety update functionality in the automotive industry, and barriers to its development. It explores challenges posed by the interaction between OTA update functionality, consumer protections — including repair rights and privacy — and competition. It proposes a set of principles to guide the regulatory approach to OTA updates, and automobile cybersecurity, in light of these challenges. The principles promote the development of cybersecurity expertise and shared cybersecurity objectives across relevant stakeholders, and ensure that respect for other values, such as competition and privacy, is built into the design of OTA update technology. In conclusion, we suggest reforms to existing efforts to improve automotive cybersecurity.
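As a concrete sketch of the kind of safeguard an OTA update system needs, the Python example below (hypothetical, not any manufacturer’s actual protocol; it uses the third-party “cryptography” package, and both keys are generated in one process purely for the demo) has the vehicle verify a signature over the firmware image against a manufacturer public key before installing anything.

```python
# Minimal sketch of signed OTA update verification (hypothetical names,
# not any vendor's real protocol). Requires the 'cryptography' package.
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)
from cryptography.exceptions import InvalidSignature

# Manufacturer side: sign the firmware image with a private key that,
# in practice, never leaves the vendor's signing infrastructure.
signing_key = Ed25519PrivateKey.generate()
firmware_image = b"...new ECU firmware bytes..."
signature = signing_key.sign(firmware_image)

# Vehicle side: the car ships with only the public key baked in and
# refuses to install any image whose signature does not verify.
public_key = signing_key.public_key()

def install_update(image: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, image)  # raises InvalidSignature on tamper
    except InvalidSignature:
        return False  # reject: image forged or modified in transit
    # ...flash the image, then confirm boot from the new partition...
    return True

assert install_update(firmware_image, signature)
assert not install_update(firmware_image + b"tampered", signature)
```

In practice this check sits inside a larger chain (key rotation, rollback protection, staged deployment), which is precisely the design surface the paper argues should be shaped through public, participatory processes.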


The 2017 PPPM Student Paper Honor is:

Abstract:

This paper examines the hypothesis that it may be possible for individual actors in a marketplace to drive the adoption of particular privacy and security standards. It aims to explore the diffusion of privacy and security technologies in the marketplace. Using HTTPS, Two-Factor Authentication, and End-to-End Encryption as case studies, it tries to ascertain which factors are responsible for successful diffusion that improves the privacy of a large number of users. Lastly, it explores whether the FTC may view a widely diffused standard as a necessary security feature for all actors in a particular industry.
Based on the case studies chosen, the paper concludes that while single actors/groups often do drive the adoption of a standard, they tend to be significant players in the industry or otherwise well positioned to drive adoption and diffusion. The openness of a new standard can also contribute significantly to its success. When a privacy standard becomes industry dominant on account of a major actor, the cost to other market participants appears not to affect its diffusion.
A further conclusion is that diffusion is easiest in consumer-facing products when it involves little to no inconvenience to consumers and is carried out at the back end, yet results in tangible and visible benefits to consumers, who can then question why other actors in that space are not implementing it. Actors who do not adopt the standard may also face reputational risks on account of non-implementation, and lose out on market share.
