The winners of the 2017 PPPM Award are:
The concept of privacy in “public” information or acts is a perennial topic for debate. It has given privacy law fits. People struggle to reconcile the notion of protecting information that has been made public with traditional accounts of privacy. As a result, successfully labeling information as public often results in a free pass for surveillance and personal data practices. It has also given birth to a significant and persistent misconception—that public information is an established and objective concept.
In this article, I argue that the “no privacy in public” justification is misguided because nobody knows what “public” even means. It has no set definition in law or policy. This means that appeals to the public nature of information and contexts to justify data and surveillance practices are often just guesswork. Is the criterion for determining publicness whether information was hypothetically accessible to anyone? Or is public information anything that’s controlled, designated, or released by state actors? Or maybe what’s public is simply everything that’s “not private”?
The main thesis of this article is that if the concept of “public” is going to shape people’s social and legal obligations, its meaning should not be assumed. Law and society must recognize that labeling something as public is both consequential and value-laden. To move forward, we should focus on the values we want to serve, the relationships and outcomes we want to foster, and the problems we want to avoid.
by Craig Konnoth, Associate Professor of Law, Colorado Law, University of Colorado, Boulder
by Karen Levy, Assistant Professor, Department of Information Science at Cornell University; and Solon Barocas, Assistant Professor in the Department of Information Science at Cornell University
by Paul M. Schwartz, Jefferson E. Peyser Professor of Law, Berkeley Law School; and Karl-Nikolaus Peifer, Director of the Institute for Media Law and Communications Law of the University of Cologne and Director of the Institute for Broadcasting Law at the University of Cologne
International flows of personal information are more significant than ever, but differences in transatlantic data privacy law imperil this data trade. The resulting policy debate has led the EU to set strict limits on transfers of personal data to any non-EU country—including the United States—that lacks sufficient privacy protections. Bridging the transatlantic data divide is therefore a matter of the greatest significance.
In exploring this issue, this Article analyzes the respective legal identities constructed around data privacy in the EU and the United States. It identifies profound differences in the two systems’ images of the individual as bearer of legal interests. The EU has created a privacy culture around “rights talk” that protects its “data subjects.” In the EU, moreover, rights talk forms a critical part of the postwar European project of creating the identity of a European citizen. In the United States, in contrast, the focus is on a “marketplace discourse” about personal information and the safeguarding of “privacy consumers.” In the United States, data privacy law focuses on protecting consumers in a data marketplace.
This Article uses its models of rights talk and marketplace discourse to analyze how the EU and United States protect their respective data subjects and privacy consumers. Although the differences are great, there is still a path forward. A new set of institutions and processes can play a central role in developing mutually acceptable standards of data privacy. The key documents in this regard are the General Data Protection Regulation, an EU-wide standard that becomes binding in 2018, and the Privacy Shield, an EU–U.S. treaty signed in 2016. These legal standards require regular interactions between the EU and United States and create numerous points for harmonization, coordination, and cooperation. The GDPR and Privacy Shield also establish new kinds of governmental networks to resolve conflicts. The future of international data privacy law rests on the development of new understandings of privacy within these innovative structures.
The 2017 PPPM Honorable Mentions are:
- The Idea of ‘Emergent Properties’ in Data Privacy: Towards a Holistic Approach, by Samson Y. Esayas, Faculty of Law, University of Oslo, Norwegian Research Center for Computers and Law
‘The whole is more than the sum of its parts.’
This article applies lessons from the concept of ‘emergent properties’ in systems thinking to data privacy law. This concept, rooted in the Aristotelian dictum ‘the whole is more than the sum of its parts’, where the ‘whole’ represents the ‘emergent property’, allows systems engineers to look beyond the properties of individual components of a system and understand the system as a single complex. Applying this concept, the article argues that the current EU data privacy rules focus on individual processing activity based on a specific and legitimate purpose, with little or no attention to the totality of the processing activities – i.e. the whole – based on separate purposes. This implies that when an entity processes personal data for multiple purposes, each processing must comply with the data privacy principles separately, in light of the specific purpose and the relevant legal basis.
This (atomized) approach is premised on two underlying assumptions: (i) distinguishing among different processing activities and relating every piece of personal data to a particular processing is possible; and (ii) if each processing is compliant, the data privacy rights of individuals are not endangered.
However, these assumptions are untenable in an era where companies process personal data for a panoply of purposes, where almost all processing generates personal data and where data are combined across several processing activities. These practices blur the lines between different processing activities and complicate attributing every piece of data to a particular processing. Moreover, when entities engage in these practices, there are privacy interests independent of and/or in combination with the individual processing activities. Informed by the discussion about emergent property, the article calls for a holistic approach with enhanced responsibility for certain actors based on the totality of the processing activities and data aggregation practices.
- Algorithmic Jim Crow by Margaret Hu, Associate Professor of Law, Washington & Lee Law
This Article contends that current immigration- and security-related vetting protocols risk promulgating an algorithmically driven form of Jim Crow. What has been referred to as “extreme vetting” utilizes newly developed big data vetting methods and algorithm-dependent database screening tools deployed by the U.S. Department of Homeland Security. Under the “separate but equal” discrimination of a historic Jim Crow regime, state laws required mandatory separation and discrimination on the front end, while purportedly establishing equality on the back end. In contrast, an Algorithmic Jim Crow regime allows for “equal but separate” discrimination. Under Algorithmic Jim Crow, equal vetting and database screening of all citizens and noncitizens will make it appear that fairness and equality principles are preserved on the front end. Algorithmic Jim Crow, however, will enable discrimination on the back end in the form of designing, interpreting, and acting upon vetting and screening systems in ways that result in a disparate impact.
Currently, security-related vetting protocols often begin with an algorithm-anchored technique of biometric identification—for example, the collection and database screening of scanned fingerprints and irises, digital photographs for facial recognition technology, and DNA. Immigration reform efforts, however, call for the biometric data collection of the entire citizenry in the United States to enhance border security efforts and to increase the accuracy of the algorithmic screening process. Newly developed big data vetting tools fuse biometric data with biographic data and Internet/Social Media profiling to algorithmically assess risk. This Article concludes that those individuals and groups disparately impacted by mandatory vetting and screening protocols will largely fall within traditional classifications—race, color, ethnicity, national origin, gender, and religion. Disparate impact consequences may survive judicial review if based upon threat risk assessments, terroristic classifications, data screening results deemed suspect, and characteristics establishing anomalous data and perceived-foreignness or dangerousness data—non-protected categories that fall outside of the current equal protection framework. Thus, Algorithmic Jim Crow will require an evolution of equality law.
- Public Values, Private Infrastructure and the Internet of Things: The Case of Automobiles, by Deirdre K. Mulligan, Associate Professor in the School of Information at UC Berkeley; and Kenneth A. Bamberger, Professor of Law at the University of California, Berkeley, and co-director of the Berkeley Center for Law and Technology
In July 2015, two researchers gained control of a Jeep Cherokee by hacking wirelessly into its dashboard connectivity system. The resulting recall of over 1.4 million Fiat Chrysler vehicles marked the first-ever security-related automobile recall. In its wake, other researchers demonstrated the capacity for remote takeovers of automobiles. By September, it became public that GM had initiated a quiet over-the-air (OTA) update program to fix security vulnerabilities in millions of its vehicles.
These incidents reveal the critical security issues of modern automobiles, so-called “connected cars,” and other Internet of Things (IoT) devices, and underscore the importance of regulatory structures that incentivize greater attention to security during production and the management of security vulnerabilities discovered after connected devices are in circulation. In particular, they highlight the importance of incentivizing the development of OTA update systems to support safety- and security-critical updates that patch vulnerabilities. OTA update systems are essential to IoT security and to the health and safety of the humans who rely on these devices.
Today’s connected cars can have more than 100 million lines of software code, and this code base is growing. This code plays a significant role in compliance with regulatory obligations, and a crucial role in automotive safety and security systems. Embedded sensors and algorithms trigger and modulate airbag deployment, seatbelt engagement, anti-skid systems, and anti-lock brakes; identify the size, weight, and position of occupants to inform airbag and seatbelt behavior; and inform parking assistance systems, among others. Software’s role in automotive safety is growing, making the assumptions and calibrations of the code governing critical safety systems, as well as its security, increasingly important to saving lives. Addressing the vulnerabilities in automotive code — such as the ones exploited by the Jeep hackers — and specifically the capacity for remote exploits, is an essential element of the future of automotive safety and security.
The design of OTA update systems implicates crucial issues of governance, and the balance of a variety of values — both public and private. Developing systems intended to ensure automotive safety and security involves both choosing among competing visions of security, and determining how to protect other values in the process. The articulation of cybersecurity goals, and the way they are balanced against other values, must occur in a public participatory process beforehand that includes relevant public and private stakeholders.
This paper sets forth principles that should inform the agenda of regulatory agencies such as the National Highway Traffic Safety Administration (NHTSA) that play an essential role in ensuring that the IoT, and specifically the OTA update functionality it requires, responds to relevant cybersecurity and safety risks while attending to other public values. It explains the importance of OTA security and safety update functionality in the automotive industry, and barriers to its development. It explores challenges posed by the interaction between OTA update functionality, consumer protections — including repair rights and privacy — and competition. It proposes a set of principles to guide the regulatory approach to OTA updates, and automobile cybersecurity, in light of these challenges. The principles promote the development of cybersecurity expertise and shared cybersecurity objectives across relevant stakeholders, and ensure that respect for other values, such as competition and privacy, is built into the design of OTA update technology. In conclusion, we suggest reforms to existing efforts to improve automotive cybersecurity.
The 2017 PPPM Student Paper Honor is:
- The Market’s Law of Privacy: Case Studies in Privacy/Security Adoption, by Chetan Gupta, Associate, Baker McKenzie
This paper examines the hypothesis that it may be possible for individual actors in a marketplace to drive the adoption of particular privacy and security standards. It aims to explore the diffusion of privacy and security technologies in the marketplace. Using HTTPS, Two-Factor Authentication, and End-to-End Encryption as case studies, it tries to ascertain which factors are responsible for successful diffusion that improves the privacy of a large number of users. Lastly, it explores whether the FTC may view a widely diffused standard as a necessary security feature for all actors in a particular industry.
Based on the case studies chosen, the paper concludes that while single actors/groups often do drive the adoption of a standard, they tend to be significant players in the industry or otherwise well positioned to drive adoption and diffusion. The openness of a new standard can also contribute significantly to its success. When a privacy standard becomes industry dominant on account of a major actor, the cost to other market participants appears not to affect its diffusion.
A further conclusion is that diffusion is easiest in consumer-facing products when it involves little to no inconvenience to consumers and is carried out at the back end, yet results in tangible and visible benefits to consumers, who can then question why other actors in that space are not implementing it. Actors who do not adopt the standard may also face reputational risks on account of non-implementation, and lose out on market share.