John Verdi Featured in WTOP Story About Connected Consumer Tech

Vice President of Policy John Verdi was featured in a WTOP story about the privacy implications of popular consumer tech holiday gifts. He offered advice on adjusting the privacy settings of smartphones, tablets, connected toys, and other devices. Read more at WTOP.

FPF joins 14 other organizations to urge ED and the FTC to provide guidance on the intersection of COPPA and FERPA

In December 2017, the U.S. Department of Education and the Federal Trade Commission hosted the workshop, “Student Privacy and Ed Tech.” The workshop brought together a wide range of stakeholders interested in protecting student privacy, with speakers representing districts, companies, and advocates. Almost all participants agreed that more clarity is needed on the requirements of the Children’s Online Privacy Protection Act (COPPA) and the Family Educational Rights and Privacy Act (FERPA). However, more than a year later, ED and the FTC have not yet provided that guidance.

This week, the Future of Privacy Forum joined 14 other organizations – groups representing education, business, and consumer advocates – to send a letter to the U.S. Department of Education and Federal Trade Commission urging them to provide additional guidance on the intersection of COPPA and FERPA.

 

A New Year’s Resolution For Your New Devices


Still thinking about your New Year’s resolutions? If so, the Future of Privacy Forum has a practical suggestion: Get to know the privacy implications of your new electronics. Early in the New Year, take a few moments to set up privacy features so you can be comfortable with how your personal data is collected, used and shared.

Here are some of this year’s hot electronics, the information they collect and what you can do to exert some control over how they use your data.

SMART TVS – Understand how to limit the sharing of data about what you watch.

Smart TVs connect to the Internet to allow users to access streaming video services (such as Netflix or Hulu), or other online media or entertainment, such as music, on-demand video, and web browsers. Almost all TVs on the market today are “smart” – and other devices can be purchased, at relatively small cost, to connect to a regular TV and enable certain video streaming services.

Smart TVs collect a lot of data about your viewing habits, and that information may be shared with companies other than the device manufacturer or connected apps. For example, some advertising companies buy viewing data and add it to detailed profiles that also include offline data, like spending patterns. The information in that profile may determine what ads you are served on your TV, or on other devices, like your phone.

VOICE ASSISTANTS – Learn how to manage audio recordings.

Voice assistants, often called smart speakers, offer an amazing amount of information; your wish is their command. They also record a lot of information. Most of these devices are voice or speech-enabled, meaning that they use microphones to detect a certain “wake phrase,” but do not activate and begin recording (and sending information) until they hear that phrase. Most devices keep audio recordings of your commands so they can get better at recognizing your voice over time.

WEARABLE TECH – Understand whether the app shares your health or location data.

Whether it’s a smart watch, a health monitor or an item of clothing, wearable tech is increasingly popular. Lots of wearable tech monitors your health or fitness, but unless it was prescribed by a health professional, it’s probably not covered by the strict privacy rules under the Health Insurance Portability and Accountability Act (HIPAA). Most wearables connect with an app, some of which share user information in unexpected ways. For example, some track a user’s location all the time and share that information with third parties.

CONSUMER GENETIC TESTS – Check the company’s privacy practices before you buy.

Genetic tests provide families with fascinating information about their heritage, but they also involve very sensitive personal information. Genetic data can identify risk for future medical conditions, contain unexpected information that could be unsettling, or reveal sensitive information about the test taker’s family members.

Companies in the consumer genetic testing industry worked with the Future of Privacy Forum to develop privacy and data principles. Here are some of the promises made by companies that endorsed FPF’s best practices:

  1. The company should always obtain your consent before sharing your personal genetic data with any third parties.
  2. The company should tell you how long it will keep your genetic data, and whether it will destroy your biological sample if you choose.
  3. The company should tell you if it requires a court order or subpoena before sharing genetic data with the government.
  4. The company should tell you whether it will limit marketing based on your genetic data, and how.

Read the full report, Privacy Best Practices for Consumer Genetic Testing Services.

CONNECTED TOYS – Be aware of the data they may collect.

If you’ve got kids, they may have new connected toys that respond to voice commands, link to an app, or have to be set up using an online account. Although electronics and data processing can create great experiences for kids, toys that connect to the Internet raise concerns about what kind of data is collected from children, how that data is handled, and whether the device itself is secure.

New Year’s Day is a great time to invest a few minutes in your new device’s privacy settings. If you do, you’re less likely to be unpleasantly surprised later by how your data is used.

New FPF Study Documents More Than 150 European Companies Participating in the EU-US Data Transfer Mechanism


EU Companies’ Participation Grew by One Third Over the Past Year

By Jeremy Greenberg

Yesterday, the European Commission published its second annual review of the EU-U.S. Privacy Shield, finding that “the U.S. continues to ensure an adequate level of protection for personal data transferred under the Privacy Shield from the EU to participating companies in the U.S.” The decision preserves a key data transfer agreement, supporting transatlantic trade and ensuring meaningful privacy safeguards for consumers. It is also good news for EU employees and companies, many of whom rely on the agreement to retain and pay staff. The Commission’s review noted a key next step to support the Privacy Shield arrangement – urging the U.S. government to appoint a permanent Ombudsperson by the end of February 2019.

The Future of Privacy Forum conducted a study of the companies enrolled in the EU-U.S. Privacy Shield program and determined that 152 European-headquartered companies are active Privacy Shield participants, up from 114 EU companies last year. These European companies rely on the program to transfer data to their US subsidiaries or to essential vendors that support their business needs.

FPF also found that more than 3,700 companies have signed up for Privacy Shield – a nearly 70% increase in the number of participants from last year.

Leading EU companies that rely on Privacy Shield include:

FPF research also determined that more than 1,150 companies, nearly a third of the total number analyzed, use Privacy Shield to process their human resources data. Inhibiting the flow of HR data between the US and EU could mean delays for EU citizens receiving their paychecks, or a decline in global hiring by US companies. Employees therefore win when Privacy Shield is maintained and grows.

The research identified 152 Privacy Shield companies headquartered or co-headquartered in Europe. This is a conservative estimate of companies that rely on the Privacy Shield framework – FPF staff did not include global companies that have major European offices but are headquartered elsewhere. The 152 companies include some of Europe’s largest and most innovative employers, doing business across a wide range of industries and countries. EU-headquartered firms and major EU offices of global firms depend on the Privacy Shield program so that their related US entities can effectively exchange data for research, to improve products, to pay employees and to serve customers. Given the importance of this mechanism to companies and consumers on both sides of the Atlantic, FPF is pleased that the Privacy Shield arrangement has been preserved and urges the U.S. government to quickly appoint a permanent Ombudsperson at the U.S. State Department.

Methodology:

For the full list of European companies in the Privacy Shield program, or to schedule an interview with Jeremy Greenberg, John Verdi, or Jules Polonetsky, email [email protected].

FPF Releases Guide to Disclosing Information During School Emergencies

FOR IMMEDIATE RELEASE

December 20, 2018

Contact: Amelia Vance, Director of Education Privacy & Policy Counsel, [email protected], (202) 688-4161

Nat Wood, [email protected], (410) 507-7898

FPF Releases Guide to Disclosing Information During School Emergencies

In Blog, FPF Expert Notes School Safety Report “Offers Little Guidance” on Privacy

WASHINGTON, DC – The Future of Privacy Forum released a guide to help school officials understand their ability under the law to share information about students in an emergency. The primary federal student privacy law, the Family Educational Rights and Privacy Act (FERPA), generally requires parental approval before student information is shared, but it allows exceptions during emergencies, including natural disasters, health crises, terrorist threats, and active shootings. The guide explains:

FPF also published a blog post by Sara Collins, Tyler Park, and Amelia Vance of FPF’s Education Privacy Project, reviewing the very limited discussion of privacy issues in the Federal Commission on School Safety report released yesterday. While the report does include some information on acceptable data sharing during an emergency, it does not address how to implement security measures while including appropriate privacy protections. For example, the report recommends the use of “appropriate systems to monitor social media and mechanisms for reporting cyberbullying incidents” but does not mention the privacy implications of such monitoring or appropriate privacy protections, despite FPF’s comments on this issue.

“Unfortunately, the report offers little practical guidance to school officials on how to consider privacy safeguards as they implement programs to monitor threats, harden schools or train personnel,” said the authors. “Privacy doesn’t seem to have been a top concern for the Commission, even though its members heard testimony about ways to have both security and privacy.”

School Safety Report Neglects Privacy Concerns

By Sara Collins, Tyler Park, and Amelia Vance

Yesterday, the Federal Commission on School Safety released a report detailing its conclusions, after holding a series of meetings and hearings in the wake of school shootings such as the one at Marjory Stoneman Douglas High School in Florida on February 14, 2018. Nearly every aspect of the Commission’s report focuses on sharing data and, thus, has privacy implications for students, teachers, and the public.

The Commission’s Privacy Recommendations Were Limited and Unhelpful for Districts Seeking to Balance Privacy and Safety

During the Commission’s deliberative process, FPF provided comments and was invited to testify about those privacy issues. We recommended that the Commission’s report consider the full range of privacy risks and harms, as well as the importance of privacy safeguards, in its efforts to improve school safety. Specifically, we underscored the need for better communication to stakeholders about current privacy laws and the importance of creating “privacy guardrails” in the context of school safety plans, and we asked the Commission to provide districts with guidance on how to implement such guardrails. It was important for the Commission to provide privacy recommendations because districts may not realize that the recommended school safety measures raise serious privacy concerns if implemented improperly; and for districts that do understand the privacy implications, no models or best practices have been provided. While the report recognizes the importance of privacy safeguards, unfortunately it does little to help schools improve safety in a manner that protects students’ privacy.

The report quotes FPF’s John Verdi, who noted during his testimony on July 11 that trust is crucial in the education context, and that “maintaining appropriate safeguards for students’ privacy helps create and maintain trust.” The report also cites the testimony of Jennifer Mathis, of the Bazelon Center for Mental Health Law, who spoke of the importance of HIPAA privacy protections for people with mental health disabilities. The Commission notes that “[w]ithout the assurance of privacy protections, students are less likely to seek help when needed and less likely to engage openly with mental health counselors or other service providers.” The Commission also states, “it is important to incorporate appropriate privacy protections and to comply with privacy laws” but does not elaborate.

Although several sections of the report acknowledge the need for privacy safeguards, the Commission unfortunately offers little guidance—except on acceptable data sharing during emergencies under the federal student privacy law, FERPA—to educators, districts, or states on how to implement security measures while including appropriate privacy protections. This is particularly unfortunate since, throughout the report, the Commission provides specific, useful examples of effective programs at the state and local levels focusing on reporting threats, hardening schools, and training personnel. For example, the report recommends the use of “appropriate systems to monitor social media and mechanisms for reporting cyberbullying incidents” but does not mention the privacy implications of such monitoring or appropriate privacy protections, despite FPF’s comments on this issue. The Commission’s relative neglect of privacy safeguards may indicate that privacy was not a top concern.

A Surprising Call for FERPA “Modernization”

The report articulates ways that schools can share information under FERPA, as it is currently written, to protect students’ safety, noting that the major issue (which FPF identified in our testimony) is that most schools are not aware of FERPA’s flexibility. Unexpectedly, the Commission recommended that Congress revisit FERPA – and this was the only recommendation in the report that clearly called for Congressional action.

The report calls for FERPA revisions in order “to account for changes in technology since its enactment.” A rewrite of FERPA would affect information sharing far more broadly than in the context of school safety, with major implications for the use of data and technology in both K-12 and higher education. The report’s recommendation to revisit FERPA indicates that the Department of Education may plan to actively push for a FERPA revision in Congress in 2019.

No Recommendation for Empirical Research on Root Causes of School Shootings and Effective Prevention Measures

Unfortunately, the Commission did not recommend neutral, expert analysis of empirical data regarding the nature, extent, and leading causes of key privacy and safety risks facing students and schools. FPF’s testimony noted that, as a society, we have imperfect empirical understanding of the causes of school shootings and of measures taken to prevent them. Recommending such research would have been an important step toward improving school safety. Without more research, there is extremely limited evidence that the Commission’s recommended actions will help keep students safe.

Overall, this report is likely to be very useful to schools seeking a fairly comprehensive look at ways, including examples, to keep students safe. However, it seems unlikely that schools would understand from this report that many of the Commission’s recommendations and examples raise major privacy concerns. These concerns are not just about what is allowable or appropriate to share under FERPA; privacy is about more than the law. Schools and communities need resources and advice about what privacy guardrails look like in practice. Models for this are scarce, but the report should have more strongly emphasized the importance of privacy and encouraged districts to think beyond existing law to build privacy guardrails into their school safety programs.

New Resource on FERPA's Health and Safety Emergency Exception

The Future of Privacy Forum has released a new guide, Disclosing Student Information During School Emergencies: A Primer for Schools, which offers four best practices for information disclosure and answers five frequently asked questions about FERPA’s requirements for sharing information during health or safety emergencies.

Read more about this guide in the Future of Privacy Forum’s December 20, 2018 press release.

 

Amelia Vance's Letter to the Editor in the New York Times

FPF Education Director and Policy Counsel Amelia Vance wrote a letter in response to a New York Times story on student privacy laws published earlier this week. She argued that the best way to address concerns over student privacy is to enforce and fully fund the implementation of existing laws, not add even more laws on top of the hundreds that have been passed in recent years. Read more in her Letter to the Editor.

Privacy Papers 2018: Spotlight on the Winning Authors

Today, FPF announced the winners of the 9th Annual Privacy Papers for Policymakers (PPPM) Award. This Award recognizes leading privacy scholarship that is relevant to policymakers in the United States Congress, at U.S. federal agencies, and at data protection authorities abroad.

From the many nominated privacy-related papers published in the last year, the Finalist Judges selected five winners, each of which had first been rated highly by a diverse team of academics, advocates, and industry privacy professionals from FPF’s Advisory Board. The Finalist Judges and Reviewers agreed that these papers demonstrate a thoughtful analysis of emerging issues and propose new means of analysis that can lead to real-world policy impact, making them “must-read” privacy scholarship for policymakers.


The winners of the 2018 PPPM Award are:

Shattering One-Way Mirrors. Data Subject Access Rights in Practice

by Jef Ausloos, Postdoctoral Researcher, University of Amsterdam’s Institute for Information Law; and Pierre Dewitte, Researcher, KU Leuven Centre for IT & IP Law

Jef Ausloos is a postdoctoral researcher at the University of Amsterdam’s Institute for Information Law (IViR). His research centers on data-driven power asymmetries and the normative underpinnings of individual control, empowerment, and autonomy in today’s largely privatized information ecosystem. Before joining IViR in December 2018, Jef was a doctoral researcher at the University of Leuven’s Centre for IT & IP Law (CiTiP), where he worked on a variety of projects in media and data protection law. In October 2018, he obtained his PhD, entitled ‘The right to erasure: safeguard for informational self-determination in a digital society?’. Jef holds degrees in law from the Universities of Namur, Leuven, and Hong Kong. He has worked as an International Fellow at the Center for Democracy & Technology and the Electronic Frontier Foundation, and has been on research stays at the Berkman Center for Internet & Society (Harvard University) in 2012, the Institute for Information Law (University of Amsterdam) in 2015, and the Centre for Intellectual Property and Information Law (Cambridge University) in 2017.

Pierre Dewitte (1993, Brussels) obtained his Bachelor and Master of Laws degrees, with a specialization in Corporate and Intellectual Property Law, from the Université Catholique de Louvain in 2016. As part of his Master’s program, he spent six months at the University of Helsinki, where he strengthened his knowledge of European law. In 2017, he completed the advanced Master of Intellectual Property and ICT Law at KU Leuven with a special focus on privacy, data protection, and electronic communications law.

Pierre joined the KU Leuven Centre for IT & IP Law in October 2017, where he conducts interdisciplinary research on privacy engineering, smart cities, and algorithmic transparency. Among other initiatives, his main research track seeks to bridge the gap between software engineering practices and data protection regulations by creating a common conceptual framework for both disciplines and providing decision and trade-off support for technical and organizational mitigation strategies in the software development life-cycle.


Sexual Privacy

by Danielle Keats Citron, Morton & Sophia Macht Professor of Law, University of Maryland Carey School of Law

Danielle Keats Citron is the Morton & Sophia Macht Professor of Law at the University of Maryland Carey School of Law, where she teaches and writes about privacy, civil rights, and free speech. Her book Hate Crimes in Cyberspace (Harvard University Press) was named one of the “20 Best Moments for Women in 2014” by Cosmopolitan magazine. Her law review articles have appeared or are forthcoming in the Yale Law Journal, California Law Review (twice), Michigan Law Review (twice), Texas Law Review, Boston University Law Review (three times), Notre Dame Law Review (twice), Washington University Law Review (three times), Southern California Law Review, Minnesota Law Review, Washington Law Review (twice), UC Davis Law Review, Fordham Law Review, and Hastings Law Journal. She is a frequent opinion writer for major media outlets including the New York Times, Slate, the Atlantic, and the Guardian. Danielle is an Affiliate Scholar at the Stanford Center on Internet and Society, an Affiliate Fellow at the Yale Information Society Project, a Tech Fellow at NYU’s Policing Project, and a member of the Principals Group for the Harvard-MIT AI Fund. Danielle works closely with tech companies such as Twitter and Facebook and with federal and state lawmakers on issues of online safety, privacy, and free speech. She is the Chair of the Electronic Privacy Information Center’s Board of Directors. Danielle will be joining the faculty of Boston University School of Law as a Professor of Law in the fall of 2019.


Slave to the Algorithm? Why a ‘Right to an Explanation’ is Probably not the Remedy you are Looking for

by Lilian Edwards, Professor of Law, Innovation and Society, Newcastle Law School; and Michael Veale, Researcher, Department of Science, Technology, Engineering & Public Policy at University College London

Lilian Edwards is a leading UK-based academic and frequent speaker on issues of Internet law, intellectual property and artificial intelligence. She is on the Advisory Board of the Open Rights Group and the Foundation for Internet Privacy Research and is the Professor of Law, Innovation and Society at Newcastle Law School at Newcastle University, having previously held chairs at Southampton, Sheffield and Strathclyde. She has taught information technology law, e-commerce law, privacy law and Internet law at undergraduate and postgraduate level since 1996 and been involved with law and artificial intelligence (AI) since 1985.

She has co-edited (both with Charlotte Waelde and alone) three editions of a bestselling textbook, Law and the Internet (later Law, Policy and the Internet); a new sole-edited collection, Law, Policy and the Internet, appeared in 2018. She won the Barbara Wellberry Memorial Prize in 2004 for work on online privacy and data trusts. A collection of her essays, The New Legal Framework for E-Commerce in Europe, was published in 2005. She is Deputy Director, and was co-founder, of the Arts and Humanities Research Council (AHRC) Centre for IP and Technology Law (now SCRIPT). Edwards has consulted inter alia for the EU Commission, the OECD, and WIPO. Edwards co-chairs GikII, an annual series of international workshops on the intersections between law, technology and popular culture.

Michael Veale is a researcher in responsible public-sector machine learning at University College London, specializing in the fairness and accountability of data-driven tools in the public sector and the interplay between advanced technologies, data protection law, and human-computer interaction. His research has been cited by national and international governments and regulators, discussed in the media, and debated in Parliament. Michael has acted as an expert consultant on machine learning and society for the World Bank, the United Nations, the European Commission, the Royal Society and the British Academy, and a range of national governments. Michael is a Fellow at the Centre for Public Impact, an Honorary Research Fellow at Birmingham Law School, University of Birmingham, a Visiting Researcher at the BBC DataLab, and a member of the Advisory Council for the Open Rights Group. He previously worked on IoT and ageing policy at the European Commission, and holds degrees from LSE (BSc) and Maastricht University (MSc). A full list of publications can be found at https://michae.lv. He tweets at @mikarv.


Privacy Localism

by Ira Rubinstein, Senior Fellow, Information Law Institute of the New York University School of Law

Ira Rubinstein is a Senior Fellow at the Information Law Institute (ILI) of the New York University School of Law. His research interests include privacy by design, electronic surveillance law, big data, voters’ privacy, and privacy regulation. Rubinstein lectures and publishes widely on issues of privacy and security and has testified before Congress on these topics on several occasions. Recent work includes papers on co-regulatory models of privacy regulation, anonymization and risk, and voter privacy in the age of big data. Additionally, he co-authored a research report on Systematic Government Access to Personal Data: A Comparative Analysis, prepared for the Center for Democracy and Technology. Earlier papers include Big Data: The End of Privacy or a New Beginning, published in International Data Privacy Law in 2013 and presented at the 2013 Computers, Privacy and Data Protection conference in Brussels; and Privacy by Design: A Counterfactual Analysis of Google and Facebook Privacy Incidents, co-authored with Nathan Good, which won the IAPP Privacy Law Scholars Award at the 5th Annual Privacy Law Scholars Conference in 2012 and was published in the Berkeley Technology Law Journal.

Prior to joining the ILI, Rubinstein spent 17 years in Microsoft’s Legal and Corporate Affairs department, most recently as Associate General Counsel in charge of the Regulatory Affairs and Public Policy group. Before coming to Microsoft, he was in private practice in Seattle, specializing in immigration law. From 2010 to 2016, he served on the Board of Directors of the Center for Democracy and Technology. He also served as Rapporteur of the EU-US Privacy Bridges Project, which was presented at the 2015 International Conference of Data Protection and Privacy Commissioners in Amsterdam. He currently serves on the Board of Advisers of the American Law Institute for the Restatement Third, Information Privacy Principles and the Organizing Committee of the Privacy by Design Workshops sponsored by the Computing Research Association. Rubinstein graduated from Yale Law School in 1985.


Designing Without Privacy

by Ari Ezra Waldman, Professor of Law and Founding Director, Innovation Center for Law and Technology at New York Law School

Ari Ezra Waldman is a Professor of Law and the Founding Director of the Innovation Center for Law and Technology at New York Law School. Professor Waldman’s work is forthcoming or has been published in numerous leading scholarly journals, including Law & Social Inquiry (peer reviewed), the Washington University Law Review, the UC Irvine Law Review, and the Cornell Law Review, among many others. His first book, Privacy As Trust: Information Law for an Information Age (Cambridge University Press, 2018), reorients privacy law around sociological principles of trust and argues that privacy law should protect information disclosed in contexts of trust. In 2018, Professor Waldman was honored as the Deirdre G. Martin Memorial Lecturer on Privacy at the University of Ottawa. In 2017, he received the highest award in privacy law, the Best Paper Award at the Privacy Law Scholars Conference in Berkeley, CA. And in 2016, his scholarship was awarded the Otto L. Walter Distinguished Writing Award. Professor Waldman has testified before the U.S. House of Representatives on issues relating to privacy and online social networks. His opinion pieces have appeared in the New York Times, the New York Daily News, and The Advocate, among other popular press outlets. He has appeared on Nightline, Good Morning America, and MSNBC’s “The Docket,” and as an expert on Syfy’s miniseries, The Internet Ruined My Life. He holds a Ph.D. from Columbia University, a J.D. from Harvard Law School, and a B.A. from Harvard College. He also really loves dogs.


The Finalist Judges also selected two papers for Honorable Mention on the basis of their uniformly strong reviews from the Advisory Board.

The 2018 PPPM Honorable Mentions are:

Additionally, the 2018 Student Paper award goes to:


The winning authors have been invited to join FPF and Honorary Co-Hosts Senator Edward J. Markey and Congresswoman Diana DeGette to present their work at the U.S. Senate to policymakers, academics, and industry privacy professionals. This annual event will be held on February 6, 2019. FPF will subsequently publish a printed digest of summaries of the winning papers for distribution to policymakers, privacy professionals, and the public. RSVP here to join us.

Privacy Papers 2018

The winners of the 2018 PPPM Award are:

Shattering One-Way Mirrors. Data Subject Access Rights in Practice

by Jef Ausloos, Postdoctoral Researcher, University of Amsterdam’s Institute for Information Law; and Pierre Dewitte, Researcher, KU Leuven Centre for IT & IP Law

Abstract:

The right of access occupies a central role in EU data protection law’s arsenal of data subject empowerment measures. It can be seen as a necessary enabler for most other data subject rights, as well as an important tool for monitoring operations and (en)forcing compliance. Despite some high-profile revelations regarding unsavoury data processing practices over the past few years, access rights still appear to be underused and not properly accommodated. It is especially this last hypothesis that we tried to investigate and substantiate through a legal empirical study. During the first half of 2017, around sixty information society service providers were contacted with data subject access requests. Eventually, the study confirmed the general suspicion that access rights are by and large not adequately accommodated. The systematic approach did allow for a more granular identification of key issues and broader problematic trends. Notably, it uncovered an often-flagrant lack of awareness; organisation; motivation; and harmonisation. Despite the poor results of the empirical study, we still believe there to be an important role for data subject empowerment tools in a hyper-complex, automated and ubiquitous data-processing ecosystem. Even if only used marginally, they provide a checks and balances infrastructure overseeing controllers’ processing operations, both on an individual basis as well as collectively. The empirical findings also allow us to identify concrete suggestions aimed at controllers, such as relatively easy fixes in privacy policies and access rights templates.


Sexual Privacy

by Danielle Keats Citron, Morton & Sophia Macht Professor of Law, University of Maryland Carey School of Law

Abstract:

Those who wish to control, expose, and damage the identities of individuals routinely do so by invading their privacy. People are secretly recorded in bedrooms and public bathrooms, and “up their skirts.” They are coerced into sharing nude photographs and filming sex acts under the threat of public disclosure of their nude images. People’s nude images are posted online without permission. Machine-learning technology is used to create digitally manipulated “deep fake” sex videos that swap people’s faces into pornography.

At the heart of these abuses is an invasion of sexual privacy—the behaviors and expectations that manage access to, and information about, the human body; intimate activities; and personal choices about the body and intimate information. More often, women, nonwhites, sexual minorities, and minors shoulder the abuse.

Sexual privacy is a distinct privacy interest that warrants recognition and protection. It serves as a cornerstone for sexual autonomy and consent. It is foundational to intimacy. Its denial results in the subordination of marginalized communities. Traditional privacy law’s efficacy, however, is eroding just as digital technologies magnify the scale and scope of the harm. This Article suggests an approach to sexual privacy that focuses on law and markets. Law should provide federal and state penalties for privacy invaders, remove the statutory immunity from liability for certain content platforms, and work in tandem with hate crime laws. Market efforts should be pursued if they enhance the overall privacy interests of all involved.


Slave to the Algorithm? Why a ‘Right to an Explanation’ is Probably not the Remedy you are Looking for

by Lilian Edwards, Professor of Law, Innovation and Society, Newcastle Law School; and Michael Veale, Researcher, Department of Science, Technology, Engineering & Public Policy at University College London

Abstract:

Algorithms, particularly machine learning (ML) algorithms, are increasingly important to individuals’ lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a “right to an explanation” has emerged as a compellingly attractive remedy since it intuitively promises to open the algorithmic “black box” to promote challenge, redress, and hopefully heightened accountability. Amidst the general furore over algorithmic bias we describe, any remedy in a storm has looked attractive.

However, we argue that a right to an explanation in the EU General Data Protection Regulation (GDPR) is unlikely to present a complete remedy to algorithmic harms, particularly in some of the core “algorithmic war stories” that have shaped recent attitudes in this domain. Firstly, the law is restrictive, unclear, or even paradoxical concerning when any explanation-related right can be triggered. Secondly, even navigating this, the legal conception of explanations as “meaningful information about the logic of processing” may not be provided by the kind of ML “explanations” computer scientists have developed, partially in response. ML explanations are restricted both by the type of explanation sought, the dimensionality of the domain and the type of user seeking an explanation. However, “subject-centric” explanations (SCEs) focussing on particular regions of a model around a query show promise for interactive exploration, as do explanation systems based on learning a model from outside rather than taking it apart (pedagogical versus decompositional explanations) in dodging developers’ worries of intellectual property or trade secrets disclosure.

Based on our analysis, we fear that the search for a “right to an explanation” in the GDPR may be at best distracting, and at worst nurture a new kind of “transparency fallacy.” But all is not lost. We argue that other parts of the GDPR related (i) to the right to erasure (“right to be forgotten”) and the right to data portability; and (ii) to privacy by design, Data Protection Impact Assessments and certification and privacy seals, may have the seeds we can use to make algorithms more responsible, explicable, and human centered.
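To make the distinction between decompositional and pedagogical explanations more concrete, here is a minimal sketch, not drawn from the paper itself, of a subject-centric, pedagogical-style explanation: a shallow surrogate decision tree is fit to a black-box model’s predictions in the neighbourhood of a single data subject’s record, so the explanation is learned from the outside rather than by taking the model apart. The model, features, and parameters below are invented for illustration only.

```python
# Illustrative sketch (not from the paper): a "pedagogical", subject-centric
# explanation learns a simple surrogate model from a black box's outputs in
# the neighbourhood of one query point, rather than inspecting its internals.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Stand-in "black box": any opaque model exposing only predict().
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def subject_centric_explanation(model, query, n_samples=500, scale=0.3):
    """Fit a shallow decision tree to the model's behaviour near `query`."""
    rng = np.random.default_rng(0)
    # Perturb the data subject's record to probe the local decision surface.
    neighbourhood = query + rng.normal(0.0, scale, size=(n_samples, query.shape[0]))
    labels = model.predict(neighbourhood)
    return DecisionTreeClassifier(max_depth=3).fit(neighbourhood, labels)

query = X[0]
surrogate = subject_centric_explanation(black_box, query)
# The surrogate's rules are the human-readable "explanation" for this subject.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(6)]))
print("Black-box decision:", black_box.predict(query.reshape(1, -1))[0])
```

Because the surrogate only queries predict(), this style of explanation can sidestep the intellectual property and trade-secret worries the authors mention, whereas a decompositional explanation would require opening up the forest’s internal structure.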


Privacy Localism

by Ira Rubinstein, Senior Fellow, Information Law Institute of the New York University School of Law


Abstract:

Privacy law scholarship often focuses on domain-specific federal privacy laws and state efforts to broaden them. This Article provides the first comprehensive analysis of privacy regulation at the local level (which it dubs “privacy localism”), using recently enacted privacy laws in Seattle and New York City as principal examples. It attributes the rise of privacy localism to a combination of federal and state legislative failures and three emerging urban trends: the role of local police in federal counter-terrorism efforts; smart city and open data initiatives; and demands for local police reform in the wake of widely reported abusive police practices.

Both Seattle and New York have enacted or proposed (1) a local surveillance ordinance regulating the purchase and use of surveillance equipment and technology by city departments (including the police) and (2) a law regulating city departments’ collection, use, disclosure and retention of personal data. In adopting these local laws, both cities have sought to fill two significant gaps in federal and state privacy laws: the public surveillance gap (which refers to the weak constitutional and statutory protections against government surveillance in public places) and the fair information practices gap (which refers to the inapplicability of the federal and state Privacy Acts to government records held by local government agencies).

Filling these gaps is a significant accomplishment and one that exhibits all of the values typically associated with federalism (diversity, participation, experimentation, responsiveness, and accountability). This Article distinguishes federalism and localism and shows why privacy localism should prevail against the threat of federal and (more importantly) state preemption. The Article concludes by suggesting that privacy localism has the potential to help shape emerging privacy norms for an increasingly urban future, inspire more robust regulation at the federal and state levels, and inject more democratic control into city deployments of privacy-invasive technologies.


Designing Without Privacy

by Ari Ezra Waldman, Professor of Law and Founding Director, Innovation Center for Law and Technology at New York Law School


Abstract:

In Privacy on the Ground, the law and information scholars Kenneth Bamberger and Deirdre Mulligan showed that empowered chief privacy officers (CPOs) are pushing their companies to take consumer privacy seriously by integrating privacy into the designs of new technologies. Their work was just the beginning of a larger research agenda. CPOs may set policies at the top, but they alone cannot embed robust privacy norms into the corporate ethos, practice, and routine. As such, if we want the mobile apps, websites, robots, and smart devices we use to respect our privacy, we need to institutionalize privacy throughout the corporations that make them. In particular, privacy must be a priority among those actually doing the work of design on the ground—namely, engineers, computer programmers, and other technologists.

This Article presents the initial findings from an ethnographic study of how, if at all, those designing technology products think about privacy, integrate privacy into their work, and consider user needs in the design process. It also looks at how attorneys at private firms draft privacy notices for their clients and interact with designers. Based on these findings, this Article suggests that Bamberger’s and Mulligan’s narrative is not yet fully realized. The account among some engineers and lawyers, where privacy is narrow, limited, and barely factoring into design, may help explain why so many products seem to ignore our privacy expectations. The Article then proposes a framework for understanding how factors both exogenous (theory and law) and endogenous (corporate structure and individual cognitive frames and experience) to the corporation prevent the CPOs’ robust privacy norms from diffusing throughout technology companies and the industry as a whole. This framework also helps suggest how specific reforms at every level—theory, law, organization, and individual experience—can incentivize companies to take privacy seriously, enhance organizational learning, and eliminate the cognitive biases that lead to discrimination in design.


The 2018 PPPM Honorable Mentions are:

Abstract:

We live in a world of artificial speakers with real impact. Bots foment political strife, skew online discourse, and manipulate the marketplace. In response to concerns about the unique threats bots pose, legislators have begun to pass laws that require online bots to clearly indicate that they are not human. This work is the first to consider how such efforts to regulate bots might raise concerns about free speech and privacy.

While requiring a bot to self-disclose does not censor speech as such, it may nonetheless infringe upon the right to speak – including the right to speak anonymously – in the digital sphere. Specifically, complexities in the enforcement process threaten to unmask anonymous speakers, and requiring self-disclosure creates a scaffolding for censorship by private actors and other governments.

Ultimately, bots represent a diverse and emerging medium of speech. Their use for mischief should not overshadow their novel capacity to inform, entertain, and critique. We conclude by providing policymakers with a series of principles to bear in mind when regulating bots, so as not to inadvertently curtail an emerging form of expression or compromise anonymous speech.

Abstract:

Algorithmic decision-making has become synonymous with inexplicable decision-making, but what makes algorithms so difficult to explain? This Article examines what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation. We show that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties.

Calls for explanation have treated these problems as one and the same, but disentangling the two reveals that they demand very different responses. Dealing with inscrutability requires providing a sensible description of the rules; addressing nonintuitiveness requires providing a satisfying explanation for why the rules are what they are. Existing laws like the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the General Data Protection Regulation (GDPR), as well as techniques within machine learning, are focused almost entirely on the problem of inscrutability. While such techniques could allow a machine learning system to comply with existing law, doing so may not help if the goal is to assess whether the basis for decision-making is normatively defensible.

In most cases, intuition serves as the unacknowledged bridge between a descriptive account and a normative evaluation. But because machine learning is often valued for its ability to uncover statistical relationships that defy intuition, relying on intuition is not a satisfying approach. This Article thus argues for other mechanisms for normative evaluation. To know why the rules are what they are, one must seek explanations of the process behind a model’s development, not just explanations of the model itself.


The 2018 PPPM Student Paper Winner Is:

Abstract:

There are two trends that are currently reshaping the online display advertising industry. First, the amount and precision of data that is being collected by Advertising and Analytics (A&A) companies about users as they browse the web is increasing. Second, there is a transition underway from “ad networks” to “ad exchanges”, where advertisers bid on “impressions” (empty advertising slots on websites) being sold in Real Time Bidding (RTB) auctions. The rise of RTB has forced A&A companies to collaborate with one another, in order to exchange data about users and facilitate bidding on impressions.

These trends have fundamental implications for users’ online privacy. It is no longer sufficient to view each A&A company, and the data it collects, in isolation. Instead, when a given user is observed by a single A&A company, that observation may be shared, in real time, with hundreds of other A&A companies within RTB auctions.

To understand the impact of RTB on users’ privacy, we propose a new model of the online advertising ecosystem called an Interaction Graph. This graph captures the business relationships between A&A companies, and allows us to model how tracking data is shared between companies. Using our Interaction Graph model, we simulate browsing behavior to understand how much of a typical web user’s browsing history can be tracked by A&A companies. We find that 52 A&A companies are each able to observe 91% of an average user’s browsing history, under modest assumptions about data sharing in RTB auctions. 636 A&A companies are able to observe at least 50% of an average user’s browsing history. Even under very strict simulation assumptions, the top 10 A&A companies still observe 89-99% of an average user’s browsing history.

Additionally, we investigate the effectiveness of several tracker-blocking strategies, including those implemented by popular privacy-enhancing browser extensions. We find that AdBlock Plus (the world’s most popular ad blocking browser extension) is ineffective at protecting users’ privacy because major ad exchanges are whitelisted under the Acceptable Ads program. In contrast, Disconnect blocks the most information flows to A&A companies of the extensions we evaluated. However, even with strong blocking, major A&A companies still observe 40-80% of an average user’s browsing history.
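As a rough illustration of the kind of analysis the abstract describes, the following sketch builds a tiny interaction graph of data-sharing relationships and counts how much of a simulated browsing history each advertising and analytics company can observe, directly or through RTB-style sharing. This is a toy model, not the authors’ code or data; all company names, edges, and pages below are hypothetical.

```python
# Toy sketch of an "interaction graph": exchanges re-share each observed page
# visit with their RTB partners, and we count how much of a browsing history
# each company ends up observing.
from collections import defaultdict

# Hypothetical companies and data-sharing edges (exchange -> auction partners).
sharing_edges = {
    "exchange_A": ["dsp_1", "dsp_2", "dsp_3"],
    "exchange_B": ["dsp_2", "dsp_4"],
    "tracker_X": [],          # collects data but does not re-share it
}

# Hypothetical browsing history: each page embeds some trackers/exchanges.
browsing_history = [
    ("news.example",   ["exchange_A", "tracker_X"]),
    ("shop.example",   ["exchange_B"]),
    ("sports.example", ["exchange_A", "exchange_B"]),
    ("blog.example",   ["tracker_X"]),
]

def observed_pages(history, edges):
    """Map each company to the set of pages it observes, directly or via RTB."""
    seen = defaultdict(set)
    for page, embedded in history:
        for company in embedded:
            seen[company].add(page)                 # direct observation
            for partner in edges.get(company, []):  # shared in the auction
                seen[partner].add(page)
    return seen

if __name__ == "__main__":
    total = len(browsing_history)
    for company, pages in sorted(observed_pages(browsing_history, sharing_edges).items()):
        print(f"{company:>10}: observes {len(pages)}/{total} pages")
```

The paper’s headline figures come from running this kind of observation-propagation analysis over a much larger, empirically derived graph of real A&A companies rather than a handful of invented nodes.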

Read more about the winners in the Future of Privacy Forum’s December 17, 2018 Press Release on the Annual Privacy Papers for Policymakers Award.

For more information and to register, click here.