MythBusters: COPPA Edition

Following YouTube’s September settlement with the Federal Trade Commission (FTC) regarding the Children’s Online Privacy Protection Act (COPPA), YouTube released a video in late November explaining upcoming changes to their platform. The YouTube creator community responded in large numbers, with numerous explainer videos and almost two hundred thousand comments filed in response to the FTC’s just-closed call for comments on the COPPA rule. Some responses have been insightful and sophisticated. Others have indicated confusion and misunderstanding of COPPA’s requirements and scope. This blog addresses some of the most common myths circulating in YouTube videos, tweets, and Instagram posts.

Fact 1: You can’t “stop COPPA” by filing comments with the FTC

Some COPPA explainer videos call for viewers to write comments to the FTC asking them to “stop COPPA.” There are two problems with this call to action. 

The recent changes to YouTube occurred because the platform agreed to modify some of its practices (and pay a fine) rather than go to court against the FTC. The major change for creators is the addition of the “made for kids” flag: all creators are required to indicate whether their channel or a specific video is made for kids or for a more general audience. The flag tells YouTube when it can and cannot place targeted advertising (which COPPA does not allow for child-directed content). The settlement did not require the creation of an algorithm to look for misflagged content, or the changes to viewer functionality for “made for kids” videos. YouTube made those changes on its own, and the FTC can’t change the platform back to the way it was.

That doesn’t mean that creators shouldn’t engage with the FTC. The current FTC comment period has closed, but the process of creating new COPPA rules will likely take years, and there will be more opportunities to be heard. The FTC needs to know what data creators have about their users and how that data drives decision-making. Tell them about your income from advertisements, your level of control over the advertisements on your channel, and what you wish you knew about advertising on YouTube. The FTC needs to understand your business model and the business of content creators on platforms in general. So, tell them how information from YouTube informs the content on your Twitch stream, whether it changes how you use your Patreon, or how it influences your merchandising decisions. The best way to get useful rules that protect children’s privacy while supporting content creation on the internet is to make sure the FTC knows how this portion of the economy works.

A note about commenting: be polite. When you make comments on a government website, a real person will read them. So don’t use all caps or profanity, and write in complete sentences. The point is not to fight the FTC but to inform them of your concerns. While asking your viewers to comment can be useful, make sure that you remind them to be kind as well. Civil, smart comments are more likely to have a positive influence.

Fact 2: COPPA does not infringe on First Amendment rights

COPPA regulates the collection, use, and sharing of data collected from children. COPPA does impose specific obligations on operators of websites or online services directed to children, but those obligations do not dictate the content you can or cannot create. COPPA may make it more difficult to profit from child-directed content, but that does not mean that anyone’s rights have been infringed. YouTube’s Terms of Service and platform design have more influence than any American law over the kind of content users can create.

Fact 3: Not everything is child-directed

YouTube’s video explaining COPPA-related changes to the platform includes a list of factors that can indicate whether a video or channel is made for kids.

Screenshot of YouTube video listing “Made for Kids - Factors to Consider: the subject matter of the video; whether children are the intended audience; whether it includes child actors or models; whether it includes characters, celebrities, or toys that appeal to children; whether it uses language that is meant for children to understand; whether it includes activities that appeal to children; whether it includes songs, stories, or poems that appeal to children.”

This has caused some confusion. Just because a channel or video may check one or two of these boxes does not mean that it is child-directed. The FTC said as much in its blog post (if you have not read it yet, please do). The blog was written specifically to answer questions about what constitutes “child-directed” content.

If you’re still confused about whether your content is child-directed, ask yourself, “Who am I speaking to?” When you create a video, you should have an idea of who will watch it. And humans use different communication strategies for different kinds of people. For example, think about sitcoms created for major TV networks like CBS, NBC, or ABC versus sitcoms created for the Disney Channel or Nickelodeon. While the two types share similarities, including the format, some of the situations, and the presence of a laugh track, there are major differences that reflect the different audiences. Some sitcoms created for children have simpler language, vivid costumes, or include extensive over-acting (for a visual explanation, watch this clip from SNL). These features make the content more engaging for children. Think about your content. Where does it belong: the Disney Channel or ABC?

After you decide whether your channel or an individual video is made for kids, write down why you made that decision. There are two reasons for doing this. First, if the FTC contacts you or if YouTube changes the flag on your video, you will have already prepared your response. You won’t have to reconstruct the reasoning behind decisions you may have made months or even years earlier. Second, and more importantly, this will keep your actions consistent. As you continue to work and create new videos, you may not remember why you said one video was made for kids while another wasn’t. Writing your reasons down not only creates a record of your decision-making process but should also make consistent flagging easier in the long run.

Fact 4: A COPPA violation probably won’t bankrupt your channel

The FTC has stated that when it determines fines for violations, it considers “a company’s financial condition and the impact a penalty could have on its ability to stay in business.” The FTC’s mission is not to put people out of business but to protect consumers. It has limited staff and limited resources; targeting small channels, or channels where reasonable people could disagree about whether the content is child-directed, is not typically a good use of its time. But this does not excuse creators from reviewing videos and flagging content appropriately. You must comply with COPPA, but if you make a mistake, the FTC’s likely first action would be to ask you to change your flag rather than to impose a large fine.

A final piece of advice: do not panic. Panic won’t help. Take a deep breath, review your channel, and stay informed about any other changes that YouTube may announce.

Privacy Papers 2019

The winners of the 2019 Privacy Papers for Policymakers (PPPM) Award are:

Antidiscriminatory Privacy

by Ignacio N. Cofone, McGill University Faculty of Law

Abstract

Law often blocks the flow of sensitive personal information to prevent discrimination. It does so, however, without a theory or framework to determine when doing so is warranted. As a result, these measures produce mixed results. This article offers a framework for determining, with a view to preventing discrimination, when personal information should flow and when it should not. It examines the relationship between precluded personal information, such as race, and the proxies for precluded information, such as names and zip codes. It proposes that the success of these measures depends on what types of proxies exist for the information blocked, and it explores in which situations those proxies should also be blocked. This framework predicts the effectiveness of antidiscriminatory privacy rules and offers the potential of wider protection for minorities.


Privacy’s Constitutional Moment and the Limits of Data Protection

by Woodrow Hartzog, Northeastern University, School of Law and Khoury College of Computer Sciences and Neil M. Richards, Washington University, School of Law and the Cordell Institute for Policy in Medicine & Law

Abstract

America’s privacy bill has come due. Since the dawn of the Internet, Congress has repeatedly failed to build a robust identity for American privacy law. But now both California and the European Union have forced Congress’s hand by passing the California Consumer Privacy Act (CCPA) and the General Data Protection Regulation (GDPR). These data protection frameworks, structured around principles of fair information processing called the “FIPs,” have industry and privacy advocates alike clamoring for a “U.S. GDPR.” States seem poised to blanket the country with FIP-based laws if Congress fails to act. The United States is thus in the midst of a “constitutional moment” for privacy, in which intense public deliberation and action may bring about constitutive and structural change. And the European data protection model of the GDPR is ascendant.

In this article we highlight the risks of U.S. lawmakers embracing a watered-down version of the European model as American privacy law enters its constitutional moment. European-style data protection rules have undeniable virtues, but they won’t be enough. The FIPs assume data processing is always a worthy goal, but even fairly processed data can lead to oppression and abuse. Data protection is also myopic because it ignores how industry’s appetite for data is wrecking our environment, our democracy, our attention spans, and our emotional health. Even if E.U.-style data protection were sufficient, the United States is too different from Europe to implement and enforce such a framework effectively on its European law terms. Any U.S. GDPR would in practice be what we call a “GDPR-Lite.”

Our argument is simple: In the United States, a data protection model cannot do it all for privacy, though if current trends continue, we will likely entrench it as though it can. Drawing from constitutional theory and the traditions of privacy regulation in the United States, we propose instead a “comprehensive approach” to privacy that is better focused on power asymmetries, corporate structures, and a broader vision of human well-being. Settling for an American GDPR-lite would be a tragic ending to a real opportunity to tackle the critical problems of the information age. In this constitutional moment for privacy, we can and should demand more. This article offers a path forward to do just that.


Algorithmic Impact Assessments under the GDPR: Producing Multi-layered Explanations

by Margot E. Kaminski, University of Colorado Law and Gianclaudio Malgieri, Vrije Universiteit Brussel (VUB) – Faculty of Law

Abstract

Policy-makers, scholars, and commentators are increasingly concerned with the risks of using profiling algorithms and automated decision-making. The EU’s General Data Protection Regulation (GDPR) has tried to address these concerns through an array of regulatory tools. As one of us has argued, the GDPR combines individual rights with systemic governance, towards algorithmic accountability. The individual tools are largely geared towards individual “legibility”: making the decision-making system understandable to an individual invoking her rights. The systemic governance tools, instead, focus on bringing expertise and oversight into the system as a whole, and rely on the tactics of “collaborative governance,” that is, use public-private partnerships towards these goals. How these two approaches to transparency and accountability interact remains a largely unexplored question, with much of the legal literature focusing instead on whether there is an individual right to explanation.

The GDPR contains an array of systemic accountability tools. Of these tools, impact assessments (Art. 35) have recently received particular attention on both sides of the Atlantic, as a means of implementing algorithmic accountability at early stages of design, development, and training. The aim of this paper is to address how a Data Protection Impact Assessment (DPIA) links the two faces of the GDPR’s approach to algorithmic accountability: individual rights and systemic collaborative governance. We address the relationship between DPIAs and individual transparency rights. We propose, too, that impact assessments link the GDPR’s two methods of governing algorithmic decision-making by both providing systemic governance and serving as an important “suitable safeguard” (Art. 22) of individual rights.

After noting the potential shortcomings of DPIAs, this paper closes with a call — and some suggestions — for a Model Algorithmic Impact Assessment in the context of the GDPR. Our examination of DPIAs suggests that the current focus on the right to explanation is too narrow. We call, instead, for data controllers to consciously use the required DPIA process to produce what we call “multi-layered explanations” of algorithmic systems. This concept of multi-layered explanations not only more accurately describes what the GDPR is attempting to do, but also normatively better fills potential gaps between the GDPR’s two approaches to algorithmic accountability.


Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites

by Arunesh Mathur, Princeton University; Gunes Acar, Princeton University; Michael Friedman, Princeton University; Elena Lucherini, Princeton University; Jonathan Mayer, Princeton University; Marshini Chetty, University of Chicago; and Arvind Narayanan, Princeton University  

Abstract

Dark patterns are user interface design choices that benefit an online service by coercing, steering, or deceiving users into making unintended and potentially harmful decisions. We present automated techniques that enable experts to identify dark patterns on a large set of websites. Using these techniques, we study shopping websites, which often use dark patterns to influence users into making more purchases or disclosing more information than they would otherwise. Analyzing ~53K product pages from ~11K shopping websites, we discover 1,818 dark pattern instances, together representing 15 types and 7 broader categories. We examine these dark patterns for deceptive practices, and find 183 websites that engage in such practices. We also uncover 22 third-party entities that offer dark patterns as a turnkey solution. Finally, we develop a taxonomy of dark pattern characteristics that describes the underlying influence of the dark patterns and their potential harm on user decision-making. Based on our findings, we make recommendations for stakeholders including researchers and regulators to study, mitigate, and minimize the use of these patterns.


The Many Revolutions of Carpenter

by Paul Ohm, Georgetown University Law Center

Abstract

Carpenter v. United States, the 2018 Supreme Court opinion that requires the police to obtain a warrant to access an individual’s historical whereabouts from the records of a cell phone provider, is the most important Fourth Amendment opinion in decades. Although many have acknowledged some of the ways the opinion has changed the doctrine of Constitutional privacy, the importance of Carpenter has not yet been fully appreciated. Carpenter works many revolutions in the law, not only through its holding and new rule, but in more fundamental respects. The opinion reinvents the reasonable expectation of privacy test as it applies to large databases of information about individuals. It turns the third-party doctrine inside out, requiring judges to scrutinize the products of purely private decisions. In dicta, it announces a new rule of technological equivalence, which might end up covering more police activity than the core rule. Finally, it embraces technological exceptionalism as a centerpiece for the interpretation of the Fourth Amendment, rejecting backwards-looking interdisciplinary methods such as legal history or surveys of popular attitudes. Considering all of these revolutions, Carpenter is the most important Fourth Amendment decision since Katz v. United States, a case it might end up rivaling in influence.


The 2019 PPPM Honorable Mentions are:

Abstract

“Paid” digital services have been touted as straightforward alternatives to the ostensibly “free” model, in which users actually face a high price in the form of personal data, with limited awareness of the real cost incurred and little ability to manage their privacy preferences. Yet the actual privacy behavior of paid services, and consumer expectations about that behavior, remain largely unknown.

This Article addresses that gap. It presents empirical data both comparing the true cost of “paid” services as compared to their so-called “free” counterparts, and documenting consumer expectations about the relative behaviors of each.

We first present an empirical study that documents and compares the privacy behaviors of 5,877 Android apps that are offered both as free and paid versions. The sophisticated analysis tool we employed, AppCensus, allowed us to detect exactly which sensitive user data is accessed by each app and with whom it is shared. Our results show that paid apps often share the same implementation characteristics and resulting behaviors as their free counterparts. Thus, if users opt to pay for apps to avoid privacy costs, in many instances they do not receive the benefit of the bargain. Worse, we find that there are no obvious cues that consumers can use to determine when the paid version of a free app offers better privacy protections than its free counterpart.

We complement this data with a second study, surveying 1,000 mobile app users about their perceptions of the privacy behaviors of paid and free app versions. Participants indicated that consumers are more likely to expect the free version to share their data with advertisers and law enforcement agencies, and to keep their data on the app’s servers when it is no longer needed for core app functionality. By contrast, consumers are more likely to expect the paid version to engage in privacy-protective practices, to demonstrate transparency with regard to its data collection and sharing behaviors, and to offer more granular control over the collection of user data in that context.

Together, these studies identify ways in which the actual behavior of apps fails to comport with users’ expectations, and the way that representations of an app as “paid” or “ad-free” can mislead users. They also raise questions about the salience of those expectations for consumer choices.

In light of this combined research, we then explore three sets of ramifications for policy and practice.

First, our finding that paid services often conduct data collection and sale as extensive as their free counterparts challenges understandings about how the “pay for privacy” model operates in practice, its promise as a privacy-protective alternative, and the legality of paid app behavior.

Second, by providing empirical foundations for better understanding both corporate behavior and consumer expectations, our findings support research into how users’ beliefs about technology business models and developer behavior are actually shaped, and into how manipulable consumer decisions about privacy protection can be. In doing so, they undermine the legitimacy of legal regimes that rely on fictive user “consent” not grounded in knowledge of actual market behavior.

Third, our work demonstrates the importance of the kind of technical tools we use in our study — tools that offer transparency about app behaviors, empowering consumers and regulators. Our study demonstrates that, at least in the most dominant example of a free vs. paid market — mobile apps — there turns out to be no real privacy-protective option. Yet the failures of transparency or auditability of app behaviors deprive users, regulators, and law enforcement of any means to keep developers accountable, and privacy is removed as a salient concern to guide user behavior. Dynamic analysis of the type we performed can allow users to go online and test, in real time, an app’s privacy behavior, empowering them as advocates and informing their choices to better align expectations with reality. The same tools, moreover, can equip regulators, law enforcement, consumer protection organizations, and private parties seeking to remedy undesirable or illegal privacy behavior.

Abstract

AI – in its interplay with Big Data, ambient intelligence, ubiquitous computing, and cloud computing – augments the existing major shift, both qualitative and quantitative, in the processing of personal information. The questions that arise are of crucial importance both for the development of AI and for the efficiency of the data protection arsenal: Is the current legal framework AI-proof? Are the data protection and privacy rules and principles adequate to deal with the challenges of AI, or do we need to elaborate new principles to work alongside the advances of AI technology? Our research focuses on an assessment of the GDPR, which, however, does not specifically address AI, as the regulatory choice consisted more in what we perceive as “technology-independent legislation.”

The paper will give a critical overview and assessment of the provisions of the GDPR that are relevant for the AI environment, i.e., the scope of application, the legal grounds with emphasis on consent, the reach and applicability of data protection principles, and the new (accountability) tools to enhance and ensure compliance.

Abstract

The design of a system or technology, in particular its user experience design, affects and shapes how people interact with it. Privacy engineering and user experience design frequently intersect. Privacy laws and regulations require that data subjects are informed about a system’s data practices, asked for consent, provided with a mechanism to withdraw consent, and given access to their own data. To satisfy these requirements and address users’ privacy needs, most services offer some form of privacy notices, privacy controls, or privacy settings to users.

However, too often privacy notices are not readable, people do not understand what they consent to, and people are not aware of certain data practices or the privacy settings or controls available to them. The challenge is that an emphasis on meeting legal and regulatory obligations is not sufficient to create privacy interfaces that are usable and useful for users. Usable means that people can find, understand and successfully use provided privacy information and controls. Useful means that privacy information and controls align with users’ needs with respect to making privacy-related decisions and managing their privacy. This chapter provides insights into the reasons why it can be difficult to design privacy interfaces that are usable and useful. It further provides guidance and best practices for user-centric privacy design that meets both legal obligations and users’ needs. Designing effective privacy user experiences not only makes it easier for users to manage and control their privacy, but also benefits organizations by minimizing surprise for their users and facilitating user trust. Any privacy notice and control is not just a compliance tool but rather an opportunity to engage with users about privacy, to explain the rationale behind practices that may seem invasive without proper context, to make users aware of potential privacy risks, and to communicate the measures and effort taken to mitigate those risks and protect users’ privacy. 

Privacy laws, privacy technology, and privacy management are typically centered on information – how information is collected, processed, stored, transferred, how information can and must be protected, and how to ensure compliance and accountability. To be effective, designing privacy user experiences requires a shift in focus: while information and compliance are of course still relevant, user-centric privacy design focuses on people, their privacy needs, and their interaction with a system’s privacy interfaces. 

Why is it important to pay attention to the usability of privacy interfaces? How do people make privacy decisions? What drives their privacy concerns and behavior? We answer these questions in this chapter and then provide an introduction to user experience design. We discuss common usability issues in privacy interfaces, and describe a set of privacy design principles and a user-centric process for designing usable and effective privacy interfaces, concluding with an overview of best practices. 

The design of usable privacy notices and controls is not trivial, but this chapter hopefully motivated why it is important to invest the effort in getting the privacy user experience right – making sure that privacy information and controls are not only compliant with regulation but also address and align with users’ needs. Careful design of the privacy user experience can support users in developing an accurate and more complete understanding of a system and its data practices. Well-designed and user-tested privacy interfaces provide responsible privacy professionals and technologists with the confidence that an indication of consent was indeed an informed and freely-given expression by the user. Highlighting unexpected data practices and considering secondary and incidental users reduces surprise for users and hopefully prevents privacy harms, social media outcries, bad press, and fines from regulators. Importantly, a privacy interface is not just a compliance tool but rather an opportunity to engage with users about privacy, to explain the rationale behind practices that may seem invasive without proper context, to make users aware of potential privacy risks, and to communicate the measures and effort taken to mitigate those risks and protect users’ privacy.


The 2019 PPPM Student Paper Winner Is:

Abstract

As devices with always-on microphones located in people’s homes, smart speakers have significant privacy implications. We surveyed smart speaker owners about their beliefs, attitudes, and concerns about the recordings that are made and shared by their devices. To ground participants’ responses in concrete interactions, rather than collecting their opinions abstractly, we framed our survey around randomly selected recordings of saved interactions with their devices. We surveyed 116 owners of Amazon and Google smart speakers and found that almost half did not know that their recordings were being permanently stored and that they could review them; only a quarter reported reviewing interactions, and very few had ever deleted any. While participants did not consider their own recordings especially sensitive, they were more protective of others’ recordings (such as children and guests) and were strongly opposed to use of their data by third parties or for advertising. They also considered permanent retention, the status quo, unsatisfactory. Based on our findings, we make recommendations for more agreeable data retention policies and future privacy controls.

Read more about the winners in the Future of Privacy Forum’s blog post.

For more information and to register, click here.

This Year’s Must-Read Privacy Papers: FPF Announces Recipients of Annual Award

Today, FPF announced the winners of the 10th Annual Privacy Papers for Policymakers (PPPM) Award. This Award recognizes leading privacy scholarship that is relevant to policymakers in the U.S. Congress, at U.S. federal agencies, and at data protection authorities abroad. The winners of the 2019 PPPM Award are:

These five papers were selected by a diverse team of academics, advocates, and industry privacy professionals from FPF’s Advisory Board. They demonstrate thoughtful analysis of emerging issues and propose new means of analysis that can lead to real-world policy impact, making them “must-read” privacy scholarship for policymakers.

Three papers were selected for Honorable Mention: 

For the fourth year in a row, FPF also granted a Student Paper Award. To be considered, student work must meet guidelines similar to those set for the general Call for Nominations. The Student Paper Award is presented to:

The winning authors have been invited to join FPF and Honorary Co-Hosts Senator Ed Markey and Congresswoman Diana DeGette to present their work at the U.S. Senate with policymakers, academics, and industry privacy professionals. 

Held at the Hart Senate Office Building on February 6, 2020, this annual event will feature a keynote speech by FTC Commissioner Christine S. Wilson. FPF will subsequently publish a printed digest of summaries of the winning papers for distribution to policymakers, privacy professionals, and the public.

This event is free, open to the general public, and widely attended. For more information or to RSVP, please visit this page. This event is supported by a National Science Foundation grant. Any opinions, findings and conclusions or recommendations expressed in these papers are those of the authors and do not necessarily reflect the views of the National Science Foundation.

Closer than Apart: Comparing Senate Commerce Committee Bills

Post By: Stacey Gray, Senior Counsel, and Polly Sanderson, Policy Counsel, Future of Privacy Forum

Over the Thanksgiving holiday, we saw for the first time a public Staff Discussion Draft of a federal comprehensive privacy law from the office of Senator Wicker (R-MS), the Chairman of the Senate Commerce Committee. Together with Senator Cantwell (D-WA)’s bill, the Consumer Online Privacy Rights Act, introduced last week with leading Democrat co-sponsors, Senator Wicker’s Discussion Draft represents a significant movement toward bipartisan negotiations in the Senate. We expect to see these negotiations play out during this Wednesday’s Senate Commerce Committee Hearing (12/4 at 10:00 AM), and through the New Year.

How do the two bills, one from leading Democrats, and one from the Republican Chairman, compare to each other? We find them to be closer together on most issues than they are apart: a promising sign for bipartisan progress. Here is FPF’s breakdown of all the major commonalities and differences in the two bills (as they currently exist). 

Significant Commonalities (with some differences):

Significant Differences:

Did we miss anything? Let us know at [email protected] as we continue tracking these developments.

Starting Point for Negotiation: An Analysis of Senate Democratic Leadership’s Landmark Comprehensive Privacy Bill

Today, Senate Commerce Committee Ranking Member Maria Cantwell (D-WA), joined by top Democrats on the Senate Commerce Committee – Senators Markey, Schatz, and Klobuchar – introduced a new comprehensive federal privacy bill, the Consumer Online Privacy Rights Act (COPRA). The bill is consistent with the Senate Democratic leadership positions announced last week and comes in advance of a December 4th Senate Commerce Committee hearing convened by Senator Wicker (R-MS), “Examining Legislative Proposals to Protect Consumer Data Privacy.”

In substance, the bill primarily emphasizes individual control, codifying strong rights for individuals to be informed of data processing, and to be able to access, delete, correct, and port their data. The definition of covered data is broad, aligning with the GDPR and most other US privacy bills to date (data that “identifies, or is linked or reasonably linkable to an individual or a consumer device, including derived data”), although it excludes “de-identified data.” The FTC is tasked with rulemaking to enable centralized opt-outs for non-sensitive data, while “sensitive data” requires opt-in consent.

Notably, the bill contains a nuanced exception to support ethical commercial research if approved, monitored, and governed by an Institutional Review Board (IRB) or an IRB-like oversight entity that meets standards promulgated by the FTC. Such oversight would provide stronger legal protections for “scientific, historical, or statistical research in the public interest” in situations where informed consent is impractical, such as commercial research conducted on Big Data or other large, less readily identifiable datasets.

Below are FPF’s highlights of COPRA’s other provisions.

1. Jurisdictional Scope

2. Data Minimization and Data Security

3. Sensitive Data (and Opt-Outs for Non-Sensitive Data)

4. Third Parties and Service Providers

5. Interaction with State and Federal Laws

6. Algorithmic Discrimination and Civil Rights

7. Enforcement, Accountability, and Whistleblower Protections

READ MORE:

Statement by Future of Privacy Forum CEO Jules Polonetsky on the Consumer Online Privacy Rights Act

WASHINGTON, DC – November 26, 2019 – Statement by Future of Privacy Forum CEO Jules Polonetsky regarding the introduction of a new comprehensive federal privacy bill, the Consumer Online Privacy Rights Act (COPRA), proposed today by Senators Maria Cantwell, Amy Klobuchar, Brian Schatz, and Ed Markey:

“This is the most sophisticated federal proposal to emerge to date and demonstrates that Senate Democrats are committed to setting a high bar for consumer privacy. The bill would codify strong individual rights, meeting and exceeding the California Consumer Privacy Act. It also requires companies to implement training and accountability measures and includes a nuanced exception to support ethical research. The bill provides a strong starting point that will move bipartisan debate forward, with private rights of action, limits on preemption, and the definition of sensitive data, among other issues, likely to be points of ongoing negotiation.”

# # #

The Future of Privacy Forum will post a more detailed analysis of the legislation on its blog.

Media Contacts:

Nat Wood

Future of Privacy Forum

[email protected]

410-507-7898

Collin Boylin

Future of Privacy Forum

[email protected]

860-490-8326

About the Future of Privacy Forum

Future of Privacy Forum is a global non-profit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. Learn more about FPF by visiting www.fpf.org.

Questions to Ask Before You Buy a Genetic Testing Kit on Black Friday

By Rachele Hendricks-Sturrup and Katelyn Ringrose

On Black Friday and Cyber Monday, millions of consumers will hurry to their nearest doorbuster sale or boot up their favorite sales portal to buy a price-slashed consumer genetic testing kit. Some genetic testing kits will be up to half off this year, and the market as a whole is projected to more than triple, from a valuation of $99 million this past year to $310 million in 2022.

Last year on Black Friday, AncestryDNA alone sold about 1.5 million testing kits. According to Wired, that means consumers sent in around 2,000 gallons of saliva—enough spit to fill a modest above-ground swimming pool. Consumers are drawn to the tests for genealogical purposes, and new market offerings aim to raise consumer awareness of genetic health risks.

With that much genetic material changing hands, it is important for consumers to think carefully about which kit providers prioritize consumer privacy. DNA contains deeply personal information that can be incredibly beneficial for consumers, but it may also reveal unexpected and unsettling details, including information about the test taker’s family members. It deserves a high standard of protection.

However, laws like the Health Insurance Portability and Accountability Act (HIPAA), the central U.S. health privacy law, do not apply to genetic information collected and housed by consumer genetic testing companies. Due to this regulatory gap, consumers should find out from the companies themselves, and prior to buying a test for themselves or a loved one, how the companies will protect and use the genetic data they provide and collect.

Here are five important questions consumers should ask before buying a genetic testing kit on Black Friday or Cyber Monday:

  1. Does the Company Ask for Your Consent Before Sharing Your Individual-Level Genetic Data with Third Parties? People choose to share their genetic data with third parties for a range of purposes (e.g., to participate in scientific research or connect with unknown biological relatives). However, genetic testing companies should never share your individual-level genetic data with third parties without your knowledge and consent, particularly with insurers, employers, and educational institutions.
  2. Do You Have the Ability to Delete Your Genetic Data and Destroy Your Biological Sample If You Choose? Companies may have default policies to destroy all samples once testing is completed, retain data or samples for only a finite period of time or in accordance with regulations, or retain data and samples indefinitely or until you close your account. Companies should be clear about their retention practices and offer prominent ways to delete your genetic data from their databases and destroy your biological sample.
  3. Does the Company Require a Valid Legal Process Before They Will Disclose Your Genetic Data to Law Enforcement? As we have seen in high-profile cases like that of the Golden State Killer, genetic data can be a powerful investigative tool for government. However, government access to your genetic data presents substantial privacy risks. Companies should require that government entities obtain valid legal process, like a warrant, subpoena, or court order, before they disclose genetic data.
  4. What are the Company’s Notification Practices When it Comes to Conveying Material Changes to Their Privacy Policies? Companies may modify their privacy policies or statements occasionally, and sometimes they significantly change how genetic data is collected, used, and stored. But before changes are implemented, you should be notified and given an opportunity to review the changes to decide if you want to continue using the company’s services.
  5. Has the Company Committed to Strong Technical Data Security Practices? As more than 26 million individuals have had their DNA tested, the potential for hacking and data breaches is an increasing concern. Given the uniqueness of genetic data, companies should maintain a comprehensive security program through practices such as secure storage of biological samples and genetic data, encryption, data-use agreements, contractual obligations, and accountability measures.

For consumers who are interested in learning more, the Future of Privacy Forum’s Privacy Best Practices for Consumer Genetic Testing Services set forth standards for the collection, use, and sharing of genetic data. The standards embrace express consent mechanisms for the transfer of data to third parties and have provisions restricting marketing based on genetic data, among other privacy-centric protections. Companies that currently support these best practices include: Ancestry, 23andMe, Helix, MyHeritage, Habit, African Ancestry, and Living DNA.

Before you buy a genetic testing kit as a gift or for yourself this holiday season, take a moment to consider how our genetic information shapes who we are… and whether you are dealing with a company that promises to protect it.

For more information and to learn how to become involved with FPF’s health privacy efforts, please contact Katelyn Ringrose at [email protected] or Rachele Hendricks-Sturrup at [email protected].

FPF Welcomes New Members to the Youth & Education Privacy Project

We are thrilled to announce three new members of FPF’s Youth & Education Privacy team. The new staff – Jasmine Park, Anisha Reddy, and Katherine Sledge – will help expand FPF’s technical assistance and training, resource creation and distribution, and state and federal legislative tracking.

You can read more about Katherine, Anisha, and Jasmine below. Please join us in welcoming them to the team!


Jasmine Park

Jasmine Park is a Policy Fellow for the Youth and Education Privacy Project. Jasmine is primarily supporting FPF’s outreach, training, and technical assistance for local and state education agencies (LEAs and SEAs), including FPF’s pilot Train-the-Trainer program and the K-12 privacy working group for LEA/SEA staff. She will also be helping to grow FPF’s child privacy portfolio in the U.S. and abroad. Jasmine recently graduated with an M.A. in Global Affairs from the Yale Jackson Institute for Global Affairs, where she focused on tech policy and digital anthropology. From 2015 to 2017, Jasmine served as a Peace Corps Volunteer in Cambodia, where she gained two years of on-the-ground experience as an educator. She worked closely with local government, school administrators, law enforcement, and community leaders to conduct needs assessments and to provide access to the training and resources necessary to address self-identified needs. She previously interned with the Los Angeles Mayor’s Office of International Affairs and Asian Americans Advancing Justice. Jasmine serves on the board of Brio, a nonprofit that empowers local partners to design and launch mental health solutions in vulnerable communities globally. Jasmine received her B.A. cum laude in History and East Asian Studies from Harvard University.

I most look forward to joining the FPF Education Privacy team’s efforts to equip local administrators with the knowledge and tools they need to implement best practices in their communities.


Anisha Reddy

Anisha Reddy is a Policy Fellow for the Youth and Education Privacy Project. Anisha is primarily supporting FPF’s state and federal legislative analysis and resources. Anisha is also running FPF’s K-12 working group for edtech companies and overseeing the bi-weekly education privacy newsletter. At Penn State’s Dickinson Law, Anisha was honored with the University’s 2017-2018 Montgomery and MacRae Award for Excellence in Administrative Law. She held the offices of Executive Editor for Digital Media of the Dickinson Law Review, President of the Asian Pacific Law Students Association, and Vice President of the Women’s Law Caucus. Anisha served as a Certified Legal Intern for the Children’s Advocacy Clinic in Carlisle, PA, where she represented children involved in civil court actions like adoption, domestic violence, and custody matters. She previously interned at the Governor of Pennsylvania’s Office of General Counsel, at Udacity in Mountain View, CA and at Blockchain, Inc. in New York, NY.

I’m most excited about the unique opportunity to impact the way the student privacy conversation is framed by helping include the voices of all stakeholders – not just the edtech industry, but parents, districts, and the students themselves.


Katherine Sledge

As the Policy Manager for Youth and Education Privacy at the Future of Privacy Forum, Katherine manages projects related to youth and student privacy. Before coming to FPF, Katherine worked with the executive team at the National Network to End Domestic Violence. She also has national and state-level political advocacy experience at the National Alliance to End Homelessness and the ACLU. Prior to transitioning to a career in public policy, Katherine was the Operations Specialist at an environmental firm specializing in remediation projects, where she headed administrative and logistical support for environmental projects across the US.

Katherine graduated from American University with a Master of Public Administration with a custom concentration in Applied Politics: Women, Public Policy, and Political Advocacy. In addition to the core public management curriculum, Katherine focused her studies on the intersection of public policy and gender, as well as advocacy strategy, process, and best practices. Originally from Tennessee, Katherine attended the University of Tennessee, Knoxville, where she earned her B.A. in Political Science.

Interested in student privacy? Subscribe to our monthly education privacy newsletter here. Want more info? Check out FERPA|Sherpa, the education privacy resource center website.

What They’re Saying: Stakeholders Warn Senate Surveillance Bill Could Harm Students, Communities

Parents, privacy advocates, education stakeholders, and members of the disability rights community are raising concerns about new Senate legislation that would mandate unproven student surveillance programs and encourage greater law enforcement intervention in classrooms in a misguided effort to improve school safety.

Last week, Senator John Cornyn (R-TX) introduced the RESPONSE Act, legislation that is intended to help reduce and prevent mass violence in communities. However, the bill includes a provision to dramatically expand the Children’s Internet Protection Act and would require almost every U.S. school to implement costly network monitoring technology and collect massive amounts of student data.

The legislation also requires the creation of school-based “Behavioral Intervention Teams” that will be strongly encouraged to refer concerning student behavior directly to law enforcement, rather than allowing educators who know students best to engage directly and address the issue internally. This provision would likely strengthen the “school to prison pipeline” and could be especially harmful for students of color and students with disabilities.

Take a look at What They’re Saying about the legislation: 

A new Republican bill that claims ‘to help prevent mass shootings’ includes no new gun control measures. Instead, Republican lawmakers are supporting a huge, federally mandated boost to America’s growing school surveillance industry… There is still no research evidence that demonstrates whether or not online monitoring of schoolchildren actually works to prevent violence.

– The Guardian; “Republicans propose mass student surveillance plan to prevent shootings” 

Training behavioral assessment teams to default to the criminal process rather than school-based behavioral assessment and intervention would do little to address violence in schools and would likely foster rather than prevent a violent school environment … By making the criminal process the frontline for student discipline, this bill will only serve to increase the number of students of color and students with disabilities in the juvenile justice system.

– Coalition for Smart Safety; Letter to Senator John Cornyn 

Leslie Boggs, national PTA president, said in a statement that the organization has concerns with the bill as it is currently written. She said the PTA will work with Cornyn’s staff “to ensure evidence-based best practices for protecting students are used, the school to prison pipeline is not increased, students are not discouraged from seeking mental health and counseling support and that students’ online activities are not over monitored.”

– POLITICO; “Questions raised about school safety measures in anti-mass violence bill” 

Privacy experts and education groups, many of which have resisted similar efforts at the state level, say that level of social media and network surveillance can discourage children from speaking their minds online and could disproportionately result in punishment against children of color, who already face higher rates of punishment in school.

– The Hill; “Advocates warn kids’ privacy at risk in GOP gun violence bill” 

Generational gaps between adults and teens make for hefty communication barriers, and a private Facebook message that might read as “dangerous” to a grown law enforcement officer could easily just be two children goofing off… whenever they go online, students would be forced to think about what the government or their school would like and dislike, driving what Republicans so often claim to be against — mental conformity to institutional, government-driven norms. Students’ fears of being watched (and reported) would also inevitably widen the gap between government schools and their students. Surveillance accompanied by the threat of penalty would result in mass distrust from students toward the education system: a reinforced “us versus them” mentality between students and the adults in charge.

– Washington Examiner; “Sorry Republicans, but surveilling schoolchildren is an awful idea” 

Schools are already deploying significant digital surveillance systems in the name of safety…But critics say these surveillance systems vacuum up a huge and irrelevant stream of online data, can lead to false positives, and present huge problems for privacy.

– Education Week; “Senator’s Anti-Violence Bill Backs Active-Shooter Training, School Internet Monitoring” 

Unfortunately, the proposed measures are unlikely to improve school safety; there is little evidence that increased monitoring of all students’ online activities would increase the safety of schoolchildren, and technology cannot yet be used to accurately predict violence. The monitoring requirements would place an unmanageable burden on schools, pose major threats to student privacy, and foster a culture of surveillance in America’s schools. Worse, the RESPONSE Act mandates would reduce student safety by redirecting resources away from evidence-based school safety measures.

– Future of Privacy Forum; “Increased Surveillance is Not an Effective Response to Mass Violence” 

Billed as a response to school shootings, [the RESPONSE Act] has, as critics noted, almost nothing to do with guns, and a great deal to do with increasing surveillance (as well as targeting those with mental health issues)…Not everyone will find this troubling… But if you want to erode civil liberties and traditions of privacy, it’s best to start with people who don’t have the political power to fight back. Children are ideal–not only can’t they fight back, but they will grow up thinking it’s perfectly normal to live under constant surveillance. For their own safety, of course.

– Forbes; “Is Big Brother Watching Your Child? Probably.” 

Rather than focusing on surveillance as a solution to school safety concerns, schools should emphasize the importance of safe and responsible internet use and use school safety funding on evidence-based solutions. By doing so, administrators can create a school community built on trust rather than suspicion.

To learn more about the Future of Privacy Forum’s student privacy project, visit http://www.ferpasherpa.org/.

MEDIA CONTACT
Alexandra Sollberger

ICYMI: New Senate Legislation Mandates “Pervasive Surveillance” in Attempt to Improve School Safety

WASHINGTON, D.C. – Legislation introduced in the U.S. Senate this week is under scrutiny from privacy and disability rights advocates for provisions that would dramatically expand surveillance technologies in schools nationwide, despite a lack of evidence or research confirming that these tools have any effect on preventing or predicting school violence.

According to The Guardian, “A new Republican bill that claims ‘to help prevent mass shootings’ includes no new gun control measures. Instead, Republican lawmakers are supporting a huge, federally mandated boost to America’s growing school surveillance industry… There is still no research evidence that demonstrates whether or not online monitoring of schoolchildren actually works to prevent violence.”

Future of Privacy Forum Senior Counsel and Director of Education Privacy Amelia Vance highlighted the challenges and unintended consequences that could result from the RESPONSE Act sponsored by Senator John Cornyn (R-TX):

Privacy advocates say pervasive surveillance is not appropriate for an educational setting, and that it may actually harm children, particularly students with disabilities and students of color, who are already disproportionately targeted with school disciplinary measures.

“You are forcing schools into a position where they would have to surveil by default,” said Amelia Vance, the director of education privacy at the Future of Privacy Forum.

“There’s a privacy debate to be had about whether surveillance is the right tactic to take in schools, whether it inhibits students’ trust in their schools and their ability to learn,” Vance said. But “the bottom line,” she said, is “we do not have evidence that violence prediction works”…

If Cornyn’s bill becomes law, “you’re going to force probably 10,000 districts to buy a new product that they’re going to have to implement”, she said.

That would mean redirecting public schools’ time and money away from strategies that are backed by evidence, such as supporting mental health and counseling services, and towards dealing with surveillance technologies, which often produce many false alarms, like alerts about essays on To Kill a Mockingbird.

Click here to read the article. To learn more about the Future of Privacy Forum, visit www.fpf.org.

CONTACT

[email protected]