Following YouTube’s September settlement with the Federal Trade Commission (FTC) regarding the Children’s Online Privacy Protection Act (COPPA), YouTube released a video in late November explaining upcoming changes to their platform. The YouTube creator community responded in large numbers, with numerous explainer videos and almost two hundred thousand comments filed in response to the FTC’s just-closed call for comments on the COPPA rule. Some responses have been insightful and sophisticated. Others have indicated confusion and misunderstanding of COPPA’s requirements and scope. This blog addresses some of the most common myths circulating in YouTube videos, tweets, and Instagram posts.
Fact 1: You can’t “stop COPPA” by filing comments with the FTC
Some COPPA explainer videos call for viewers to write comments to the FTC asking them to “stop COPPA.” There are two problems with this call to action.
First, the FTC can’t stop COPPA. COPPA is a federal law, passed by Congress in 1998. The law has existed for over 20 years, and the FTC does not have the authority to get rid of COPPA. Asking the FTC to do something beyond its authority will not change the law, YouTube’s policies, or the rules that creators must follow.
Second, the FTC is reviewing the rules it created in 2013 to determine whether they need to be updated or changed. These rules describe how the FTC implements and enforces COPPA. This is an important opportunity for creators to be involved in how COPPA will shape internet video services – it’s crucial to know what comments can accomplish. Comments to the FTC will influence how the rules will be implemented and enforced in the future but are unlikely to change things that have already occurred, like the YouTube settlement.
The recent changes to YouTube occurred because the platform agreed to modify some of its practices (and pay a fine) rather than go to court against the FTC. The major change for creators involves the addition of the “made for kids” flag: all creators are required to select whether their channel or a specific video is created for kids or for a more general audience. The flag tells YouTube when they can and cannot place targeted advertising (which isn’t allowed for kids under COPPA). The settlement did not require the creation of an algorithm to look for misflagged content or the changes in viewer functionality for “made for kids” videos. YouTube did this, and the FTC can’t change the platform back to the way it was.
That doesn’t mean that creators shouldn’t engage with the FTC. The current FTC comment period has closed, but the process of creating new COPPA rules will likely take years, and there will be more opportunities to be heard. The FTC needs to know what data creators have about their users and how that data drives decision-making. Tell them about your income from advertisements, your level of control over the advertisements on your channel, and what you wish you knew about advertising on YouTube. The FTC needs to understand your business model and the business of content creators on platforms in general. So tell them how information from YouTube informs the content on your Twitch stream, whether it changes how you use your Patreon, and how it influences merchandising decisions. The best way to get useful rules that protect children’s privacy while supporting content creation on the internet is for the FTC to know how this portion of the economy works.
A note about commenting: be polite. When you make comments on a government website, a real person will read them. So don’t use all caps or profanity, and write in complete sentences. The point is not to fight the FTC but to inform them of your concerns. While asking your viewers to comment can be useful, make sure that you remind them to be kind as well. Civil, smart comments are more likely to have a positive influence.
Fact 2: COPPA is not infringing on First Amendment rights
COPPA regulates the collection, use, and sharing of data collected from children. COPPA does impose specific obligations on websites and online services directed to children, but those obligations do not dictate the content you can or cannot create. COPPA may make it more difficult to profit from child-directed content, but this does not mean that anyone’s rights have been infringed. YouTube’s Terms of Service and platform design have more influence than any American law over the kind of content users can create.
Fact 3: Not everything is child-directed
YouTube’s video explaining COPPA-related changes to the platform includes a list of factors that can indicate whether a video or channel is made for kids.
This has caused some confusion. Just because a channel or video may check one or two of these boxes does not mean that it is child-directed. The FTC said as much in its blog post (if you have not read it yet, please do). The blog was written specifically to answer questions about what constitutes “child-directed” content.
If you’re still confused about whether your content is child-directed, ask yourself, “Who am I speaking to?” When you create a video, you should have an idea of who will watch it. And humans use different communication strategies for different kinds of people. For example, think about sitcoms created for major TV networks like CBS, NBC, or ABC versus sitcoms created for the Disney Channel or Nickelodeon. While the two types share similarities, including the format, some of the situations, and the presence of a laugh track, there are major differences that reflect the different audiences. Some sitcoms created for children have simpler language, vivid costumes, or include extensive over-acting (for a visual explanation, watch this clip from SNL). These features make the content more engaging for children. Think about your content. Where does it belong: the Disney Channel or ABC?
After you decide whether your channel or individual video is made for kids, write down why you made that decision. There are two reasons for doing this. First, if the FTC contacts you or if YouTube changes the flag on your video, you will have already prepared your response. You won’t have to reconstruct the reasoning behind decisions you may have made months or even years earlier. Second, and more importantly, this will keep your actions consistent. As you continue to work and create new videos, you may not remember why you said one video was made for kids while another wasn’t. Writing your reasoning down not only creates a record of your decision-making process but should also make it easier to stay consistent in the long run.
Fact 4: A COPPA violation probably won’t bankrupt your channel
The FTC has stated that when it determines fines for violations, it considers “a company’s financial condition and the impact a penalty could have on its ability to stay in business.” The FTC’s mission is not to put people out of business but to protect consumers. They have limited staff and limited resources; targeting small channels, or channels where reasonable people could disagree about whether the content is child-directed, is not typically a good use of their time. But this does not excuse creators from reviewing videos and flagging content as appropriate. You must comply with COPPA, but if you make a mistake, the FTC’s likely first action would be to ask you to change your flag rather than impose a large fine.
A final piece of advice: do not panic. Panic won’t help. Take a deep breath, review your channel, and stay informed about any other changes that YouTube may announce.
Privacy Papers 2019
The winners of the 2019 Privacy Papers for Policymakers (PPPM) Award are:
by Ignacio N. Cofone, McGill University Faculty of Law
Abstract
Law often blocks sensitive personal information to prevent discrimination. It does so, however, without a theory or framework to determine when doing so is warranted. As a result, these measures produce mixed results. This article offers a framework for determining, with a view to preventing discrimination, when personal information should flow and when it should not. It examines the relationship between precluded personal information, such as race, and the proxies for precluded information, such as names and zip codes. It proposes that the success of these measures depends on what types of proxies exist for the information blocked and it explores in which situations those proxies should also be blocked. This framework predicts the effectiveness of antidiscriminatory privacy rules and offers the potential of wider protection for minorities.
by Woodrow Hartzog, Northeastern University, School of Law and Khoury College of Computer Sciences and Neil M. Richards, Washington University, School of Law and the Cordell Institute for Policy in Medicine & Law
Abstract
America’s privacy bill has come due. Since the dawn of the Internet, Congress has repeatedly failed to build a robust identity for American privacy law. But now both California and the European Union have forced Congress’s hand by passing the California Consumer Privacy Act (CCPA) and the General Data Protection Regulation (GDPR). These data protection frameworks, structured around principles for Fair Information Processing called the “FIPs,” have industry and privacy advocates alike clamoring for a “U.S. GDPR.” States seem poised to blanket the country with FIP-based laws if Congress fails to act. The United States is thus in the midst of a “constitutional moment” for privacy, in which intense public deliberation and action may bring about constitutive and structural change. And the European data protection model of the GDPR is ascendant.
In this article we highlight the risks of U.S. lawmakers embracing a watered-down version of the European model as American privacy law enters its constitutional moment. European-style data protection rules have undeniable virtues, but they won’t be enough. The FIPs assume data processing is always a worthy goal, but even fairly processed data can lead to oppression and abuse. Data protection is also myopic because it ignores how industry’s appetite for data is wrecking our environment, our democracy, our attention spans, and our emotional health. Even if E.U.-style data protection were sufficient, the United States is too different from Europe to implement and enforce such a framework effectively on its European law terms. Any U.S. GDPR would in practice be what we call a “GDPR-Lite.”
Our argument is simple: In the United States, a data protection model cannot do it all for privacy, though if current trends continue, we will likely entrench it as though it can. Drawing from constitutional theory and the traditions of privacy regulation in the United States, we propose instead a “comprehensive approach” to privacy that is better focused on power asymmetries, corporate structures, and a broader vision of human well-being. Settling for an American GDPR-lite would be a tragic ending to a real opportunity to tackle the critical problems of the information age. In this constitutional moment for privacy, we can and should demand more. This article offers a path forward to do just that.
by Margot E. Kaminski, University of Colorado Law and Gianclaudio Malgieri, Vrije Universiteit Brussel (VUB) – Faculty of Law
Abstract
Policy-makers, scholars, and commentators are increasingly concerned with the risks of using profiling algorithms and automated decision-making. The EU’s General Data Protection Regulation (GDPR) has tried to address these concerns through an array of regulatory tools. As one of us has argued, the GDPR combines individual rights with systemic governance, towards algorithmic accountability. The individual tools are largely geared towards individual “legibility”: making the decision-making system understandable to an individual invoking her rights. The systemic governance tools, instead, focus on bringing expertise and oversight into the system as a whole, and rely on the tactics of “collaborative governance,” that is, use public-private partnerships towards these goals. How these two approaches to transparency and accountability interact remains a largely unexplored question, with much of the legal literature focusing instead on whether there is an individual right to explanation.
The GDPR contains an array of systemic accountability tools. Of these tools, impact assessments (Art. 35) have recently received particular attention on both sides of the Atlantic, as a means of implementing algorithmic accountability at early stages of design, development, and training. The aim of this paper is to address how a Data Protection Impact Assessment (DPIA) links the two faces of the GDPR’s approach to algorithmic accountability: individual rights and systemic collaborative governance. We address the relationship between DPIAs and individual transparency rights. We propose, too, that impact assessments link the GDPR’s two methods of governing algorithmic decision-making by both providing systemic governance and serving as an important “suitable safeguard” (Art. 22) of individual rights.
After noting the potential shortcomings of DPIAs, this paper closes with a call — and some suggestions — for a Model Algorithmic Impact Assessment in the context of the GDPR. Our examination of DPIAs suggests that the current focus on the right to explanation is too narrow. We call, instead, for data controllers to consciously use the required DPIA process to produce what we call “multi-layered explanations” of algorithmic systems. This concept of multi-layered explanations not only more accurately describes what the GDPR is attempting to do, but also normatively better fills potential gaps between the GDPR’s two approaches to algorithmic accountability.
by Arunesh Mathur, Princeton University; Gunes Acar, Princeton University; Michael Friedman, Princeton University; Elena Lucherini, Princeton University; Jonathan Mayer, Princeton University; Marshini Chetty, University of Chicago; and Arvind Narayanan, Princeton University
Abstract
Dark patterns are user interface design choices that benefit an online service by coercing, steering, or deceiving users into making unintended and potentially harmful decisions. We present automated techniques that enable experts to identify dark patterns on a large set of websites. Using these techniques, we study shopping websites, which often use dark patterns to influence users into making more purchases or disclosing more information than they would otherwise. Analyzing ~53K product pages from ~11K shopping websites, we discover 1,818 dark pattern instances, together representing 15 types and 7 broader categories. We examine these dark patterns for deceptive practices, and find 183 websites that engage in such practices. We also uncover 22 third-party entities that offer dark patterns as a turnkey solution. Finally, we develop a taxonomy of dark pattern characteristics that describes the underlying influence of the dark patterns and their potential harm on user decision-making. Based on our findings, we make recommendations for stakeholders including researchers and regulators to study, mitigate, and minimize the use of these patterns.
by Paul Ohm, Georgetown University Law Center
Abstract
Carpenter v. United States, the 2018 Supreme Court opinion that requires the police to obtain a warrant to access an individual’s historical whereabouts from the records of a cell phone provider, is the most important Fourth Amendment opinion in decades. Although many have acknowledged some of the ways the opinion has changed the doctrine of Constitutional privacy, the importance of Carpenter has not yet been fully appreciated. Carpenter works many revolutions in the law, not only through its holding and new rule, but in more fundamental respects. The opinion reinvents the reasonable expectation of privacy test as it applies to large databases of information about individuals. It turns the third-party doctrine inside out, requiring judges to scrutinize the products of purely private decisions. In dicta, it announces a new rule of technological equivalence, which might end up covering more police activity than the core rule. Finally, it embraces technological exceptionalism as a centerpiece for the interpretation of the Fourth Amendment, rejecting backwards-looking interdisciplinary methods such as legal history or surveys of popular attitudes. Considering all of these revolutions, Carpenter is the most important Fourth Amendment decision since Katz v. United States, a case it might end up rivaling in influence.
The 2019 PPPM Honorable Mentions are:
Can You Pay for Privacy? Consumer Expectations and the Behavior of Free and Paid Apps by Kenneth Bamberger, University of California, Berkeley – School of Law; Serge Egelman, University of California, Berkeley – Department of Electrical Engineering & Computer Sciences; Catherine Han, University of California, Berkeley; Amit Elazari Bar On, University of California, Berkeley; and Irwin Reyes, University of California, Berkeley
Abstract
“Paid” digital services have been touted as straightforward alternatives to the ostensibly “free” model, in which users actually face a high price in the form of personal data, with limited awareness of the real cost incurred and little ability to manage their privacy preferences. Yet the actual privacy behavior of paid services, and consumer expectations about that behavior, remain largely unknown.
This Article addresses that gap. It presents empirical data both comparing the true cost of “paid” services with that of their so-called “free” counterparts, and documenting consumer expectations about the relative behaviors of each.
We first present an empirical study that documents and compares the privacy behaviors of 5,877 Android apps that are offered both as free and paid versions. The sophisticated analysis tool we employed, AppCensus, allowed us to detect exactly which sensitive user data is accessed by each app and with whom it is shared. Our results show that paid apps often share the same implementation characteristics and resulting behaviors as their free counterparts. Thus, if users opt to pay for apps to avoid privacy costs, in many instances they do not receive the benefit of the bargain. Worse, we find that there are no obvious cues that consumers can use to determine when the paid version of a free app offers better privacy protections than its free counterpart.
We complement this data with a second study: surveying 1,000 mobile app users as to their perceptions of the privacy behaviors of paid and free app versions. Participants were more likely to expect that the free version would share their data with advertisers and law enforcement agencies than the paid version, and that it would be more likely to keep their data on the app’s servers when no longer needed for core app functionality. By contrast, consumers are more likely to expect the paid version to engage in privacy-protective practices, to demonstrate transparency with regard to its data collection and sharing behaviors, and to offer more granular control over the collection of user data in that context.
Together, these studies identify ways in which the actual behavior of apps fails to comport with users’ expectations, and the way that representations of an app as “paid” or “ad-free” can mislead users. They also raise questions about the salience of those expectations for consumer choices.
In light of this combined research, we then explore three sets of ramifications for policy and practice.
First, our finding that paid services often conduct data collection and sale as extensive as their free counterparts challenges understandings about how the “pay for privacy” model operates in practice, its promise as a privacy-protective alternative, and the legality of paid app behavior.
Second, by providing empirical foundations for better understanding both corporate behavior and consumer expectations, our findings support research into ways that users’ beliefs about technology business models and developer behavior are actually shaped, and the manipulability of consumer decisions about privacy protection, undermining the legitimacy of legal regimes relying on fictive user “consent” that does not reflect knowledge of actual market behavior.
Third, our work demonstrates the importance of the kind of technical tools we use in our study — tools that offer transparency about app behaviors, empowering consumers and regulators. Our study demonstrates that, at least in the most dominant example of a free vs. paid market — mobile apps — there turns out to be no real privacy-protective option. Yet the failures of transparency or auditability of app behaviors deprive users, regulators, and law enforcement of any means to keep developers accountable, and privacy is removed as a salient concern to guide user behavior. Dynamic analysis of the type we performed can both allow users to go online and test, in real-time, an app’s privacy behavior, empowering them as advocates and informing their choices to better align expectations with reality. The same tools, moreover, can equip regulators, law enforcement, consumer protection organizations, and private parties seeking to remedy undesirable or illegal privacy behavior.
AI – in its interplay with Big Data, ambient intelligence, ubiquitous computing and cloud computing – augments the existing major, qualitative and quantitative, shift with regard to the processing of personal information. The questions that arise are of crucial importance both for the development of AI and the efficiency of the data protection arsenal: Is the current legal framework AI-proof? Are the data protection and privacy rules and principles adequate to deal with the challenges of AI, or do we need to elaborate new principles to work alongside the advances of AI technology? Our research focuses on the assessment of the GDPR, which, however, does not specifically address AI, as the regulatory choice consisted more in what we perceive as “technology-independent legislation.”
The paper will give a critical overview and assessment of the provisions of GDPR that are relevant for the AI-environment, i.e. the scope of application, the legal grounds with emphasis on consent, the reach and applicability of data protection principles and the new (accountability) tools to enhance and ensure compliance.
Usable and Useful Privacy Interfaces (book chapter to appear in An Introduction to Privacy for Technology Professionals, Second Edition) by Florian Schaub, University of Michigan School of Information, and Lorrie Faith Cranor, Carnegie Mellon University
Abstract
The design of a system or technology, in particular its user experience design, affects and shapes how people interact with it. Privacy engineering and user experience design frequently intersect. Privacy laws and regulations require that data subjects are informed about a system’s data practices, asked for consent, provided with a mechanism to withdraw consent, and given access to their own data. To satisfy these requirements and address users’ privacy needs, most services offer some form of privacy notices, privacy controls, or privacy settings to users.
However, too often privacy notices are not readable, people do not understand what they consent to, and people are not aware of certain data practices or the privacy settings or controls available to them. The challenge is that an emphasis on meeting legal and regulatory obligations is not sufficient to create privacy interfaces that are usable and useful for users. Usable means that people can find, understand and successfully use provided privacy information and controls. Useful means that privacy information and controls align with users’ needs with respect to making privacy-related decisions and managing their privacy. This chapter provides insights into the reasons why it can be difficult to design privacy interfaces that are usable and useful. It further provides guidance and best practices for user-centric privacy design that meets both legal obligations and users’ needs. Designing effective privacy user experiences not only makes it easier for users to manage and control their privacy, but also benefits organizations by minimizing surprise for their users and facilitating user trust. Any privacy notice and control is not just a compliance tool but rather an opportunity to engage with users about privacy, to explain the rationale behind practices that may seem invasive without proper context, to make users aware of potential privacy risks, and to communicate the measures and effort taken to mitigate those risks and protect users’ privacy.
Privacy laws, privacy technology, and privacy management are typically centered on information – how information is collected, processed, stored, transferred, how information can and must be protected, and how to ensure compliance and accountability. To be effective, designing privacy user experiences requires a shift in focus: while information and compliance are of course still relevant, user-centric privacy design focuses on people, their privacy needs, and their interaction with a system’s privacy interfaces.
Why is it important to pay attention to the usability of privacy interfaces? How do people make privacy decisions? What drives their privacy concerns and behavior? We answer these questions in this chapter and then provide an introduction to user experience design. We discuss common usability issues in privacy interfaces, and describe a set of privacy design principles and a user-centric process for designing usable and effective privacy interfaces, concluding with an overview of best practices.
The design of usable privacy notices and controls is not trivial, but this chapter hopefully motivated why it is important to invest the effort in getting the privacy user experience right – making sure that privacy information and controls are not only compliant with regulation but also address and align with users’ needs. Careful design of the privacy user experience can support users in developing an accurate and more complete understanding of a system and its data practices. Well-designed and user-tested privacy interfaces provide responsible privacy professionals and technologists with the confidence that an indication of consent was indeed an informed and freely-given expression by the user. Highlighting unexpected data practices and considering secondary and incidental users reduces surprise for users and hopefully prevents privacy harms, social media outcries, bad press, and fines from regulators. Importantly, a privacy interface is not just a compliance tool but rather an opportunity to engage with users about privacy, to explain the rationale behind practices that may seem invasive without proper context, to make users aware of potential privacy risks, and to communicate the measures and effort taken to mitigate those risks and protect users’ privacy.
The 2019 PPPM Student Paper Winner Is:
Privacy Attitudes of Smart Speaker Users by Nathan Malkin, Joe Deatrick, Allen Tong, Primal Wijesekera, Serge Egelman, and David Wagner, University of California, Berkeley
Abstract
As devices with always-on microphones located in people’s homes, smart speakers have significant privacy implications. We surveyed smart speaker owners about their beliefs, attitudes, and concerns about the recordings that are made and shared by their devices. To ground participants’ responses in concrete interactions, rather than collecting their opinions abstractly, we framed our survey around randomly selected recordings of saved interactions with their devices. We surveyed 116 owners of Amazon and Google smart speakers and found that almost half did not know that their recordings were being permanently stored and that they could review them; only a quarter reported reviewing interactions, and very few had ever deleted any. While participants did not consider their own recordings especially sensitive, they were more protective of others’ recordings (such as children and guests) and were strongly opposed to use of their data by third parties or for advertising. They also considered permanent retention, the status quo, unsatisfactory. Based on our findings, we make recommendations for more agreeable data retention policies and future privacy controls.
Read more about the winners in the Future of Privacy Forum’s blog post.
This Year’s Must-Read Privacy Papers: FPF Announces Recipients of Annual Award
Today, FPF announced the winners of the 10th Annual Privacy Papers for Policymakers (PPPM) Award. This Award recognizes leading privacy scholarship that is relevant to policymakers in the United States Congress, at U.S. federal agencies, and at data protection authorities abroad. The winners of the 2019 PPPM Award are:
Privacy’s Constitutional Moment and the Limits of Data Protection by Woodrow Hartzog, Northeastern University, School of Law and Khoury College of Computer Sciences and Neil M. Richards, Washington University, School of Law and the Cordell Institute for Policy in Medicine & Law
Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites by Arunesh Mathur, Princeton University; Gunes Acar, Princeton University; Michael Friedman, Princeton University; Elena Lucherini, Princeton University; Jonathan Mayer, Princeton University; Marshini Chetty, University of Chicago; and Arvind Narayanan, Princeton University
These five papers were selected by a diverse team of academics, advocates, and industry privacy professionals from FPF’s Advisory Board. These papers demonstrate a thoughtful analysis of emerging issues and propose new means of analysis that can lead to real-world policy impact, making them “must-read” privacy scholarship for policymakers.
Three papers were selected for Honorable Mention:
Can You Pay for Privacy? Consumer Expectations and the Behavior of Free and Paid Apps by Kenneth Bamberger, University of California, Berkeley – School of Law; Serge Egelman, University of California, Berkeley – Department of Electrical Engineering & Computer Sciences; Catherine Han, University of California, Berkeley; Amit Elazari Bar On, University of California, Berkeley; and Irwin Reyes, University of California, Berkeley
Usable and Useful Privacy Interfaces (book chapter to appear in An Introduction to Privacy for Technology Professionals, Second Edition, published by IAPP) by Florian Schaub, University of Michigan School of Information, and Lorrie Faith Cranor, Carnegie Mellon University
For the fourth year in a row, FPF also granted a Student Paper Award. To be considered, student work must meet guidelines similar to those set for the general Call for Nominations. The Student Paper Award is presented to:
Privacy Attitudes of Smart Speaker Users by Nathan Malkin, Joe Deatrick, Allen Tong, Primal Wijesekera, Serge Egelman, and David Wagner, University of California, Berkeley
The winning authors have been invited to join FPF and Honorary Co-Hosts Senator Ed Markey and Congresswoman Diana DeGette to present their work at the U.S. Senate with policymakers, academics, and industry privacy professionals.
Held at the Hart Senate Office Building on February 6, 2020, this annual event will feature a keynote speech by FTC Commissioner Christine S. Wilson. FPF will subsequently publish a printed digest of summaries of the winning papers for distribution to policymakers, privacy professionals, and the public.
This event is free, open to the general public, and widely attended. For more information or to RSVP, please visit this page. This event is supported by a National Science Foundation grant. Any opinions, findings and conclusions or recommendations expressed in these papers are those of the authors and do not necessarily reflect the views of the National Science Foundation.
Closer than Apart: Comparing Senate Commerce Committee Bills
Post By: Stacey Gray, Senior Counsel, and Polly Sanderson, Policy Counsel, Future of Privacy Forum
Over the Thanksgiving holiday, we saw for the first time a public Staff Discussion Draft of a federal comprehensive privacy law from the office of Senator Wicker (R-MS), the Chairman of the Senate Commerce Committee. Together with Senator Cantwell (D-WA)’s bill, the Consumer Online Privacy Rights Act, introduced last week with leading Democrat co-sponsors, Senator Wicker’s Discussion Draft represents a significant movement toward bipartisan negotiations in the Senate. We expect to see these negotiations play out during this Wednesday’s Senate Commerce Committee Hearing (12/4 at 10:00 AM), and through the New Year.
How do the two bills, one from leading Democrats, and one from the Republican Chairman, compare to each other? We find them to be closer together on most issues than they are apart: a promising sign for bipartisan progress. Here is FPF’s breakdown of all the major commonalities and differences in the two bills (as they currently exist).
Significant Commonalities (with some differences):
Covered Data – Both bills define covered data as “linked or reasonably linkable,” including inferences or derived data; both exclude employee data, de-identified data, and public records. The Wicker draft would also broadly exempt publicly available information from regulation (a key difference below).
Covered Entities and Small Business Exemption – Both bills would govern commercial businesses, either those already governed by the FTC Act (Cantwell), or those “affecting interstate commerce” (Wicker). Both contain a small business exemption, from the entire bill (Cantwell), or from the bulk of core obligations (Wicker).
Transparency – Both would require detailed public privacy policies, including: categories of data collected/transferred, processing purposes, retention practices, and how to exercise rights. Cantwell additionally would require disclosure of the specific identities of all third parties to which data is transferred (102(b)(3)(B)).
Access – Both would require companies to provide individuals with a copy “or accurate representation” of their data upon “verified request” and the names of third parties to whom it has been transferred (and, in Wicker, the names of service providers), free of charge (although Wicker limits free requests to 2/year).
Deletion, Correction & Portability – Both bills would require companies, upon “verified request” to correct or delete covered data of an individual and inform service providers and third parties of the request, although the Wicker draft uses the phrase “delete or deidentify.” Both would require that data be provided upon request “in a structured, interoperable, and machine-readable format,” “without licensing restrictions,” although to this list Wicker adds the term “standards-based.” Both exclude inferred or derived data from portability requests.
Opt-Outs for Non-Sensitive Data – Cantwell would establish a right to object to data transfers, subject to a process established by FTC rulemaking, within guidelines (clear and conspicuous, centralized, with the ability for individuals to view and change the status of their objection, and informed by the Do Not Call Registry) (105(b)). Wicker would more broadly allow individuals to “object to the processing and transfer” of data, but would not require specific rulemaking on this point (104(d)).
Opt-In for Sensitive Data – Both bills would require “prior, affirmative express consent” for processing of sensitive data, including biometric data (with restrictions on what consent must require, in definitions for Cantwell, and in the text of Sec. 105 for Wicker). The definition of “sensitive data” in the Cantwell bill is significantly broader, containing “browsing history,” “email,” and “phone number” (these are not included in Wicker).
Commercial IRBs for Ethical Research – Among similar lists of enumerated exceptions to individual rights (e.g., product recalls, audits, etc.), both bills contain an exemption for research governed by commercial IRBs or similar oversight entities meeting standards promulgated by the FTC (Cantwell 110(d)) (Wicker 108(a)).
Privacy and Security Officers & Risk Assessments – Both bills would require companies to appoint a designated privacy officer and security officer, with responsibilities to facilitate compliance with the Acts. In addition, the Cantwell bill would create specific responsibilities to implement and oversee a comprehensive data privacy and security program, including annual “privacy and data security risk assessments,” among their other quality control responsibilities (Sec. 202). Wicker would only require “large data holders” to undergo similar internal “privacy impact assessments” (Sec. 107).
Deepfakes / Digital Content Forgeries – Both bills would require federal agencies (NIST, and in Wicker, NIST and the FTC) to study and produce reports on the definition, assessment, and analysis of “digital content forgeries.”
Algorithmic Bias & Civil Rights – Cantwell would require companies to conduct annual algorithmic decisionmaking impact assessments for bias, and both bills would require the FTC to publish a report (Cantwell) or a report every 5 years (Wicker) examining the uses of algorithms to process covered data in ways that may violate anti-discrimination laws. In addition, Cantwell goes significantly further by directly prohibiting covered entities from processing or transferring covered data on the basis of protected characteristics for specified purposes (Sec. 108).
Significant Differences:
Effective Dates – Cantwell: 180 days after passage; Wicker: 2 years
Publicly available information – While both bills would exclude “public records” from covered data, Cantwell would still govern “publicly available information” (although excluding it from deletion and correction obligations). In contrast, the Wicker draft would exclude all data “widely available to the general public” from the definition of covered data, thus exempting it from all regulation.
Enforcement – While both bills situate the FTC and State AGs as enforcers, the Cantwell bill would also create a broad private right of action for individuals to bring civil actions, and receive penalties of $100-$1,000 per violation per day, or actual damages, whichever is greater; punitive damages; reasonable attorney’s fees and litigation costs; and/or any other judicial relief deemed appropriate.
Preemption – The Wicker bill would preempt very broadly (any state “law, regulation, rule, requirement, or standard related to the data privacy or security and associated activities of covered entities.”) In contrast, the Cantwell bill would not preempt any state laws beyond those “directly conflicting,” to the extent of the conflict.
Children’s Data (Wicker Only) – While the Cantwell bill does not contain any special protections for children’s data, the Wicker draft would require affirmative consent prior to any transfers of data from children under 16. (Wicker 104(c)).
Certification Programs (Wicker Only) – The Wicker draft would allow the FTC to approve self-regulatory “certification programs” developed by businesses or industry associations, so long as they meet certain requirements for eligibility (Sec. 403).
Data Broker Registration Requirements (Wicker Only) – The Wicker draft would require data brokers (a “data broker” is an entity that “knowingly collects or processes on behalf of, or transfers to, third parties the covered data of an individual with whom the entity does not have a direct relationship”) to register basic contact information annually with the FTC, with civil penalties for failing to register.
Executive Responsibility (Cantwell only) – The Cantwell bill would require the CEO or top officer, and the privacy and data security officers, of “large data holders,” to make annual certifications to the FTC regarding adequate internal controls.
Whistleblower Provisions (effectively Cantwell Only) – The Cantwell bill would create robust protections for whistleblowers (defined broadly) against any retaliatory action (also defined broadly), including civil remedies of reinstatement, back pay and damages. In contrast, the Wicker draft defines “whistleblower” narrowly (provider of “original information” to an agency), and provides no legal remedies to individuals; instead, it provides only that the FTC may take into account any retaliation against a whistleblower when assessing penalties.
Did we miss anything? Let us know at [email protected] as we continue tracking these developments.
Starting Point for Negotiation: An Analysis of Senate Democratic Leadership’s Landmark Comprehensive Privacy Bill
Today, Senate Commerce Committee Ranking Member Maria Cantwell (D-WA), joined by top Democrats on the Senate Commerce Committee – Senators Markey, Schatz and Klobuchar – introduced a new comprehensive federal privacy bill, the Consumer Online Privacy Rights Act (COPRA). The bill is consistent with the Senate Democratic leadership positions announced last week and comes in advance of a December 4th Senate Commerce Committee hearing convened by Senator Wicker (R-Miss), “Examining Legislative Proposals to Protect Consumer Data Privacy.”
In substance, the bill primarily emphasizes individual control, codifying strong rights for individuals to be informed of data processing, and to be able to access, delete, correct, and port their data. The definition of covered data is broad, aligning with the GDPR and most other US privacy bills to date (data that “identifies, or is linked or reasonably linkable to an individual or a consumer device, including derived data”), although it excludes “de-identified data.” The FTC is tasked with rulemaking to enable centralized opt-outs for non-sensitive data, while “sensitive data” requires opt-in consent.
Notably, the bill contains a nuanced exception to support ethical commercial research if approved, monitored, and governed by an Institutional Review Board (IRB) or an IRB-like oversight entity that meets standards promulgated by the FTC. Such oversight would provide stronger legal protections for “scientific, historical, or statistical research in the public interest” in situations where informed consent is impractical, such as commercial research conducted on Big Data or other large, less readily identifiable datasets.
Below are FPF’s highlights of COPRA’s other provisions.
1. Covered Entities
“Covered entity” is defined in the bill as “any entity or person that is subject to the Federal Trade Commission Act (15 U.S.C. 41 et seq.) other than a small business.”
Excludes small businesses, defined as those that: do not maintain an average annual revenue of over $25,000,000; do not annually process covered data of an average of 50,000 individuals, households, or devices; and do not derive half or more of their annual revenue from transferring covered data. (The interaction of these thresholds is illustrated in the sketch after this list.)
Unlike other federal proposals, this bill would not govern non-profit organizations, political campaigns, or any other entity not already subject to the FTC’s jurisdiction.
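To make the three-part test above concrete, here is a minimal, hypothetical Python sketch of how the exemption criteria might combine. The function name, parameters, and the simplified treatment of “averages” and boundary cases are illustrative assumptions; only the threshold figures come from the summary above, not from the bill text itself.

```python
# Hypothetical illustration of how COPRA's small-business exemption thresholds
# might combine. Threshold figures are taken from the summary above; the names,
# inputs, and boundary handling (e.g., exactly 50,000 individuals) are
# simplifying assumptions, not language from the bill.

def is_exempt_small_business(avg_annual_revenue: float,
                             avg_individuals_processed: int,
                             data_transfer_revenue_share: float) -> bool:
    """Return True when an entity stays below all three thresholds."""
    below_revenue_cap = avg_annual_revenue <= 25_000_000
    below_processing_cap = avg_individuals_processed < 50_000
    below_data_revenue_cap = data_transfer_revenue_share < 0.5  # less than half of revenue
    return below_revenue_cap and below_processing_cap and below_data_revenue_cap

# Exceeding any single threshold would make the entity a covered entity.
print(is_exempt_small_business(30_000_000, 20_000, 0.1))  # False: revenue too high
print(is_exempt_small_business(10_000_000, 20_000, 0.1))  # True: exempt small business
```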
2. Data Minimization and Data Security
The bill features a general right to data minimization, a provision that may place significant limits on collection of data. Businesses may not process or transfer data beyond what is “reasonably necessary, proportionate, and limited” to (1) carry out the specific processing purposes and transfers described in the privacy policy made available by the covered entity; (2) carry out a specific processing purpose or transfer for which the covered entity has obtained affirmative express consent; or (3) for a purpose specifically permitted by the Act.
Covered entities must maintain “reasonable data security practices,” including specific obligations to assess vulnerabilities, take preventative and corrective actions, implement policies for information retention and disposal, and conduct employee training.
3. Sensitive Data (and Opt-Outs for Non-Sensitive Data)
The bill requires “prior, affirmative, express consent” to collect or process sensitive data, which includes (among other things): biometric data, information that reveals past, present, or future physical or mental health, disability, or diagnosis of an individual, precise geolocation, details of private communications, phone or text logs, race or ethnicity, union membership, sexual orientation or sexual behavior, and intimate images (a notable inclusion that may intersect with Section 230 and bills like the ENOUGH Act). The FTC can also determine other data to be sensitive through rulemaking.
Sensitive data also includes “email address,” “phone number,” and “browsing history,” which are commonly collected for online and mobile advertising and marketing purposes under an “opt out” standard promoted by industry self-regulation bodies.
Prior, affirmative opt-in consent is required for both processing and transferring of sensitive data. For all other data (“non-sensitive”), the bill provides a right to opt-out of the transfer of personal information, with the FTC tasked with promulgating rules for how such opt-outs should be governed, including the extent to which they may be centralized in ways similar to the FTC’s Do Not Call Registry.
4. Third Parties and Service Providers
Third parties and service providers are defined similarly to how these terms are defined in the California Consumer Privacy Act:
“Service providers” are entities that process or transfer covered data “in the course of performing a service or function on behalf of, and at the direction of, another covered entity,” but only to the extent that such processing or transferral (i) relates to the performance of such service or function; or (ii) is necessary to comply with a legal obligation or to establish, exercise, or defend legal claims.
“Third parties” include any other non-service-provider entity that processes or transfers third-party data and that is not under common ownership, corporate control, or branding with the covered entity.
Businesses must identify “third parties” with whom data is shared, and allow individuals to opt out of or object to transfers of data to third parties through rules promulgated by the FTC. Service providers may not transfer data to a third party without express affirmative consent from the linkable individual. Meanwhile, “third parties” are prohibited from processing covered data in a manner “inconsistent with the expectations of a reasonable individual.”
The bill also requires the FTC to promulgate strict rules limiting processing and restricting transfers of “biometric data” to third parties.
5. Interaction with State and Federal Laws
The bill would preempt only “directly conflicting” state laws, and then only to the extent of the conflict. It would not preempt any aspects of state laws that afford “a greater level of protection to individuals protected under this Act.”
Similarly, it would not preempt or displace any “common law rights or remedies, or any statute creating a remedy for civil relief, including any cause of action for personal injury, wrongful death, property damage, or other financial, physical, reputational, or psychological injury based in negligence, strict liability, products liability, failure to warn, an objectively offensive intrusion into the private affairs or concerns of the individual, or any other legal theory of liability under any Federal or State common law, or any State statutory law.”
Similarly, the bill does not supersede any existing federal sectoral laws. However, covered entities that are already required to comply with privacy requirements of federal sectoral laws — including Gramm-Leach-Bliley, the Fair Credit Reporting Act (FCRA), the Health Information Technology for Economic and Clinical Health Act, the Family Educational Rights and Privacy Act (FERPA), or Health Insurance Portability and Accountability Act (HIPAA) — will be “deemed to be in compliance” with the related requirements of the Act (with the exception of the data security requirements in Section 107).
6. Algorithmic Discrimination and Civil Rights
The bill prohibits covered entities from processing or transferring covered data on the basis of an individual’s or class of individuals’ actual or perceived race, color, ethnicity, religion, national origin, sex, gender, gender identity, sexual orientation, familial status, biometric information, lawful source of income, or disability – (A) for the purpose of advertising, marketing, soliciting, offering, selling, leasing, licensing, renting, or otherwise commercially contracting for a housing, employment, credit, or education opportunity, in a manner that unlawfully discriminates against or otherwise makes the opportunity unavailable to the individual or class of individuals; or (B) in a manner that unlawfully segregates, discriminates against, or otherwise makes unavailable, to an individual or class of individuals, goods, services, facilities, privileges, advantages, or accommodations of any place of public accommodation.
Covered entities must designate privacy and security officers (see below) who, among other things, are required to annually conduct algorithmic decision-making impact assessments. These assessments must describe and evaluate the development of the entity’s algorithmic decision-making processes (including the design and training data used to develop the algorithmic decision-making process), how they were tested for accuracy, fairness, bias, and discrimination, and whether the system produces discriminatory results on the basis of protected characteristics. A covered entity may use external, independent auditors or researchers to make such assessments.
Three years after enactment, the bill provides that the Commission shall publish a report containing the results of a study examining the use of algorithms for the purposes of processing or transferring covered data to make or facilitate advertising for housing, education, employment, or credit opportunities, or determining access to any place of public accommodation.
The bill prohibits covered entities from charging individuals fees to exercise their rights, and from conditioning the provision of a service or product on an individual waiving privacy rights.
7. Enforcement, Accountability, and Whistleblower Protections
Within two years, the bill calls for the Federal Trade Commission to establish a “new Bureau” within the Commission. The purpose of the new Bureau would be to assist the Commission in exercising its authority under the Act and other Federal laws addressing privacy, data security, and related issues. Violations of the Act are treated as unfair or deceptive acts under the FTC Act, and are enforceable by the Federal Trade Commission. State Attorneys General are also able to bring actions under the Act. The bill also gives the Federal Trade Commission broad rulemaking authority to promulgate regulations under the Act.
The individual rights granted by the bill are complemented by a private right of action, and the bill provides that any violation of the Act, or of a regulation promulgated under it, should be considered an “injury in fact.” The bill also renders pre-dispute arbitration agreements and joint action waivers invalid or unenforceable with respect to privacy or data security disputes.
The bill also mandates that the chief executive officer, CISO, and chief privacy officer of a “large data holder” must annually certify to the Commission that the entity maintains adequate internal controls to comply with the Act. “Large data holder” is defined as an entity that, in the last calendar year, processed or transferred the covered data of more than 5,000,000 individuals, devices, or households, or processed or transferred the sensitive data of over 100,000 individuals, devices, or households.
The bill prohibits covered entities from discharging, demoting, suspending, threatening, harassing, or in any other manner discriminating against a covered individual of the entity for, inter alia, taking a lawful action in providing the Federal Government or State Attorney General with information relating to acts or omissions they believe to be violations of the Act or regulations promulgated under the Act. Various forms of relief are available, including reinstatement, back pay, financial damages, and attorneys’ fees.
The bill calls for the designation of qualified employees as privacy and data security officers to facilitate the covered entity’s ongoing compliance with the Act. Among other things, their responsibilities include implementing comprehensive privacy and data security programs; conducting employee training and risk assessments; and annually conducting the mandated algorithmic decision-making impact assessments (described above).
READ MORE:
Senator Cantwell’s Press Release (11.26.19) – Link
FPF CEO Jules Polonetsky’s Statement (11.26.19) – Link
Witnesses and Details for December 4th Hearing – Link
Statement by Future of Privacy Forum CEO Jules Polonetsky on the Consumer Online Privacy Rights Act
WASHINGTON, DC – November 26, 2019 – Statement by Future of Privacy Forum CEO Jules Polonetsky regarding the introduction of a new comprehensive federal privacy bill, the Consumer Online Privacy Rights Act (COPRA), proposed today by Senators Maria Cantwell, Amy Klobuchar, Brian Schatz, and Ed Markey:
“This is the most sophisticated federal proposal to emerge to date and demonstrates that Senate Democrats are committed to setting a high bar for consumer privacy. The bill would codify strong individual rights, meeting and exceeding the California Consumer Privacy Act. It also requires companies to implement training and accountability measures and includes a nuanced exception to support ethical research. The bill provides a strong starting point that will move bipartisan debate forward, with private rights of action, limits on preemption, and the definition of sensitive data, among other issues, likely to be points of ongoing negotiation.”
# # #
The Future of Privacy Forum will post a more detailed analysis of the legislation on its blog.
Future of Privacy Forum is a global non-profit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. Learn more about FPF by visiting www.fpf.org.
Questions to Ask Before You Buy a Genetic Testing Kit on Black Friday
By Rachele Hendricks-Sturrup and Katelyn Ringrose
On Black Friday and Cyber Monday, millions of consumers will hurry to their nearest doorbuster sale or boot up their favorite sales portal to buy a price-slashed consumer genetic testing kit. Some genetic testing kits will be up to half off this year, and the market as a whole is projected to more than triple, from a valuation of $99 million this past year to $310 million in 2022.
Last year on Black Friday, AncestryDNA alone sold about 1.5 million testing kits. According to Wired, that means that consumers sent in around 2,000 gallons of saliva—enough spit to fill a modest above-ground swimming pool. Consumers are drawn to the tests for genealogical purposes, and new market offerings are being used as strategies to help raise consumer awareness on genetic health risks.
With that much genetic material changing hands, it is important for consumers to think carefully about which kit provider will prioritize consumer privacy. DNA contains deeply personal information that can be incredibly beneficial for consumers. But that same information may also include unexpected results that could be unsettling, and it can reveal information about the test taker’s family members. It deserves a high standard of protection.
However, laws like the Health Insurance Portability and Accountability Act (HIPAA), the central U.S. health privacy law, do not apply to genetic information collected and housed by consumer genetic testing companies. Due to this regulatory gap, consumers should find out from the companies themselves, and prior to buying a test for themselves or a loved one, how the companies will protect and use the genetic data they provide and collect.
Here are five important questions consumers should ask before buying a genetic testing kit on Black Friday or Cyber Monday:
Does the Company Ask for Your Consent Before Sharing Your Individual-Level Genetic Data with Third Parties? People choose to share their genetic data with third parties for a range of purposes (e.g., to participate in scientific research or connect with unknown biological relatives). However, genetic testing companies should never share your individual-level genetic data with third parties without your knowledge and consent, particularly with insurers, employers, and educational institutions.
Do You Have the Ability to Delete Your Genetic Data and Destroy Your Biological Sample If You Choose? Companies may have default policies to destroy all samples once testing is completed, retain data or samples for only a finite period of time or in accordance with regulations, or retain data and samples indefinitely or until you close your account. Companies should be clear about their retention practices and offer prominent ways to delete your genetic data from their databases and destroy your biological sample.
Does the Company Require a Valid Legal Process Before They Will Disclose Your Genetic Data to Law Enforcement? As we have seen in high-profile cases like the Golden State Killer investigation, genetic data can be a powerful investigative tool for the government. However, government access to your genetic data presents substantial privacy risks. Companies should require that government entities obtain valid legal process, such as a warrant, subpoena, or court order, before they disclose genetic data.
What are the Company’s Notification Practices When it Comes to Conveying Material Changes to Their Privacy Policies? Companies may modify their privacy policies or statements occasionally, and sometimes they significantly change how genetic data is collected, used, and stored. But before changes are implemented, you should be notified and given an opportunity to review the changes to decide if you want to continue using the company’s services.
Has the Company Committed to Strong Technical Data Security Practices? As more than 26 million individuals have had their DNA tested, the potential for hacking and data breaches is an increasing concern. Given the uniqueness of genetic data, companies should maintain a comprehensive security program that includes practices such as secure storage of biological samples and genetic data, encryption, data-use agreements, contractual obligations, and accountability measures.
For consumers who are interested in learning more, the Future of Privacy Forum’s Privacy Best Practices for Consumer Genetic Testing Services set forth standards for the collection, use, and sharing of genetic data. The standards embrace express consent mechanisms for the transfer of data to third parties and have provisions restricting marketing based on genetic data, among other privacy-centric protections. Companies that currently support these best practices include: Ancestry, 23andMe, Helix, MyHeritage, Habit, African Ancestry, and Living DNA.
Before you buy a genetic test kit as a gift or for yourself for this holiday season, take a moment to consider how our genetic information shapes who we are… and whether you are dealing with a company that promises to protect it.
For more information and to learn how to become involved with FPF’s health privacy efforts, please contact Katelyn Ringrose at [email protected] or Rachele Hendricks-Sturrup at [email protected].
FPF Welcomes New Members to the Youth & Education Privacy Project
We are thrilled to announce three new members of FPF’s Youth & Education Privacy team. The new staff – Jasmine Park, Anisha Reddy, and Katherine Sledge – will help expand FPF’s technical assistance and training, resource creation and distribution, and state and federal legislative tracking.
You can read more about Katherine, Anisha, and Jasmine below. Please join us in welcoming them to the team!
Jasmine Park
Jasmine Park is a Policy Fellow for the Youth and Education Privacy Project. Jasmine is primarily supporting FPF’s outreach, training, and technical assistance for local and state education agencies (LEAs and SEAs), including FPF’s pilot Train-the-Trainer program and the K-12 privacy working group for LEA/SEA staff. She will also be helping to grow FPF’s child privacy portfolio in the U.S. and abroad. Jasmine recently graduated with an M.A. in Global Affairs from the Yale Jackson Institute for Global Affairs, where she focused on tech policy and digital anthropology. From 2015 to 2017, Jasmine served as a Peace Corps Volunteer in Cambodia, where she gained two years of on-the-ground experience as an educator. She worked closely with local government, school administrators, law enforcement, and community leaders to conduct needs assessments and to provide access to the training and resources necessary to address self-identified needs. She previously interned with the Los Angeles Mayor’s Office of International Affairs and Asian Americans Advancing Justice. Jasmine serves on the board of Brio, a nonprofit that empowers local partners to design and launch mental health solutions in vulnerable communities globally. Jasmine received her B.A. cum laude in History and East Asian Studies from Harvard University.
I most look forward to joining the FPF Education Privacy team’s efforts to equip local administrators with the knowledge and tools they need to implement best practices in their communities.
Anisha Reddy
Anisha Reddy is a Policy Fellow for the Youth and Education Privacy Project. Anisha is primarily supporting FPF’s state and federal legislative analysis and resources. Anisha is also running FPF’s K-12 working group for edtech companies and overseeing the bi-weekly education privacy newsletter. At Penn State’s Dickinson Law, Anisha was honored with the University’s 2017-2018 Montgomery and MacRae Award for Excellence in Administrative Law. She held the offices of Executive Editor for Digital Media of the Dickinson Law Review, President of the Asian Pacific Law Students Association, and Vice President of the Women’s Law Caucus. Anisha served as a Certified Legal Intern for the Children’s Advocacy Clinic in Carlisle, PA, where she represented children involved in civil court actions like adoption, domestic violence, and custody matters. She previously interned at the Governor of Pennsylvania’s Office of General Counsel, at Udacity in Mountain View, CA and at Blockchain, Inc. in New York, NY.
I’m most excited about the unique opportunity to impact the way the student privacy conversation is framed by helping include the voices of all stakeholders – not just the edtech industry – but parents, districts, and the students themselves.
Katherine Sledge
As the Policy Manager for Youth and Education Privacy at the Future of Privacy Forum, Katherine manages the progression of projects related to youth and student privacy at FPF. Before coming to FPF, Katherine worked with the executive team at the National Network to End Domestic Violence. She also has national and state-level political advocacy experience at the National Alliance to End Homelessness and the ACLU. Prior to transitioning to a career in public policy, Katherine was the Operations Specialist at an environmental firm specializing in remediation projects, where she led administrative and logistical support for environmental projects across the U.S.
Katherine graduated from American University with a Master of Public Administration with a custom concentration in Applied Politics: Women, Public Policy, and Political Advocacy. In addition to the core public management curriculum, Katherine focused her studies on the intersection of public policy and gender, as well as advocacy strategy, process, and best practices. Originally from Tennessee, Katherine attended the University of Tennessee, Knoxville, where she earned her B.A. in Political Science.
Interested in student privacy? Subscribe to our monthly education privacy newsletter here. Want more info? Check out FERPA|Sherpa, the education privacy resource center website.
What They’re Saying: Stakeholders Warn Senate Surveillance Bill Could Harm Students, Communities
Parents, privacy advocates, education stakeholders, and members of the disability rights community are raising concerns about new Senate legislation that would mandate unproven student surveillance programs and encourage greater law enforcement intervention in classrooms in a misguided effort to improve school safety.
Last week, Senator John Cornyn (R-TX) introduced the RESPONSE Act, legislation that is intended to help reduce and prevent mass violence in communities. However, the bill includes a provision to dramatically expand the Children’s Internet Protection Act and would require almost every U.S. school to implement costly network monitoring technology and collect massive amounts of student data.
The legislation also requires the creation of school-based “Behavioral Intervention Teams” that will be strongly encouraged to refer concerning student behavior directly to law enforcement, rather than allowing educators who know students best to engage directly and address the issue internally. This provision would likely strengthen the “school to prison pipeline” and could be especially harmful for students of color and students with disabilities.
Take a look at What They’re Saying about the legislation:
A new Republican bill that claims ‘to help prevent mass shootings’ includes no new gun control measures. Instead, Republican lawmakers are supporting a huge, federally mandated boost to America’s growing school surveillance industry… There is still no research evidence that demonstrates whether or not online monitoring of schoolchildren actually works to prevent violence.
Training behavioral assessment teams to default to the criminal process rather than school-based behavioral assessment and intervention would do little to address violence in schools and would likely foster rather than prevent a violent school environment … By making the criminal process the frontline for student discipline, this bill will only serve to increase the number of students of color and students with disabilities in the juvenile justice system.
Leslie Boggs, national PTA president, said in a statement that the organization has concerns with the bill as it is currently written. She said the PTA will work with Cornyn’s staff “to ensure evidence-based best practices for protecting students are used, the school to prison pipeline is not increased, students are not discouraged from seeking mental health and counseling support and that students’ online activities are not over monitored.”
Privacy experts and education groups, many of which have resisted similar efforts at the state level, say that level of social media and network surveillance can discourage children from speaking their minds online and could disproportionately result in punishment against children of color, who already face higher rates of punishment in school.
Generational gaps between adults and teens make for hefty communication barriers, and a private Facebook message that might read as “dangerous” to a grown law enforcement officer could easily just be two children goofing off… whenever they go online, students would be forced to think about what the government or their school would like and dislike, driving what Republicans so often claim to be against — mental conformity to institutional, government-driven norms. Students’ fears of being watched (and reported) would also inevitably widen the gap between government schools and their students. Surveillance accompanied by the threat of penalty would result in mass distrust from students toward the education system: a reinforced “us versus them” mentality between students and the adults in charge.
Schools are already deploying significant digital surveillance systems in the name of safety…But critics say these surveillance systems vacuum up a huge and irrelevant stream of online data, can lead to false positives, and present huge problems for privacy.
Unfortunately, the proposed measures are unlikely to improve school safety; there is little evidence that increased monitoring of all students’ online activities would increase the safety of schoolchildren, and technology cannot yet be used to accurately predict violence. The monitoring requirements would place an unmanageable burden on schools, pose major threats to student privacy, and foster a culture of surveillance in America’s schools. Worse, the RESPONSE Act mandates would reduce student safety by redirecting resources away from evidence-based school safety measures.
Billed as a response to school shootings, [the RESPONSE Act] has, as critics noted, almost nothing to do with guns, and a great deal to do with increasing surveillance (as well as targeting those with mental health issues)…Not everyone will find this troubling… But if you want to erode civil liberties and traditions of privacy, it’s best to start with people who don’t have the political power to fight back. Children are ideal–not only can’t they fight back, but they will grow up thinking it’s perfectly normal to live under constant surveillance. For their own safety, of course.
ICYMI: New Senate Legislation Mandates “Pervasive Surveillance” in Attempt to Improve School Safety
WASHINGTON, D.C. – Legislation introduced in the U.S. Senate this week is under scrutiny from privacy and disability rights advocates for provisions that would dramatically expand surveillance technologies in schools nationwide, despite lack of evidence or research to confirm these tools have any effect on preventing or predicting school violence.
According to The Guardian, “A new Republican bill that claims ‘to help prevent mass shootings’ includes no new gun control measures. Instead, Republican lawmakers are supporting a huge, federally mandated boost to America’s growing school surveillance industry… There is still no research evidence that demonstrates whether or not online monitoring of schoolchildren actually works to prevent violence.”
Future of Privacy Forum Senior Counsel and Director of Education Privacy Amelia Vance highlighted the challenges and unintended consequences that could result from the RESPONSE Act sponsored by Senator John Cornyn (R-TX):
Privacy advocates say pervasive surveillance is not appropriate for an educational setting, and that it may actually harm children, particularly students with disabilities and students of color, who are already disproportionately targeted with school disciplinary measures.
“You are forcing schools into a position where they would have to surveil by default,” said Amelia Vance, the director of education privacy at the Future of Privacy Forum.
“There’s a privacy debate to be had about whether surveillance is the right tactic to take in schools, whether it inhibits students’ trust in their schools and their ability to learn,” Vance said. But “the bottom line,” she said, is “we do not have evidence that violence prediction works”…
If Cornyn’s bill becomes law, “you’re going to force probably 10,000 districts to buy a new product that they’re going to have to implement”, she said.
That would mean redirecting public schools’ time and money away from strategies that are backed by evidence, such as supporting mental health and counseling services, and towards dealing with surveillance technologies, which often produce many false alarms, like alerts about essays on To Kill a Mockingbird.