Increased Surveillance is Not an Effective Response to Mass Violence
By Sara Collins and Anisha Reddy
This week, Senator Cornyn introduced the RESPONSE Act, an omnibus bill meant to reduce violent crime, with a particular focus on mass shootings. The bill has several components, including provisions that would have significant implications for how sensitive student data is collected, used, and shared. The most troubling part of the proposal would broaden the categories of content schools must monitor under the Children’s Internet Protection Act (CIPA); specifically, schools would be required to “detect online activities of minors who are at risk of committing self-harm or extreme violence against others.”
Unfortunately, the proposed measures are unlikely to improve school safety: there is little evidence that monitoring all students’ online activities makes schoolchildren safer, and technology cannot yet accurately predict violence. The monitoring requirements would place an unmanageable burden on schools, pose major threats to student privacy, and foster a culture of surveillance in America’s schools. Worse, the RESPONSE Act mandates would reduce student safety by redirecting resources away from evidence-based school safety measures.
More Untargeted Monitoring Is Not the Answer
About 95% of schools are required to create internet safety policies under CIPA (these requirements are tied to schools’ participation in the “E-rate” telecommunications discount program). CIPA requires those safety policies to include technology that monitors, blocks, and filters students’ attempts to access inappropriate online content. CIPA’s monitoring requirements generally cover obscene content, child pornography, and content that is otherwise harmful to minors.
The RESPONSE Act would impose new obligations, requiring schools to infer whether a student’s internet use indicates a risk of committing self-harm or extreme violence against others. However, there is little evidence that detecting or blocking this kind of content is technically feasible, or that doing so would prevent physical harm. A report on school safety technology funded by the U.S. Department of Justice noted that violence prediction software is “immature technology.” Not only is the technology immature, the FBI found that there is no single profile of a school shooter: scanning student activity to look for the next “school shooter” is unlikely to be effective.
By directing schools to implement “technology protection measure[s] that detect online activities of minors who are at risk of committing self-harm or extreme violence against others,” the RESPONSE Act would essentially require that all schools across the nation implement some form of comprehensive network or device monitoring technology to scan lawful content–a direct violation of local control and a serious invasion of students’ privacy.
This broad language could encourage schools to collect as much information as possible about students, requiring already overwhelmed faculty and administrators to spend countless hours sifting through contextually harmless student data–hours that could be better spent engaging with students directly.
Additionally, this technology mandate could limit schools’ ability and desire to implement more thoughtful and effective programs and policies designed to improve school safety. Schools may assume that network monitoring technology is more effective than it actually is, and redirect resources away from evidence-based school safety measures, such as holistic approaches to early intervention. Further, without more guidance, school administrators would be forced to make judgment calls that result in the over-monitoring of student online activity.
The cost associated with the implementation of these technologies goes beyond buying appropriate network monitoring software, which is a burden in and of itself. Schools—which are under-resourced and under-staffed—would experience difficulty devoting funds and staff time to monitoring these alerts, as well as developing policies for responses to those alerts. These burdens are further compounded in rural school districts that already receive less funding per student.
False Alerts Unjustly Trap Students in the Threat Assessment Process
In some cases, network monitoring does not end when the school day ends. Schools often issue devices that students take home, or provide online accounts that students access from home. Under the RESPONSE Act, these schools would be forced to monitor students constantly. If a school gets an alert during non-school hours, its default action may be to alert law enforcement. But sending law enforcement to conduct wellness checks is not a neutral action. These interactions can be traumatic for students and families, and can result in injury or false imprisonment. These harms are exacerbated when monitoring technology produces overwhelming numbers of false positives.
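To see why false positives are endemic to this kind of scanning, consider a minimal sketch of keyword-based monitoring. The keyword list and example messages below are hypothetical, and real monitoring products are more sophisticated, but any matching that lacks context faces the same basic problem:

```python
# Hypothetical sketch of naive keyword-based student monitoring.
# The term list and messages are invented for illustration; the point
# is that matching words without context flags harmless activity.

FLAGGED_TERMS = {"shoot", "kill", "bomb"}

def flag_message(text: str) -> list:
    """Return any flagged terms found in a student's message."""
    words = {w.strip(".,!?'\"").lower() for w in text.split()}
    return sorted(words & FLAGGED_TERMS)

# Contextually harmless messages that a keyword scanner still flags:
examples = [
    "Our team is going to kill it at the robotics final!",
    "My history essay covers the atomic bomb and the end of WWII.",
    "Coach says I should shoot more three-pointers next game.",
]

for msg in examples:
    hits = flag_message(msg)
    if hits:
        print(f"ALERT {hits}: {msg}")  # every alert here is a false positive
```

Multiply those three false alerts across thousands of messages a day and the staffing burden described above becomes concrete.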
Even if content monitoring technology were effective, the belief that surveillance carries no negative outcomes or consequences for students is a pernicious narrative. Surveillance technologies, like device, network, or social media monitoring services, can harm students by stifling their creativity, individual growth, and speech. Constant surveillance also conditions students to expect and accept that authority figures, such as the government, will always monitor their activity. We also know that students of color and students with disabilities are disproportionately suspended, arrested, and expelled compared to white students and non-disabled students. The RESPONSE Act’s proposed new requirements would only further exacerbate this disparity.
Schools, educators, caregivers, and communities are in the best position to notice and address concerning student behavior. The Department of Education has several resources outlining effective disciplinary measures in schools, finding that “[e]vidence-based, multi-tiered behavioral frameworks . . . can help improve overall school climate and safety.”
Ultimately, requiring schools to spend money on ineffective technology would divert much-needed resources and staff from providing students with a safe learning environment. Rather than focusing on filtering content, schools should emphasize the importance of safe and responsible internet use and use school safety funding on evidence-based solutions. By doing so, administrators can create a school community built on trust rather than suspicion.
FPF Receives Grant To Design Ethical Review Process for Research Access to Corporate Data
One of the defining features of the data economy is that research increasingly takes place outside of universities and traditional academic settings. With information becoming the raw material for the production of products and services, more organizations are exposed to, and closely examining, vast amounts of personal data about citizens, consumers, patients, and employees. This includes companies in industries ranging from technology and education to financial services and healthcare, as well as non-profit entities seeking to advance societal causes or other agenda-driven projects.
For research on data subject to the Common Rule, institutional review boards (IRBs) provide an essential ethical check on experimentation and research. However, much of the research relying on corporate data is beyond the scope of IRBs: the data was previously collected, the project or researcher is not federally funded, the data comes from a public data set, or other reasons apply.
Future of Privacy Forum (FPF) has received a Schmidt Futures grant to create an independent body of experts for an ethical review process that can provide trusted vetting of corporate-academic research projects. FPF will establish a pool of respected reviewers to operate as a standalone, on-demand review board to evaluate research uses of personal data, and will create a set of transparent policies and processes to be applied to such reviews.
FPF will define the review structure, establish procedural guidelines, and articulate the substantive principles and requirements for governance. Other considerations to be addressed include companies’ common concerns about risk analysis, disclosure of intellectual property and trade secrets, and exposure to negative media and public reaction. Following this phase, reviewers will be recruited from a range of backgrounds to be available for reviews. The project will include input and review by government, civil society, industry, and academic stakeholders.
Sara Jordan, who will be cooperating with FPF on this project, has proposed one model for addressing this challenge. Her paper, Designing an AI Research Review Committee, calls for a review committee dedicated to ethical oversight of AI research and gives serious consideration to the design of such an organization. The paper proposes a design for such a committee drawing upon the history and structure of existing research review committees, such as IRBs, Institutional Animal Care and Use Committees (IACUCs), and Institutional Biosafety Committees (IBCs). The proposed model follows that of the IBC, but blends in features from human subject and animal care and use committees in order to improve implementation of risk-adjusted oversight mechanisms.
Another analysis and recommendation was published recently by Northeastern University Ethics Institute and Accenture: Building Data and AI Ethics Committees. This paper comments that an ethics committee is a potentially valuable component of accomplishing responsible collection, sharing, and use of data, machine learning, and AI within and between organizations. However, to be effective, such a committee must be thoughtfully designed, adequately resourced, clearly charged, sufficiently empowered, and appropriately situated within the organization.
European institutions are grappling with these challenges as well, as reflected in several recent AI guidance publications. The Council of Europe, for example, has established an ad hoc committee on Artificial Intelligence, which will examine, on the basis of broad multi-stakeholder consultations, the feasibility and potential elements of a legal framework for the development, design, and application of artificial intelligence, based on the Council of Europe’s standards on human rights, democracy, and the rule of law.
BACKGROUND
The ethical framework applying to human subject research in the biomedical and behavioral research fields dates back to the Belmont Report. Drafted in 1976 and adopted by the United States government in 1991 as the Common Rule, the Belmont principles were geared towards a paradigmatic controlled scientific experiment with a limited population of human subjects interacting directly with researchers and manifesting their informed consent. These days, researchers in academic institutions as well as private sector businesses not subject to the Common Rule seek to analyze a wide array of data sources, from massive commercial or government databases to individual tweets or Facebook postings publicly available online, with little or no opportunity to directly engage human subjects to obtain their consent or even inform them of research activities. Data analysis is now used in multiple contexts, such as combatting fraud in the payment card industry, reducing the time commuters spend on the road, detecting harmful drug interactions, improving marketing mechanisms, personalizing the delivery of education in K-12 schools, encouraging exercise and weight loss, and much more.
These data uses promise tremendous research opportunities and societal benefits but at the same time create new risks to privacy, fairness, due process and other civil liberties. Increasingly, researchers and corporate officers find themselves struggling to navigate unsettled social norms and make ethical choices for ways to use this data to achieve appropriate goals. The ethical dilemmas arising from data analysis may transcend privacy and trigger concerns about stigmatization, discrimination, human subject research, algorithmic decision making and filter bubbles.
In many cases, the scoping definitions of the Common Rule are strained by new data-focused research paradigms, which are often product-oriented and based on the analysis of preexisting datasets. For starters, it is not clear whether research of large datasets collected from public or semi-public sources even constitutes human subject research. “Human subject” is defined in the Common Rule as “a living individual about whom an investigator (whether professional or student) conducting research obtains (1) data through intervention or interaction with the individual, or (2) identifiable private information.” Yet, data driven research often leaves little or no footprint on individual subjects (“intervention or interaction”), such as in the case of automated testing for security flaws.
While obtaining individuals’ informed consent may be feasible in a controlled research setting involving a well-defined group of individuals, such as a clinical trial, it is untenable for researchers experimenting on a database that contains the footprints of millions, or indeed billions, of data subjects. In response to these developments, the Department of Homeland Security commissioned a series of workshops in 2011-2012, leading to the publication of the Menlo Report on Ethical Principles Guiding Information and Communication Technology Research. That report remains anchored in the Belmont principles, which it interprets and adapts to the domain of computer science and network engineering, while introducing a fourth principle, respect for law and public interest, to reflect the “expansive and evolving yet often varied and discordant, legal controls relevant for communication privacy and information assurance.”
Ryan Calo foresaw the establishment of “Consumer Subject Review Boards” to address ethical questions about corporate data research. Calo suggested that organizations should “take a page from biomedical and behavioral science” and create small committees with diverse expertise that could operate according to predetermined principles for ethical use of data. No existing model maps directly onto the current challenges, however. The categorical, non-appealable decision making of an academic IRB, which is staffed by tenured professors to ensure independence, will be difficult to reproduce in a corporate setting. And corporations face legitimate concerns about sharing trade secrets and intellectual property with external stakeholders who may serve on IRBs.
FPF’s work on this grant will seek to demonstrate the composition and viability of one way to address these challenges.
COPPA Workshop Takeaways
On Monday, the Federal Trade Commission (FTC) held a public workshop focused on potential updates to the Children’s Online Privacy Protection Act (COPPA) rule. The workshop follows a July 25, 2019 notice of rule review and call for public comments regarding COPPA rule reform. The comment period remains open until December 9th. Senior FTC officials expect the process to result in changes to the COPPA rule. The workshop also follows the Commission’s high-profile settlement with YouTube regarding child-directed content.
Monday’s workshop was a key part of the Commission’s review; the day-long session featured panel discussions focused on the various questions raised regarding COPPA’s continued effectiveness as technology evolves. FPF’s Amelia Vance spoke on a panel focused on the intersection of issues related to children’s privacy and student privacy.
During the edtech-focused panel, there was a consensus that schools should be able to use the Family Educational Rights and Privacy Act’s (FERPA) school official exception to provide consent under COPPA on behalf of students under thirteen, rather than collecting consent directly from parents. This allows schools to continue to exercise judgment over what technology is used, while preserving the privacy protections of both COPPA and FERPA. Many speakers said that parents feel they have little transparency into the technology being used in their child’s school. The FTC may require increased transparency or notice to assuage these worries.
We also noticed several recurring themes throughout the workshop:
The tension between child-directed content and “child-attractive” or child-appropriate content and what that means under COPPA;
The misconceptions surrounding the meaning of “actual knowledge” and COPPA’s “product improvement” exception; and
A need to focus on frameworks and technology that allow children to safely be online.
The Tension Between “Child-Directed” Content and “Child-Attractive” or Child-Appropriate Content
Several questions were posed regarding the meaning of “child-directed content”:
Does child-directed content also mean child-attractive content—content that may interest children but is aimed at broader audiences, such as an interview with a sports player?
Does child-directed content also include child-appropriate content—content that doesn’t typically interest children, but is not violent, explicit, or objectionable, such as a video teaching viewers how to change the oil in their car?
If child-attractive or child-appropriate content gains a large enough child audience, does it transform into child-directed content?
Panelists cautioned that determining whether content is child-directed by focusing solely on audience makeup could create a moving target for creators, who would constantly have to monitor their audience to ensure they don’t cross the “child-directed” threshold. Without clear methods for determining when children are accessing general audience content, this tension could not only encourage additional data collection but also make it very difficult to create content for teenagers or “nostalgia” content for adults. Panelists noted that this tension extends beyond content creators to services originally intended for a general audience that unintentionally attract a child audience.
Harry Jho, a YouTube content creator, raised a concern that COPPA, as applied in the FTC’s YouTube settlement, will stifle creators’ ability to produce quality children’s online content. The settlement requires YouTube and creators to disable behaviorally targeted advertisements on child-directed content. Jho stated that he relies on behavioral advertising for the “lion’s share” of his revenue. Jho claimed that this settlement requirement will cause creators to suffer, and the quality of free children’s content on the internet to decline. Jho also articulated that there is confusion among creators about whether child-attractive or child-appropriate content will be considered “child-directed” under COPPA, resulting in less certainty than ever about whether COPPA applies to particular creators, channels, or videos.
Misconceptions: Actual Knowledge and Product Improvement
There was also significant confusion around the scope of COPPA and its definitions throughout the workshop. We heard many different opinions about the meaning of the actual knowledge standard, and the only point of agreement was that the YouTube settlement has contributed to the confusion. The FTC has said that having actual knowledge that there is child-directed content on your platform triggers COPPA. However, in the YouTube settlement, the FTC cited evidence showing that YouTube knew children were using the site, and pointed to channels that were obviously child-directed. Phyllis Marcus, a partner at Hunton Andrews Kurth, argued that the distinction between actual knowledge of child-directed content on a website and actual knowledge that children are using a website seems to be collapsing. This shift, coupled with the confusion regarding the definition of “child-directed,” has caused significant uncertainty. Marcus believes that the use of the term “actual knowledge” in other privacy regimes, such as the California Consumer Privacy Act, will also create substantial confusion for companies.
While discussing edtech, the question of whether product improvement remains acceptable under COPPA was raised. Ariel Fox Johnson of Common Sense Media argued that product improvement is a commercial purpose under COPPA, full stop, and if schools are paying for a service, they should not also be “paying” with student data. FPF’s Amelia Vance argued that the product improvement exception is necessary to allow essential functions like security patches and authenticating users, so any changes should be carefully tailored.
Keeping Kids on the Internet
A recurring theme was that some strategies for COPPA compliance have the unintended consequence of keeping kids off the internet. Jo Pedder, Head of Regulatory Strategy at the United Kingdom Information Commissioner’s Office, discussed the UK’s implementation of the age-appropriate design code. The code’s goal is to empower kids on the internet while keeping them safe, rather than keeping them out of the digital world. Instead of a one-stop age-gate—largely decried by panelists as an ineffective method of keeping kids safe from age-inappropriate content and data collection—the design code requires entities to understand the age ranges of their users and use these “age bands” to, for example, tailor privacy notices or settings.
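As a rough, hypothetical illustration of the age-band idea: the age ranges below loosely mirror those published in the ICO’s draft code, while the setting names and defaults are invented for this sketch and do not come from the code itself.

```python
# Hypothetical sketch of tailoring defaults by "age band" rather than a
# single over/under-13 gate. Age ranges loosely mirror the ICO draft
# code; the settings and defaults are invented for illustration.

AGE_BANDS = [
    (0, 5, "pre-literate"),
    (6, 9, "core primary"),
    (10, 12, "transition"),
    (13, 15, "early teens"),
    (16, 17, "approaching adulthood"),
]

def band_for(age: int) -> str:
    for low, high, name in AGE_BANDS:
        if low <= age <= high:
            return name
    return "adult"

def default_settings(age: int) -> dict:
    band = band_for(age)
    return {
        "age_band": band,
        # Protective defaults for every child band; older bands get more
        # control over settings, not weaker protection by default.
        "profile_visibility": "private",
        "behavioral_ads_off": True,
        "geolocation_off": True,
        "simplified_notice": band in {"pre-literate", "core primary", "transition"},
    }

print(default_settings(8))   # simple notice, protective defaults
print(default_settings(16))  # standard notice, still protective defaults
```

The design choice worth noticing is that the bands change how information is presented to the child, not whether the child is allowed in.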
Similarly, sites with a “mixed audience” under COPPA were heavily discussed, including whether age gates can be effective in that space. Dona Fraser of the Children’s Advertising Review Unit pointed out that when kids see an age-gate, they see it as a requirement to lie about their age. Children want to use the internet, and they worry about what they are missing out on. When a mixed-audience online service takes a holistic design approach, for example by establishing a child-appropriate service by default, kids don’t feel like they are missing out on content and don’t have to lie.
Next Steps for the FTC
Several privacy advocates called for the Commission to exercise its 6(b) authority regarding COPPA-covered online services: under Section 6(b) of its enabling Act, the FTC has investigative authority to require reports providing “information about [an] entity’s ‘organization, business, conduct, practices, management, and relation to other corporations, partnerships, and individuals.’ 15 U.S.C. Sec. 46(b).” Panelists who raised Section 6(b) were concerned about the lack of insight into what information is being collected by websites and applications, especially in the education technology sector. Panelists also asked the FTC to study the effectiveness of age-gates, to examine whether behaviorally targeted ads actually command a higher market value than contextual advertisements, and to include the voices of the most important stakeholders–children–in its analysis. Several panelists also commented that the child privacy conversation needs to evolve beyond notice and consent, urging the FTC to focus on requirements that provide privacy protections to children without creating additional notice or consent mechanisms that burden both parents and companies.
Many panelists also urged the FTC to engage in more enforcement actions. One panelist stated that more frequent enforcement actions would have a “tremendous effect” in rooting out bad actors and encouraging COPPA compliance.
FPF Welcomes Dr. Rachele Hendricks-Sturrup as Health Policy Counsel
FPF is delighted to announce that Dr. Rachele Hendricks-Sturrup has joined the staff as health policy counsel, strengthening FPF’s commitment to supporting the data protection and ethics guidelines needed for health data. In this role, Rachele will work with stakeholders to advance opportunities for data to be used for research and real-world evidence, improve patient care, and allow patients to access their medical records. She will also continue to develop FPF’s projects around genetic data, wearables, and machine learning with health data.
Rachele received a Doctor of Health Science degree in 2018, with a special focus on pharmacogenomics and precision medicine. Previously, she conducted health information privacy research within Harvard Pilgrim Health Care Institute’s Department of Population Medicine, where she was one of the first research fellows with a combined focus on addressing issues and challenges at the forefront of precision medicine and health policy.
As a prominent academic, Rachele has written numerous influential publications on consumer privacy and non-discrimination. She recently wrote a piece that looks at how direct-to-consumer genetic testing companies engage health consumers in unprecedented ways and leverage genetic information to further engage health companies. Many of her peer-reviewed manuscripts, including one relevant piece entitled “Direct-to-Consumer Genetic Testing Data Privacy: Key Concerns and Recommendations Based on Consumer Perspectives,” can be accessed via PubMed.
FPF Appoints Robbert van Eijk as Managing Director for Europe
FPF Expanding EU Programming
BRUSSELS – October 1, 2019 – The Future of Privacy Forum (FPF) today announced Robbert van Eijk as managing director for its operations in Europe. In this role, Eijk will implement FPF’s agenda in Europe, oversee its day-to-day operations, and manage relationships with stakeholders in industry, government, academia, and civil society.
“European data protection policies are driving privacy practices around the world,” said FPF CEO Jules Polonetsky. “As an established leader in the data protection field, Rob has technical and policy expertise that will be a tremendous asset as we provide on-the-ground guidance to European stakeholders navigating the dynamic data protection landscape.”
Before joining FPF, Eijk worked at the Dutch Data Protection Authority (DPA) for nearly 10 years and has become an authority in the field of online privacy and data protection. He represented the Dutch DPA in international meetings and as a technical expert in court. He also represented the European Data Protection Authorities, assembled as the Article 29 Working Party, in the multi-stakeholder negotiations of the World Wide Web Consortium on Do Not Track. Eijk is a technologist with a PhD from Leiden Law School focusing on online advertising (real-time bidding).
Peter Swire, FPF Senior Fellow and Professor at the Georgia Institute of Technology, worked with Eijk on the World Wide Web Consortium (W3C) Do Not Track process, which involved more than 100 organizations, and found him to be uniquely constructive. “Rob’s combination of technical insight, policy savvy, and integrity as a person is outstanding,” said Swire. “Rob is an acclaimed expert in EU data protection and the technology of processing personal data, while also understanding perspectives from the United States and globally. He will be a great leader for FPF in Europe.”
Eijk started his professional career in the automotive industry. As an onsite consultant, he specialized in dealer-network planning. Before joining the Dutch DPA, he founded a company focused on office automation for small enterprises; he sold the company, BLAEU Business Intelligence BV, in 2008 after running it successfully for nine years. Eijk expects to deploy this expertise in the European tech market, helping local startups, entrepreneurs, and technologists establish the knowledge and expertise needed to navigate tech and innovation policy.
“The Future of Privacy Forum could not have made a better choice than appointing Robbert van Eijk as Director for its European operations. He is a brilliant expert in privacy and technology matters and has contributed enormously to the European and international debate on these issues,” said Alexander Dix, Former Chairman of the International Working Group on Data Protection in Telecommunications (Berlin Group).
Eijk will collaborate with FPF EU senior policy counsel Gabriela Zanfir-Fortuna to expand FPF programming to bridge the gap between European and U.S. privacy cultures and build a common data protection language. Through its convenings and trainings, FPF helps regulators, policymakers, and staff at EU data protection authorities better understand the technologies at the forefront of data protection law. Last year, FPF kicked off its Digital Data Flows Masterclass, a year-long educational program designed for regulators, policymakers, and staff seeking to better understand the data-driven technologies at the forefront of data protection law and policy.
“FPF has a great reputation in the EU for bringing diverse stakeholders together to develop practical policy approaches to emerging technologies,” said Eijk. “It’s exciting to be part of a talented team exploring best practices for data portability, user control, the ethical use of AI, data research, anonymization, and other issues critical to data protection and fundamental rights in Europe and around the world.”
On 19 November, FPF will host its third annual Brussels Privacy Symposium in partnership with the Brussels Privacy Hub of the Vrije Universiteit Brussel. Details about the event, Exploring the Intersection of Data Protection and Competition Law: The 2019 Brussels Privacy Symposium, can be found here.
Future of Privacy Forum is a global non-profit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. Learn more about FPF by visiting www.fpf.org.
Key Findings From the Latest ‘Right To Be Forgotten’ Cases
Case C-136/17 GC et al v CNIL – right to be forgotten; lawful grounds for processing of sensitive data
Google rejected four unrelated erasure requests, each seeking the de-linking of news articles from Google search results pages; some of the articles contained sensitive data. The CNIL upheld Google’s assessment, considering that the public’s right to information prevailed in all cases. The data subjects challenged the CNIL’s decision in court, which referred questions for a preliminary ruling to the CJEU. One key question was whether Google, as a controller and within the limits of its activity as a search engine, has to comply with the prohibition on processing sensitive personal data, which has very limited exceptions. In other words, must Google ensure, before displaying a search result leading to information containing sensitive data, that one of the exceptions under Article 9(2) applies? And should there be a difference in treatment between controllers, depending on the nature of the processing they engage in? Another question was whether information related to criminal investigations falls under the definition of information related to “offences” and “criminal convictions” under Article 10 GDPR, and is thus subject to the restrictions on processing it imposes. The Court made detailed findings about the content of Article 17 GDPR (the right to be forgotten) and about the exceptions to the prohibition on processing sensitive personal data.
Key findings:
The Court makes it clear that its findings are equally applicable to the former provisions of the Directive, as well as to the current provisions of the GDPR.
The Court reiterated that Google is a controller (#35, #36, #37) and found that the law does not provide a general derogation from the prohibition on processing sensitive data for processing conducted by an internet search engine. Therefore, that prohibition and the exceptions to it apply to search engines as well (#42, #43).
In fact, the sensitivity of personal data is what justifies the obligations being applicable to all controllers equally, with the Court stating that exempting search-engine-controllers from the stricter regime applied to processing sensitive data would run counter to the purpose of the provisions to ensure “enhanced protection” for such processing, which “because of the particular sensitivity of the data, is liable to constitute … a particularly serious interference with the fundamental rights to privacy and the protection of personal data” (#44).
This being said, the Court nonetheless acknowledged the practical difficulty of applying those restrictions a priori to hyperlinks that lead to webpages containing sensitive data. “The specific features of the processing carried out by the operator of a search engine in connection with the activity of the search engine … may have an effect on the extent of the operator’s responsibility and obligations under those provisions” (#45), the Court found.
The Court added that a search engine is responsible for this processing “not because personal data referred to in those provisions appear on a web page published by a third party but because of the referencing of that page and in particular the display of the link to that web page in the list of results presented to internet users following a search on the basis of an individual’s name” (#46).
As a consequence, the Court decided that the prohibition to process sensitive data only kicks in for search engines “by reason of that referencing, and thus via a verification, under the supervision of the competent national authorities, on the basis of a request by the data subject” (#47). This means that Google doesn’t have to justify any of the exceptions that would apply to its processing of sensitive personal data in hyperlinks displayed as search results before it receives a request from the data subject.
So what happens after a data subject signals that a search result leads to content that includes sensitive data about them and asks for de-listing?
The Court makes a thorough analysis of Article 17 GDPR – the right to be forgotten – laying out its conditions of applicability as well as its exceptions. It highlights that the exercise of the freedom of expression and information is now expressly mentioned among the exceptions to the right to be forgotten, per Article 17(3) GDPR (#56, #57).
It concludes that the GDPR “expressly lays down the requirement to strike a balance between the fundamental rights to privacy and protection of personal data guaranteed by Articles 7 and 8 of the Charter, on the one hand, and the fundamental right of freedom of information guaranteed by Article 11 of the Charter, on the other” (#59).
The Court considers that the processing of sensitive data by a search engine can be justified by consent – Article 9(2)(a); if the data are manifestly made public by the data subject – Article 9(2)(e); or where the processing is necessary for reasons of substantial public interest – Article 9(2)(g), on the basis of EU or Member State law (#61).
The Court then analyzes how these three exceptions to the prohibition would apply to the processing of sensitive data by a search engine. Relevantly, the Court finds that “in practice, it is scarcely conceivable … that the operator of a search engine will seek the express consent of data subjects before processing personal data concerning them for the purposes of his referencing activity” (#62). The Court thus seems to recognize the practical impossibility for a search engine to obtain consent for its referencing activity. The Court also points out that, in any case, a request to have data de-listed would amount to a withdrawal of consent.
The other possible exception – that the sensitive data have been manifestly made public – is intended to apply “both to the operator of the search engine and to the publisher of the web page concerned” (#63). The Court doesn’t further explain what “manifestly made public” means.
When these conditions are met and provided that the other lawfulness provisions in Article 5 GDPR are complied with (purpose limitation, data minimization etc.), the processing of sensitive data is “compliant” (#64). It thus seems that the Court does not support the approach taken by the EDPB that a controller needs to first have in place a general lawful ground for processing under Article 6 GDPR and then the processing has to fall under one of the exceptions in Article 9.
Even in the case of a compliant processing, the Court points out that data subjects can still object to that processing based on their particular situation, following Article 21 GDPR (#65).
Ultimately, the Court shows that when dealing with a de-listing request involving sensitive data, a search engine must ascertain “having regard to the reasons of substantial public interest” per Article 9(2)(g) GDPR whether including the link at issue in the search results “is necessary for exercising the right of freedom of information of internet users potentially interested in accessing that web page by means of such a search, a right protected by Article 11 of the Charter” (#66). It thus seems that the Court links the right to information to a “substantial public interest”.
Finally, the Court assesses whether information related to ongoing criminal proceedings amounts to data relating to “offences” and “criminal convictions,” thus falling under the restrictions of Article 10 GDPR. The Court then provides guidance as to whether links leading to information about investigations should be deleted following a request from the data subject, if the investigation found the person concerned not guilty.
The Court takes a broad approach and establishes that “information relating to the judicial investigation and the trial and, as the case may be, the ensuing conviction, is data relating to ‘offences’ and ‘criminal convictions’” pursuant Article 10 GDPR, “regardless of whether or not, in the course of those legal proceedings, the offence for which the individual was prosecuted was shown to have been committed” (#72).
The Court then adds a couple of nuances concerning de-listing requests of such information.
First, it states that where “the information in question has been disclosed to the public by the public authorities in compliance with the applicable national law,” this is an indication that the processing is appropriate (#73).
Second, the Court adds that “even initially lawful processing of accurate data may over time become incompatible with [the GDPR] where those data are no longer necessary in the light of the purposes for which they were collected or processed” (#74).
The Court provided some detailed guidance on the elements that need to be taken into account in the balancing of rights. Specifically, it made a reference to the jurisprudence of the European Court of Human Rights balancing Article 8 of the European Convention on Human Rights (privacy) and Article 10 of the Convention (freedom of expression) in cases where freedom of the press is at stake and highlighted that “account must be taken of the essential role played by the press in a democratic society, which includes reporting and commenting on legal proceedings. Moreover, to the media’s function of communicating such information and ideas there must be added the public’s right to receive them” (#76).
The Court went further and recalled ECHR case-law stating that “the public had an interest not only in being informed about a topical event, but also in being able to conduct research into past events”.
However, as a last point, the Court acknowledged that the public’s interest as regards criminal proceedings is “varying in degree” and “possibly evolving over time according in particular to the circumstances of the case” (#76). This last point could justify, in limited cases, de-listing of links falling in this category.
The fact that the CJEU recalled ECHR case-law under Article 8 of the Convention is significant. After the EU Charter of Fundamental Rights entered into force, the CJEU built its profile as a human rights court by developing its own jurisprudence under the Charter.
The search engine will then have to assess “in the light of all circumstances of the case” whether the data subject has the right to the information in question no longer being linked with his or her name by a list of results displayed following a search carried out on the basis of that name. The Court provides detailed guidance on the circumstances to take into account (#77):
“the nature and seriousness of the offence in question”
“the progress and the outcome of the proceedings”
“the time elapsed”
“the part played by the data subject in public life and his past conduct”
“the public’s interest at the time of the request”
“the content and form of the publication” and
“the consequences of publication for the data subject”
The last finding of the Court is perhaps also the most consequential: the Court found that even if a link is not de-listed following the request of the data subject, the search engine is in any case required to rank information relating to the outcome of the criminal case first on the search results page.
In the words of the Court, “the operator is in any event required, at the latest on the occasion of the request for de-referencing, to adjust the list of results in such a way that the overall picture it gives the internet user reflects the current legal position, which means in particular that links to web pages containing information on that point must appear in first place on the list” (#78).
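Mechanically, that obligation amounts to a stable re-ordering of the result list. The sketch below is purely illustrative; the data model is invented and nothing about it reflects how any real search engine is built:

```python
# Illustrative sketch only: the Result structure and its flag are invented.
# It shows the shape of the Court's requirement in #78: links reflecting
# the current legal position (e.g., an acquittal) move to the top, and
# the relative order of everything else is preserved.

from dataclasses import dataclass

@dataclass
class Result:
    url: str
    reflects_current_legal_position: bool

def adjust_results(results: list) -> list:
    # sorted() is stable: within each group the original order survives.
    return sorted(results, key=lambda r: not r.reflects_current_legal_position)

results = [
    Result("news.example/arrest-2009", False),
    Result("blog.example/trial-coverage-2010", False),
    Result("court.example/acquittal-2012", True),
]
for r in adjust_results(results):
    print(r.url)  # acquittal page prints first
```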
Case C-507/17 Google v CNIL – territorial scope of the right to be forgotten
In 2015, the CNIL delivered a formal notice to Google stating that, as a result of a successful de-listing request, it must apply the link removal to all of its search engine’s domain name extensions globally, and not only to the versions of the website with EU Member State extensions (#30). Google challenged the decision in court, and that court sent questions for a preliminary ruling to the CJEU on the interpretation of the scope of the right to erasure. What was at issue, therefore, was an automatic effect of successful de-listing requests: should a successful request automatically apply globally?
Key points:
The Court makes it clear that its interpretation concerns both the Directive and the GDPR (#41), so the effects of the judgment are valid for applying Article 17 GDPR too.
The Court was deferential in its judgment to legal systems outside the EU, emphasizing that “numerous third States” do not recognize a right to de-listing or take a different approach to that right (#60), and that the balance between privacy, data protection, and freedom of information “is likely to vary significantly around the world” (#60).
The Court found that “currently” there is no obligation under EU law to de-list search engine results globally following a successful de-listing request. There is however an obligation to de-list them throughout the EU and not only in the Member State where the request was made (#64, #66).
At the same time, the Court was also deferential to national Courts and to DPAs, explicitly allowing them to impose global de-listing orders. The Court “emphasized” that while EU law does not require search engines to automatically de-list results globally following a successful request, “it also does not prohibit such a practice” (#72).
Citing its Melloni and Fransson jurisprudence, the Court stated that “a supervisory or judicial authority of a Member State remains competent to weigh up, in the light of national standards of protection of fundamental rights a data subject’s right to privacy and the protection of personal data concerning him or her, on the one hand, and the right to freedom of information, on the other, and, after weighing those rights against each other, to order, where appropriate, the operator of that search engine to carry out a de-referencing concerning all versions of that search engine”.
Therefore, global de-listing orders are still possible in those Member States whose fundamental rights practice allows them (and to the extent that practice does not conflict with the EU Charter of Fundamental Rights, per Melloni and Fransson), following a case-by-case analysis.
Interestingly enough, the Court does not make any findings concerning Articles 7 and 8 Charter in this judgment, other than mentioning them in one paragraph which recalled the findings in the first Google right to be forgotten judgment.
Relevant nuances
The Court included two findings in its judgment that would justify a potential future legislative measure requiring successful erasure requests to automatically have global scope.
The Court states that the referencing of a link referring to information regarding a person whose “center of interests is situated in the Union” is likely to have “immediate and substantial effects on that person within the Union itself” (#57).
The Court then notes that, in light of the consideration above, the EU legislature is competent “to lay down the obligation for a search engine operator to carry out, when granting a request for de-referencing made by such a person, a de-referencing on all the versions of its search engine” (#58).
In fact, the Court made a point of highlighting the current lack of a specific legal provision extending the scope of GDPR rights outside the EU. The Court finds that “it is in no way apparent” that the EU legislature “would…have chosen to confer a scope on the rights enshrined in those provisions which would go beyond the territory of the Member States and that it would have intended to impose on an operator which, like Google, falls within the scope of that directive or that regulation a de-referencing obligation which also concerns the national versions of its search engine that do not correspond to the Member States” (#62). Thus, the Court seems not to take into account the EU legislature’s intention to confer extraterritorial effects on the GDPR generally, as shown by the inclusion of Article 3(2) in the GDPR.
The Court’s silence on the potential extraterritorial scope of GDPR provisions has immediate effects on the cooperation and consistency mechanism. In the next paragraph, the Court states that EU law does not currently provide for cooperation instruments and mechanisms at EDPB level as regards the scope of a de-referencing outside the Union (#63). Technically, this means that if one DPA grants a global de-listing request, that DPA does not have to coordinate with the other DPAs at EDPB level and can act by itself.
Further, the Court acknowledged that even at Union level there will be differences in the result of weighing up the public’s interest in accessing information against the rights to privacy and data protection, especially in light of the GDPR allowing Member State derogations for processing for journalistic purposes or artistic/literary expression (#67).
Where such divergences occur in the case of cross-border processing, the Court stated that the EDPB must reach consensus and issue a single decision that is binding on all DPAs and with which the controller must ensure compliance as regards processing across the Union (#68). In other words, for divergent de-listing practices at Union level, the EDPB is competent to hear cases and cooperate in order to reach a single decision and provide certainty; by contrast, for divergent de-listing practices globally, the Court decided the EDPB is not competent to cooperate on cases.
CCPA 2.0? A New California Ballot Initiative is Introduced
Introduction
On September 13, 2019, the California State Legislature passed the final CCPA amendments of 2019. Governor Newsom is expected to sign the recently passed CCPA amendments into law in advance of his October 13, 2019 deadline. Yesterday, proponents of the original CCPA ballot initiative released the text of a new initiative (The California Privacy Rights and Enforcement Act of 2020) that will be voted on in the 2020 election; if passed, the initiative would substantially expand CCPA’s protections for consumers and obligations on businesses. While the new proposal preserves key aspects of the current CCPA statute, there are some notable additions and amendments.
Notable Provisions
The California Privacy Rights and Enforcement Act of 2020 ballot initiative would:
Create the “California Privacy Protection Agency,” an independent executive agency tasked with protecting consumer privacy, ensuring that consumers are well-informed about their rights and obligations, promulgating regulations, and enforcing the law against businesses that violate consumer privacy rights. The initiative provides for a hand-off process from the California Attorney General to this new agency; the AG’s office is currently responsible for CCPA education, rulemaking, and enforcement activities.
Add a new category of personal information to the CCPA, “sensitive information,” which includes precise geolocation information, social security number, passport number, a consumer’s account log-in, financial account information, and personal information revealing a consumer’s racial or ethnic origin, religion, union membership, or sexual orientation, among other categories.
Consumers are granted new rights over “sensitive information,” such as the right to opt out, at any time, from a business disclosing or using sensitive personal information for advertising and marketing, or from disclosure of this information to a service provider or contractor for those purposes.
Businesses shall provide a separate link for users to exercise this opt-out right.
Businesses must obtain opt-in consent prior to the sale of a consumer’s sensitive personal information. A consumer who opted in to the sale of sensitive personal information can revoke this authorization at any time.
Create a new right to correct inaccurate personal information.
Require opt-in consent for the collection of personal information from children under 16, and increase penalties for children’s privacy violations.
Provide that a consumer may request that a business disclose personal information collected beyond the currently required 12-month period, and the business must provide such information unless doing so would be unduly burdensome or involve a disproportionate amount of information.
Require that a business notify the consumer when it uses the consumer’s personal information to advance the business’s own political interests or to influence the outcome of an election.
Enact additional notice requirements for businesses, including but not limited to, specific requirements for “third parties.”
Amend the definition of a “business” to one having 100,000 or more consumers or households, rather than the CCPA’s 50,000 or more consumers, households, or devices.
Amend the definition of “business purpose” to include new elements such as “non-personalized advertising” (advertising not based on a profile or on predictions derived from a consumer’s past behavior), provided the information is not disclosed to a third party, used to build a profile of the consumer, or used to alter the consumer’s experience with the business.
Amend the definition of “deidentified” to: “information that cannot reasonably be used to infer information about, or otherwise be linked to, an identifiable consumer,” if the business meets certain requirements. The Attorney General will provide additional regulations related to the definition of “deidentified.”
Define “household” as “a group, however identified, of consumers who cohabitate with one another at the same residential address and share access to common device(s) or service(s) provided by a business.”
Provide that the provisions of the ballot initiative, once approved by voters, may be amended by a statute passed by a majority of the members of the California State Legislature and signed by the governor, if the amendments are “consistent with and further the purpose and intent” of the Act.
Next Steps
According to the California Elections Code (Cal. Elec. Code § 9002), the California Attorney General will hold a 30-day review process and public comment period, followed by five additional days for proponents of the initiative to amend the proposal, before the initiative appears on the ballot.
As stated above, this proposal takes an idiosyncratic approach to amending laws passed via ballot initiative: it allows the legislature to amend the Act after voter approval, via a statute signed by the governor, so long as the amendments are “consistent with and further the purpose and intent” of the Act. This approach suggests a willingness to allow new amendments that help the law keep pace with emerging technology. The standard process for amending ballot initiatives requires a supermajority vote of the legislature.
Civic Data Privacy Leaders Convene at MetroLab Annual Summit
The MetroLab Network’s Annual Summit brought together an inspired group of civic, academic, industry, and nonprofit leaders to discuss the most important issues in smart cities and civic innovation. For the third year in a row, FPF partnered with MetroLab Network to promote data privacy perspectives and to advance responsible data practices within smart and connected communities.
This year at the Summit, I moderated a roundtable discussion of privacy officials representing Pittsburgh, Seattle, Boulder, and more than a dozen other cities who have joined the Civic Data Privacy Leaders Network, an FPF-led initiative supported by the National Science Foundation. Network members joined summit participants from academia, industry, and civil society to share their most pressing questions, concerns, and smart city success stories with each other. The roundtable highlighted the common privacy challenges and opportunities faced by today’s local government privacy leaders and sparked new ideas for promoting fair and transparent data practices.
In this candid and collaborative atmosphere, some common priorities emerged:
engaging and activating key stakeholders (including elected leaders, partner organizations, and diverse community members);
securing adequate resources and strengthening in-house privacy expertise;
increasing municipal access to cutting-edge privacy enhancing technologies like differential privacy or synthetic data (a toy sketch of the differential privacy idea appears after this list);
integrating privacy considerations at each stage of the data and technology lifecycle (including procurement);
committing to clear, consistent privacy principles that reflect community values and priorities.
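As the list above notes, differential privacy is one of the privacy enhancing technologies cities are eager to access. Its core idea fits in a few lines: publish an aggregate statistic plus carefully calibrated noise, so that no single person’s record meaningfully changes what gets released. The toy sketch below uses the classic Laplace mechanism; the counts and epsilon values are invented for illustration.

```python
# Toy sketch of the Laplace mechanism, the textbook route to differential
# privacy for counting queries. A count has sensitivity 1 (one person can
# change it by at most 1), so noise with scale 1/epsilon suffices.
# The figures below are invented for illustration.

import random

def noisy_count(true_count: int, epsilon: float) -> float:
    # A Laplace(0, 1/epsilon) variate is the difference of two i.i.d.
    # exponential variates, which the standard library can sample.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

true_count = 412  # e.g., residents who used a city service this week
for eps in (0.1, 1.0):
    samples = [round(noisy_count(true_count, eps), 1) for _ in range(3)]
    print(f"epsilon={eps}: {samples}")  # smaller epsilon: noisier, more private
```

A city can publish the noisy figure while keeping the raw records private; the smaller the epsilon, the stronger the privacy guarantee and the noisier the published statistic.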
At the roundtable, FPF also previewed a working draft of its forthcoming Smart Cities & Communities Privacy Risk Assessment, which is intended to help smart and connected communities ask the right questions and reach for the right tools to ensure that they are collecting, using, and sharing personal data responsibly.
Other important, data-centric discussions during the event included Thursday’s Mobility Data Management, Analytics, and Privacy session—in which Network member Ginger Armbruster of Seattle and I participated—and sessions dedicated to a new Model Data Handling Policy for Cities from UMKC, data equity and responsible data science, micromobility services, and digital equity and community engagement. Univision ran a Spanish-language story on the event focused on how smart cities can ensure equitable treatment and access to resources for immigrants, which you can view here.
While the Civic Data Privacy Leaders roundtable—and the MetroLab Summit as a whole—underlined the significant challenges that communities around the world face as they explore new technologies and data uses, it also highlighted the potential for civic innovation to deliver more livable, equitable, and sustainable communities. The event showcased how, by working together across sectoral and geographic boundaries, we can help city and community leaders strengthen their ability to collect, use, and share data responsibly and promote the public’s trust in smart city technologies and in local government.
To learn more or join the Civic Data Privacy Leaders Network, a peer group for local government privacy leaders from more than 25 localities in the U.S. and abroad, please contact me at [email protected].
The Right to Be Forgotten: Future of Privacy Forum Statement on Decisions by European Court of Justice
WASHINGTON, DC – September 24, 2019 – Statement by Future of Privacy Forum CEO Jules Polonetsky regarding two European Court of Justice decisions announced today in its cases with Google:
Key decisions about the balance of privacy and free expression still remain to be settled by the European Court of Justice (ECJ). Although the ECJ’s two decisions generally support the rights of those searching the web to access links to information, both show the tremendous weight European law gives to privacy as a human right, one that receives the strongest consideration before it is limited. Even though the court found that European law does not mandate global delisting when the Right to Be Forgotten is asserted, it indicated that a data protection authority could seek global delisting if the privacy balance called for it in a specific circumstance.
The court also made clear that within Europe there can be national variances in how the Right to Be Forgotten can be applied, given differences in local law and culture.
In a second case also decided today, the court declined to ban in advance the listing of results that include political, racial, or other sensitive information. It did require heightened consideration for those results, to the extent that it even required that, when the affected party objects to the results, pages containing information about criminal histories include relevant context on the search page.
Future of Privacy Forum is a global non-profit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. Learn more about FPF by visiting www.fpf.org.
FTC should investigate app developers banned by Facebook – Statement by Future of Privacy Forum CEO
Future of Privacy Forum Calls on FTC to Investigate Apps That Misused Consumer Data
WASHINGTON, DC – September 20, 2019 – Statement by Future of Privacy Forum CEO Jules Polonetsky regarding Facebook’s announcement that it has banned 400 developers from its app store:
The FTC should quickly act against many of these app developers, since they share the blame with Facebook, and some could still be holding on to consumer data or continuing to sell it. If apps that misuse Facebook members’ data escape legal penalty, developers will get the message that there is no legal risk to improper data-sharing. Every company, and especially app developers, needs to understand that there are consequences for abusing consumer data. This situation demonstrates yet again that Congress should dramatically increase the human and technological resources available to the FTC and give it broader authority to levy civil penalties.
Future of Privacy Forum is a global non-profit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. Learn more about FPF by visiting www.fpf.org.