Adoption of augmented and virtual reality hardware and software technologies – collectively known as extended reality or “XR” – is taking hold among businesses and individuals. If you’d like to engage in the discussion about the ethical and privacy considerations of XR tech, join our XR Week activities April 19th to 23rd!
After decades of development, demonstrations, and improvements to hardware and software, immersive technologies are increasingly being implemented in education and training, gaming, multimedia, navigation, and communication. Emerging use cases will let individuals explore complicated moral dilemmas or experience a shared digital overlay of the physical world in real time. But XR technologies typically cannot function without collecting sensitive personal information – data that can create privacy risks.
FPF’s XR Week will explore key privacy and ethical questions surrounding augmented reality (AR), virtual reality (VR), and related immersive technologies. The week will feature several events, including a roundtable discussion with expert participants and several conversations hosted in virtual reality.
April 19th, 1:00 – 1:20PM EDT: Reel Virtuality
To kick off XR Week, FPF Policy Counsel and lead on XR technology Jeremy Greenberg and FPF Vice President of Policy John Verdi will discuss a report, Augmented Reality + Virtual Reality: Privacy & Autonomy Considerations in Emerging, Immersive Digital Worlds, to be released on the same day. Greenberg and Verdi will discuss the differences between various immersive technologies, primary use cases, and key privacy and ethical questions. The conversation, originally recorded in Real VR Fishing, can be viewed in 2-D on LinkedIn Live – register for the event on LinkedIn to receive a notification when it begins.
April 21st, 2:00 – 3:30PM EDT: AR + VR: Privacy & Autonomy Considerations for Immersive Digital Worlds
Our featured XR Week event, AR + VR: Privacy & Autonomy Considerations for Immersive Digital Worlds, will include a conversation between FPF Policy Counsel and lead on XR technology Jeremy Greenberg, and Facebook Reality Labs Director of Policy James Hairston. A panel, moderated by Greenberg, will discuss the recorded conversation. Panelists will include:
Ana Lang, Senior Vice President, General Counsel, Magic Leap.
Joe Jerome, Director of Platform Accountability & State Advocacy, Common Sense Media.
Jessica Outlaw, Behavioral Scientist, The Extended Mind.
April 22nd, 1:00 – 1:10PM EDT: Sculpting XR Compliance
On the Thursday of XR Week, Greenberg and BakerHostetler Data Protection Attorney Carolina Alonso will discuss the legal compliance challenges associated with XR technologies. The conversation, originally recorded in SculptrVR, can be viewed in 2-D on LinkedIn Live – register for the event on LinkedIn.
We hope you’ll join us!
A New Era for Japanese Data Protection: 2020 Amendments to the APPI
The recent amendments to Japan’s data protection law (the Act on the Protection of Personal Information, henceforth the ‘APPI‘) contain a number of new provisions certain to alter – and for many foreign businesses, transform – the ways in which companies conduct business in or with Japan. In addition to greatly expanding data subject rights, most notably, the amendments to the APPI (the ‘2020 Amendments‘):
(i) eliminate all former restrictions on the APPI’s extraterritorial application;
(ii) considerably heighten companies’ disclosure and due diligence obligations with respect to overseas data transfers;
(iii) introduce previously unregulated categories of personal information (each with corresponding obligations for companies), including ‘pseudonymously processed information’ and ‘personally referable information’; and
(iv) for the first time, mandate notifications for qualifying data breaches.
The 2020 Amendments will be enforced by the Personal Information Protection Commission of Japan (the “PPC”), pursuant to forthcoming PPC guidelines alongside the amended Enforcement Rules for the Act on the Protection of Personal Information (the ‘amended PPC Rules‘) and the amended Cabinet Order to Enforce the Act on the Protection of Personal Information (the ‘amended Cabinet Order‘) (both published on March 24, 2021).
As the 2020 Amendments are set to enter into force on April 1, 2022, Japanese and global companies that conduct business in or with Japan have just under one year to bring their operations into compliance. To facilitate such efforts, this blog post describes the provisions of the 2020 Amendments likely to have the greatest impact on businesses, as well as current events in Japan that will affect their implementation and should inform how companies address enforcement risks and compliance priorities.
1. LINE Data Transfers to China: A Wake-Up Call for Japan
To appreciate the effect that the 2020 Amendments will have on the Japanese data protection space, one must first consider the current political and societal contexts in Japan in which the 2020 Amendments will be introduced – and enforced – beginning with a recent incident of note involving LINE Corporation.
In March 2021, headlines across Japan shocked locals: Japan-based messaging app LINE, actively used and trusted by approximately 86 million Japanese citizens, had been transferring users’ personal information, including names, IDs and phone numbers, to a Chinese affiliate. It is neither unusual nor unlawful for Japanese tech companies to outsource certain of their operations, including personal information processing, overseas. But for Japanese nationals, the LINE matter is different for a number of important reasons, not least of which is the Japanese population’s awareness of the Chinese Government’s broad access rights to personal data managed by private-sector companies in China, pursuant to China’s National Intelligence Law.
LINE is not only the most utilized messaging application in Japan; it also occupies a special place in the country’s historical and cultural consciousness. When Japan was hit by the 2011 earthquake, use of voice networks failed and email exchanges were delayed, as citizens struggled to communicate with, and confirm the safety of, their loved ones. And so, LINE was born – a simple messaging and online calling tool to serve as a communications hotline in case of emergency. A decade on, LINE has become the major – and for many the only – means of communication in Japan – particularly in today’s socially-distanced world.
For the Japanese Government too, LINE serves a crucial role: national- and municipality-level government bodies use LINE for official communications, including those involving sensitive personal information, such as COVID-19 health data surveys. News of LINE’s transfer of user data to China, including potential access by the Chinese Government, therefore horrified private citizens and public officials alike.
On March 31, 2021, the PPC launched an official investigation into LINE and its parent company, Z Holdings, over their management of personal information. Until such investigation is concluded, whether and to what extent LINE violated the APPI (and in particular, its provisions governing third party access and international transfers) will remain uncertain. Regardless, the impact of this matter on the Japanese data privacy space is already unfolding. In late March, a number of high-ranking Japanese politicians (including Mr. Akira Amari, Chairperson of the Rule-Making Strategy Representative Coalition of the Liberal Democratic Party of Japan) sent the PPC and other relevant Government ministries strongly-worded messages urging immediate action with respect to LINE, and more broadly, calling for a risk assessment to be conducted vis-à-vis all personal information transfers to China by companies in Japan.
Several days later, Japanese media reported that the PPC had requested members of both the KEIDANREN (the Japan Business Federation, comprised of 1,444 representative companies in Japan) and the Japan Association of New Economy (comprised of 534 member companies in Japan) to report their personal information transfer practices involving China, and to detail the privacy protection measures in place with respect to such transfers. For any APPI violations revealed, the PPC will issue a recommendation, potentially followed by an injunctive order, the latter of which carries a criminal penalty (including possible imprisonment) if not implemented.
Importantly, recent political support for stronger data protection measures extends beyond transfers to China. For instance, Mr. Amari has also reportedly called on the PPC to broadly limit permissible overseas transfers of personal information to those countries with data protection standards equivalent to the APPI (a limitation which, if implemented, would greatly surpass restrictions on transfer under both the current APPI and the 2020 Amendments).
Although the PPC has yet to respond, it is evident that both political and popular sentiment in Japan strongly favor enhanced protections for Japanese persons’ personal information. The inevitable outcome of such sentiment, which may be further amplified depending on the PPC’s forthcoming conclusions regarding the LINE matter, will be increasingly stringent enforcement of the APPI and its 2020 Amendments, and potentially further amendments thereto. As recent events in Japan demonstrate, this transformation has already begun. Companies conducting business in or with Japan, whether Japanese or foreign, should therefore pay close attention to the Japanese data privacy space over the course of this year.
2. Broadened Extraterritorial Reach and International Transfer Restrictions
For ‘Personal Information Handling Business Operators’ (henceforth ‘Operators‘, a term used in joint reference to controllers and processors, upon which the APPI imposes the same obligations) arguably the greatest impact of the 2020 Amendments will derive from their drastic revisions to Article 75 (extraterritoriality) and Article 24 (international transfer).
To date, the APPI’s extraterritorial reach has been limited to a handful of its articles, primarily those governing purpose limitation and lawful acquisition of personal information (‘PI’) by overseas Operators. From April 2022, however, Article 75 of the amended APPI will, without exception, fully bind all private-sector overseas entities, regardless of size, that process the PI, pseudonymously processed PI, or anonymously processed PI of individuals in Japan in connection with supplying goods or services to them.
With respect to international transfers, Article 24 of the current legislation prohibits the transfer of PI to a ‘third party’ outside of Japan absent the data subject’s prior consent, unless (i) the recipient country has been white-listed by the PPC or (ii) the recipient third party upholds data protection standards equivalent to the APPI (in practice, these would generally be imposed contractually). Otherwise, international transfers may also be conducted pursuant to legal obligation or necessity (for the protection of human life, public interest, or governmental cooperation, provided that for each, the data subject’s consent would be difficult to obtain). The APPI’s international transfer mechanisms generally conform to those prescribed by other global data protection regimes, loosely resembling the EU GDPR’s adequacy decisions (with respect to (i) above), and standard contractual clauses or binding corporate rules (with respect to (ii) above, although there are no PPC-provided contractual clauses, and non-binding arrangements such as the APEC CBPR System are PPC-approved).
The 2020 Amendments and amended PPC Rules do not modify the above transfer mechanisms, but they do narrow their scope in two key respects. First, pursuant to Article 24(2) of the 2020 Amendments, transfers conducted on the basis of data subject consent will henceforth require the transferring Operator (on top of preexisting notification obligations) to inform the data subject in advance of the name of the recipient country and the levels of PI protection provided by both that country (assessed using an “appropriate and reasonable method”) and the recipient third party. Absent such information, data subject consent will be rendered uninformed and the transfer invalid.
Of greater impact on the transferring Operator, however, will be the second modification (pursuant to Article 24(3) of the 2020 Amendments): in the event that an international transfer is conducted in reliance on contractually or otherwise imposed APPI data protection standards (the primary transfer mechanism on which Operators in Japan rely), such contractual safeguards alone are to be rendered insufficient. Going forward, the transferring Operator must, in addition to imposing APPI-equivalent obligations upon a recipient third party, (i) take “necessary action to ensure continuous implementation” of such obligations by the recipient; and (ii) inform the data subject, upon request, regarding the actions the Operator has taken.
With respect to (i) above, the amended PPC Rules interpret “necessary action to ensure continuous implementation” as requiring the transferring Operator to: (1) periodically check the implementation status and content of the APPI-equivalent measures by the recipient third party, and assess (by an “appropriate and reasonable method”) the existence of any foreign laws which might impact such implementation; (2) take necessary and appropriate actions to remedy any obstacles that are found; and (3) suspend all PI transfer to the third-party recipient, should its continuous implementation of the APPI-equivalent measures become difficult.
In addition, following receipt of a data subject’s request for information (pursuant to (ii) above), the amended PPC Rules specify that the transferring Operator must, without undue delay, inform the requesting data subject of each of the following:
(1) the manner by which the APPI-equivalent measures were established by (or presumably with) the recipient third party (such as a data processing agreement or memorandum of understanding, or in the case of inter-group transfers, a privacy policy);
(2) details of the APPI-equivalent measures implemented by the recipient third party;
(3) the frequency and method by which the transferring Operator checked such implementation;
(4) the name of the recipient country;
(5) whether any foreign laws may affect the implementation of the APPI-equivalent measures, and a detailed overview of such laws;
(6) whether any obstacles to implementation exist, and a detailed overview of such obstacles; and
(7) the measures taken by the transferring Operator upon a finding of such obstacles.
Only if provision of the above items to the data subject is likely to ‘significantly hinder’ an Operator’s business operations may that Operator refrain from such disclosure, in whole or in part.
In practice, Operators primarily rely upon contractual safeguards and consent (in that order) to transfer PI outside of Japan. Indeed, the PPC’s list of “adequacy decisions” on which transferring Operators may alternatively rely is significantly shorter than that of the European Commission: to date, only the UK and EEA members have been deemed adequate recipients of a PI transfer from Japan. Therefore, the onerous informational and due diligence obligations incumbent upon Operators from April 2022, which affect precisely these two transfer mechanisms, are certain to impact business operations in Japan. And, given the 2020 Amendments’ unbridled extraterritoriality, this burden will be equally felt overseas. Most importantly, in the wake of the March 2021 LINE matter, compliance with the current and amended APPI, and in particular its overseas transfers restrictions, will be at the top of the PPC’s enforcement priorities.
3. Mandatory Data Breach Notifications
In addition to expanding the types of security incidents subject to the amended APPI, the 2020 Amendments make data breach notifications mandatory (under the current legislation, notification is required only on a ‘best efforts’ basis). Going forward, Operators will be required – pursuant to Article 22-2 of the 2020 Amendments and the amended PPC Rules – to promptly notify both the PPC and data subjects of the occurrence, or potential occurrence, of any data leakage, loss, damage, or other similar situation that poses a ‘high’ risk to the rights and interests of data subjects (henceforth, a ‘breach’).
The types of breaches which meet this ‘high’ risk threshold, and thus trigger a notification obligation, are described by the amended PPC Rules as those which involve, or potentially involve, any of the following: (i) sensitive (‘special care-required’) PI; (ii) financial injury caused by unauthorized usage; (iii) a wrongful purpose(s) as the cause; or (iv) greater than 1,000 affected data subjects. However, a notification is not required in the event that the Operator implemented ‘necessary measures’ to safeguard the rights and interests of data subjects (such as sophisticated encryption).
The amended PPC Rules also stipulate the required content for such notifications, although Operators are granted thirty days to provide details unknown at the time of the initial notice:
(1) overview of the breach;
(2) the types of PI affected or possibly affected by the breach;
(3) the number of data subjects affected or possibly affected by the breach;
(4) causes of the breach;
(5) existence and nature of secondary damage or risks thereof;
(6) status and nature of communications to affected data subjects;
(7) whether and how the breach has been publicized;
(8) measures implemented to prevent a recurrence; and
(9) any additional matters which may serve as a useful reference.
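For Operators building internal incident-response workflows, the nine required items above can be captured in a simple structured record. The sketch below is a minimal illustration in Python; the class and field names are assumptions made for illustration only and are not a format prescribed by the amended PPC Rules.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BreachNotification:
    """Illustrative internal record of the items required in a breach
    notification under the amended PPC Rules (field names are assumptions)."""
    overview: str                                       # (1) overview of the breach
    affected_pi_types: list[str]                        # (2) types of PI affected or possibly affected
    affected_data_subjects: Optional[int] = None        # (3) number of affected data subjects, if known
    causes: Optional[str] = None                        # (4) causes of the breach
    secondary_damage: Optional[str] = None              # (5) secondary damage or risks thereof
    data_subject_communications: Optional[str] = None   # (6) status of communications to affected data subjects
    publicity: Optional[str] = None                     # (7) whether and how the breach has been publicized
    recurrence_prevention: Optional[str] = None         # (8) measures implemented to prevent a recurrence
    other_reference_matters: Optional[str] = None       # (9) additional matters of useful reference

    def outstanding_items(self) -> list[str]:
        """Fields still unknown at the time of the initial notice; the amended
        PPC Rules allow thirty days to supplement these details."""
        return [name for name, value in vars(self).items() if value is None]
```

An Operator might, for example, file an initial notice containing only the overview and affected PI types, then use something like `outstanding_items()` to track what must be supplemented within the thirty-day window.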
For those Operators ‘entrusted‘ by another Operator with the processing of PI, the 2020 Amendments provide a second option: in lieu of notifying the PPC and data subjects, such “entrusted” Operators may instead alert the “entrusting” Operator as to the breach. In practice, this likely equates to the EU GDPR’s requirement for processors to notify controllers in the event of a breach (although under the 2020 Amendments, direct accountability to the PPC and data subjects is still the default, including for “entrusted” Operators).
In the event of a breach, amended Article 30(5) additionally confers upon data subjects the right to request deletion, suspension of use and suspension of transfer, of affected PI.
4. Expansion of ‘Personal Information’ Concepts and Categories
Another major modification to the APPI is the expanded scope of the types of PI covered. In addition to eliminating the APPI’s differential treatment of temporary PI (retained for up to six months), the 2020 Amendments introduce a new category of information, ‘pseudonymously processed information‘, thereby bringing the Japanese data protection regime one additional step closer to the EU GDPR framework.
As currently drafted, the APPI recognizes only two major types of information: PI and anonymously processed information. Notably, the method of rendering anonymously processed information under the APPI – in contrast to the EU GDPR – need not be technically irreversible (unless such data originates in the UK or EEA and the transfer is based on the European Commission’s adequacy decision on Japan, in which case special PPC-drafted Supplementary Rules do require irreversibility); instead, the APPI endeavors to preserve anonymity by requiring Operators to implement appropriate security measures to prevent reidentification.
Pseudonymously processed information is defined by the 2020 Amendments as information relating to an individual, which cannot identify such individual unless collated with additional information. The stated intention behind the drafters’ introduction of the pseudonymization process is to enable Operators to (i) utilize pseudonymously processed information for internal purposes including business analytics, the development of computational models, etc., and/or (ii) retain rather than delete, for potential future statistical analysis usage, pseudonymously processed information derived from PI which are no longer necessary for the original purpose(s) for which they were collected.
The 2020 Amendments and amended PPC Rules model the pseudonymization process on anonymization, requiring the removal of any (i) description, (ii) unique ‘personal identification code’ (as defined in the APPI), and (iii) information relating to the processing method performed to enable the removal of (i) and (ii) above. The immediate result is the creation, by separation, of two types of information: pseudonymously processed information and ‘removed’ PI, where the latter is the ‘key’ enabling reidentification.
The removed PI are treated as PI under the 2020 Amendments, and as such are subject to all of the same requirements and restrictions, although Operators in possession of both removed PI and pseudonymously processed information are additionally obligated to provide enhanced security in order to safeguard the integrity of the pseudonymously processed information (pursuant to the amended PPC Rules and amended Article 35-2(2)).
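As a rough illustration of this separation, the sketch below splits a record into a pseudonymized portion and the ‘removed’ PI that serves as the reidentification key and must be stored and safeguarded separately. The field names, and the choice of which fields count as identifying descriptions or personal identification codes, are assumptions for illustration only; the APPI and the amended PPC Rules, not this sketch, determine what must be removed.

```python
import secrets

# Fields treated as identifying for this illustration only; in practice the
# descriptions and 'personal identification codes' to be removed are determined
# under the APPI and the amended PPC Rules, not by a fixed field list.
IDENTIFYING_FIELDS = {"name", "email", "passport_number"}

def pseudonymize(record: dict) -> tuple[dict, dict]:
    """Split a record into (pseudonymously processed information, removed PI).

    The removed PI is the 'key' enabling reidentification: it remains PI,
    must be handled separately, and calls for enhanced security measures.
    """
    token = secrets.token_hex(8)  # opaque link between the two halves
    pseudonymized, removed_pi = {"token": token}, {"token": token}
    for key, value in record.items():
        (removed_pi if key in IDENTIFYING_FIELDS else pseudonymized)[key] = value
    return pseudonymized, removed_pi

# The pseudonymized half can then be used for internal analytics, while the
# removed PI is retained separately or deleted once no longer necessary.
pseudo, removed = pseudonymize({
    "name": "Taro Yamada",
    "email": "taro@example.com",
    "passport_number": "XX1234567",
    "purchase_total_jpy": 4200,
    "prefecture": "Osaka",
})
```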
Notably, and in divergence from the EU GDPR approach to pseudonymously processed information, the 2020 Amendments’ rules governing treatment of such information vary according to the Operator involved. With respect to pseudonymously processed information handled by an Operator in simultaneous possession of the removed (and separately handled) PI, amended Article 35-2 stipulates the following specific requirements:
(i) a prohibition of the collation of such information with other data, such as the removed PI, in a manner which could identify data subjects;
(ii) strict application of the principles of purpose limitation and necessity thereto;
(iii) a prohibition on usage of any contact information contained therein to phone, mail, email or otherwise contact data subjects;
(iv) a prohibition of any transfer thereof to third parties (excluding, amongst others, “entrusted” Operators pursuant to Article 23(5)), unless such transfer is permitted by law or regulation (alternatively, the transfer of pseudonymously processed information by data subject consent is permissible if such information are instead handled as PI);
(v) in the event of their acquisition or the intended alteration of their processing purpose, limitation of the Operator’s disclosure obligation to that of notice by publication;
(vi) non-applicability of breach notification obligations pursuant to amended Article 22-2, provided that the removed PI are not also subject to the breach; and
(vii) the elimination of data subjects’ rights regarding their pseudonymously processed information, with the exception of their Article 35 right to receive a prompt and appropriate response to their complaints (subject to the Operator’s best efforts).
In addition to the above, the APPI’s ‘general’ requirements pursuant to Articles 19-22 will apply to pseudonymously processed information handled by an Operator which simultaneously (but separately) possesses the removed PI. Such Operator will be required to:
(i) maintain accuracy of the pseudonymously processed information (for the duration their utilization remains necessary, after which their immediate deletion – alongside the deletion of the removed PI – is required, subject to the Operator’s best efforts);
(ii) implement necessary and appropriate security measures to prevent leakage, loss or damage of the pseudonymously processed information; and
(iii) exercise necessary and appropriate supervision over employees and entrusted persons handling the pseudonymously processed information.
In contrast, with respect to pseudonymously processed information handled by an Operator which does not simultaneously possess the removed PI, amended Article 35-3 prohibits such Operator from acquiring the removed PI and/or collating the pseudonymously processed information with other information in order to identify data subjects, and limits the applicable provisions of the 2020 Amendments to the following:
(i) the implementation of necessary and appropriate security measures to prevent leakage (a simplified version of Article 20);
(ii) the exercise of necessary and appropriate supervision over employees and entrusted persons handling such information (pursuant to Articles 21 and 22);
(iii) a prohibition on usage of any contact information contained in the pseudonymously processed information to phone, mail, email or otherwise contact data subjects;
(iv) a prohibition of any transfer of such information to third parties (excluding, amongst others, “entrusted” Operators pursuant to Article 23(5)), unless such transfer is permitted by law or regulation (alternatively, the transfer of pseudonymously processed information by data subject consent is permissible if such information are instead handled as PI); and
(v) the elimination of data subjects’ rights regarding their pseudonymously processed information, with the exception of their Article 35 right to receive a prompt and appropriate response to their complaints (subject to the Operator’s best efforts).
In addition to pseudonymously processed information, the 2020 Amendments, pursuant to Article 26-2, introduce a fourth category of information – namely, ‘personally referable information’. This fourth category includes items such as cookies and purchase history, which may not independently be linkable to a specific individual (and thus would not constitute PI) but which could become PI if transferred to an Operator in possession of additional, related data. To account for such qualifying transfers, the 2020 Amendments introduce a consent requirement (such as an opt-in cookie banner).
In the case of overseas transfers, the transferring Operator must additionally inform the data subject as to the data protection system and safeguards of the recipient country and third party, as well as take ‘necessary action to ensure continuous implementation’ of APPI-equivalent safeguards by such recipient third party. Unlike for PI, the data subject does not have a right to request additional details regarding the ‘necessary action’ taken by the Operator with respect to an overseas transfer of personally referable information.
5. Preparing for the 2020 Amendments: Next Steps for Japanese and Foreign Operators
Companies conducting business in or with Japan should be mindful of the demanding nature of the 2020 Amendments to the APPI, and the stringency with which the PPC will seek to enforce them – particularly in view of the dismay caused by the LINE matter and the likelihood of efforts by the PPC to avoid similar incidents in the future.
Moreover, as the European Commission finalizes its first review of its 2019 adequacy decision on Japan, the PPC’s interpretative rules and enforcement trends may further intensify, with the aim of bringing Japanese data protection legislation closer to global standards, including the EU GDPR framework. Bearing this in mind, companies – including those not currently subject to the APPI, but which provide goods and/or services to individuals in Japan – would be wise to proactively conduct necessary modifications to their internal data protection policies and mechanisms, in order to ensure operational compliance with the amended APPI by April 2022.
For those Operators involved in international transfers of PI from Japan, the absence of a PPC-issued “standard contractual clauses” template renders difficult, and from a compliance standpoint uncertain, any reliance on contractually imposed APPI-equivalent standards pursuant to amended Article 24(3). However, one potential solution for Operators preparing to rely on this transfer mechanism for overseas PI transfers (excluding to the EEA or UK) may be the European Commission’s revised Standard Contractual Clauses (the ‘New SCCs’), which are due to be published in early 2021. Subject to certain necessary modifications (of jurisdictional clauses and so forth), Operators may consider utilizing the New SCCs as a starting point to bind recipient third parties to the stringent data protection standards and obligations of the 2020 Amendments.
Operators engaged in transferring PI should also be mindful of the 2020 Amendments’ onerous due diligence obligations with respect to overseas third parties. Prior to and during any cross-border engagements involving Japan-origin PI, Operators must actively ensure that their third-party recipients of such PI (including partners, vendors and subcontractors, as well as each of their respective partner, vendor and subcontractor recipients, and so forth) successfully implement, and continuously maintain, APPI-equivalent measures.
The 2020 Amendments’ enhanced disclosure obligations invite data subjects to hold Operators accountable with respect to the preventative and/or reactive measures Operators take – or fail to take – to protect their PI. Operators engaging foreign third parties should therefore consider reviewing and amplifying their due diligence of such entities, in addition to assessing the laws in each recipient country, in order to proactively identify and devise solutions to address potential obstacles to APPI adherence overseas.
The 2020 Amendments’ broadened extraterritorial application will also require non-Japanese companies to modify their internal data breach assessment and notification systems, to ensure that the PPC and data subjects in Japan are appropriately notified in the event of a qualifying breach; and to implement any necessary changes to their data subject communications platforms or data subject rights request forms, to enable data subjects in Japan to successfully exercise their amended APPI rights from April 1, 2022.
Once published, the PPC guidelines to the 2020 Amendments will further clarify (and potentially amplify) Operators’ compliance obligations with respect to each of the topics addressed in this blog post. The PPC’s findings in regard to LINE’s conduct may also have significant bearing on future APPI enforcement trends and risks. Therefore, in addition to implementing necessary measures to ensure operational compliance with the 2020 Amendments, companies processing covered PI and interested data privacy professionals should look out for these items over the next several months.
Supporting Responsible Research and Data Protection
Scientific research is often dependent on access to personal information, whether collected directly from individuals or collected for a real-world use and then accessed for research. For research to be trusted, processing of personal information must be lawful, ethical and subject to privacy and security protections. Supporting responsible research is a priority for FPF:
Data held by companies is often essential for research, so we develop best practices for access to corporate data and ethical review structures to provide oversight.
Machine learning techniques can raise issues of research transparency and fairness and bias, so we work on methods to identify and counter bias.
De-identification can reduce the risks involved with research, so we work to advance de-identification that supports the utility of data.
We work with policymakers to develop legislative protections that support research with strong safeguards.
We develop and support leadership networks to facilitate privacy-protective data sharing and working partnership opportunities between academic researchers and industry practitioners.
We work to ensure access and protections for cross border data flows for research.
Access to Corporate Data & Ethical Review
Data held by companies is useful for researchers striving to discover new scientific insights and expand human knowledge. When corporations open their data stores and responsibly share this data with university researchers, they can support progress in medicine, public health, education, social sciences, computer science, and many other fields.
But access to the data needed is often unavailable due to a range of barriers – including the need to connect with appropriate partners, protect privacy, address commercial concerns, maintain ethical standards, and comply with legal obligations.
Issuing best practices and contract guidelines for companies sharing data with researchers. The Best Practices for Sharing Data with Academic Researchers were developed by the FPF Corporate Academic Data Stewardship Research Alliance, a group of more than two dozen companies and organizations. The best practices favor academic independence and freedom over tightly controlled research, and encourage broad publication and dissemination of research results, while protecting the privacy of individual research subjects. Specific best practices include having a written data sharing agreement, practicing data minimization, and developing a common understanding of relevant de-identification techniques, among many others. In addition, FPF published Contract Guidelines for Data Sharing Agreements Between Companies and Academic Researchers. The guidelines cover best practices and sample language that can be used in contracts with companies that supply data to researchers for academic or scientific research purposes. FPF’s Corporate Academic Data Stewardship Research Alliance and these resources, including FPF’s report, Understanding Corporate Data Sharing Decisions, were supported by the Alfred P. Sloan Foundation.
Establishing the Ethical Data Use Committee (EDUC). Through the generous support of the Schmidt Futures Foundation, FPF is preparing to launch an independent ethical review panel to evaluate the risks and benefits of organizations’ data sharing projects with academic researchers. The Ethical Data Use Committee will conduct prospective reviews of research projects using data not explicitly gathered for research purposes, such as data shared by companies with academic researchers. The EDUC is designed to complement the rest of the research review process. The purpose of the EDUC review is to offer organizations recommendations to improve the privacy, security, and ethical profile of research involving data that is not subject to review by other components of the research review infrastructure, such as Institutional Review Boards or Institutional Biosafety Committees.
This work builds on FPF’s project, Beyond IRBs: Designing Ethical Review Processes for Big Data Research, supported by the Alfred P. Sloan Foundation and U.S. National Science Foundation, which brought together government, industry, civil society, and researchers in law, ethics, and computer science to consider ethical review mechanisms for data collected in corporate, non-profit, and other non-academic settings.
Building Communities of Practice
Honoring effective data-sharing partnerships for research and sharing best practices. The FPF Award for Research Data Stewardship is a first-of-its-kind award recognizing a research partnership in which a company has shared data with an academic institution in a responsible, privacy-protective manner. The 2020 award-winning partnership was between University of California, Irvine Professor of Cognitive Science Dr. Mark Steyvers and Lumos Labs. In an FPF virtual event on September 22, 2020, Professor Steyvers and Bob Schafer, General Manager at Lumosity, discussed their award-winning collaboration and lessons learned for future data sharing partnerships between companies and academic researchers. The annual FPF Award for Research Data Stewardship is supported by the Alfred P. Sloan Foundation.
FPF has continued this award, is currently reviewing submissions, and looks forward to announcing a 2021 winner in early summer.
Bringing the best academic privacy research into practice. Through its Applied Privacy Research Coordination Network, a project supported by the U.S. National Science Foundation, FPF introduces academic researchers to industry practitioners to develop working partnership opportunities and share best practices. This project builds on FPF’s first NSF-supported Research Coordination Network established to foster industry-academic collaboration on priority research issues identified in the National Privacy Research Strategy (NPRS) and inform the public debate on privacy. These projects have provided ongoing support to FPF’s Privacy Papers for Policymakers program which brings academic expertise to members of Congress and leaders of executive agencies and their staffs to better inform policy approaches to data protection issues.
Providing governments and researchers tools and guidance for evidence-based policymaking. Integrated Data Systems (IDS) use data that government agencies routinely collect in the course of delivering public services to shape local policy and practice. FPF and Actionable Intelligence for Social Policy (AISP) created the Nothing to Hide: Tools for Talking (and Listening) About Data Privacy for Integrated Data Systems toolkit to provide stakeholders with tools to lead privacy-sensitive, inclusive government IDS efforts. In addition, FPF worked with the Administrative Data Research Facilities Network (ADRF) to develop a guide for researchers and practitioners who want to share administrative data for evidence-based policy and social science research. FPF’s paper Privacy Protective Research: Facilitating Ethically Responsible Access to Administrative Data published in The Annals of Political and Social Science, Vol 675 (2018) outlines the infrastructures that will need to be built to make sure data providers and empirical researchers can best serve national policy needs. FPF’s work on administrative data research was made possible by the support of the Alfred P. Sloan Foundation.
Exploring Legal Structures and Policies to Support Processing Personal Data for Research
Hosting expert discussions about processing personal data for research under the GDPR. The topic of the Brussels Privacy Symposium 2020, organized by FPF and the Brussels Privacy Hub of Vrije Universiteit Brussel (VUB), was “Research and the Protection of Personal Data Under the GDPR.” The symposium, which brought together a mix of industry practitioners, academic researchers, policymakers, and international data protection regulators, focused on striking a balance during the Covid-19 pandemic between the utility of research, on one hand, and the rights to privacy and data protection on the other. Panelists discussed strategies to mitigate risks to data protection in scientific research, including vulnerabilities related to AI and machine learning systems; consent structures; and the role of international frameworks and cross-border data flows. In a closing keynote, European Data Protection Supervisor Wojciech Wiewiórowski discussed the need to intensify the dialogue between Data Protection Authorities and ethical review boards to develop a common understanding of what qualifies as scientific research, and on codes of conduct for it.
Examining country-level legal frameworks for secondary uses of healthcare data. On January 19-20, 2021, the Israel Tech Policy Institute (ITPI), an FPF affiliate based in Israel, co-hosted a virtual workshop in collaboration with the Organization for Economic Cooperation and Development (OECD) and the Israel Ministry of Health (IMoH), titled “Supporting Health Innovation with Fair Information Practice Principles.” The workshop furthered international dialogue on issues critical for the successful use of health data for the benefit of the public, focusing on the implementation of privacy protection principles and the challenges that arise in the process. The discussion included lessons learned during Covid-19. It provided an opportunity for delegates of the OECD Health group (HCQO) and the OECD Data Governance and Privacy in the Digital Economy group (DGP), together with experts in these fields, to discuss progress made toward implementing the 2017 OECD Recommendation on Health Data Governance, and to contribute to the ongoing review of the 2013 OECD Privacy Guidelines. Specific topics discussed included:
Significant national health data governance reforms recently implemented by four countries that are leading legal and operational efforts to strengthen health data governance. These examples were viewed in the context of the WHO Global Strategy on Digital Health.
Safeguards for health data sharing to promote innovation while protecting people’s privacy. These may include: 1) ethical review board oversight; 2) de-identification; 3) administrative, technical, and contractual safeguards; and 4) safeguards around cross-border data flows.
Privacy by Design and state-of-the-art solutions for safeguarding digital health data against unauthorised access and use. The mechanisms available are context-dependent and present unique benefits and limitations.
Individual & community perspectives on using health data for research. Some focus on alternative legal bases, other than consent, for the secondary use of patient data for research, and the imperative to respect the individual’s interest alongside that of the community and society.
The workshop was attended by delegates from approximately 40 governments from all over the world, as well as industry and academia participants.
In conjunction with the OECD event, FPF and the Israel Tech Policy Institute have conducted a study (to be published soon) on the laws underpinning secondary uses of healthcare data for research purposes in eight countries: Australia, England, Finland, France, India, Ireland, Israel, and the U.S. We found large commonalities across legal systems and regimes, permitting secondary use of healthcare data for research purposes under certain conditions, such as review by ethical boards, proper de-identification, and other administrative, technical, and contractual safeguards. Still, differences and ambiguities remain around specific situations such as the use of ‘Consent’ or other legal bases allowing data processing, the level of anonymization and de-identification employed and how it is regarded in different countries, and a variety of approaches to transborder data flows and data localization requirements.
Guidance to government, companies and civil society on responsible data sharing in a public health crisis. FPF launched its Privacy & Pandemics series immediately after the COVID-19 pandemic began to provide information and guidance to governments, companies, academics and civil society on responsible data sharing to support public health. As a featured part of the series, FPF’s Corporate Data Sharing Workshop on March 26, 2020 convened ethicists, academic researchers, government officials and corporate leaders to discuss best practices and policy recommendations for responsible data sharing. FPF’s international tech & data conference in October 2020, presented in collaboration with the US National Science Foundation, Duke Sanford School of Public Policy, SFI ADAPT Research Centre, Dublin City University, and Intel Corporation, produced a roadmap for research, practice improvements, and development of privacy-preserving products and services to further inform responses to COVID-19 and prepare for future pandemics and crises.
Summarizing U.S. federal and state laws that apply to health data research. As a resource for policymakers, researchers, and ethicists, FPF canvassed federal and state laws and regulations regarding health data research. Regulations like the Common Rule include a wide range of protections, but only apply to certain situations, while other safeguards are triggered by high-stakes research or particularly sensitive categories of data or vulnerable research subjects.
Educating policymakers on the value of data for research and strategies for oversight. FPF has shared model bill language with lawmakers developing comprehensive privacy laws in California, Washington, and Virginia to encourage them to both protect data-driven research and create oversight by requiring it to be approved, monitored, and governed by an independent oversight entity.
Exploring how the GDPR can work for health scientific research. On October 22, 2018, FPF, together with the European Federation of Pharmaceutical Industries and Associations (EFPIA), and the Centre for Information Policy Leadership (CIPL) hosted a workshop in Brussels, “Can GDPR Work for Health Scientific Research?,” to discuss the processing of personal data for health scientific research purposes under the European Union’s General Data Protection Regulation (GDPR). The workshop identified several challenges that researchers are facing when trying to comply with the GDPR, such as identifying the appropriate lawful ground for processing personal data for clinical trials and for secondary use of health data for health scientific research purposes, the relationship between the EU Clinical Trials Regulation and the GDPR, or the lack of clarity surrounding institutional responsibility and the role of ethical committees.
Providing guidance to US-based higher education institutions on how to align their research and educational activities with the GDPR. In May 2020, FPF released “The General Data Protection Regulation: Analysis and Guidance for US Higher Education Institutions.” The report includes a 10-step checklist with instructions for executing an effective GDPR compliance program, and many of its case studies and examples focus on academic research. It is designed to assist both organizations with established compliance programs seeking to update or refresh their understanding of their obligations under the GDPR and those still in the process of creating or sustaining a compliance structure and seeking more in-depth guidance.
Advancing tools to support responsible research in artificial intelligence. To facilitate discussions around bias in artificial intelligence, FPF produced a framework to identify, articulate, and categorize the types of harm that may result from automated decision-making; see Unfairness by Algorithm: Distilling the Harms of Automated Decision-Making (December 2017). FPF has recently provided resources and guidance to state policymakers on this topic.
Sharing methods and techniques for de-identification. FPF is recognized for its signature expertise in de-identification, publishing A Visual Guide to De-Identification (April 2016), as well as law review articles like Shades of Gray: Seeing the Full Spectrum of Practical Data De-identification, 56 Santa Clara L. Rev. 593 (2016).
Facilitating ethically responsible access to administrative data for privacy-protective research. A paper titled Privacy Protective Research: Facilitating Ethically Responsible Access to Administrative Data was featured at a workshop funded by the Bill and Melinda Gates Foundation, alongside other white papers by researchers and practitioners, to help inform the development of a roadmap identifying the data infrastructures that need to be built so that data providers and empirical researchers can best serve national policy needs. The paper – by FPF CEO Jules Polonetsky, FPF Senior Fellow Omer Tene, and Alfred P. Sloan Foundation Vice President and Program Director Daniel Goroff – provides strategies for organizations to minimize risks of re-identification and privacy violations for individual data subjects.
Privacy Impact Assessment Policies Help Cities Use and Share Data Responsibly with their Communities
As the world urbanizes, local governments are turning to “Smart City” initiatives and the data they generate to more effectively manage transportation systems, support real-time infrastructure maintenance, automatically administer public services, enable transparent governance and open data, and support emergency services in public areas. Data held by public and private organizations has the potential to yield urban planning insights that can benefit governments and communities worldwide – if it can be collected, stored, and accessed in a responsible manner that respects personal privacy.
As part of its work on smart communities, in 2020 FPF co-led a task force of experts to develop a Model Privacy Impact Assessment (PIA) Policy for governments and communities that are considering sharing personal data collected from “smart city” solutions. This Model PIA Policy was developed as part of the G20 Global Smart Cities Alliance on Technology Governance, a partnership of leading international organizations and city networks working to source tried-and-tested policy approaches to govern the use of smart city technologies. Its institutional partners — including FPF — represent more than 200,000 cities and local governments, leading companies, startups, research institutions, and civil society communities.
Privacy Impact Assessments in “Smart Cities”
Cities that adopt a clear policy about how and when to conduct PIAs are taking an important first step toward being able to consistently and confidently identify, evaluate, and mitigate privacy risks. PIAs (or similar privacy or data protection risk assessments) are already considered a best practice by public and private organizations around the world. In some places they are even required by law, such as in cities covered by the EU’s General Data Protection Regulation.
Although PIAs traditionally focus on identifying privacy risks, they can also be an important mechanism for organizations to articulate the specific value and benefits that they expect to achieve from new smart city data flows and technologies. While PIAs are only one part of a comprehensive privacy program and should sit alongside other safeguards (such as training, data minimization, and regulation), cities acquiring or using smart city technologies will find PIAs to be valuable tools to:
increase transparency and accountability and earn public trust,
embed privacy by design throughout Smart City project and data lifecycles,
mitigate potential privacy harms or disparate impacts before they occur,
improve compliance or reduce legal risk,
encourage innovation by supporting ethical decision-making,
facilitate internal and external communication and cooperation, and
enable more confident and consistent decision-making about data and technology by city officials, their partners, and the public.
Cities must balance their own need to use and share data to conduct business with the broader public welfare and individual privacy interests in a way that builds and maintains public trust. Without public trust, the benefits of smart city technologies will be ultimately unsustainable. Cities must invest in policies and practices that will help individuals, local communities, and technology providers maximize the benefits of responsible data use while minimizing privacy risks to individuals and communities.
The Model Policy
This Model PIA Policy is a flexible, scalable policy framework based on proven best practices that cities around the world are already beginning to adopt. Every component of the policy reflects real-world practices and examples by leading cities and counties, including policies and templates from Seattle, Wellington, Helsinki, Santa Clara, Huron, and Toronto. The framework also builds on expert guidance from organizations like NIST, the Article 29 Working Party, and UN Global Pulse.
The Model PIA Policy is divided into two parts: a “foundations” section that describes the process cities should follow to conduct PIAs, and a “fundamentals” section that describes the issues cities should consider in their PIAs. The policy framework also includes optional guidance and examples throughout for cities that have a greater privacy maturity level or that wish to conduct PIAs in more participatory ways.
In the “Foundations” section, cities are provided with a recommended process for conducting PIAs, which includes:
Identifying organizational values, legal requirements, and risk tolerance around data and privacy,
Defining what types of activities should be evaluated through a PIA and when to conduct PIAs (such as before a smart city technology is acquired, or whenever there are material changes to existing data processes),
Incorporating threshold assessments and outside expertise to ensure higher risk activities are prioritized and to reduce the likelihood of bottlenecks,
Recognizing the key roles and responsibilities needed to effectively carry out a PIA (including senior privacy officials, executive supporters, and program staff),
Ensuring that PIAs are regularly reviewed and integrated into monitoring and recordkeeping systems, and
Providing options to encourage transparency and engagement around the results of PIAs.
In the “Fundamentals” section, cities are provided with recommended issues for their PIAs to consider when evaluating a proposed smart city technology. These include:
Identifying who within the city will be using the proposed smart city technology, for what purposes and public benefit, and under what authority (as appropriate),
Articulating the city’s public values and relevant principles, privacy commitments, legal standards, or organizational risk frameworks,
Describing the proposed smart city technology, including its technical capabilities and potential privacy impacts on individuals and communities,
Documenting how the city will respond to any anticipated privacy risks (including potential impacts on civil rights and disparate impacts on marginalized communities), including any safeguards or controls that may be used to mitigate those risks and any data use or management policies for the proposed smart city technology,
Discussing the availability of funding and resources to provide for ongoing privacy and data protection costs related to the smart city technology, and
Articulating any additional factors or contextual considerations that may be relevant, such as any community engagement conducted or exigent circumstances that could impact the current privacy risks or safeguards.
Supporting ethical data collection and sharing to enable governments and municipalities to use data from their respective community members is a priority for FPF. Drafting a model PIA policy for a global audience is a complicated process, as wide variation exists in cultural and legal approaches to privacy and data protection around the world. Smart city initiatives also vary considerably in their size and complexity. Our hope is that by providing a model policy for local governments to follow, we can increase the likelihood that cities will consider and address privacy risks in a manner consistent with community expectations.
Acknowledgements: This Model PIA Policy was a collaborative effort by members of the Privacy & Transparency Task Force and other G20 Smart Cities Alliance contributors and reviewers. Special thanks to Task Force Co-Chair Michael Mattmiller (Microsoft) and Task Force Members Pasquale Annicchino (Lex Digital), Sean Audain (Wellington City Council), Chandra Bhusan (Quantela), Dylan Gilbert (NIST), Eugene Kim (Sidewalk Labs), Naomi Lefkovitz (NIST), Jacqueline Lu (Helpful Places), and Daniel Wu (Immuta).
FPF’s participation was funded in part by the National Science Foundation (NSF SPOKES #1761795).
The Future of Privacy Forum works on privacy issues regarding smart communities. To learn more, please contact [email protected] and [email protected].
Future of Privacy Forum Releases New Youth Privacy and Data Protection Infographic
WASHINGTON, D.C. – The Future of Privacy Forum (FPF) today released a new infographic, Youth Privacy and Data Protection 101, which provides an overview of the opportunities and risks for kids online, along with potential protection strategies. It also features young people’s voices from around the world on their preferences and attitudes toward privacy.
“We all want to keep kids safe online, but the desire to shield them from risk can also limit their access to important opportunities,” said Amelia Vance, FPF’s Director of Youth and Education Privacy. “When considering any protection strategies, policymakers must carefully evaluate both opportunity and risk in order to foster the development of a robust, thriving online ecosystem that is also suitable for kids. We hope that this infographic effectively conveys that challenge, as well as the diversity of approaches to youth privacy protections being considered and implemented around the world.”
View the infographic and an accompanying blog post here.
Risks for youth online include well-known concerns such as coming across age-inappropriate content, encountering predators, and being a victim of cyberbullying or cyber harassment. Other, less visible risks include commercial exploitation through profiling and targeted marketing as well as societal shifts such as surveillance normalization, as young people may become accustomed to constantly being watched and recorded.
Of course, there are also a wealth of opportunities for youth online. With school closures due to the pandemic, many students now access their education virtually. Unable to connect with their friends and communities in person, young people rely on social media and other online tools to play, build their communities, explore their identities, and participate in civic and political forums. Online spaces are also integral to fostering creative expression and providing resources related to health and well-being.
The Youth Privacy and Data Protection 101 infographic highlights the range of strategies that governments, online service providers, educators, parents, and others can use and encourage to find that appropriate balance between protecting kids online and not limiting their opportunities. These strategies include things like limiting access to age-inappropriate content, requiring age verification prior to accessing a service, and incorporating privacy into services by default. Those strategies and others that policymakers may wish to consider are discussed in detail in FPF’s latest blog post, available here.
Manipulative UX Design & the Role of Regulation: Event Highlights
On March 24, FPF hosted “Dark Patterns:” Manipulative UX Design and the Role of Regulation. So-called “dark patterns” are user interface design choices that benefit an online service by coercing, manipulating, or deceiving users into making unintended or potentially harmful decisions. The event provided a critical examination of the ways in which manipulative interfaces can limit consumer choice and explored how regulation of manipulative designs continues to expand – from California’s recent Attorney General regulations, to the California Privacy Rights Act, to other state and federal privacy bills. Participants also discussed whether truly neutral design is ever possible, and the differences between acceptable persuasion (such as in advertising) and manipulation, coercion, and deception.
The event, moderated by FPF Senior Counsel Stacey Gray, began with a survey of legislative proposals that would regulate manipulative user interface design choices. Gray highlighted several prominent state privacy laws, including the California Consumer Privacy Act (CCPA) and California Privacy Rights Act (CPRA), which define and address dark patterns in certain contexts, such as in the California Attorney General’s regulations for the design of “opt-out of sale” mechanisms for personal data collection and use. Gray also addressed relevant legislative proposals at the state and federal level – the Washington Privacy Act (SB 5062), CA SB 980, and the SAFE DATA Act (S. 4626) – that explicitly define or create regulations around manipulative design choices. Finally, Gray explained that manipulative design is an “ongoing focus” of the Federal Trade Commission, citing past enforcement actions related to manipulative user interface design choices and referencing the FTC’s upcoming April 29 workshop, “Bringing Dark Patterns to Light.”
Dr. Jennifer King, Privacy and Data Policy Fellow at the Stanford Institute for Human-Centered Artificial Intelligence, provided the keynote presentation, which defined dark patterns, described the contexts they target and how they work, and offered examples and key considerations for policymakers and regulators. Specifically, Dr. King recommended that lawmakers consider the following questions:
Is current FTC section 5 authority enough to address dark patterns generally? Or is expanded authority necessary?
How do we evaluate and measure dark patterns, and who should do this type of work?
Identifying the “dark” vs. the “gray”: what defines the line between permissible persuasion and manipulation or coercion?
Are neutral designs a realistic and enforceable option, particularly at decision points such as opt-ins or opt-outs?
What are the implications for Privacy by Design? How is success in privacy measured?
How does the CPRA’s “effect” standard differ from a potential “intent” standard? Which standard is more measurable and enforceable?
Following Dr. King’s address, the event moved to a panel discussion with Mihir Kshirsagar, Clinic Lead for Princeton’s Center for Information Technology Policy; Tanya Forsheit, Chair of the Privacy & Data Security Group at Frankfurt Kurnit Klein + Selz; as well as Gray and Dr. King. Together, the panel considered manipulative design from legal, policy, and technology perspectives, providing insightful answers to questions from the audience.
Gray closed the event by noting that manipulative design will continue to be a focus for FPF, previewing future convenings on manipulative design under EU and global law and in specific contexts, such as in online products and services for children and teens.
FPF Hosted a CPDP 2021 Panel on US Privacy Law: The Beginning of a New Era
By Srivats Shankar, FPF Legal Intern
For the 14th annual Computers, Privacy and Data Protection conference, which took place from 27 to 29 January 2021, FPF hosted a panel of experts to discuss “US Privacy Law: The Beginning of a New Era”, a recording of which has just been published. The panel was moderated by Dr. Gabriela Zanfir-Fortuna, who was joined by Anupam Chander, Professor of Law at Georgetown University; Jared Bomberg, Senior Counsel for the Senate Committee on Commerce, Science and Transportation; Stacey Schesser, Office of the California Attorney General; and Lydia Parnes, Partner at Wilson Sonsini’s Privacy and Cybersecurity Practice.
Broadly, the panel discussed the events that have prompted the shift towards privacy protection in the US in recent years, including the latest privacy law initiatives at the state and federal level. The discussion addressed how regulators are enforcing current laws and preparing for what’s to come, and how these developments may strengthen the Trans-Atlantic relationship in the digital age.
Professor Anupam Chander discussed the most consequential developments in US privacy law in recent years, which he identified as the passage of the California Consumer Privacy Act (CCPA) in 2018, the Supreme Court decision in Carpenter v. US, and the passage of the California Privacy Rights Act (CPRA) in 2020. According to Professor Chander, these developments will define the law of privacy over the next decade.
Jared Bomberg discussed developments at the federal level in the United States, including Congress’s increasing focus on comprehensive consumer privacy legislation. In the Senate, the two leading proposals are the Consumer Online Privacy Rights Act (COPRA), led by Senator Cantwell (D-WA), and the SAFE DATA Act, led by Senator Wicker (R-MS). Both bills have many cosponsors. Among these and other privacy bills, there is commonality regarding the rights of access, correction, deletion, and portability. Meanwhile, key differences include the existence of a private right of action, the extent to which a federal law would preempt state laws, and the incorporation of fiduciary responsibilities.
Stacey Schesser discussed privacy law in California, including the enactment of the CCPA and companies’ responses to the law. Following the passage of the GDPR, many companies have come to support compliance with the CCPA. California, by virtue of its large population and major economy, has effectively required many businesses across the United States to come into compliance with the CCPA. Schesser noted that her office has seen consumer frustration with opt-out mechanisms and deletion of personal information, alongside challenges with companies interpreting the law in different ways. However, she noted that many companies have complied with the CCPA within the 30-day notice and cure period after being notified of a violation. The initial rollout of Attorney General regulations has attempted to define the scope of enforcement, especially with reference to unique problems such as dark patterns.
Lydia Parnes discussed the enforcement of privacy law in the US. She observed that the Federal Trade Commission (FTC) has been fairly aggressive in exercising its enforcement powers. Commissioner Slaughter, who became Acting Chairwoman, has promoted the use of civil penalties in privacy cases. These enforcement actions have become “baseline norms” for companies to follow; they affect not just the individual company but the industry at large. Parnes noted that the FTC has limited resources and that enforcement by state agencies would be an effective way to facilitate change.
In the Q&A session, attendees raised issues of global interoperability, agency enforcement, and competition. Professor Anupam Chander emphasized the importance of the Schrems II decision, and the need for the US and Europe to come to another “modus vivendi.” This could be established without a “national” policy on privacy, to protect the information of foreign individuals whose data may be stored in the United States.
In response to a question about enforcement, Jared Bomberg emphasized that agencies like the FTC need more resources and that there is some acceptance that the FTC should continue enforcement in its existing fashion. He further noted that Attorneys General could also supplement and collaborate on enforcement. Bomberg also stressed the need for a private right of action. Market constraints also limit customers’ ability to protect their rights, and the current lack of transparency around this power dynamic has created a situation in which customers do not understand what they have signed up for.
In closing, the panelists received a question on the likelihood of seeing a federal privacy law in the next two years. The consensus, as Bomberg put it, was that it could be “100% and 0%.”
Watch the full recording of the panel by following this link.
The right to be forgotten is not compatible with the Brazilian Constitution. Or is it?
The Brazilian Supreme Federal Court, or “STF” in its Brazilian acronym, recently issued a landmark decision concerning the right to be forgotten (RTBF), finding that it is incompatible with the Brazilian Constitution. This attracted international attention to Brazil for a topic quite distant from the sadly frequent environmental, health, and political crises.
Readers should be warned that, while reading this piece, they might experience disappointment, perhaps even frustration, then renewed interest and curiosity, and finally – and hopefully – an increased open-mindedness, as they come to understand a new facet of the RTBF debate and how it is playing out at the constitutional level in Brazil.
This might happen because, although the STF relies on the “RTBF” label, the content behind that label is quite different from what one might expect after following the same debate in Europe. From a comparative law perspective, this landmark judgment tellingly shows how similar constitutional rights play out differently across legal cultures and may lead to heterogeneous outcomes depending on the constitutional frameworks of reference.
How it started: insolvency seasoned with personal data
As is well known, the first global debate on what it means to be “forgotten” in the digital environment arose in Europe, thanks to Mario Costeja Gonzalez, a Spaniard who, paradoxically, will never be forgotten by anyone due to his key role in the construction of the RTBF.
Costeja famously requested that Google Search deindex information about himself that he considered no longer relevant. Indeed, when anyone “googled” his name, the search engine returned as top results links to articles reporting Costeja’s past insolvency as a debtor. Costeja argued that, despite his past insolvency, he had already paid his debt to justice and society many years before, and that it was therefore unfair for his name to continue to be associated ad aeternum with a mistake he made in the past.
The follow-up is well known in data protection circles. The case reached the Court of Justice of the European Union (CJEU), which, in its landmark Google Spain judgment (C-131/12), established that search engines are to be considered data controllers and therefore have an obligation to de-index information that is inappropriate, excessive, not relevant, or no longer relevant, when the data subject to whom the data refer so requests. That obligation flowed from Article 12(b) of Directive 95/46 on the protection of personal data, a pre-GDPR provision that set the basis for the European conception of the RTBF by providing for the “rectification, erasure or blocking of data the processing of which does not comply with the provisions of [the] Directive, in particular because of the incomplete or inaccurate nature of the data.”
The indirect consequence of this historic decision, and the debate it generated, is that we have all come to consider the RTBF in the terms set by the CJEU. However, what is essential to emphasize is that the CJEU approach is only one possible conception and, importantly, it was possible because of the specific characteristics of the EU legal and institutional framework. We have come to think that RTBF means the establishment of a mechanism like the one resulting from the Google Spain case, but this is the result of a particular conception of the RTBF and of how this particular conception should – or could – be implemented.
The fact that the RTBF has been predominantly analyzed and discussed through a European lens does not mean that this is the only possible perspective, nor that this approach is necessarily the best. In fact, the Brazilian conception of the RTBF is remarkably different from a conceptual, constitutional, and institutional standpoint. The main concern of the Brazilian RTBF is not how a data controller might process personal data (this is the part where frustration and disappointment might arise in the reader), but the STF itself leaves the door open to that possibility (this is the point where renewed interest and curiosity may arise).
The Brazilian conception of the right to be forgotten
Although the RTBF has acquired fundamental relevance in digital policy circles, it is important to emphasize that, until recently, Brazilian jurisprudence had mainly focused on the juridical need for “forgetting” in the analogue sphere. Indeed, before the CJEU Google Spain decision, the Brazilian Superior Court of Justice, or “STJ” – the other Brazilian high court, which deals with the interpretation of the law, as distinct from the previously mentioned STF, which deals with constitutional matters – had already considered the RTBF as a right not to be remembered, affirmed by the individual vis-à-vis traditional media outlets.
This interpretation first emerged in the “Candelaria massacre” case, a gloomy page of Brazilian history featuring a multiple homicide perpetrated in 1993 in front of the Candelaria Church, a beautiful colonial Baroque building in downtown Rio de Janeiro. The gravity of the massacre and its particularly conspicuous setting led Globo TV, a leading Brazilian broadcaster, to feature it in a TV show called Linha Direta. Importantly, the show included in its narration details about a man who had been suspected of being one of the perpetrators of the massacre but was later acquitted.
Understandably, the man filed a complaint arguing that the inclusion of his personal information in the TV show was causing him severe emotional distress, while also reviving suspicions against him for a crime of which he had been acquitted many years before. In September 2013, ruling on Special Appeal No. 1,334,097, the STJ agreed with the plaintiff, establishing the man’s “right not to be remembered against his will, specifically with regard to discrediting facts.” This is how the RTBF was born in Brazil.
Importantly for our present discussion, this interpretation was not born out of digital technology and does not concern the delisting of specific types of information from search engine results. In Brazilian jurisprudence, the RTBF has been conceived as a general right to effectively limit the publication of certain information. The man featured in the Globo reportage had been acquitted many years before, hence he had a right to be “let alone,” as Warren and Brandeis would argue, and not to be remembered for something he had not even committed. The STJ therefore constructed its vision of the RTBF on the basis of article 5.X of the Brazilian Constitution, which enshrines the fundamental right to intimacy and preservation of image, two fundamental features of privacy.
Hence, although they use the same label, the STJ and the CJEU are conceptualizing two remarkably different rights when they refer to the RTBF. While both conceptions aim at limiting access to specific types of personal information, the Brazilian conception differs from the EU one on at least three levels.
First, their constitutional foundations differ. While both conceptions are intimately intertwined with individuals’ informational self-determination, the STJ built the RTBF on the protection of privacy, honour, and image, whereas the CJEU built it upon the fundamental right to data protection, which in the EU framework is a standalone fundamental right. Conspicuously, an explicit right to data protection did not exist in the Brazilian constitutional framework at the time of the Candelaria case, and it has only been in the process of being recognized since 2020.
Secondly, and consequently, the original goal of the Brazilian conception of the RTBF was not to regulate how a controller should process personal data but rather to protect the private sphere of the individual. In this perspective, the goal of the STJ was not – and could not have been – to regulate the deindexation of specific incorrect or outdated information, but rather to regulate the deletion of “discrediting facts” so that the private life, honour, and image of an individual would not be illegitimately violated.
Finally, and importantly, the absence in Brazil at the time of the decision of an institutional framework dedicated to data protection did not allow the STJ the same leeway as the CJEU. The EU Justices could afford to delegate the implementation of the RTBF to search engines because that implementation would receive guidance from, and be subject to the review of, a well-consolidated system of European Data Protection Authorities. At the EU level, DPAs are expected to guarantee a harmonious and consistent interpretation and application of data protection law. In Brazil, by contrast, a DPA was only established in late 2020 and announced its first regulatory agenda only in late January 2021.
This latter point is far from trivial and is, in the opinion of this author, an essential preoccupation that might have driven the subsequent RTBF conceptualization of the STJ.
The stress-test
The soundness of the Brazilian definition of the RTBF, however, was to be tested again by the STJ in the context of another grim and unfortunate page of Brazilian history, the Aida Curi case. The case originated with the sexual assault and subsequent homicide of the young Aida Curi in Copacabana, Rio de Janeiro, on the evening of 14 July 1958. At the time, the case attracted considerable media attention, not only because of its mysterious circumstances and the young age of the victim, but also because the perpetrators of the sexual assault tried to conceal it by throwing the victim’s body from the rooftop of a very tall building on Avenida Atlantica, the upscale avenue facing Copacabana beach.
Needless to say, Globo TV considered the case as a perfect story for yet another Linha Direta episode. Aida Curi’s relatives, far from enjoying the TV show, sued the broadcaster for moral damages and demanded the full enjoyment of their RTBF – in the Brazilian conception, of course. According to the plaintiffs, it was indeed not conceivable that, almost 50 years after the murder, Globo TV could publicly broadcast personal information about the victim – and her family – including the victim’s name and address, in addition to unauthorized images, thus bringing back a long-closed and extremely traumatic set of events.
The brothers of Aida Curi claimed reparation from Rede Globo, but the STJ decided that the time that had passed was enough to mitigate the effects of anguish and pain on the dignity of Aida Curi’s relatives, while also arguing that it was impossible to report the events without mentioning the victim. This decision was appealed by Ms Curi’s family members, who demanded, by means of Extraordinary Appeal No. 1,010,606, that the STF recognize “their right to forget the tragedy.” It is interesting to note that the way the demand is framed in this Appeal tellingly exemplifies the Brazilian conception of “forgetting” as erasure and a prohibition on disclosure.
At this point, the STF identified in the Appeal an interest in debating the issue “with general repercussion,” a peculiar judicial procedure that the Court can use when it recognizes that a given case has particular relevance and transcendence for the Brazilian legal and judicial system. Indeed, the decision in a case with general repercussion does not bind only the parties; rather, it establishes jurisprudence that must be followed by all lower courts.
In February 2021, the STF finally deliberated on the Aida Curi case, establishing that “the idea of a right to be forgotten is incompatible with the Constitution, thus understood as the power to prevent, due to the passage of time, the disclosure of facts or data that are true and lawfully obtained and published in analogue or digital media” and that “any excesses or abuses in the exercise of freedom of expression and information must be analyzed on a case-by-case basis, based on constitutional parameters – especially those relating to the protection of honor, image, privacy and personality in general – and the explicit and specific legal provisions existing in the criminal and civil spheres.”
In other words, what the STF has deemed incompatible with the Federal Constitution is a specific interpretation of the Brazilian version of the RTBF. What is not compatible with the Constitution is to argue that the RTBF allows one to prohibit the publication of true facts that were lawfully obtained. At the same time, however, the STF clearly states that it remains possible for any court of law to evaluate, on a case-by-case basis and according to constitutional parameters and existing legal provisions, whether a specific episode allows the use of the RTBF to prohibit the disclosure of information that undermines the dignity, honour, privacy, or other fundamental interests of the individual.
Hence, while explicitly rejecting the use of the RTBF as a general right to censorship, the STF leaves room for using the RTBF to delist specific personal data in an EU-like fashion, while specifying that this must be done with guidance from the Constitution and the law.
What next?
Given the core differences between the Brazilian and EU conceptions of the RTBF, as highlighted above, it is understandable, in the opinion of this author, that the STF adopted a less proactive and more conservative approach. This is especially so in light of the very recent establishment of a data protection institutional system in Brazil.
It is understandable that the STF might have preferred to delegate, de facto, to the courts the interpretation of when and how the RTBF can rightfully be invoked, according to constitutional and legal parameters. First, in the Brazilian interpretation, the RTBF fundamentally rests on the protection of privacy – i.e., the private sphere of an individual – and, while data protection concerns are acknowledged, they are not the main ground on which the Brazilian RTBF conception relies.
This caution is also understandable in a country and a region where the social need to remember, and to shed light on a recent history marked by dictatorships, well-hidden atrocities, and opacity, outweighs the legitimate individual interest in prohibiting the circulation of truthful and legally obtained information. In the digital sphere, however, the RTBF quintessentially translates into an extension of informational self-determination, which the Brazilian General Data Protection Law, better known as the “LGPD” (Law No. 13.709/2018), enshrines in its article 2 as one of the “foundations” of data protection in the country, and whose fundamental character was recently recognized by the STF itself.
In this perspective, it is useful to recall the dissenting opinion of Justice Luiz Edson Fachin in the Aida Curi case, stressing that “although it does not expressly name it, the Constitution of the Republic, in its text, contains the pillars of the right to be forgotten, as it celebrates the dignity of the human person (article 1, III), the right to privacy (article 5, X) and the right to informational self-determination – which was recognized, for example, in the disposal of the precautionary measures of the Direct Unconstitutionality Actions No. 6,387, 6,388, 6,389, 6,390 and 6,393, under the rapporteurship of Justice Rosa Weber (article 5, XII).”
It is the opinion of this author that the Brazilian debate on the RTBF in the digital sphere would be clearer if its dimension as a right to deindexation of search engine results were clearly regulated. It is understandable that the STF did not dare to regulate this, given its interpretation of the RTBF and the very embryonic data protection institutional framework in Brazil. However, given the increasing datafication we are currently witnessing, it would be naïve not to expect that further RTBF claims concerning the digital environment and, specifically, the way search engines process personal data will keep emerging.
The fact that the STF has left the door open to applying the RTBF in the case-by-case analysis of individual claims may reassure the reader regarding the primacy of constitutional and legal arguments in such analysis. It may also lead the reader to – very legitimately – wonder whether such a choice is, de facto, the most efficient and coherent way to deal with a potentially enormous number of claims, given the margin of appreciation and interpretation that each court may have.
An informed debate that clearly highlights the existing options and the most efficient and just ways to implement them in the Brazilian context would be beneficial. This will likely be one of the goals of the upcoming Latin American edition of the Computers, Privacy and Data Protection conference (CPDP LatAm), which will take place in July, entirely online, and will explore the most pressing privacy and data protection issues for Latin American countries.
If you have any questions about engaging with The Future of Privacy Forum on Global Privacy and Digital Policymaking contact Dr. Gabriela Zanfir-Fortuna, Senior Counsel, at [email protected].
FPF announces appointment of Malavika Raghavan as Senior Fellow for India
The Future of Privacy Forum announces the appointment of Malavika Raghavan as Senior Fellow for India, expanding our Global Privacy team to one of the key jurisdictions for the future of privacy and data protection law.
Malavika is a thought leader and a lawyer working on interdisciplinary research into the impacts of digitisation on the lives of lower-income individuals. Her work since 2016 has focused on the regulation and use of personal data in service delivery by the Indian State and private sector actors. From 2016 until 2020 she founded and led the Future of Finance Initiative at Dvara Research (an Indian think tank) in partnership with the Gates Foundation, anchoring its research agenda and policy advocacy on emerging issues at the intersection of technology, finance, and inclusion. Research that she led at Dvara Research was cited by India’s Data Protection Committee in its White Paper as well as in its final report with proposals for India’s draft Personal Data Protection Bill, with specific reliance placed on that research on aspects of regulatory design and enforcement. See Malavika’s full bio here.
“We are delighted to welcome Malavika to our Global Privacy team. For the following year, she will be our adviser to understand the most significant developments in privacy and data protection in India, from following the debate and legislative process of the Data Protection Bill and the processing of non-personal data initiatives, to understanding the consequences of the publication of the new IT Guidelines. India is one of the most interesting jurisdictions to follow in the world, for many reasons: the innovative thinking on data protection regulation, the potentially groundbreaking regulation of non-personal data and the outstanding number of individuals whose privacy and data protection rights will be envisaged by these developments, which will test the power structures of digital regulation and safeguarding fundamental rights in this new era”, said Dr. Gabriela Zanfir-Fortuna, Global Privacy lead at FPF.
We asked Malavika to share her thoughts for FPF’s blog on the most significant developments in privacy and digital regulation in India and on India’s role in the global privacy and digital regulation debate.
FPF: What are some of the most significant developments in the past couple of years in India in terms of data protection, privacy, digital regulation?
Malavika Raghavan: “Undoubtedly, the turning point for the privacy debate in India was the 2017 judgement of the Indian Supreme Court in Justice KS Puttaswamy v Union of India. The judgment affirmed the right to privacy as a constitutional guarantee, protected by Part III (Fundamental Rights) of the Indian Constitution. It was also regenerative, bringing our constitutional jurisprudence into the 21st century by re-interpreting timeless principles for the digital age, and casting privacy as a prerequisite for accessing other rights—including the right to life and liberty, to freedom of expression and to equality—given the ubiquitous digitisation of human experience we are witnessing today.
Overnight, Puttaswamy also re-balanced conversations in favour of privacy safeguards, making these equal priorities for builders of digital systems rather than framing these issues as obstacles to innovation and efficiency. In addition, it challenged the narrative that privacy is an elite construct that only wealthy or privileged people deserve, since many litigants in the original case that had created the Puttaswamy reference were from marginalised groups. Since then, a string of interesting developments has arisen as new cases reassess the impact of digital technology on individuals in India, e.g. cases on the boundaries of private sector data sharing (such as between WhatsApp and Facebook), or on the State’s use of personal data (as in the case concerning Aadhaar, our national identification system), among others.
Puttaswamy also provided a fillip for a big legislative development: the creation of an omnibus data protection law in India. A bill to create this framework was proposed by a Committee of Experts under the chairmanship of Justice Srikrishna (an ex-Supreme Court judge) and has been making its way through ministerial and Parliamentary processes. There’s a large possibility that this law will be passed by the Indian Parliament in 2021! Definitely a big development to watch.
FPF: How do you see India’s role in the global privacy and digital regulation debate?
Malavika Raghavan: “India’s strategy on privacy and digital regulation will undoubtedly have global impact, given that India is home to 1/7th of the world’s population! The mobile internet revolution has created a huge impact on our society with millions getting access to digital services in the last couple of decades. This has created nuanced mental models and social norms around digital technologies that are slowly being documented through research and analysis.
The challenge for policy makers is to create regulations that match these expectations and the realities of Indian users to achieve reasonable, fair regulations. As we have already seen from sectoral regulations (such as those from our Central Bank around cross border payments data flows) such regulations also have huge consequences for global firms interacting with Indian users and their personal data.
In this context, I think India can have the late-mover advantage in some ways when it comes to digital regulation. If we play our cards right, we can take the best lessons from the experience of other countries in the last few decades and eschew the missteps. More pragmatically, it seems inevitable that India’s approach to privacy and digital regulation will also be strongly influenced by the Government’s economic, geopolitical and national security agenda (both internationally and domestically).
One thing is for certain: there is no path-dependence. Our legislators and courts are thinking in unique and unexpected ways that are indeed likely to result in a fourth way (as described by the Srikrishna Data Protection Committee’s final report), compared to the approach in the US, EU and China.”
If you have any questions about engaging with The Future of Privacy Forum on Global Privacy and Digital Policymaking contact Dr. Gabriela Zanfir-Fortuna, Senior Counsel, at [email protected].
India: Massive overhaul of digital regulation, with strict rules for take-down of illegal content and automated scanning of online content
On February 25, the Indian Government notified and published the Information Technology (Guidelines for Intermediaries and Digital Media Ethics Code) Rules 2021. These rules mirror the EU’s Digital Services Act (DSA) proposal to some extent: they propose a tiered approach based on the scale of the platform and touch on intermediary liability, content moderation, take-down of illegal content from online platforms, and internal accountability and oversight mechanisms. But they go beyond such rules by adding a Code of Ethics for digital media, similar to the code of ethics classic journalistic outlets must follow, and by proposing an “online content” labelling scheme for content that is safe for children.
The Code of Ethics applies to online news publishers, as well as intermediaries that “enable the transmission of news and current affairs”. This part of the Guidelines (the Code of Ethics) has already been challenged in the Delhi High Court by news publishers this week.
The Guidelines have raised several types of concerns in India, from their impact on freedom of expression and on the right to privacy – through the automated scanning of content and the mandated traceability of even end-to-end encrypted messages so that the originator can be identified – to the Government’s choice to make such profound changes through executive action. The Government, through the two Ministries involved in the process, is scheduled to testify before the Standing Committee on Information Technology of the Parliament on March 15.
New obligations for intermediaries
“Intermediaries” include “websites, apps and portals of social media networks, media sharing websites, blogs, online discussion forums, and other such functionally similar intermediaries” (as defined in rule 2(1)(m)).
Here are some of the most important rules laid out in Part II of the Guidelines, dedicated to Due Diligence by Intermediaries:
All intermediaries, regardless of size or nature, will be under an obligation to “remove or disable access” to content subject to a Court order or an order of a Government agency, as early as possible and no later than 36 hours after receiving the order (see rule 4(1)(d)).
All intermediaries will be under an obligation to inform users at least once per year about their content policies, which must at a minimum include rules such as not uploading, storing or sharing information that “belongs to another person and to which the user does not have any right”, “deceives or misleads the addressee about the origin of the message”, “is patently false and untrue” or “is harmful to minors” (see rules 4(1)(b) and (f)).
All intermediaries will have to provide information to authorities for the purpose of identity verification and for investigating and prosecuting offenses, within 72 hours of receiving an order from an authorised government agency (see rule 4(1)(j)).
All intermediaries will have to take all measures to remove or limit access, within 24 hours of receiving a complaint from a user, to any content that reveals nudity, amounts to sexual harassment, or represents a deep fake, where the content is transmitted with the intent to harass, intimidate, threaten or abuse an individual (see rule 4(1)(p)).
“Significant social media intermediaries” have enhanced obligations
“Significant social media intermediaries” are social media services with a number of users above a threshold that will be defined and notified by the Central Government. The concept is similar to the DSA’s “Very Large Online Platform,” although the DSA includes clear criteria in the proposed act itself on how to identify a VLOP.
As for “significant social media intermediaries” in India, they will have additional obligations (similar to how the DSA proposal in the EU scales obligations):
“Significant social media intermediaries” that provide messaging services will be under an obligation to identify the “first originator” of a message following a Court order or an order from a Competent Authority (see rule 5(2)). This provision raises significant concerns over end-to-end encryption and encryption backdoors.
They will have to appoint a Chief Compliance Officer for the purposes of complying with these rules, who will be liable for failing to ensure that the intermediary observes its due diligence obligations; the CCO will have to hold an Indian passport and be based in India;
They will have to appoint a Chief Grievance Officer, who also must be based in India.
They will have to publish compliance reports every six months.
They will have to deploy automated scanning to proactively identify information identical to content removed following an order (under the 36-hour rule), as well as child sexual abuse material and related content (see rule 5(4)).
They will have to set up an internal mechanism for receiving complaints.
These “Guidelines” seem to have the legal effect of a statute, and they are being adopted through executive action to replace guidelines adopted by the Government in 2011, under powers conferred on it by the Information Technology Act 2000. The new Guidelines would enter into force immediately after publication in the Official Gazette (no information is available as to when publication is scheduled). The Code of Ethics would enter into force three months after publication in the Official Gazette. As mentioned above, there are already some challenges in court against part of these rules.
See this analysis by Rahul Matthan, who raises questions about the “first originator” identification rule and argues that the Indian Supreme Court would likely declare such a measure unconstitutional: “Traceability is Antithetical to Liberty”.
Another jurisdiction to keep your eyes on: Australia
Also note that while the European Union is starting its heavy and slow legislative machine – appointing Rapporteurs in the European Parliament and holding first discussions on the DSA proposal in the relevant working group of the Council – another country is set to adopt digital content rules soon: Australia. The Government is currently considering an Online Safety Bill, which was open to public consultation until mid-February and which would include a “modernised online content scheme” creating new classes of harmful online content, as well as take-down requirements for image-based abuse, cyber abuse, and harmful content online, requiring removal within 24 hours of receiving a notice from the eSafety Commissioner.
If you have any questions about engaging with The Future of Privacy Forum on Global Privacy and Digital Policymaking contact Dr. Gabriela Zanfir-Fortuna, Senior Counsel, at [email protected].