India’s new Intermediary & Digital Media Rules: Expanding the Boundaries of Executive Power in Digital Regulation


Author: Malavika Raghavan

India’s new rules on intermediary liability and regulation of publishers of digital content have generated significant debate since their release in February 2021. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (the Rules) have: (i) expanded the due diligence obligations that online intermediaries must satisfy to retain their 'safe harbour' from liability; (ii) created a new regulatory regime, including Codes of Ethics and a government-led oversight mechanism, for 'publishers' of digital news and online curated content; and (iii) introduced new powers for Central Government officials to order the deletion, modification or blocking of content.

The majority of these provisions were unanticipated, resulting in a raft of petitions filed in High Courts across the country challenging the validity of various aspects of the Rules, including their constitutionality. On 25 May 2021, the three-month compliance period for some new requirements applicable to significant social media intermediaries (so designated by the Rules) expired, with many intermediaries not yet in compliance, opening them up to liability under the Information Technology Act as well as wider civil and criminal laws. This has reignited debates about the impact of the Rules on business continuity and liability, citizens' access to online services, privacy and security.

Following on from FPF's previous blog highlighting some aspects of these Rules, this article presents an overview of the Rules before diving into critical issues regarding their interpretation and application in India. It concludes by taking stock of some of the emerging effects of these new regulations, which have major implications for millions of Indian users as well as for the digital service providers serving the Indian market.

1. Brief overview of the Rules: Two new regimes for ‘intermediaries’ and ‘publishers’ 

The new Rules create two regimes for two different categories of entities: ‘intermediaries’ and ‘publishers’.  Intermediaries have been the subject of prior regulations – the Information Technology (Intermediaries guidelines) Rules, 2011 (the 2011 Rules), now superseded by these Rules. However, the category of “publishers” and related regime created by these Rules did not previously exist. 

The Rules begin with commencement provisions and definitions in Part I. Part II of the Rules applies to intermediaries (as defined in the Information Technology Act 2000 (IT Act)) that transmit electronic records on behalf of others, including online intermediary platforms such as YouTube, WhatsApp and Facebook. The rules in this part primarily flesh out the protections offered by Section 79 of the IT Act, which gives passive intermediaries the benefit of a 'safe harbour' from liability for objectionable information shared by third parties using their services, somewhat akin to the protections under Section 230 of the US Communications Decency Act. To claim this protection from liability, intermediaries need to undertake certain 'due diligence' measures, including informing users of the types of content that cannot be shared and following content take-down procedures (for which safeguards evolved over time through important case law). The new Rules supersede the 2011 Rules and significantly expand on them, introducing new provisions and additional due diligence requirements that are detailed further in this blog.

Part III of the Rules applies to a new, previously non-existent category of entities designated as 'publishers'. This category is further divided into 'publishers of news and current affairs content' and 'publishers of online curated content'. Part III then sets up extensive requirements for publishers: adherence to specific Codes of Ethics, onerous content take-down requirements, and a three-tier grievance process with appeals lying to an executive Inter-Departmental Committee of Central Government bureaucrats.

Finally, the Rules contain two provisions relating to content-blocking orders that apply to all entities (i.e. intermediaries and publishers). They lay out a new process by which Central Government officials can direct intermediaries and publishers to delete, modify or block content, either following a grievance process (Rule 15) or through "emergency" blocking orders which may be passed ex parte (Rule 16). These provisions stem from the Government's power to issue directions to block public access to any information through any computer resource (Section 69A of the IT Act). Interestingly, they have been introduced separately from the existing rules for blocking, the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009.

2. Key issues for intermediaries under the Rules

2.1 A new class of 'social media intermediaries'

The term 'intermediary' is broadly defined in the IT Act, covering a range of entities involved in the transmission of electronic records. The Rules introduce two new sub-categories: 'social media intermediaries', being intermediaries that primarily or solely enable online interaction between two or more users, and 'significant social media intermediaries' (SSMIs), being social media intermediaries whose registered users in India exceed a threshold notified by the Central Government (currently 50 lakh, i.e. five million, registered users).

Given that a popular messaging app like WhatsApp has over 400 million users in India, the threshold appears to be fairly conservative. The Government may order any intermediary to comply with the same obligations as SSMIs (under Rule 6) if its services are adjudged to pose a risk of harm to national security, the sovereignty and integrity of India, India's foreign relations or public order.

SSMIs have to follow substantially more onerous "additional due diligence" requirements to claim the intermediary safe harbour (including mandatory traceability of message originators and proactive automated screening, as discussed below). These new requirements raise privacy and data security concerns: they extend beyond traditional ideas of platform "due diligence", potentially exposing the content of private communications and in doing so creating new risks for users in India.

2.2 Additional requirements for SSMIs: resident employees, mandated message traceability, automated content screening

Extensive new requirements for SSMIs are set out in the new Rule 4. These include the appointment of India-resident personnel (a Chief Compliance Officer, a nodal contact person for coordination with law enforcement, and a Resident Grievance Officer), the ability to identify the 'first originator' of information on messaging services when ordered to do so (Rule 4(2)), and the deployment of automated tools to proactively identify certain categories of content (Rule 4(4)).

Provisions mandating modifications to the technical design of encrypted platforms to enable traceability seem to go beyond merely requiring intermediary due diligence. Instead, they appear to draw on separate Government powers relating to the interception and decryption of information (under Section 69 of the IT Act). In addition, separate stand-alone rules laying out procedures and safeguards for such interception and decryption orders already exist in the Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009. Rule 4(2) even acknowledges these provisions, raising the question of whether these Rules (relating to intermediaries and their safe harbours) can be used to expand the scope of Section 69 or the rules thereunder.

Proceedings initiated by WhatsApp LLC in the Delhi High Court and by Free and Open Source Software (FOSS) developer Praveen Arimbrathodiyil in the Kerala High Court have both challenged the legality and validity of Rule 4(2), on grounds including that it is ultra vires, going beyond the scope of its parent statutory provisions (Sections 79 and 69A) and the intent of the IT Act itself. Substantively, the provision is also challenged on the basis that it would violate users' fundamental rights, including the right to privacy and the right to free speech and expression, due to the chilling effect that stripping back encryption will have.

Though the objective of the provision is laudable (i.e. to limit the circulation of violent or previously removed content), the move towards proactive automated monitoring has raised serious concerns regarding censorship on social media platforms. Rule 4(4) appears to acknowledge the deep tensions this requirement creates with privacy and free speech, as seen in the provisions requiring these screening measures to be proportionate to the free speech and privacy interests of users, to be subject to human oversight, and to be accompanied by reviews of the automated tools to assess fairness, accuracy, propensity for bias or discrimination, and impact on privacy and security. However, given the vagueness of this wording compared to the trade-off of losing intermediary immunity, scholars and commentators have noted the obvious potential for 'over-compliance' and excessive screening out of content. Many (including the petitioner in the Praveen Arimbrathodiyil matter) have also noted that automated filters are not sophisticated enough to differentiate between violent unlawful images and legitimate journalistic material. The concern is that such measures could screen out 'valid' speech and expression at large scale, with serious consequences for the constitutional rights to free speech and expression, which also protect 'the rights of individuals to listen, read and receive the said speech' (Tata Press Ltd v. Mahanagar Telephone Nigam Ltd, (1995) 5 SCC 139).

All intermediaries must also put in place a grievance redressal mechanism, including a Grievance Officer who must acknowledge user complaints within 24 hours and dispose of them within 15 days (Rule 3(2)). Such requirements appear to be aimed at creating more user-friendly networks of intermediaries. However, the imposition of a single set of requirements is especially onerous for smaller or volunteer-run intermediary platforms, which may not have the income streams or staff to operate such a mechanism. Indeed, the petition in the Praveen Arimbrathodiyil matter has challenged certain of these requirements as a threat to the future of the volunteer-led FOSS movement in India, since they place similar requirements on small FOSS initiatives as on large proprietary Big Tech intermediaries.

Other obligations that stipulate turn-around times for intermediaries include (i) a requirement to remove or disable access to content within 36 hours of receipt of a Government or court order relating to unlawful information on the intermediary's computer resources (under Rule 3(1)(d)), and (ii) a requirement to provide information within 72 hours of receiving an order from an authorised Government agency undertaking investigative activity (under Rule 3(1)(j)).

Similar to the concerns with automated screening, there are concerns that the new grievance process could lead to private entities becoming the arbiters of appropriate content and free speech, a position that was specifically reversed in a seminal 2015 Supreme Court decision which clarified that a Government or court order was needed for content take-downs.

3. Key issues for the new ‘publishers’ subject to the Rules, including OTT players

3.1 New Codes of Ethics and three-tier redress and oversight system for digital news media and OTT players 

Digital news media and OTT players have been designated as 'publishers of news and current affairs content' and 'publishers of online curated content' respectively in Part III of the Rules. Each category has then been made subject to a separate Code of Ethics. In the case of digital news media, the codes applicable to newspapers and cable television have been applied. For OTT players, the Appendix sets out principles regarding the content that can be created and the display classifications to be applied. To enforce these codes and to address grievances from the public about their content, publishers are now mandated to set up a grievance system which forms the first tier of a three-tier "appellate" system, culminating in an oversight mechanism run by the Central Government with extensive powers of sanction.

At least five legal challenges have been filed in various High Courts contesting the competence and authority of the Ministry of Electronics & Information Technology (MeitY) to pass the Rules, as well as their validity, namely: (i) in the Kerala High Court, LiveLaw Media Private Limited vs Union of India WP(C) 6272/2021; in the Delhi High Court, three petitions tagged together, being (ii) Foundation for Independent Journalism vs Union of India WP(C) 3125/2021, (iii) Quint Digital Media Limited vs Union of India WP(C) 11097/2021, and (iv) Sanjay Kumar Singh vs Union of India and others WP(C) 3483/2021; and (v) in the Karnataka High Court, Truth Pro Foundation of India vs Union of India and others, W.P. 6491/2021. This is in addition to a fresh petition filed on 10 June 2021, TM Krishna vs Union of India, which challenges the entirety of the Rules (both Parts II and III) on the basis that they violate the rights to free speech (Article 19 of the Constitution) and privacy (including under Article 21), that they are manifestly arbitrary and so fail the test of arbitrariness under Article 14, and that they fall foul of principles of delegation of powers.

Some of the key issues emerging from these Rules in Part III and the challenges to them are highlighted below. 

3.2 Lack of legal authority and competence to create these Rules

There has been substantial debate on the lack of clarity regarding MeitY's legal authority under the IT Act to frame these Rules. These concerns arise at various levels.

First, there is a concern that Levels I and II of the three-tier system (self-regulation by the publisher and by industry self-regulating bodies, respectively) result in a privatisation of adjudications relating to the free speech and expression of creative content producers, matters which would otherwise be litigated in Courts and Tribunals. As noted by many (including the LiveLaw petition at page 33), this could have the effect of overturning the judicial precedent in Shreya Singhal v. Union of India ((2015) 5 SCC 1), which specifically read down Section 79 of the IT Act to avoid a situation where private entities were the arbiters determining the legitimacy of take-down orders. Second, despite referring to "self-regulation", this system is subject to executive oversight (unlike the existing models for offline newspapers and broadcasting).

The Inter-Departmental Committee is entirely composed of Central Government bureaucrats, and it may review complaints escalated through the three-tier system or referred to it directly by the Ministry, following which it can deploy a range of sanctions, from warnings to mandated apologies to the deletion, modification or blocking of content. This also raises the question of whether the Committee meets the legal requirements for an administrative body undertaking a 'quasi-judicial' function, especially one that may adjudicate on matters of rights relating to free speech and privacy. Finally, while the objective of creating some standards and codes for such content creators may be laudable, it is unclear whether such an extensive oversight mechanism with powers of sanction over online publishers can be validly created under the rubric of intermediary liability provisions.

4. New powers to delete, modify or block information for public access 

As described at the start of this blog, the Rules add new powers for the deletion, modification and blocking of content from intermediaries and publishers. While Section 69A of the IT Act (and the Rules thereunder) does include blocking powers for the Government, these exist only vis-à-vis intermediaries. Rule 15 expands this power to 'publishers'. It also provides a new avenue for such orders to intermediaries, outside of the existing rules for blocking information under the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009.

Graver concerns arise from Rule 16, which allows for the passing of emergency orders blocking information, including without giving publishers or intermediaries an opportunity to be heard. There is a provision for such an order to be reviewed by the Inter-Departmental Committee within two days of its issue.

Both Rules 15 and 16 apply to all entities contemplated in the Rules. Accordingly, they greatly expand executive power and oversight over digital media services in India, including social media, digital news media and OTT on-demand services.

5. Conclusions and future implications

The new Rules in India have opened up deep questions for online intermediaries and providers of digital media services serving the Indian market. 

For intermediaries, this creates a difficult and even existential choice: the requirements (especially those relating to traceability and automated screening) appear to set an improbably high bar given the reality of their technical systems. However, failure to comply not only results in the loss of the safe harbour from liability but, as seen in the new Rule 7, also opens intermediaries up to punishment under the IT Act and the wider criminal law in India.

For digital news and OTT players, the consequences of non-compliance and the level of enforcement remain to be seen, especially given open questions regarding the validity of the legal basis for these Rules. Given the numerous petitions filed against the Rules, there is also substantial uncertainty regarding their future, although the Rules themselves have the full force of law at present.

Overall, it does appear that attempts to create a 'digital media' watchdog would be better dealt with in standalone legislation, potentially sponsored by the Ministry of Information and Broadcasting (MIB), which holds the traditional remit over such areas. Indeed, the administration of Part III of the Rules has been delegated by MeitY to the MIB, pointing to the genuine split in competence between these Ministries.

Finally, potential overlaps with India's proposed Personal Data Protection Bill (if passed) may also create tensions in the future. It remains to be seen whether the provisions on traceability will survive the test of constitutional validity set out in India's privacy judgement (Justice K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1). Irrespective of this determination, the Rules appear to have some dissonance with the data retention and data minimisation requirements seen in the last draft of the Personal Data Protection Bill, not to mention other obligations relating to Privacy by Design and data security safeguards. Interestingly, the definition of 'social media intermediary' included in an explanatory clause to section 26(4) of the Bill (released in December 2019) closely tracks the definition in Rule 2(w), but departs from it by carving out certain intermediaries. This is already resulting in moves such as Google's plea of 2 June 2021 in the Delhi High Court asking for protection from being declared a social media intermediary.

These new Rules have laid bare the inherent tensions within digital regulation between the goals of freedom of speech and expression and the right to privacy, on the one hand, and competing governance objectives of law enforcement (such as limiting the circulation of violent, harmful or criminal content online) and national security, on the other. The ultimate legal effect of these Rules will be determined as much by the outcome of the various petitions challenging their validity as by the enforcement challenges raised by casting such a wide net, one that covers millions of users and thousands of entities, all engaged in creating India's growing digital public sphere.

Photo credit: Gerd Altmann from Pixabay

Read more Global Privacy thought leadership:

South Korea: The First Case where the Personal Information Protection Act was Applied to an AI System

China: New Draft Car Privacy and Security Regulation is Open for Public Consultation

A New Era for Japanese Data Protection: 2020 Amendments to the APPI

China: New Draft Car Privacy and Security Regulation is Open for Public Consultation


by Chelsey Colbert

The author thanks Hunter Dorwart for his contribution to this text.

The Cyberspace Administration of China (CAC) released a draft regulation on car privacy and data security on May 12, 2021. China has been very active in automated vehicle development and deployment, and last fall it also proposed a draft comprehensive privacy law, which is moving towards adoption, likely by the end of this year.

The draft car privacy and data security regulation (“Several Provisions on the Management of Automobile Data Security”; hereinafter, “draft regulation”) is interesting for those tracking automated vehicle (AV) and privacy regulations around the world and is relevant beyond China – not only due to the size of the Chinese market and its potential impact on all actors in the “connected cars” space present there, but also because dedicated legislation for car privacy and data security is novel for most jurisdictions. In fact, the draft regulation raises several interesting privacy and data protection aspects worthy of further consideration, such as its strict rules on consent, privacy by design, and data localization requirements. The CAC is seeking public comment on the draft, and the deadline for comments is June 11, 2021. 

The draft regulation complements other regulatory developments around connected and automated vehicles and data. For example, on April 29, 2021, the National Information Security Standardization Technical Committee (TC 260), which is jointly administered by the CAC and the Standardization Administration of China, published a draft Standard on Information Security Technology Security Requirements for Data Collected by Connected Vehicles. The Standard sets forth security requirements for data collection to ensure compliance with other laws and facilitate a safe environment for networked vehicles. Standards like this are an essential component of corporate governance in China and notably fill in compliance gaps left in the law. 

The publication of the draft regulation and the draft standard indicates that the Chinese government is turning its attention towards the data and security practices of the connected cars industry. Below, we explain the key aspects of this draft regulation, summarize some of the noteworthy provisions, and conclude with the key takeaways for everyone in the car ecosystem.

Broad scope of covered entities: from OEMs to online ride-hailing companies

The draft regulation aims to strengthen the protection of “personal information” and “important data,” regulate data processing related to cars, and maintain national security and public interests. The scope of application of this draft regulation is fairly broad, both in terms of who it applies to and the types of data it covers. 

The draft regulation applies to "operators" that collect, analyze, store, transmit, query, utilize, delete, and provide overseas (activities collectively referred to as processing) personal information or important data during the design, production, sales, operation, maintenance, and management of cars "within the territory of the People's Republic of China."

"Operators" are entities that design, manufacture, or service cars, such as OEMs (original equipment manufacturers), component and software providers, dealers, maintenance organizations, online car-hailing companies, insurance companies, etc. (Note: the draft regulation uses "etc.," here and throughout, which appears to signal a non-exhaustive list.)

Covered data: Distinction among “personal information,” “important data,” and “sensitive personal information”

The draft regulation considers three data types, with an emphasis on "personal information" and "important data", which are defined terms under Article 3. A third type is also mentioned within the draft, at Article 8, and in a separate press release document: "sensitive personal information."

Personal information includes data from car owners, drivers, passengers, pedestrians, etc. (non-exhaustive list) and also includes information that can infer personal identity and describe personal behavior. This is a broad definition and is notable because it explicitly includes information about passengers and pedestrians. As the business models evolve and the ecosystem of players in the car space grows, it has become more important to consider individuals other than just the driver or registered user of the car. The draft regulation appears to use the words “users” and “personal information subjects” when referring to this group of individuals broadly and also uses “driver,” “owner,” and “passenger” throughout.

The second type of data covered is "important data," which includes, among other categories, data on the flow of people and vehicles in military administrative zones and other sensitive areas, surveying and mapping data above the precision of publicly released state maps, operating data of the car charging network, data on the types and flows of vehicles on roads, and audio and video data collected outside the vehicle containing faces, voices, license plates and similar information.

The inclusion of this data type is notable because it is defined in addition to “sensitive personal information” and includes data about users and infrastructure (i.e., the car charging network). Article 11 prescribes that when handling important data, operators should report to the provincial cyberspace administration and relevant departments the type, scale, scope, storage location and retention period, the purposes for collection, whether it was shared with a third party, etc. in advance (presumably in advance of processing this type of data, but this is something that may need to be clarified).

The third type of data mentioned in the draft regulation is “sensitive personal information,” and this includes vehicle location, driver or passenger audio and video, and data that can be used to determine illegal driving. There are certain obligations for operators processing this type of data (Articles 8 and 16).

Article 8 prescribes that where "sensitive personal information" is collected or provided outside of the vehicle, operators must meet certain obligations; the most significant of these, relating to notice, consent, user control, and deletion, are discussed under the headings that follow.

The definitions of these three types of data mirror similar definitions in other Chinese laws or draft laws currently being considered for adoption, such as the Civil Code, the Personal Information Protection Law, and the Cybersecurity Law. Consistency across these laws indicates a harmonization of China's emerging data governance regulatory model.

Obligations based on the Fair Information Practice Principles

Articles 4 – 10 include many of the fair information practice principles, such as purpose specification and data minimization in Article 4 and security safeguards in Article 5, as well as privacy by design (Articles 6(4), 6(5), and 9). There are a few notable provisions worth discussing in more detail, organized under the following headings below: local processing, transparency and notice, consent and user control, biometric data, data localization, annual data security management, and violations and penalties.

Local (“on device”) processing

Personal information and important data should be processed inside the vehicle, wherever possible (Article 6(1)). Where data processing outside of the car is necessary, operators should ensure the data has been anonymized wherever possible (Article 6(2)).

Transparency and Notice

When processing personal information, the operator is required to give notice of the types of data being collected and provide the contact information for the person responsible for processing user rights (Article 7). This notice can be provided through user manuals, onboard display panels, or other appropriate methods. The notice should include the purpose for collection, the moment that personal information is collected, how users can stop the collection, where and for how long data is stored, and how to delete data stored in the car and outside of the vehicle.

Regarding sensitive personal information (Article 8(3)), the operator is obliged to inform the driver and passengers that this data is being collected through a display panel or a voice in the car. This provision does not include “user manuals” as an example of how to provide notice, which potentially means that this data type is worthy of more active notice than personal information. This is notable because operators cannot rely on notice being given through a privacy notice placed on a website or in the car’s manual.

Consent and User Control, including a two-week deletion deadline

Article 9 requires operators to obtain consent to collect personal information, except where laws do not require consent. This provision notes that consent is often difficult to obtain (e.g., collecting audio and video of pedestrians outside the car). Because of this difficulty, data should only be collected when necessary and should be processed locally in the vehicle. Operators should also employ privacy by design measures, such as de-identification on devices.

Article 8(2) (requirements when collecting sensitive personal information) requires operators to obtain the driver’s consent and authorization each time the driver enters the car. Once the driver leaves the driver’s seat, that consent session has ended, and a new one must begin once the driver gets back into the seat. The driver must be able to stop the collection of this type of data at any time, be able to view and make inquiries about the data collected, and request the deletion of the data (the operator has two weeks to delete the data). It is worth noting that Article 8 includes six subsections, some of which appear to apply only to the driver or owner and not passengers or pedestrians. 

These consent and user control requirements are quite notable and would have a non-trivial impact on the design of the car, the user experience, as well as the internal operations of the operator. It could potentially impact the user experience negatively if consent and authorization were required each time the driver got into the driver’s seat. For example, a relevant comparable experience is using a website and facing the consent-related pop-ups that must be closed out before being able to read or use the website at every visit. Furthermore, stopping the collection of location data, video data, and other telematics data (if used to determine illegal driving) could also present safety and functionality risks and cause the car not to operate as intended or safely. These are some of the areas where stakeholders are expected to submit comments for the public consultation. 
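For readers who think in code, the minimal sketch below shows one way the per-entry consent session and the two-week deletion window could be modelled. It is purely illustrative: the class and method names, and the decision to tie consent strictly to seat occupancy, are our own assumptions rather than anything prescribed by the draft regulation.

```python
# Illustrative sketch only: a hypothetical in-vehicle module modelling the
# per-entry consent session that Article 8(2) appears to require, and the
# two-week window for honouring a deletion request. Names are assumptions,
# not terms from the draft regulation.
from datetime import datetime, timedelta

DELETION_DEADLINE = timedelta(weeks=2)  # operator has two weeks to fulfil a deletion request

class SensitiveDataConsentSession:
    def __init__(self):
        self.consent_given = False

    def driver_enters_seat(self, consent: bool) -> None:
        # Consent must be sought anew each time the driver takes the seat.
        self.consent_given = consent

    def driver_leaves_seat(self) -> None:
        # Leaving the driver's seat ends the current consent session.
        self.consent_given = False

    def may_collect_sensitive_data(self) -> bool:
        # Collection of location, in-cabin audio/video and similar data is only
        # permitted while a consent session is active; the driver can also stop
        # collection at any time.
        return self.consent_given

def deletion_due_by(request_time: datetime) -> datetime:
    """Latest time by which a driver's deletion request must be fulfilled."""
    return request_time + DELETION_DEADLINE
```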

Biometric data

Biometric data is mentioned throughout the draft regulation, as this type of data is implicitly or explicitly included in the definitions of personal information, important data, and sensitive personal information. Biometric data is specifically mentioned in Article 10, which is about the biometric data of drivers. Biometric data is an increasingly common data type collected by cars and deserves special attention. Article 10 would require that the biometric data of the driver (e.g., fingerprints, voiceprints, faces, heart rhythms, etc.) only be collected for the convenience of the user or to increase the security of the vehicle. Operators should also provide alternatives to biometrics. 

Data localization

Articles 12-15 and 18 concern data localization. Both personal information and important data should be stored within China, but if it is necessary to store elsewhere, the operator must complete an “outbound security assessment” through the State Cyberspace Administration, and the operator is permitted to send only the data specified in that assessment overseas. The operator is also responsible for overseeing the overseas recipient’s use of the data to ensure appropriate security and for handling all user complaints. 

Annual data security management status

Article 17 places additional obligations on operators to report their annual data security management status to relevant authorities before December 15 of each year when:

  1. They process personal information of more than 100,000 users, or
  2. They process important data. 

Given that this draft regulation applies to passengers and pedestrians in addition to drivers, it would not take long for the threshold of 100,000 users to be met, especially for operators who manage a fleet of cars for rental or ride-hail. Additionally, since the definitions of personal information and important data are so broad, it is likely that many operators would trigger this reporting obligation. The obligations include recording the contact information of the person responsible for data security and handling user rights; recording relevant information about the scale and scope of data processing; recording with whom data is shared domestically; and other security conditions to be specified. If data is transferred overseas, there are additional obligations (Article 18). 

Violations and Penalties

Violation of the regulations would result in punishment in accordance with the “Network Security Law of the People’s Republic of China” and other laws and regulations. Operators may also be held criminally responsible. 

Conclusion 

China’s draft car privacy and security regulation provides relevant information for policymakers and others thinking carefully about privacy and data protection regarding cars. The draft regulation’s scope is very broad and includes many players in the mobility ecosystem beyond OEMs and suppliers (e.g., online car-hailing companies and insurance companies).

With regards to user rights, the draft regulation recognizes that other individuals, in addition to the driver, will have their personal information processed and provides data protection and user rights to these individuals (e.g., passengers and pedestrians). The draft regulation would apply to three broad categories of data (personal information, important data, and sensitive personal information).

In privacy and data protection laws from the EU to the US, we have continued to see different obligations arise depending on the type or sensitivity of data and how data is used. This underscores the need for organizations to have a complete data map; indeed, it is crucial that all operators in the connected and automated car ecosystem have a sound understanding of what data is being collected from which person and where that data is flowing. 

The draft regulation also highlights the importance of transparency and notice, as well as the challenges of consent and user control. It is a challenge to appropriately notify drivers, passengers, and pedestrians about all of the data types being collected by a vehicle.

Privacy and data protection laws will have a direct impact on the design, user experience, and even the enjoyment and safety of cars. It is crucial that all stakeholders are given the opportunity to provide feedback in the drafting of privacy and data protection laws that regulate data flows in the car ecosystem and that privacy professionals, engineers, and designers become much more comfortable working together to operationalize these rules. 

Image by Tayeb MEZAHDIA from Pixabay 

Check out other blogs in the Global Privacy series:

A New Era for Japanese Data Protection: 2020 Amendments to the APPI

The Right to Be Forgotten is Not Compatible with the Brazilian Constitution. Or is it?

India: Massive Overhaul of Digital Regulation with Strict Rules for Take-down of Illegal Content and Automated Scanning of Online Content

A New Era for Japanese Data Protection: 2020 Amendments to the APPI


Authors: Takeshige Sugimoto, Akihiro Kawashima, Tobyn Aaron from S&K Brussels LPC; Authors can be contacted at [email protected].

The recent amendments to Japan’s data protection law (the Act on the Protection of Personal Information, henceforth the ‘APPI‘) contain a number of new provisions certain to alter – and for many foreign businesses, transform – the ways in which companies conduct business in or with Japan. In addition to greatly expanding data subject rights, most notably, the amendments to the APPI (the ‘2020 Amendments‘): 

(i) eliminate all former restrictions on the APPI’s extraterritorial application; 

(ii) considerably heighten companies’ disclosure and due diligence obligations with respect to overseas data transfers; 

(iii) introduce previously unregulated categories of personal information (each with corresponding obligations for companies), including ‘pseudonymously processed information’ and ‘personally referable information’; and 

(iv) for the first time, mandate notifications for qualifying data breaches.

The 2020 Amendments will be enforced by the Personal Information Protection Commission of Japan (the “PPC”), pursuant to forthcoming PPC guidelines alongside the amended Enforcement Rules for the Act on the Protection of Personal Information (the ‘amended PPC Rules‘) and the amended Cabinet Order to Enforce the Act on the Protection of Personal Information (the ‘amended Cabinet Order‘) (both published on March 24, 2021).

As the 2020 Amendments are set to enter into force on April 1, 2022, Japanese and global companies that conduct business in or with Japan have just under one year to bring their operations into compliance. To facilitate such efforts, this blog post describes those provisions of the 2020 Amendments likely to have the greatest impact on businesses, as well as current events in Japan which will affect their implementation and should inform the manner in which companies address enforcement risks and compliance priorities.

1. LINE Data Transfers to China: A Wake-Up Call for Japan

To appreciate the effect that the 2020 Amendments will have on the Japanese data protection space, one must first consider the current political and societal contexts in Japan in which the 2020 Amendments will be introduced – and enforced – beginning with a recent incident of note involving LINE Corporation. 

In March 2021, headlines across Japan shocked locals: Japan-based messaging app LINE, actively used and trusted by approximately 86 million Japanese citizens, had been transferring users’ personal information, including names, IDs and phone numbers, to a Chinese affiliate. It is neither unusual nor unlawful for Japanese tech companies to outsource certain of their operations, including personal information processing, overseas. But for Japanese nationals, the LINE matter is different for a number of important reasons, not least of which is the Japanese population’s awareness of the Chinese Government’s broad access rights to personal data managed by private-sector companies in China, pursuant to China’s National Intelligence Law.

LINE is not only the most utilized messaging application in Japan; it also occupies a special place in the country’s historical and cultural consciousness. When Japan was hit by the 2011 earthquake, use of voice networks failed and email exchanges were delayed, as citizens struggled to communicate with, and confirm the safety of, their loved ones. And so, LINE was born – a simple messaging and online calling tool to serve as a communications hotline in case of emergency. A decade on, LINE has become the major – and for many the only – means of communication in Japan – particularly in today’s socially-distanced world.

For the Japanese Government too, LINE serves a crucial role: national – and municipality – level government bodies use LINE for official communications, including of sensitive personal information such as for COVID-19 health data surveying. News of LINE’s transfer of user data to China, including potential access by the Chinese Government, therefore horrified private citizens and public officials both.

On March 31, 2021, the PPC launched an official investigation into LINE and its parent company, Z Holdings, over their management of personal information. Until such investigation is concluded, whether and to what extent LINE violated the APPI (and in particular, its provisions governing third party access and international transfers) will remain uncertain. Regardless, the impact of this matter on the Japanese data privacy space is already unfolding. In late March, a number of high-ranking Japanese politicians (including Mr. Akira Amari, Chairperson of the Rule-Making Strategy Representative Coalition of the Liberal Democratic Party of Japan) sent the PPC and other relevant Government ministries strongly-worded messages urging immediate action with respect to LINE, and more broadly, calling for a risk assessment to be conducted vis-à-vis all personal information transfers to China by companies in Japan.

Several days later, Japanese media reported that the PPC had requested members of both the KEIDANREN (the Japan Business Federation, comprised of 1,444 representative companies in Japan) and the Japan Association of New Economy (comprised of 534 member companies in Japan), to report their personal information transfer practices involving China, and to detail the privacy protection measures in place with respect to such transfers. For any APPI violations revealed, the PPC will issue a recommendation potentially followed with an injunctive order, the latter of which carries a criminal penalty (including possible imprisonment) if not implemented.

Importantly, recent political support for stronger data protection measures extends beyond transfers to China. For instance, Mr. Amari has also reportedly called on the PPC to broadly limit permissible overseas transfers of personal information to those countries with data protection standards equivalent to the APPI (a limitation which, if implemented, would greatly surpass restrictions on transfer under both the current APPI and the 2020 Amendments).

Although the PPC has yet to respond, it is evident that both political and popular sentiment in Japan strongly favor enhanced protections for Japanese persons’ personal information. The inevitable outcome of such sentiment, which may be further amplified depending on the PPC’s forthcoming conclusions regarding the LINE matter, will be the increasingly stringent enforcement of the APPI and its 2020 Amendments, and potentially, further amendments thereto. As recent events in Japan demonstrate, this transformation has already begun to take effect. Companies conducting business in or with Japan, whether Japanese or foreign, should therefore pay close attention to the Japanese data privacy space over the course of this year.

2. Broadened Extraterritorial Reach and International Transfer Restrictions

For ‘Personal Information Handling Business Operators’ (henceforth ‘Operators‘, a term used in joint reference to controllers and processors, upon which the APPI imposes the same obligations) arguably the greatest impact of the 2020 Amendments will derive from their drastic revisions to Article 75 (extraterritoriality) and Article 24 (international transfer).

To date, the APPI’s extraterritorial reach has been limited to a handful of its articles, primarily those governing purpose limitation and lawful acquisition of personal information (‘PI‘) by overseas Operators. From April 2022, however, Article 75 of the amended APPI will, without exception, fully bind all private-sector overseas entities, regardless of their size, which process the PI, pseudonymously processed PI or anonymously processed PI of individuals who are in Japan, in connection with supplying goods or services thereto.

With respect to international transfers, Article 24 of the current legislation prohibits the transfer of PI to a ‘third party’ outside of Japan absent the data subject’s prior consent, unless (i) the recipient country has been white-listed by the PPC or (ii) the recipient third party upholds data protection standards equivalent to the APPI (in practice, these would generally be imposed contractually). Otherwise, international transfers may also be conducted pursuant to legal obligation or necessity (for the protection of human life, public interest or governmental cooperation, provided that for each, the data subject’s consent would be difficult to obtain). The APPI’s international transfer mechanisms generally conform to those prescribed by other global data protection regimes, loosely resembling the EU GDPR’s adequacy decisions (with respect to (i) above), and standard contractual clauses or binding corporate rules (with respect to (ii) above, although there are no PPC-provided contractual clauses, and non-binding arrangements such as the APEC CPBR System are PPC-approved).

The 2020 Amendments and amended PPC Rules do not modify the above transfer mechanisms, but they do narrow their scope in two key aspects. First, pursuant to Article 24(2) of the 2020 Amendments, transfers conducted on the basis of data subject consent will henceforth require the transferring Operator (on top of preexisting notification obligations) to inform the data subject in advance as to the name of the recipient country, and the levels of PI protection provided by both that country (assessed using an “appropriate and reasonable method”) and the recipient third party. Absent such information, data subject consent will be rendered uninformed and the transfer, invalid.

Of greater impact on the transferring Operator, however, will be the second modification (pursuant to Article 24(3) of the 2020 Amendments): in the event that an international transfer is conducted in reliance on contractually or otherwise imposed APPI data protection standards (the primary transfer mechanism on which Operators in Japan rely), such contractual safeguards alone are to be rendered insufficient. Going forward, the transferring Operator must, in addition to imposing APPI-equivalent obligations upon a recipient third party, (i) take “necessary action to ensure continuous implementation” of such obligations by the recipient; and (ii) inform the data subject, upon request, regarding the actions the Operator has taken.

With respect to (i) above, the amended PPC Rules interpret “necessary action to ensure continuous implementation” as requiring the transferring Operator to: (1) periodically check the implementation status and content of the APPI-equivalent measures by the recipient third party, and assess (by an “appropriate and reasonable method”) the existence of any foreign laws which might impact such implementation; (2) take necessary and appropriate actions to remedy any obstacles that are found; and (3) suspend all PI transfer to the third-party recipient, should its continuous implementation of the APPI-equivalent measures become difficult.

In addition, following receipt of a data subject’s request for information (pursuant to (ii) above), the amended PPC Rules specify that the transferring Operator must, without undue delay, inform the requesting data subject of each of the following:

(1)  the manner by which the APPI-equivalent measures were established by (or presumably with) the recipient third party (such as a data processing agreement or memorandum of understanding, or in the case of inter-group transfers, a privacy policy);

(2)  details of the APPI-equivalent measures implemented by the recipient third party;

(3)  the frequency and method by which the transferring Operator checked such implementation;

(4)  the name of the recipient country;

(5)  whether any foreign laws may affect the implementation of the APPI-equivalent measures, and a detailed overview of such laws;

(6)  whether any obstacles to implementation exist, and a detailed overview of such obstacles; and

(7)  the measures taken by the transferring Operator upon a finding of such obstacles.

Only if provision of the above items to the data subject is likely to 'significantly hinder' an Operator's business operations might that Operator refrain from such (complete or partial) disclosure.

In practice, Operators primarily rely upon contractual safeguards and consent (in that order) to transfer PI outside of Japan. Indeed, the PPC’s list of “adequacy decisions” on which transferring Operators may alternatively rely is significantly shorter than that of the European Commission: to date, only the UK and EEA members have been deemed adequate recipients of a PI transfer from Japan. Therefore, the onerous informational and due diligence obligations incumbent upon Operators from April 2022, which affect precisely these two transfer mechanisms, are certain to impact business operations in Japan. And, given the 2020 Amendments’ unbridled extraterritoriality, this burden will be equally felt overseas. Most importantly, in the wake of the March 2021 LINE matter, compliance with the current and amended APPI, and in particular its overseas transfers restrictions, will be at the top of the PPC’s enforcement priorities.

3. Mandatory Data Breach Notifications

The 2020 Amendments expand the types of security incidents subject to the APPI and, more notably, make data breach notifications mandatory (in contrast, notifications are made only on a 'best efforts' basis under the current legislation). Going forward, Operators will be required, pursuant to Article 22-2 of the 2020 Amendments and the amended PPC Rules, to promptly notify both the PPC and data subjects of the occurrence and/or potential occurrence of any data leakage, loss, damage or other similar situation which poses a 'high' risk to the rights and interests of data subjects (henceforth, a 'breach').

The types of breaches which meet this ‘high’ risk threshold, and thus trigger a notification obligation, are described by the amended PPC Rules as those which involve, or potentially involve, any of the following: (i) sensitive (‘special care-required’) PI; (ii) financial injury caused by unauthorized usage; (iii) a wrongful purpose(s) as the cause; or (iv) greater than 1,000 affected data subjects. However, a notification is not required in the event that the Operator implemented ‘necessary measures’ to safeguard the rights and interests of data subjects (such as sophisticated encryption).
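As a rough illustration of how these triggers combine, here is a minimal triage sketch; the field names and the structure of the check are assumptions made for illustration, not terms or logic drawn verbatim from the amended PPC Rules.

```python
# Illustrative sketch only: a hypothetical triage helper encoding the 'high risk'
# notification triggers described above. Field names are assumptions, not
# statutory terms from the amended PPC Rules.
from dataclasses import dataclass

@dataclass
class BreachAssessment:
    involves_special_care_required_pi: bool      # sensitive PI affected or possibly affected
    financial_injury_from_unauthorized_use: bool
    caused_by_wrongful_purpose: bool             # e.g. a targeted or malicious act
    affected_data_subjects: int
    protective_measures_in_place: bool           # e.g. sophisticated encryption of the data

def notification_required(b: BreachAssessment) -> bool:
    """Return True if the incident appears to meet a notification trigger and is
    not excused by 'necessary measures' such as strong encryption."""
    if b.protective_measures_in_place:
        return False
    return (
        b.involves_special_care_required_pi
        or b.financial_injury_from_unauthorized_use
        or b.caused_by_wrongful_purpose
        or b.affected_data_subjects > 1000
    )
```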

The amended PPC Rules also stipulate the required content for such notifications, although Operators are granted thirty days to provide details unknown at the time of the initial notice:

(1) overview of the breach;

(2) the types of PI affected or possibly affected by the breach;

(3) the number of data subjects affected or possibly affected by the breach;

(4) causes of the breach;

(5) existence and nature of secondary damage or risks thereof;

(6) status and nature of communications to affected data subjects;

(7) whether and how the breach has been publicized;

(8) measures implemented to prevent a recurrence; and

(9) any additional matters which may serve as a useful reference.

For those Operators ‘entrusted‘ by another Operator with the processing of PI, the 2020 Amendments provide a second option: in lieu of notifying the PPC and data subjects, such “entrusted” Operators may instead alert the “entrusting” Operator as to the breach. In practice, this likely equates to the EU GDPR’s requirement for processors to notify controllers in the event of a breach (although under the 2020 Amendments, direct accountability to the PPC and data subjects is still the default, including for “entrusted” Operators).

In the event of a breach, amended Article 30(5) additionally confers upon data subjects the right to request deletion, suspension of use and suspension of transfer, of affected PI.

4. Expansion of ‘Personal Information’ Concepts and Categories

Another major modification to the APPI is the expanded scope of the types of PI covered. In addition to eliminating the APPI’s differential treatment of temporary PI (retained for up to six months), the 2020 Amendments introduce a new category of information, ‘pseudonymously processed information‘, thereby bringing the Japanese data protection regime one additional step closer to the EU GDPR framework.

As currently drafted, the APPI recognizes only two major types of information: PI and anonymously processed information. Notably, the method of rendering anonymously processed information under the APPI – in contrast to the EU GDPR – need not be technically irreversible (unless such data originates in the UK or EEA and the transfer is based on the European Commission's adequacy decision on Japan, in which case special PPC-drafted Supplementary Rules do require irreversibility); instead, the APPI endeavors to preserve anonymity by requiring Operators to implement appropriate security measures to prevent reidentification.

Pseudonymously processed information is defined by the 2020 Amendments as information relating to an individual, which cannot identify such individual unless collated with additional information. The stated intention behind the drafters’ introduction of the pseudonymization process is to enable Operators to (i) utilize pseudonymously processed information for internal purposes including business analytics, the development of computational models, etc., and/or (ii) retain rather than delete, for potential future statistical analysis usage, pseudonymously processed information derived from PI which are no longer necessary for the original purpose(s) for which they were collected.

The 2020 Amendments and amended PPC Rules model the pseudonymization process on anonymization, requiring the removal of any (i) description, (ii) unique ‘personal identification code’ (as defined in the APPI), and (iii) information relating to the processing method performed to enable the removal of (i) and (ii) above. The immediate result is the creation, by separation, of two types of information: pseudonymously processed information and ‘removed’ PI, where the latter is the ‘key’ enabling reidentification.
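To make the separation concrete, the following minimal sketch shows one way the split into pseudonymously processed information and removed PI could be represented. Which fields count as identifying, and the use of a random token as the link between the two datasets, are assumptions for illustration rather than requirements of the 2020 Amendments.

```python
# Illustrative sketch only: splitting a record into pseudonymously processed
# information and the 'removed' PI that serves as the re-identification key.
# Field names and the choice of identifying fields are assumptions; the APPI
# and the amended PPC Rules define the actual scope of what must be removed.
import secrets

def pseudonymize(record, identifying_fields):
    """Split `record` into (pseudonymized_record, removed_pi)."""
    token = secrets.token_hex(8)  # opaque link replacing direct identifiers
    pseudonymized = {k: v for k, v in record.items() if k not in identifying_fields}
    pseudonymized["pseudonym"] = token
    removed_pi = {"pseudonym": token,
                  **{k: record[k] for k in identifying_fields if k in record}}
    return pseudonymized, removed_pi  # store and secure these separately

# Example: the purchase history alone no longer identifies the customer, while
# the removed PI must be held apart under enhanced security measures.
record = {"name": "Taro Yamada", "email": "taro@example.com", "purchases": ["book", "pen"]}
pseudo, removed = pseudonymize(record, {"name", "email"})
```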

The removed PI are treated as PI under the 2020 Amendments, and as such are subject to all of the same requirements and restrictions, although Operators in possession of both removed PI and pseudonymously processed information are additionally obligated to provide enhanced security in order to safeguard the integrity of the pseudonymously processed information (pursuant to the amended PPC Rules and amended Article 35-2(2)).

Notably, and in divergence from the EU GDPR approach to pseudonymously processed information, the 2020 Amendments’ rules governing treatment of such information vary according to the Operator involved. With respect to pseudonymously processed information handled by an Operator in simultaneous possession of the removed (and separately handled) PI, amended Article 35-2 stipulates the following specific requirements:

(i)         a prohibition of the collation of such information with other data, such as the removed PI, in a manner which could identify data subjects;

(ii)       strict application of the principles of purpose limitation and necessity thereto;

(iii)     a prohibition on usage of any contact information contained therein to phone, mail, email or otherwise contact data subjects;

(iv)      a prohibition of any transfer thereof to third parties (excluding, amongst others, “entrusted” Operators pursuant to Article 23(5)), unless such transfer is permitted by law or regulation (alternatively, the transfer of pseudonymously processed information by data subject consent is permissible if such information are instead handled as PI);

(v)       in the event of their acquisition or the intended alteration of their processing purpose, limitation of the Operator’s disclosure obligation to that of notice by publication;

(vi)      non-applicability of breach notification obligations pursuant to amended Article 22-2, provided that the removed PI are not also subject to the breach; and

(vii)    the elimination of data subjects’ rights regarding their pseudonymously processed information, with the exception of their Article 35 right to receive a prompt and appropriate response to their complaints (subject to the Operator’s best efforts).

In addition to the above, the APPI’s ‘general’ requirements pursuant to Articles 19-22 will apply to pseudonymously processed information handled by an Operator which simultaneously (but separately) possesses the removed PI. Such Operator will be required to:

(i) maintain accuracy of the pseudonymously processed information (for the duration their utilization remains necessary, after which their immediate deletion – alongside the deletion of the removed PI – is required, subject to the Operator’s best efforts);

(ii) implement necessary and appropriate security measures to prevent leakage, loss or damage of the pseudonymously processed information; and

(iii) exercise necessary and appropriate supervision over employees and entrusted persons handling the pseudonymously processed information.  

In contrast, with respect to pseudonymously processed information handed by an Operator which does not simultaneously possess the removed PI, amended Article 35-3 prohibits such Operator from acquiring the removed PI and/or collating the pseudonymously processed information with other information in order to identify data subjects, and limits the applicable provisions of the 2020 Amendments to the following:

(i) the implementation of necessary and appropriate security measures to prevent leakage (a simplified version of Article 20);

(ii) the exercise of necessary and appropriate supervision over employees and entrusted persons handling such information (pursuant to Articles 21 and 22);

(iii) a prohibition on usage of any contact information contained in the pseudonymously processed information to phone, mail, email or otherwise contact data subjects;

(iv) a prohibition of any transfer of such information to third parties (excluding, amongst others, “entrusted” Operators pursuant to Article 23(5)), unless such transfer is permitted by law or regulation (alternatively, the transfer of pseudonymously processed information by data subject consent is permissible if such information are instead handled as PI); and

(v) the elimination of data subjects’ rights regarding their pseudonymously processed information, with the exception of their Article 35 right to receive a prompt and appropriate response to their complaints (subject to the Operator’s best efforts).
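To make the distinction between these two configurations more concrete, below is a minimal, purely illustrative sketch (in TypeScript, with hypothetical names and types not drawn from the APPI or PPC materials) of how an Operator might split a record into pseudonymously processed information and the separately stored removed PI.

```typescript
// Hypothetical illustration of "pseudonymously processed information":
// identifying fields are removed and held separately, so the remaining record
// can no longer identify the data subject on its own.

interface CustomerRecord {
  customerId: string;
  name: string;
  email: string;
  purchaseHistory: string[];
}

interface PseudonymizedRecord {
  pseudonym: string;          // stable token replacing the customer ID
  purchaseHistory: string[];  // retained for internal analysis, purpose-limited
}

interface RemovedPI {
  pseudonym: string;
  name: string;
  email: string;
}

// An Operator holding BOTH outputs would fall under amended Article 35-2;
// an Operator receiving only the PseudonymizedRecord (and barred from
// acquiring the RemovedPI) would fall under amended Article 35-3.
function pseudonymize(record: CustomerRecord, pseudonym: string): {
  pseudonymized: PseudonymizedRecord;
  removed: RemovedPI;
} {
  return {
    pseudonymized: { pseudonym, purchaseHistory: [...record.purchaseHistory] },
    removed: { pseudonym, name: record.name, email: record.email },
  };
}

// Collating the two halves to re-identify the data subject is prohibited,
// as is using any contact details to reach the individual.
```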

In addition to pseudonymously processed information, the 2020 Amendments, pursuant to Article 26-2, introduce an additional, fourth category of information – namely, ‘personally referable information’. This fourth category includes, for example, cookies and purchase history: items which may not independently be linkable to a specific individual (and thus would not constitute PI) but which could become PI if transferred to an Operator in possession of additional, related data. To account for such qualifying transfers, the 2020 Amendments introduce a consent requirement (such as an opt-in cookie banner).
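As a rough illustration of how such a consent gate might be operationalized, the hypothetical sketch below checks whether cookie or purchase-history data may be transferred; the field names and the simplified test are assumptions for illustration only, not an official implementation of Article 26-2.

```typescript
// Hypothetical sketch: personally referable information (e.g. cookies,
// purchase history) may only be transferred to a recipient able to link it
// to an identified person if the data subject has opted in.

interface PersonallyReferableData {
  cookieId: string;
  purchaseHistory: string[];
}

interface TransferContext {
  recipientCanIdentifySubject: boolean; // recipient holds related data that would make this PI
  optInConsentObtained: boolean;        // e.g. via an opt-in cookie banner
}

function mayTransfer(data: PersonallyReferableData, ctx: TransferContext): boolean {
  // If the recipient cannot link the data to an individual, the data stays
  // outside the PI regime and the consent gate is not triggered.
  if (!ctx.recipientCanIdentifySubject) {
    return true;
  }
  // Otherwise the transfer requires the data subject's prior opt-in consent.
  return ctx.optInConsentObtained;
}
```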

In the case of overseas transfers, the transferring Operator must additionally inform the data subject as to the data protection system and safeguards of the recipient country and third party, as well as take ‘necessary action to ensure continuous implementation’ of APPI-equivalent safeguards by such recipient third party. Unlike for PI, the data subject does not have a right to request additional details regarding the ‘necessary action’ taken by the Operator with respect to an overseas transfer of personally referable information.

5. Preparing for the 2020 Amendments: Next Steps for Japanese and Foreign Operators

Companies conducting business in or with Japan should be mindful of the demanding nature of the 2020 Amendments to the APPI, and the stringency with which the PPC will seek to enforce them – particularly in view of the dismay caused by the LINE matter and the likelihood of efforts by the PPC to avoid similar incidents in the future.

Moreover, as the European Commission finalizes its first review of its 2019 adequacy decision on Japan, the PPC’s interpretative rules and enforcement trends may further intensify, with the aim of bringing Japanese data protection legislation closer to global standards, including the EU GDPR framework. Bearing this in mind, companies – including those not currently subject to the APPI, but which provide goods and/or services to individuals in Japan – would be wise to proactively conduct necessary modifications to their internal data protection policies and mechanisms, in order to ensure operational compliance with the amended APPI by April 2022.

For those Operators involved in international transfers of PI from Japan, the absence of a PPC-issued “standard contractual clauses” template makes any reliance on contractually imposed APPI-equivalent standards pursuant to amended Article 24(3) difficult, and uncertain from a compliance standpoint. However, one potential solution for Operators preparing to rely on this transfer mechanism for overseas PI transfers (excluding to the EEA or UK) may be the European Commission’s revised Standard Contractual Clauses (‘New SCCs‘), which are due to be published in early 2021. Subject to certain necessary modifications (of jurisdictional clauses and so forth), Operators may consider utilizing the New SCCs as a starting point, to bind recipient third parties to the stringent data protection standards and obligations of the 2020 Amendments.

Operators engaged in transferring PI should also be mindful of the 2020 Amendments’ onerous due diligence obligations with respect to overseas third parties. Prior to and during any cross-border engagements involving Japan-origin PI, Operators must actively ensure that their third-party recipients of such PI (including partners, vendors and subcontractors, as well as each of their respective partner, vendor and subcontractor recipients, and so forth) successfully implement, and continuously maintain, APPI-equivalent measures.

The 2020 Amendments’ enhanced disclosure obligations invite data subjects to hold Operators accountable with respect to the preventative and/or reactive measures Operators take – or fail to take – to protect their PI. Operators engaging foreign third parties should therefore consider reviewing and amplifying their due diligence of such entities, in addition to assessing the laws in each recipient country, in order to proactively identify and devise solutions to address potential obstacles to APPI adherence overseas.

The 2020 Amendments’ broadened extraterritorial application will also require non-Japanese companies to modify their internal data breach assessment and notification systems, to ensure that the PPC and data subjects in Japan are appropriately notified in the event of a qualifying breach; and to implement any necessary changes to their data subject communications platforms or data subject rights request forms, to enable data subjects in Japan to successfully exercise their amended APPI rights from April 1, 2022.
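For example, a non-Japanese company’s internal breach-triage logic might gain a Japan-specific branch along the following lines. This is only a schematic sketch: the qualifying-breach criteria used here (sensitive data, risk of financial harm, malicious cause, large scale) are assumptions for illustration, and the PPC’s guidelines should be consulted for the authoritative thresholds.

```typescript
// Hypothetical sketch of a Japan-specific branch in a breach-triage workflow.
// The criteria below are illustrative assumptions, not the PPC's official test.

interface BreachReport {
  affectedJapanResidents: number;
  involvesSensitiveData: boolean;
  riskOfFinancialHarm: boolean;
  causedByMaliciousAct: boolean;
}

interface NotificationPlan {
  notifyPPC: boolean;
  notifyDataSubjectsInJapan: boolean;
}

function triageJapanBreach(report: BreachReport): NotificationPlan {
  const qualifies =
    report.involvesSensitiveData ||
    report.riskOfFinancialHarm ||
    report.causedByMaliciousAct ||
    report.affectedJapanResidents > 1000; // assumed scale threshold

  return {
    notifyPPC: qualifies,
    notifyDataSubjectsInJapan: qualifies,
  };
}
```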

Once published, the PPC guidelines to the 2020 Amendments will further clarify (and potentially amplify) Operators’ compliance obligations with respect to each of the topics addressed in this blog post. The PPC’s findings in regard to LINE’s conduct may also have significant bearing on future APPI enforcement trends and risks. Therefore, in addition to implementing necessary measures to ensure operational compliance with the 2020 Amendments, companies processing covered PI and interested data privacy professionals should look out for these items over the next several months.   

Photo Credit: Ben Thai from Pixabay

For more Global Privacy thought leadership, see:

The right to be forgotten is not compatible with the Brazilian Constitution. Or is it?

India: Massive overhaul of digital regulation, with strict rules for take-down of illegal content and automated scanning of online content

Russia: New law requires express consent for making personal data available to the public and for any subsequent dissemination

FPF Hosted a CPDP 2021 Panel on US Privacy Law: The Beginning of a New Era


By Srivats Shankar, FPF Legal Intern

For the 14th annual Computers, Privacy and Data Protection conference, which took place between 27 and 29 January, 2021, FPF hosted a panel of experts to discuss “US Privacy Law: The Beginning of a New Era”, whose recording has just been published. The panel was moderated by Dr. Gabriela Zanfir-Fortuna, who was joined by Anupam Chander, Professor of Law at Georgetown University; Jared Bomberg, Senior Counsel for the Senate Committee on Commerce, Science and Transportation; Stacey Schesser, Office of California Attorney General; and Lydia Parnes, Partner at Wilson Sonsini’s Privacy and Cybersecurity Practice.

Broadly, the panel discussed the events that have prompted the shift towards privacy protection in the US in recent years, including the latest privacy law initiatives at the state and federal level. The discussion addressed how regulators are enforcing current laws and preparing for what’s to come, and how these developments may strengthen the Trans-Atlantic relationship in the digital age.

Professor Anupam Chander discussed the most consequential developments in US privacy law in recent years, which he identified as the passage of the California Consumer Privacy Act (CCPA) in 2018, the Supreme Court decision in Carpenter v. US, and the passage of the California Privacy Rights Act (CPRA) in 2020. According to Professor Chander, these developments will define the law of privacy over the next decade. 

Jared Bomberg discussed developments at the federal level in the United States, including the increasing focus by Congress on a comprehensive consumer privacy legislation. In the Senate, the two leading proposals are the Consumer Online Privacy Rights Act (COPRA), led by Senator Cantwell (D-WA) and the SAFE DATA Act, led by Senator Wicker (R-MS). Both bills have many cosponsors. Among these and other privacy bills, there is commonality regarding the right of access, correction, deletion, and portability. Meanwhile, key differences include the existence of a private right of action, the extent to which a federal law would preempt state laws, and the incorporation of fiduciary responsibilities. 

Stacey Schesser discussed privacy law in California, including the enactment of the CCPA and companies’ response to the law. Following the passage of the GDPR, many companies have come to support compliance with the CCPA. California, by virtue of its large population and major economy, has required many businesses across the United States to come into compliance with the CCPA. Schesser noted that her office has seen consumer frustration with opt-out mechanisms and deletion of personal information, alongside challenges arising from companies interpreting the law in different ways. However, she noted that many companies have complied with the CCPA within the 30-day notice and cure period after being notified of a violation. The initial rollout of Attorney General regulations has attempted to define the scope of enforcement, especially with reference to unique problems such as dark patterns.

Lydia Parnes discussed the enforcement of privacy law in the US. She observed that the Federal Trade Commission (FTC) has been fairly aggressive in exercising its enforcement powers. Commissioner Slaughter, who became Acting Chairwoman, has promoted the use of civil penalties in privacy cases. These enforcement actions have become “baseline norms” for companies to follow: they affect not just the individual company but the industry at large. Parnes noted that the FTC has limited resources and that enforcement by state agencies would be an effective way to facilitate change.

In the Q&A session, attendees raised issues of global interoperability, agency enforcement, and competition. Professor Anupam Chander emphasized the importance of the Schrems II decision, and the need for the US and Europe to come to another “modus vivendi.” This could be established without a “national” policy on privacy, to protect the information of foreign individuals whose data may be stored in the United States.

In response to a question about enforcement, Jared Bomberg emphasized that agencies like the FTC need more resources and that there is some acceptance that the FTC should continue enforcement in its existing fashion. He further noted that Attorneys General could also supplement and collaborate on enforcement. Bomberg also stressed the need for a private right of action. Market constraints also limit consumers’ ability to protect their rights, and the current lack of transparency around this power dynamic has created a situation where consumers do not understand what they have signed up for.

In closing, the panelists received a question on the likelihood of seeing a federal privacy law in the next two years. The consensus, as Bomberg put it, was that the odds could be both “100% and 0%.” 

Watch the full recording of the panel by following this link.

The right to be forgotten is not compatible with the Brazilian Constitution. Or is it?

Brazilian Supreme Federal Court

Author: Dr. Luca Belli

Dr. Luca Belli is Professor at FGV Law School, Rio de Janeiro, where he leads the CyberBRICS Project and the Latin American edition of the Computers, Privacy and Data Protection (CPDP) conference. The opinions expressed in his articles are strictly personal. The author can be contacted at [email protected].

The Brazilian Supreme Federal Court, or “STF” in its Brazilian acronym, recently took a landmark decision concerning the right to be forgotten (RTBF), finding that it is incompatible with the Brazilian Constitution. This attracted international attention to Brazil for a topic quite distant from the sadly frequent environmental, health, and political crises.

Readers should be warned that while reading this piece they might experience disappointment, perhaps even frustration, then renewed interest and curiosity, and finally – and hopefully – an increased open-mindedness, as they come to understand a new facet of the RTBF debate and how it is playing out at the constitutional level in Brazil.

This might happen because although the STF relies on the “RTBF” label, the content behind such label is quite different from what one might expect after following the same debate in Europe. From a comparative law perspective, this landmark judgment tellingly shows how similar constitutional rights play out in different legal cultures and may lead to heterogeneous outcomes based on the constitutional frameworks of reference.   

How it started: insolvency seasoned with personal data

As is well known, the first global debate on what it means to be “forgotten” in the digital environment arose in Europe, thanks to Mario Costeja Gonzalez, a Spaniard who, paradoxically, will never be forgotten by anyone due to his key role in the construction of the RTBF.

Costeja famously requested to deindex from Google Search information about himself that he considered to be no longer relevant. Indeed, when anyone “googled” his name, the search engine returned as top results links to articles reporting Costeja’s past insolvency as a debtor. Costeja argued that, despite having been convicted for insolvency, he had already paid his debt to justice and society many years before, and that it was therefore unfair for his name to continue to be associated ad aeternum with a mistake he made in the past.

The follow-up is well known in data protection circles. The case reached the Court of Justice of the European Union (CJEU), which, in its landmark Google Spain judgment (C-131/12), established that search engines are to be considered data controllers and, therefore, have an obligation to de-index information that is inappropriate, excessive, not relevant, or no longer relevant, when the data subject to whom such data refer requests it. This obligation was a consequence of Article 12(b) of Directive 95/46 on the protection of personal data, a pre-GDPR provision that set the basis for the European conception of the RTBF, providing for the “rectification, erasure or blocking of data the processing of which does not comply with the provisions of [the] Directive, in particular because of the incomplete or inaccurate nature of the data.”

The indirect consequence of this historic decision, and the debate it generated, is that we have all come to consider the RTBF in the terms set by the CJEU. However, what is essential to emphasize is that the CJEU approach is only one possible conception and, importantly, it was possible because of the specific characteristics of the EU legal and institutional framework. We have come to think that RTBF means the establishment of a mechanism like the one resulting from the Google Spain case, but this is the result of a particular conception of the RTBF and of how this particular conception should – or could – be implemented.

The fact that the RTBF has been predominantly analyzed and discussed through a European lens does not mean that this is the only possible perspective, nor that this approach is necessarily the best. In fact, the Brazilian conception of the RTBF is remarkably different from a conceptual, constitutional, and institutional standpoint. The main concern of the Brazilian RTBF is not how a data controller might process personal data (this is the part where frustration and disappointment may arise in the reader), but the STF itself leaves the door open to that possibility (this is the point where renewed interest and curiosity may arise).

The Brazilian conception of the right to be forgotten

Although the RTBF has acquired a fundamental relevance in digital policy circles, it is important to emphasize that, until recently, Brazilian jurisprudence had mainly focused on the juridical need for “forgetting” only in the analogue sphere. Indeed, before the CJEU Google Spain decision, the Brazilian Superior Court of Justice, or “STJ” – the other Brazilian apex court, which deals with the interpretation of federal law, unlike the previously mentioned STF, which deals with constitutional matters – had already considered the RTBF as a right not to be remembered, affirmed by the individual vis-à-vis traditional media outlets.

This interpretation first emerged in the “Candelaria massacre” case, a gloomy page of Brazilian history, featuring a multiple homicide perpetrated in 1993 in front of the Candelaria Church, a beautiful colonial Baroque building in downtown Rio de Janeiro. The gravity and the particularly striking setting of the massacre led Globo TV, a leading Brazilian broadcaster, to feature the massacre in a TV show called Linha Direta. Importantly, the show included in its narration details about a man suspected of being one of the perpetrators of the massacre but later acquitted.

Understandably, the man filed a complaint arguing that the inclusion of his personal information in the TV show was causing him severe emotional distress, while also reviving suspicions against him for a crime of which he had been acquitted many years before. In September 2013, further to Special Appeal No. 1,334,097, the STJ agreed with the plaintiff, establishing the man’s “right not to be remembered against his will, specifically with regard to discrediting facts.” This is how the RTBF was born in Brazil.

Importantly for our present discussion, this interpretation was not born out of digital technology and does not concern the delisting of specific types of information from search engine results. In Brazilian jurisprudence the RTBF has been conceived as a general right to effectively limit the publication of certain information. The man included in the Globo reportage had been acquitted many years before; hence he had a right to be “let alone,” as Warren and Brandeis would argue, and not to be remembered for something he had not even committed. The STJ therefore constructed its vision of the RTBF based on article 5.X of the Brazilian Constitution, enshrining the fundamental right to intimacy and preservation of image, two fundamental features of privacy. 

Hence, although they utilize the same label, the STJ and CJEU conceptualize two remarkably different rights, when they refer to the RTBF. While both conceptions aim at limiting access to specific types of personal information, the Brazilian conception differs from the EU one on at least three different levels.

First, their constitutional foundations. While both conceptions are intimately intertwined with individuals’ informational self-determination, the STJ built the RTBF on the protection of privacy, honour and image, whereas the CJEU built it upon the fundamental right to data protection, which in the EU framework is a standalone fundamental right. Conspicuously, in the Brazilian constitutional framework an explicit right to data protection did not exist at the time of the Candelaria case, and it has only been in the process of being recognized since 2020.

Secondly, and consequently, the original goal of the Brazilian conception of the RTBF was not to regulate how a controller should process personal data but rather to protect the private sphere of the individual. In this perspective, the goal of the STJ was not – and could not have been – to regulate the deindexation of specific incorrect or outdated information, but rather to regulate the deletion of “discrediting facts” so that the private life, honour and image of the individual would not be illegitimately violated.

Finally, yet extremely importantly, the fact that, at the time of the decision, an institutional framework dedicated to data protection was simply absent in Brazil did not allow the STJ the same leeway as the CJEU. The EU Justices enjoyed the privilege of delegating the implementation of the RTBF to search engines because such implementation would receive guidance from, and be subject to the review of, a well-consolidated system of European Data Protection Authorities. At the EU level, DPAs are expected to guarantee a harmonious and consistent interpretation and application of data protection law. At the Brazilian level, a DPA was only established in late 2020 and announced its first regulatory agenda only in late January 2021.

This latter point is far from trivial and, in the opinion of this author, an essential preoccupation that might have driven the subsequent RTBF conceptualization of the STJ.

The stress-test

The soundness of the Brazilian definition of the RTBF, however, was going to be tested again by the STJ, in the context of another grim and unfortunate page of Brazilian history, the Aida Curi case. This case originated with the sexual assault and subsequent homicide of the young Aida Curi, in Copacabana, Rio de Janeiro, on the evening of 14 July 1958. At the time, the case attracted considerable media attention, not only because of its mysterious circumstances and the young age of the victim, but also because the perpetrators of the sexual assault tried to disguise it by throwing the victim’s body from the rooftop of a very tall building on the Avenida Atlantica, the fancy avenue right in front of the Copacabana beach.

Needless to say, Globo TV considered the case as a perfect story for yet another Linha Direta episode. Aida Curi’s relatives, far from enjoying the TV show, sued the broadcaster for moral damages and demanded the full enjoyment of their RTBF – in the Brazilian conception, of course. According to the plaintiffs, it was indeed not conceivable that, almost 50 years after the murder, Globo TV could publicly broadcast personal information about the victim – and her family – including the victim’s name and address, in addition to unauthorized images, thus bringing back a long-closed and extremely traumatic set of events.

The brothers of Aida Curi claimed reparation against Rede Globo, but the STJ decided that the time passed was enough to mitigate the effects of anguish and pain on the dignity of Aida Curi’s relatives, while arguing that it was impossible to report the events without mentioning the victim. This decision was appealed by Ms Curi’s family members, who demanded, by means of Extraordinary Appeal No. 1,010,606, that the STF recognize “their right to forget the tragedy.” It is interesting to note that the way the demand is framed in this Appeal tellingly exemplifies the Brazilian conception of “forgetting” as erasure and prohibition of divulgation.

At this point, the STF identified in the Appeal an interest in debating the issue “with general repercussion”, a peculiar judicial procedure that the Court can utilize when it recognizes that a given case has particular relevance and transcendence for the Brazilian legal and judicial system. Indeed, the decision of a case with general repercussion binds not only the parties but also establishes jurisprudence that must be followed by all lower courts.

In February 2021, the STF finally deliberated on the Aida Curi case, establishing that “the idea of a right to be forgotten is incompatible with the Constitution, thus understood as the power to prevent, due to the passage of time, the disclosure of facts or data that are true and lawfully obtained and published in analogue or digital media” and that “any excesses or abuses in the exercise of freedom of expression and information must be analyzed on a case-by-case basis, based on constitutional parameters – especially those relating to the protection of honor, image, privacy and personality in general – and the explicit and specific legal provisions existing in the criminal and civil spheres.”

In other words, what the STF has deemed incompatible with the Federal Constitution is a specific interpretation of the Brazilian version of the RTBF. What is not compatible with the Constitution is to argue that the RTBF allows one to prohibit the publication of true, lawfully obtained facts. At the same time, however, the STF clearly states that it remains possible for any court of law to evaluate, on a case-by-case basis and according to constitutional parameters and existing legal provisions, whether a specific episode can justify the use of the RTBF to prohibit the divulgation of information that undermines the dignity, honour, privacy, or other fundamental interests of the individual.

Hence, while explicitly prohibiting the use of the RTBF as a general right to censorship, the STF leaves room for the use of the RTBF for delisting specific personal data in an EU-like fashion, while specifying that this must be done with guidance from the Constitution and the law.

What next?

Given the core differences between the Brazilian and EU conception of the RTBF, as highlighted above, it is understandable in the opinion of this author that the STF adopted a less proactive and more conservative approach. This must be especially considered in light of the very recent establishment of a data protection institutional system in Brazil.

It is understandable that the STF might have preferred to de facto delegate to the courts the interpretation of when and how the RTBF can be rightfully invoked, according to constitutional and legal parameters. First, in the Brazilian interpretation, the RTBF fundamentally insists on the protection of privacy – i.e. the private sphere of an individual – and, while admitting the existence of data protection concerns, these are not the main ground on which the Brazilian conception of the RTBF relies.

Such caution is also understandable in a country and a region where the social need to remember and shed light on a recent history marked by dictatorships, well-hidden atrocities, and opacity outweighs the legitimate individual interest in prohibiting the circulation of truthful and legally obtained information. In the digital sphere, however, the RTBF quintessentially translates into an extension of informational self-determination, which the Brazilian General Data Protection Law, better known as “LGPD” (Law No. 13.709/2018), enshrines in its article 2 as one of the “foundations” of data protection in the country, and whose fundamental character was recently recognized by the STF itself.

In this perspective, it is useful to recall the dissenting opinion of Justice Luiz Edson Fachin in the Aida Curi case, stressing that “although it does not expressly name it, the Constitution of the Republic, in its text, contains the pillars of the right to be forgotten, as it celebrates the dignity of the human person (article 1, III), the right to privacy (article 5, X) and the right to informational self-determination – which was recognized, for example, in the disposal of the precautionary measures of the Direct Unconstitutionality Actions No. 6,387, 6,388, 6,389, 6,390 and 6,393, under the rapporteurship of Justice Rosa Weber (article 5, XII).”

It is the opinion of this author that the Brazilian debate on the RTBF in the digital sphere would be clearer if its dimension as a right to deindexation of search engine results were clearly regulated. It is understandable that the STF did not dare to regulate this, given its interpretation of the RTBF and the very embryonic data protection institutional framework in Brazil. However, given the increasing datafication we are currently witnessing, it would be naïve not to expect that further RTBF claims concerning the digital environment and, specifically, the way search engines process personal data will keep emerging.

The fact that the STF has left the door open to applying the RTBF in the case-by-case analysis of individual claims may reassure the reader regarding the primacy of constitutional and legal arguments in such analysis. It may also lead the reader to – very legitimately – wonder whether such a choice is de facto the most efficient way to deal with the potentially enormous number of claims, and the most coherent one, given the margin of appreciation and interpretation that each different court may have.  

An informed debate, clearly highlighting the existing options and the most efficient and just ways to implement them in the Brazilian context, would be beneficial. This will likely be one of the goals of the upcoming Latin American edition of the Computers, Privacy and Data Protection conference (CPDP LatAm), which will take place in July, entirely online, and will aim to explore the most pressing issues for Latin American countries regarding privacy and data protection.

Photo Credit: “Brasilia – The Supreme Court” by Christoph Diewald is licensed under CC BY-NC-ND 2.0

If you have any questions about engaging with The Future of Privacy Forum on Global Privacy and Digital Policymaking contact Dr. Gabriela Zanfir-Fortuna, Senior Counsel, at [email protected].

India: Massive overhaul of digital regulation, with strict rules for take-down of illegal content and automated scanning of online content


On February 25, the Indian Government notified and published the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. These rules mirror the EU’s Digital Services Act (DSA) proposal to some extent: they propose a tiered approach based on the scale of the platform, and they touch on intermediary liability, content moderation, take-down of illegal content from online platforms, as well as internal accountability and oversight mechanisms. However, they go beyond such rules by adding a Code of Ethics for digital media, similar to the Code of Ethics classic journalistic outlets must follow, and by proposing an “online content” labelling scheme for content that is safe for children.

The Code of Ethics applies to online news publishers, as well as intermediaries that “enable the transmission of news and current affairs”. This part of the Guidelines (the Code of Ethics) has already been challenged in the Delhi High Court by news publishers this week. 

The Guidelines have raised several types of concerns in India, from their impact on freedom of expression, to their impact on the right to privacy through the automated scanning of content and the imposed traceability of even end-to-end encrypted messages so that the originator can be identified, to the choice of the Government to use executive action for such profound changes. The Government, through the two Ministries involved in the process, is scheduled to testify before the Standing Committee on Information Technology of the Parliament on March 15.

New obligations for intermediaries

“Intermediaries” include “websites, apps and portals of social media networks, media sharing websites, blogs, online discussion forums, and other such functionally similar intermediaries” (as defined in rule 2(1)(m)).

Here are some of the most important rules laid out in Part II of the Guidelines, dedicated to Due Diligence by Intermediaries:

“Significant social media intermediaries” have enhanced obligations

“Significant social media intermediaries” are social media services with a number of users above a threshold which will be defined and notified by the Central Government. This concept is similar to the DSA’s “Very Large Online Platform”; however, the DSA includes clear criteria in the proposed act itself on how to identify a VLOP.

As for “Significant Social Media Intermediaries” in India, they will have additional obligations (similar to how the DSA proposal in the EU scales obligations): 

These “Guidelines” seem to have the legal effect of a statute, and they are being adopted through executive action to replace Guidelines adopted in 2011 by the Government, under powers conferred to it in the Information Technology Act 2000. The new Guidelines would enter into force immediately after publication in the Official Gazette (no information as to when publication is scheduled). The Code of Ethics would enter into force three months after the publication in the Official Gazette. As mentioned above, there are already some challenges in Court against part of these rules.

Get smart on these issues and their impact

Check out these resources: 

Another jurisdiction to keep your eyes on: Australia

Also note that, while the European Union is starting its heavy and slow legislative machine by appointing Rapporteurs in the European Parliament and having first discussions on the DSA proposal in the relevant working group of the Council, another country is set to soon adopt digital content rules: Australia. The Government is currently considering an Online Safety Bill, which was open to public consultation until mid-February and which would also include a “modernised online content scheme”, creating new classes of harmful online content, as well as take-down requirements for image-based abuse, cyber abuse and harmful content online, requiring removal within 24 hours of receiving a notice from the eSafety Commissioner.

If you have any questions about engaging with The Future of Privacy Forum on Global Privacy and Digital Policymaking contact Dr. Gabriela Zanfir-Fortuna, Senior Counsel, at [email protected].

Russia: New Law Requires Express Consent for Making Personal Data Available to the Public and for Any Subsequent Dissemination

Authors: Gabriela Zanfir-Fortuna and Regina Iminova

Source: Pixabay.com, by Opsa

Amendments to the Russian general data protection law (Federal Law No. 152-FZ on Personal Data) adopted at the end of 2020 enter into force today (Monday, March 1st), with some of them having their effective date postponed until July 1st. The changes are part of a legislative package that also amends the Criminal Code to criminalize disclosure of personal data about “protected persons” (several categories of government officials). The amendments to the data protection law introduce consent-based restrictions for any organization or individual that initially publishes personal data, as well as for those that collect and further disseminate personal data that has been made publicly available on the basis of consent, such as on social media, blogs or any other sources. 

The amendments:

The potential impact of the amendments is broad. The new law prima facie affects social media services, online publishers, streaming services, bloggers, or any other entity who might be considered as making personal data available to “an indefinite number of persons.” They now have to collect and prove they have separate consent for making personal data publicly available, as well as for further publishing or disseminating PDD which has been lawfully published by other parties originally.

Importantly, the new provisions in the Personal Data Law dedicated to PDD do not include any specific exception for processing PDD for journalistic purposes. The only exception recognized is processing PDD “in the state and public interests defined by the legislation of the Russian Federation”. The Explanatory Note accompanying the amendments confirms that consent is the exclusive lawful ground that can justify dissemination and further processing of PDD and that the only exception to this rule is the one mentioned above, for state or public interests as defined by law. It is thus expected that the amendments might create a chilling effect on freedom of expression, especially when also taking into account the corresponding changes to the Criminal Code.

The new rules seem to be part of a broader effort in Russia to regulate information shared online and available to the public. In this context, it is noteworthy that other amendments, to Law 149-FZ on Information, IT and Protection of Information, solely impacting social media services, were also passed into law in December 2020 and already entered into force on February 1st, 2021. Social networks are now required to monitor content and to “restrict access immediately” to content from users that post information about state secrets, justification of terrorism or calls to terrorism, pornography, the promotion of violence and cruelty, obscene language, the manufacturing of drugs, information on methods to commit suicide, as well as calls for mass riots. 

Below we provide a closer look at the amendments to the Personal Data Law that entered into force on March 1st, 2021. 

A new category of personal data is defined

The new law defines a category of “personal data allowed by the data subject to be disseminated” (PDD), the definition being added as paragraph 1.1 to Article 3 of the Law. This new category of personal data is defined as “personal data to which an unlimited number of persons have access, and which is provided by the data subject by giving specific consent for the dissemination of such data, in accordance with the conditions in the Personal Data Law” (unofficial translation). 

The old law had a dedicated provision that referred to how this type of personal data could be lawfully processed, but it was vague and offered almost no details. In particular, Article 6(10) of the Personal Data Law (the provision corresponding to Article 6 GDPR on lawful grounds for processing) provided that processing of personal data is lawful when the data subject gives access to their personal data to an unlimited number of persons. The amendments abrogate this paragraph, before introducing an entirely new article containing a detailed list of conditions for processing PDD only on the basis of consent (the new Article 10.1).

Perhaps in order to avoid misunderstanding on how the new rules for processing PDD fit with the general conditions on lawful grounds for processing personal data, a new paragraph 2 is introduced in Article 10 of the law, which details conditions for processing special categories of personal data, to clarify that processing of PDD “shall be carried out in compliance with the prohibitions and conditions provided for in Article 10.1 of this Federal Law”.

Specific, express, unambiguous and separate consent is required

Under the new law, “data operators” that process PDD must obtain specific and express consent from data subjects to process personal data, which includes any use or dissemination of the data. Notably, under Russian law, “data operators” designate both controllers and processors in the sense of the General Data Protection Regulation (GDPR), or businesses and service providers in the sense of the California Consumer Privacy Act (CCPA).

Specifically, under Article 10.1(1), the data operator must ensure that it obtains a separate consent dedicated to dissemination, other than the general consent for processing personal data or other type of consent. Importantly, “under no circumstances” may individuals’ silence or inaction be taken to indicate their consent to the processing of their personal data for dissemination, under Article 10.1(8).

In addition, the data subject must be provided with the possibility to select the categories of personal data which they permit for dissemination. Moreover, the data subject also must be provided with the possibility to establish “prohibitions on the transfer (except for granting access) of [PDD] by the operator to an unlimited number of persons, as well as prohibitions on processing or conditions of processing (except for access) of these personal data by an unlimited number of persons”, per Article 10.1(9). It seems that these prohibitions refer to specific categories of personal data provided by the data subject to the operator (out of a set of personal data, some categories may be authorized for dissemination, while others may be prohibited from dissemination).
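A minimal sketch of how an operator might record such a dissemination consent, with per-category permissions and data-subject-imposed prohibitions, could look as follows; the structure and field names are purely illustrative assumptions and are not taken from the law or from Roskomnadzor materials.

```typescript
// Hypothetical record of a data subject's separate consent to dissemination (PDD),
// distinct from the general consent to processing. Silence or inaction can never
// count as consent, so every permission must be an explicit opt-in.

type DataCategory = "name" | "photo" | "employer" | "contactDetails";

interface DisseminationConsent {
  dataSubjectId: string;
  givenAt: Date;                        // explicit, affirmative act required
  categoriesPermitted: DataCategory[];  // categories the subject allows to be disseminated
  transferProhibited: boolean;          // subject's ban on transfer (other than granting access)
  processingConditions: string[];       // further conditions or prohibitions set by the subject
  withdrawn?: Date;                     // dissemination must stop once an opt-out is received
}

function mayDisseminate(consent: DisseminationConsent, category: DataCategory): boolean {
  // Dissemination is lawful only for explicitly permitted categories
  // and only while the consent has not been withdrawn.
  return consent.withdrawn === undefined && consent.categoriesPermitted.includes(category);
}
```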

If the data subject discloses personal data to an unlimited number of persons without providing the operator with the specific consent required by the new law, not only the original operator, but all subsequent persons or operators that processed or further disseminated the PDD, have the burden of proof to “provide evidence of the legality of subsequent dissemination or other processing”, under Article 10.1(2), which seems to imply that they must prove consent was obtained for dissemination (probatio diabolica in this case). According to the Explanatory Note to the amendments, the intention was indeed to shift the burden of proof of the legality of processing PDD from data subjects to data operators, since the Note makes specific reference to the fact that, before the amendments, the burden of proof rested with data subjects.

If the separate consent for dissemination of personal data is not obtained by the operator, but other conditions for lawfulness of processing are met, the personal data can be processed by the operator, but without the right to distribute or disseminate them – Article 10.1(4). 

A Consent Management Platform for PDD, managed by the Roskomnadzor

The express consent to process PDD can be given directly to the operator or through a special “information system” (which seems to be a consent management platform) of the Roskomnadzor, according to Article 10.1(6). The provisions related to setting up this consent platform for PDD will enter into force on July 1st, 2021. The Roskomnadzor is expected to provide technical details about the functioning of this consent management platform and guidelines on how it is supposed to be used in the following months. 

Absolute right to opt-out of dissemination of PDD

Notably, the dissemination of PDD can be halted at any time, on request of the individual, regardless of whether the dissemination is lawful or not, according to Article 10.1(12). This type of request is akin to a withdrawal of consent. The provision includes some requirements for the content of such a request: for instance, the data subject must include their contact information and list the personal data whose processing should be terminated. Consent to the processing of the personal data concerned is terminated once the operator receives the opt-out request – Article 10.1(13).

A request to opt-out of having personal data disseminated to the public when this is done unlawfully (without the data subject’s specific, affirmative consent) can also be made through a Court, as an alternative to submitting it directly to the data operator. In this case, the operator must terminate the transmission of or access to personal data within three business days from when such demand was received or within the timeframe set in the decision of the court which has come into effect – Article 10.1(14).
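Purely as an illustration of these timelines, the sketch below computes when an operator would have to stop dissemination after receiving an opt-out demand directly or via a court decision. The business-day helper is deliberately simplified (it ignores Russian public holidays) and all names are hypothetical.

```typescript
// Hypothetical deadline calculation for halting dissemination of PDD after an
// opt-out. A demand received directly must be honoured within three business
// days; a court-ordered demand follows the deadline set in the decision.

function addBusinessDays(start: Date, days: number): Date {
  // Simplified: skips weekends only, ignores public holidays.
  const d = new Date(start);
  let remaining = days;
  while (remaining > 0) {
    d.setDate(d.getDate() + 1);
    const day = d.getDay();
    if (day !== 0 && day !== 6) {
      remaining--;
    }
  }
  return d;
}

function terminationDeadline(received: Date, courtDeadline?: Date): Date {
  // If the request came through a court, the court's deadline governs;
  // otherwise, three business days from receipt of the demand.
  return courtDeadline ?? addBusinessDays(received, 3);
}
```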

A new criminal offense: The prohibition on disclosure of personal data about protected persons

Sharing personal data or information about intelligence officers and their personal property is now a criminal offense under the new rules, which amended the Criminal Code. The law obliges any operators of personal data, including government departments and mobile operators, to ensure the confidentiality of personal information concerning protected persons, their relatives, and their property. Under the new law, “protected persons” include employees of the Investigative Committee, the FSB, the Federal Protective Service, the National Guard, the Ministry of Internal Affairs and the Ministry of Defense, as well as judges, prosecutors, investigators, law enforcement officers and their relatives. Moreover, the list of protected persons can be further detailed by the head of the relevant state body in which the specified persons work.

Previously, the law allowed for the temporary prohibition of the dissemination of personal data of protected persons only in the event of imminent danger in connection with official duties and activities. The new amendments make it possible to take protective measures in the absence of a threat of encroachment on their life, health and property.

What to watch next: New amendments to the general Personal Data Law are on their way in 2021

There are several developments to follow in this fast changing environment. First, at the end of January, the Russian President gave the government until August 1 to create a set of rules for foreign tech companies operating in Russia, including a requirement to open branch offices in the country.

Second, a bill (No. 992331-7) proposing new amendments to the overall framework of the Personal Data Law (No. 152-FZ) was introduced in July 2020 and was the subject of a Resolution that passed in the State Duma on February 16, allowing for a period for amendments to be submitted, until March 16. The bill is on the agenda for a potential vote in May. The changes would entail expanding the possibility to obtain valid consent through other unique identifiers which are currently not accepted by the law, such as unique online IDs, changes to purpose limitation, a possible certification scheme for effective methods to erase personal data and new competences for the Roskomnadzor to establish requirements for deidentification of personal data and specific methods for effective deidentification.

If you have any questions on Global Privacy and Data Protection developments, contact Gabriela Zanfir-Fortuna at [email protected]

Schrems II: Article 49 GDPR derogations may not be so narrow and restrictive after all?

by Rob van Eijk and Gabriela Zanfir-Fortuna

On January 28, 2021, the German Federal Ministry of the Interior organized a conference celebrating the 40th Data Protection Day, the date on which the Council of Europe’s data protection convention, known as “Convention 108”, was opened for signature. One of the invited speakers and panelists was Prof. Dr. Dr. von Danwitz, the judge-rapporteur in the CJEU Schrems I Case (C‑362/14), the CJEU Schrems II Case (C‑311/18), and the CJEU Case La Quadrature du Net and Others (C-511/18). He spoke at length about the Schrems II judgment and its significance for cementing the importance of the fundamental right to personal data protection, and generously replied to specific questions about the options now available to companies to lawfully transfer personal data from the European Union to the United States. In particular, his comments on the possibility to rely on Article 49 GDPR derogations were noteworthy, as they seemed to contradict the narrow approach taken by the European Data Protection Board in its interpretation.

The recording of the keynote by Prof. Dr. Dr. von Danwitz starts here (1h12m).
The recording of the panel intervention by von Danwitz starts here (2h17m).

Please note that all quotes used in this article are a translation from German.

The broader context: A fundamental right that still needs to be taken seriously

In his introductory remarks providing context to the Schrems II judgment of the CJEU, judge von Danwitz admitted that he had “quite a lot of sleepless nights over this case”. He added that, “however, it is not the task of a court to find the least problematic solution to a case”. 

According to the judge, the awe that the judgment created for some comes from the fact that the consequences of having the right to the protection of personal data elevated as a fundamental right in the EU Charter and in the Treaty of Lisbon “are still not fully understood and recognized today”. He added that “the EU attaches great importance to the protection of these rights (respect for private life under Article 7 and protection of personal data under Article 8 of the Charter – n.), especially in comparison to the partial or rudimentary textual references in national constitutions”. 

Judge von Danwitz showed that he is fully aware of the breadth and the pervasiveness of cross-border data transfers in today’s digitalized society: “data transfers to third countries are not rare incidents. It is common practice to outsource certain data-based services to third countries. This may be economically useful and desirable for enterprises, but it should not compromise the level of protection of personal data”. 

He explained that “the necessary balance between the legitimate interests of economic operators and the promotion of international trade on the one hand, and the right to the protection of personal data on the other, is reflected in the legal requirements that an essentially equivalent level of protection of personal data must be ensured when data is transferred to third countries”.

The judge also acknowledged that some countries may not ensure this high level of protection of personal data at all, and that this fact “may have economic disadvantages for companies in individual cases”. He explained that “nevertheless, this is the necessary consequence of the fundamental decision taken by the European Union to ensure a high level of protection of personal data”. As he ultimately framed it, the entire discussion “is about the much more fundamental question of what is the society we want to live in and our aspiration to shape this society in line with European law and values”.

No legal void: The Court saw Article 46 safeguards and Article 49 derogations as filling the gap

First in his keynote and later in the panel, judge von Danwitz explained that the Court decided to annul the Privacy Shield with immediate effect – as opposed to allowing for a grace period – because there was no legal void in its absence. He mentioned that GDPR Article 46 safeguards and Article 49 derogations “cover the absence of an adequacy decision”. 

During the panel, the discussion centered around the following question: “How, for example, am I as an entrepreneur supposed to implement these requirements in data transfer after Schrems II? Which guidelines apply?”

In response to the question, von Danwitz remarked: “The question is very legitimate. The question is this, as an enterprise, do I have to transfer data to third countries for which there is no adequacy decision by the European Commission? Yes or no? That’s the fundamental question.” 

“And if this question is to be answered to the effect that [transfers] are absolutely necessary, then I need to use, e.g., standard contractual clauses. At least that’s the standard approach. (…) If standard contractual clauses are not possible, because my process in the third country cannot comply with these clauses under the applicable national law, then, of course, there’s the question of the transfer of data by relying on [the derogations for specific situations in] Article 49 GDPR.” 

Von Danwitz mentioned that Article 49 derogations are in particular an option for intra-group transfers and that they should be more attentively explored. “(…) In my opinion, the opportunities granted by Article 49 have not been fully explored yet. I believe they are not so narrow that they restrict any kind of transfer, especially when we’re talking about transfers within one corporation or group of companies.” 

Von Danwitz indicated that the conditions in the text of the GDPR must in any case be met. For example, in the case of the derogation relying on the necessity to enter into a contract or for the performance of a contract, the first test is to ask: “is the transfer of that data really required? Is it really necessary to fulfill the contract?” He added: “In my opinion, this gives people sufficient scope of action”.

The judge did not go into further detail, and clarified that questions related to the scope of the Article 49 derogations might be posed to the Court in the future and that he did not want to “preempt any judgments by making a statement now”. 

Although von Danwitz made the observations in a personal capacity, they mark a new opening in the discussion on data transfers which some refer to as a proverbial Gordian Knot. Furthermore, the remarks are important against the background of the European Data Protection Board (EDPB) Recommendations 01/2020 on measures that supplement transfer tools to ensure compliance with the EU level of protection of personal data. The public consultation period for the recommendations has ended and the EDPB has started processing the comments submitted by stakeholders.

You can find the recordings here: in German, English, and French. The intervention in German by Prof. Dr. Dr. von Danwitz on the exploration of the Article 49 derogations starts at this bookmark: 02h23m12s.

To listen to the keynote by Prof. Dr. Dr. von Danwitz, click here (1h12m).
To listen to the panel intervention by von Danwitz, click here (2h17m).

Photo credit: arbyreed CC BY-NC-SA 2.0.

To learn more about FPF in Europe, please visit fpf.org/about/eu.