FPF Report: Automated Decision-Making Under the GDPR – A Comprehensive Case-Law Analysis

On May 17, the Future of Privacy Forum launched a comprehensive Report analyzing case-law under the General Data Protection Regulation (GDPR) as applied to real-life cases involving Automated Decision-Making (ADM). The Report is informed by extensive research covering more than 70 court judgments, decisions from Data Protection Authorities (DPAs), specific guidance, and other policy documents issued by regulators.

The GDPR has a particular provision applicable to decisions based solely on automated processing of personal data, including profiling, which produce legal effects concerning an individual or similarly significantly affect that individual: Article 22. This provision enshrines one of the “rights of the data subject”, namely the right not to be subject to decisions of that nature (i.e., ‘qualifying ADM’), which has been interpreted by DPAs as a prohibition rather than a prerogative that individuals can exercise.

However, the GDPR’s protections for individuals against forms of automated decision-making (ADM) and profiling go significantly beyond Article 22. In this respect, there are several safeguards that apply to such data processing activities, notably the ones stemming from the general data processing principles in Article 5, the legal grounds for processing in Article 6, the rules on processing special categories of data (such as biometric data) under Article 9, specific transparency and access requirements regarding ADM under Articles 13 to 15, and the duty to carry out data protection impact assessments in certain cases under Article 35.

This new FPF Report outlines how national courts and DPAs in the European Union (EU)/European Economic Area (EEA) and the UK have interpreted and applied the relevant EU data protection law provisions on ADM so far – both before and after the GDPR became applicable – as well as the notable trends and outliers in this respect. To compile the Report, we looked into publicly available judicial and administrative decisions and regulatory guidelines across EU/EEA jurisdictions and the UK. It draws from more than 70 cases – 19 court rulings and more than 50 enforcement decisions, individual opinions, or general guidance documents issued by DPAs – spanning 18 EEA Member States, the UK, and the European Data Protection Supervisor (EDPS). To complement the facts of the cases discussed, we also looked into press releases, DPAs’ annual reports, and media stories.

Examples of ADM and profiling activities assessed by EU courts and DPAs and analyzed in the Report range from the algorithmic management of platform workers to automated recruitment and social assistance tools and creditworthiness assessment algorithms.


Our analysis shows that the GDPR as a whole is relevant for ADM cases and has been effectively applied to protect the rights of individuals in such cases, even in situations where the ADM at issue did not meet the high threshold established by Article 22 GDPR. Among these protections, we found detailed transparency obligations about the parameters that led to an individual automated decision, a broad reading of the fairness principle to avoid situations of discrimination, and strict conditions for valid consent in cases of profiling and ADM.

Moreover, we found that when enforcers assess the threshold of applicability for Article 22 (“solely” automated, and “legal or similarly significant effects”), the criteria they use are increasingly sophisticated.

A recent preliminary ruling request sent by an Austrian court in February 2022 to the Court of Justice of the European Union (CJEU) may soon help clarify these concepts, as well as others related to the information which controllers need to give data subjects about an ADM’s underlying logic, significance, and envisaged consequences for the individual.

The findings of this Report may also serve to inform the discussions about pending legislative initiatives in the EU that regulate technologies or business practices that foster, rely on, or relate to ADM and profiling, such as the AI Act, the Consumer Credits Directive, and the Platform Workers Directive.

On May 20, the authors of the Report joined prominent European data protection experts for an FPF roundtable discussing some of the most impactful decisions analyzed. These include cases related to the algorithmic management of platform workers in Italy and the Netherlands, the use of automated recruitment and social assistance tools, and creditworthiness assessment algorithms. The discussion also covered pending questions sent by national courts to the CJEU on matters of algorithmic transparency under the GDPR. View a recording of the conversation here, and download the slides here.

What the Biden Executive Order on Digital Assets Means for Privacy

Author: Dale Rappaneau

Dale Rappaneau is a policy intern at the Future of Privacy Forum and a 3L at the University of Maine School of Law.

On March 9, the Biden Administration issued an Executive Order on “Ensuring Responsible Development of Digital Assets” (“the Order”), published together with an explanatory Fact Sheet. The Order states that the growing adoption of digital assets throughout the economy and inconsistent controls to mitigate their risks necessitate a new governmental approach to regulating digital assets.

The Order outlines a whole-of-government approach to address a wide range of technological frameworks, including blockchain protocols and centralized systems. The Order frames this approach as an important step toward safeguarding consumers and businesses from illicit activities and potential privacy harms involving digital assets. In particular, it calls on a long list of federal agencies and regulators to assess digital assets, consider future action, and ultimately provide reports recommending how to achieve the Order’s numerous policy goals. The Order recognizes the importance of incorporating data and privacy protections into this approach, which indicates that the Administration is actively considering the privacy risks associated with digital assets.

1. Covered Technologies

Definitions

Digital Assets – The Order defines digital assets broadly, including cryptocurrencies, stablecoins, and all central bank digital currencies (CBDCs), regardless of the technology used. The term also refers to any other representation of value or financial instrument issued or represented in digital form through the use of distributed ledger technology relying on cryptography, such as a blockchain protocol.

CBDC – The Order defines a Central Bank Digital Currency (“CBDC”) as digital money that is a direct liability of the central bank, not of a commercial bank. This definition aligns with the recent Federal Reserve Board CBDC report. A U.S. CBDC could support a faster and more modernized financial system, but it would also raise important policy questions including how it would affect the current rules and regulations of the U.S. financial sector.

Cryptocurrencies – These are digital assets that may operate as a medium of exchange and are recorded through distributed ledger technologies that rely on cryptography. This definition is notable because blockchain is often mistaken as the only form of distributed ledger technology, leading some to believe that all cryptocurrencies require a blockchain. However, the Order defines cryptocurrencies by reference to distributed ledger technology – not blockchain – and seems to cover both mainstream cryptocurrencies utilizing a blockchain (e.g., bitcoin or Ether) and alternative cryptocurrencies built on distributed ledger technology without a blockchain (e.g., IOTA).

Stablecoins – The Order recognizes stablecoins as a category of cryptocurrencies featuring mechanisms aimed at maintaining a stable value. As reported by relevant agencies, stablecoin arrangements may utilize distributed or centralized ledger technology.

Implications of Covered Technologies

From a technical perspective, distributed ledger technologies such as blockchain stand in stark contrast to centralized systems. Blockchain protocols, for example, allow users to conduct financial transactions on a peer-to-peer level, without requiring oversight from the private sector or government. Centralized ledger technology, as used by most credit cards and banks, typically requires a private sector or government actor to facilitate financial transactions. In this environment, the data flows through the actor, who carries obligations to monitor and protect the data.


Despite the technical differences between these approaches, the Order appears to group these very different financial transaction systems into the single umbrella term of digital assets. It does this by including within the scope of the definition of digital assets all CBDCs, even ones utilizing centralized ledger technology, and other assets using distributed ledger technology. This homogenization of technological concepts may indicate that the Administration is seeking a uniform regulatory approach to these technologies.

2. Privacy Considerations of the EO

Section 2 of the Order states the principal policy objectives with respect to digital assets, which include: exploring a U.S. CBDC; ensuring responsible development and use of digital assets and their underlying ledger technologies; and mitigating finance and national security risks posed by the illicit use of digital assets.

Notably, the Administration uses the word “privacy” five times in this section, declaring that digital assets should maintain privacy, shield against arbitrary or unlawful surveillance, and incorporate privacy protections into their architecture. The need to ensure that digital assets preserve privacy raises notable, albeit different, implications for both centralized and decentralized technologies.

Privacy Implications of a United States CBDC

The Order places the highest urgency on developing and deploying a U.S. CBDC, which must be designed to include privacy protections. The Order states that a United States CBDC would be the liability of the Federal Reserve, which is currently experimenting with a number of CBDC system designs, including centralized and decentralized ledger technologies, as well as alternative technologies. Although the Federal Reserve has not chosen a particular system, the monetary authority has listed numerous privacy-related characteristics that should be incorporated into a United States CBDC regardless of the technology used.

First, the Federal Reserve recognizes that a CBDC would generate data about users’ financial transactions in the same ways that commercial banks and nonbanks do today. This may include a user’s name, email address, physical address, know-your-customer (KYC) data, and more. Depending on the design chosen for the CBDC, this data may be centralized under the control of a single entity or distributed across ledgers held by multiple entities or users.

Second, because of the robust rules designed to combat money laundering and the financing of terrorism, a CBDC would need to allow intermediaries to verify the identity of the person accessing the CBDC, just as banks and financial institutions currently do. For this reason, the Federal Reserve states that a CBDC would need to safeguard an individual’s privacy while deterring criminal activity.

This intersection between consumer privacy and the transparency needed to monitor criminal activity gets to the heart of the Order. On one hand, a United States CBDC would provide certain data security and privacy protections for consumers under the current rules and regulations imposed on financial institutions. The Gramm-Leach-Bliley Act (GLBA), for example, includes privacy and data security provisions that regulate the collection, use, protection, and disclosure of nonpublic personal information by financial institutions (15 U.S.C.A. §§ 6801 to 6809). But on the other hand, the CBDC would likely require the Federal Reserve, or entrusted intermediaries, to monitor and verify the identity of users to reduce the likelihood of illicit transactions.

It is unclear whether current rules and regulations would apply if the CBDC utilizes distributed ledger technology, given that they typically establish scope via definitions of applicable entities using particular data. Because users (and not financial institutions) hold copies of the data ledger under distributed ledger technology systems, pre-existing privacy laws may fail to cover large amounts of data processing and to provide adequate safeguards to consumers. In addition, as the next section suggests, it is unclear how monitoring and verification would occur under a CBDC that uses distributed ledger technology. This raises further questions about how policymakers can navigate the intersection of privacy and transaction monitoring.

Privacy Implications of Distributed Ledger Technologies

Distributed ledger technologies often attempt to create an environment where users do not have to reveal their personal information. Transactions under these systems typically do not filter through a singular entity such as the Federal Reserve, but instead happen on a peer-to-peer level, with users directly exchanging digital assets without third-party oversight. In this environment, users can complete transactions utilizing hashed identifiers rather than their own information, and these transactions usually occur without the supervision of a private or government entity. Together, the use of hashed identifiers and the lack of supervision create a digital environment rich in identity-shielding protections.
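To make the “hashed identifiers” point concrete, the sketch below (illustrative only, and not any specific blockchain’s actual address scheme) derives a pseudonymous ledger address by hashing a public key, so the ledger entry references that hash rather than the holder’s name or account details.

```python
# Illustrative sketch only: not any specific blockchain's real address format.
# A pseudonymous ledger address is derived by hashing a public key, so the
# ledger records the hash rather than the holder's name or account details.
import hashlib
import secrets

def derive_pseudonymous_address(public_key: bytes) -> str:
    """Double-hash a public key to produce an identifier that does not
    directly reveal who controls it."""
    first = hashlib.sha256(public_key).digest()
    return hashlib.sha256(first).hexdigest()[:40]

# Stand-in "public keys"; in a real system these would come from asymmetric
# key pairs controlled by the transacting users.
sender_key = secrets.token_bytes(33)
recipient_key = secrets.token_bytes(33)

ledger_entry = {
    "from": derive_pseudonymous_address(sender_key),
    "to": derive_pseudonymous_address(recipient_key),
    "amount": 0.5,
}
print(ledger_entry)  # no names, emails, or account numbers appear on the ledger
```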

However, experts recognize that distributed ledger technologies also create a multitude of financial risks. If users can conduct transactions on a peer-to-peer level without supervision or revealing their identity, they can more easily conduct illicit activities, including money laundering, terror funding, and human and drug trafficking.

The Order acknowledges these benefits and risks. The Fact Sheet prioritizes privacy protections and efforts to combat criminal activities, which indicates that the Order seeks to emphasize the privacy-preserving aspects of new distributed ledger technologies while finding ways to restrict illicit financial activity. Such an emphasis may represent an enhanced governmental effort to address criminal activities in the digital asset landscape while avoiding measures that would create risks to privacy and data protection.

3. Future Action: Privacy and Law Enforcement Equities

The Order’s repeated emphasis on privacy seems to align with the Biden Administration’s current focus on prioritizing privacy and data protection rulemaking. The Order acknowledges both necessary safeguards to combat illicit activities and the need to embed privacy protections in the regulation of digital assets.

The U.S. Department of the Treasury and the Federal Reserve have articulated concerns regarding how bad actors exploit distributed ledger technologies for illicit purposes, and those agencies will likely make recommendations to strengthen government oversight and supervision capabilities. However, the Order’s emphasis on privacy seems to indicate that the Administration wants to ensure privacy protections while also enabling traceability to monitor users, verify identities, and investigate illicit activities.

The question is, will the Administration find a way to preserve the privacy protections of centralized and distributed ledger technology, while also promoting the efficacy of monitoring illicit activities? That answer will likely come once agencies and regulators start providing reports that recommend steps to achieve the Order’s goals. Until then, the answer remains unknown, and entities utilizing cryptocurrencies or other digital assets should stay aware of a possible shift in how the Federal Government regulates the digital asset landscape.

Reading the Signs: the Political Agreement on the New Transatlantic Data Privacy Framework

The President of the United States, Joe Biden, and the President of the European Commission, Ursula von der Leyen, announced last Friday, in Brussels, a political agreement on a new Transatlantic framework to replace the Privacy Shield. 

This marks a significant elevation of the topic within Transatlantic affairs compared to the 2016 announcement of a new deal to replace the Safe Harbor framework. Back then, it was Commission Vice-President Andrus Ansip and Commissioner Vera Jourova who announced, at the beginning of February 2016, that a deal had been reached.

The draft adequacy decision was only published a month after the announcement, and the adequacy decision was adopted 6 months later, in July 2016. Therefore, it should not be at all surprising if another 6 months (or more!) pass before the adequacy decision for the new Framework produces legal effects and can actually support transfers from the EU to the US – especially since the US side still has to issue at least one Executive Order to provide for the agreed-upon new safeguards.

This means that transfers of personal data from the EU to the US may still be blocked in the coming months – possibly without a lawful alternative to continue them – as a consequence of Data Protection Authorities (DPAs) enforcing Chapter V of the General Data Protection Regulation in light of the Schrems II judgment of the Court of Justice of the EU, whether as part of the 101 noyb complaints submitted in August 2020 and slowly starting to be resolved, or as part of other individual complaints and court cases.

After the agreement “in principle” was announced at the highest possible political level, EU Justice Commissioner Didier Reynders doubled down on the point that this agreement is reached “on the principles” for a new framework, rather than on the details of it. Later on he also gave credit to Commerce Secretary Gina Raimondo and US Attorney General Merrick Garland for their hands-on involvement in working towards this agreement. 

In fact, “in principle” became the leitmotif of the announcement, as the first EU Data Protection Authority to react to the announcement was the European Data Protection Supervisor, who wrote that he “Welcomes, in principle”, the announcement of a new EU-US transfers deal – “The details of the new agreement remain to be seen. However, EDPS stresses that a new framework for transatlantic data flows must be sustainable in light of requirements identified by the Court of Justice of the EU”.

Of note, there is no catchy name for the new transfers agreement, which was referred to as the “Trans-Atlantic Data Privacy Framework”. Nonetheless, FPF’s CEO Jules Polonetsky submits the “TA DA!” Agreement, and he has my vote. For his full statement on the political agreement being reached, see our release here.

Some details of the “principles” agreed on were published hours after the announcement, both by the White House and by the European Commission. Below are a couple of things that caught my attention from the two brief Factsheets.

The US has committed to “implement new safeguards” to ensure that SIGINT activities are “necessary and proportionate” (an EU law standard – see Article 52 of the EU Charter on how the exercise of fundamental rights can be limited) in the pursuit of defined national security objectives. Therefore, the new agreement is expected to address the lack of safeguards for government access to personal data as specifically outlined by the CJEU in the Schrems II judgment.

The US also committed to creating a “new mechanism for the EU individuals to seek redress if they believe they are unlawfully targeted by signals intelligence activities”. This new mechanism was characterized by the White House as having “independent and binding authority”. Per the White House, this redress mechanism includes “a new multi-layer redress mechanism that includes an independent Data Protection Review Court that would consist of individuals chosen from outside the US Government who would have full authority to adjudicate claims and direct remedial measures as needed”. The EU Commission mentioned in its own Factsheet that this would be a “two-tier redress system”. 

Importantly, the White House mentioned in the Factsheet that oversight of intelligence activities will also be boosted – “intelligence agencies will adopt procedures to ensure effective oversight of new privacy and civil liberties standards”. Oversight and redress are different issues and are both equally important – for details, see this piece by Christopher Docksey. However, they tend to be thought of as one and the same, so the fact that they are addressed separately in this announcement is significant.

One of the remarkable things about the White House announcement is that it includes several EU law-specific concepts: “necessary and proportionate”, “privacy, data protection” mentioned separately, “legal basis” for data flows. In another nod to the European approach to data protection, the entire issue of ensuring safeguards for data flows is framed as more than a trade or commerce issue – with references to a “shared commitment to privacy, data protection, the rule of law, and our collective security as well as our mutual recognition of the importance of trans-Atlantic data flows to our respective citizens, economies, and societies”.

Last, but not least, Europeans have always framed their concerns related to surveillance and data protection as fundamental rights concerns. The US also gives a nod to this approach by referring a couple of times to “privacy and civil liberties” safeguards (thus adding the “civil liberties” dimension) that will be “strengthened”. All of these are positive signs of a “rapprochement” of the two legal systems and are certainly an improvement on the “commerce”-focused approach the US has taken in the past.

Finally, it should also be noted that the new framework will continue to be a self-certification scheme managed by the US Department of Commerce.

What does all of this mean in practice? As the White House details, this means that the Biden Administration will have to adopt (at least) an Executive Order (EO) that includes all these commitments and on the basis of which the European Commission will draft an adequacy decision.

Thus, there are great expectations in sight following the White House and European Commission Factsheets, and the entire privacy and data protection community is waiting to see further details.

In the meantime, I’ll leave you with an observation made by my colleague Amie Stepanovich, VP for US Policy at FPF, who highlighted that Section 702 of the Foreign Intelligence Surveillance Act (FISA) is set to expire on December 31, 2023. This presents Congress with an opportunity to act, building on the extensive work done by the US Government in the context of the Transatlantic Data Transfers debate.

Understanding why the first pieces fell in the transatlantic transfers domino

The Austrian DPA and the EDPS decided EU websites placing US cookies breach international data transfer rules 

Two decisions issued by Data Protection Authorities (DPAs) in Europe and published in the second week of January 2022 found that two websites – one run by a contractor of the European Parliament (EP), and the other by an Austrian company – had unlawfully transferred personal data to the US merely by placing cookies (Google Analytics and Stripe) provided by two US-based companies on the devices of their visitors. Both decisions looked into the transfer safeguards put in place by the controllers (the legal entities responsible for the websites) and found them to be either insufficient – in the case against the EP – or ineffective – in the Austrian case.

Both decisions affirm that all transfers of personal data from the EU to the US need “supplemental measures” on top of their Article 46 GDPR safeguards, in the absence of an adequacy decision and under the current US legal framework for government access to personal data for national security purposes, as assessed by the Court of Justice of the EU in its 2020 Schrems II judgment. Moreover, the Austrian case indicates that, in order to be effective, the supplemental measures adduced to safeguard transfers to the US must “eliminate the possibility of surveillance and access [to the personal data] by US intelligence agencies”, seemingly putting to rest the idea of a “risk-based approach” in international data transfers post-Schrems II.

This piece analyzes the two cases comparatively, considering that they have many similarities beyond their timing: they both target widely used cookies (Google Analytics, in addition to Stripe in the EP case), they both stem from complaints where the individuals are represented by the Austrian NGO noyb, and it is possible that they will be followed by similar decisions from the other DPAs that received a batch of 101 complaints in August 2020 from the same NGO, relying on identical legal arguments and very similar facts. The analysis below walks through the most important findings made by the two regulators, showing how their assessments were in sync and how they likely preface similar decisions for the rest of the complaints.

1. “Personal data” is being “processed” through cookies, even if users are not identified and even if the cookies are thought to be “inactive”

In the first decision, the European Data Protection Supervisor (EDPS) investigated a complaint made by several Members of the European Parliament against a website made available by the EP to its Members and staff in the context of managing COVID-19 testing. The complainants raised concerns with regard to transfers of their personal data to the US through cookies provided by US-based companies (Google and Stripe) and placed on their devices when accessing the COVID-19 testing website. The case was brought under the Data Protection Regulation for EU Institutions (EUDPR), which has identical definitions and overwhelmingly similar rules to the GDPR.

One of the key issues that was analyzed in order for the case to be considered falling under the scope of the EUDPR was whether personal data was being processed through the website by merely placing cookies on the devices of those who accessed it. Relying on its 2016 Guidelines on the protection of personal data processed through Web Services, the EDPS noted in the decision that “tracking cookies, such as the Stripe and Google Analytics cookies, are considered personal data, even if the traditional identity parameters of the tracked users are unknown or have been deleted by the tracker after collection”. It also noted that “all records containing identifiers that can be used to single out users, are considered as personal data under the Regulation and must be treated and protected as such”. 

The EP argued in one of its submissions to the regulator that the Stripe cookie “had never been active, since registration for testing for EU Staff and Members did not require any form of payment”. However, the EP also confirmed that the dedicated COVID-19 testing website, which was built by its contractor, copied code from another website run by the same contractor, and “the parts copied included the code for a cookie from Stripe that was used for online payment for users” of the other website. In its decision, the EDPS highlighted that “upon installation on the device, a cookie cannot be considered ‘inactive’. Every time a user visited [the website], personal data was transferred to Stripe through the Stripe cookie, which contained an identifier. (…) Whether Stripe further processed the data transferred through the cookie is not relevant”. 

With regard to the Google Analytics cookies, the EDPS only notes that the EP (as controller) acknowledged that the cookies “are designed to process ‘online identifiers, including cookie identifiers, internet protocol addresses and device identifiers’ as well as ‘client identifiers’”. The regulator concluded that personal data were therefore transferred “through the above-mentioned trackers”.  

In the second decision, which concerned the use of Google Analytics by a website owned by an Austrian company and targeting Austrian users, the DPA explained in more detail what led it to find that personal data was being processed by the website through Google Analytics cookies under the GDPR.

1.1 Cookie identification numbers, by themselves, are personal data

The DPA found that the cookies contained identification numbers, including a UNIX timestamp at the end, which shows when a cookie was set. It also noted that the cookies were placed either on the device or the browser of the complainant. The DPA affirmed that relying on these identification numbers makes it possible for both the website and Google Analytics “to distinguish website visitors … and also to obtain information as to whether the visitor is new or returning”. 
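The decision does not reproduce the actual cookie values, but identifiers of the kind at issue are commonly structured like the first-party “_ga” cookie, whose value takes the form GA1.&lt;depth&gt;.&lt;random number&gt;.&lt;UNIX timestamp&gt;. The sketch below, built on that assumed layout and a hypothetical value, shows how the trailing timestamp reveals when the cookie was set and how the stable client identifier distinguishes new from returning visitors.

```python
# A minimal sketch, assuming the common first-party "_ga" cookie layout
# "GA1.<depth>.<random number>.<UNIX timestamp>"; the value below is
# hypothetical, not a cookie value taken from the case.
from datetime import datetime, timezone

cookie_value = "GA1.2.1187598230.1597363200"

version, depth, random_part, timestamp = cookie_value.split(".")
client_id = f"{random_part}.{timestamp}"  # identifier sent with each hit to the analytics provider

# The trailing component is a UNIX timestamp recording when the cookie was set,
# i.e. roughly the visitor's first visit with this browser/device.
set_at = datetime.fromtimestamp(int(timestamp), tz=timezone.utc)

print(client_id)  # stable across visits -> distinguishes new vs returning visitors
print(set_at)     # 2020-08-14 00:00:00+00:00
```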

In its legal analysis, the DPA noted that “an interference with the fundamental right to data protection … already exists if certain entities take measures – in this case, the assignment of such identification numbers – to individualize website visitors”. Analyzing the “identifiability” component of the definition of “personal data” in the GDPR, and relying on its Recital 26, as well as on Article 29 Working Party Opinion 4/2007 on the concept of “personal data”, the DPA clarified that “a standard of identifiability to the effect that it must also be immediately possible to associate such identification numbers with a specific natural person – in particular with the name of the complainant – is not required” for data thus processed to be considered “personal data”. 

The DPA also recalled that “a digital footprint, which allows devices and subsequently the specific user to be clearly individualized, constitutes personal data”. The DPA concluded that the identification numbers contained in the cookies placed on the complainant’s device or browser are personal data, highlighting their “uniqueness”, their ability to single out specific individuals and rebutting specifically the argument the respondents made that no means are in fact used to link these numbers to the identity of the complainant. 

1.2 Cookie identification numbers combined with other elements are additional personal data

However, the DPA did not stop here and continued at length in the following sections of the decision to underline why placing the cookies at issue when accessing the website constitutes processing of personal data. It noted that the classification as personal data “becomes even more apparent if one takes into account that the identification numbers can be combined with other elements”, like the address and HTML title of the website and the subpages visited by the complainant; information about the browser, operating system, screen resolution, language selection and the date and time of the website visit; the IP address of the device used by the complainant. The DPA considers that “the complainant’s digital footprint is made even more unique following such a combination [of data points]”. 

The “anonymization function of the IP address” – a function that Google Analytics provides to users if they wish to activate it – was expressly set aside by the DPA, since fact finding showed the function was not correctly implemented by the website at the time of the complaint. However, later in the decision, with regard to the same function and the fact that it was not implemented by the website, the regulator noted that “the IP address is in any case only one of many pieces of the puzzle of the complainant’s digital footprint”, hinting that even if the function had been correctly implemented, it would not necessarily have led to the conclusion that the data being processed was not personal.
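The “pieces of the puzzle” reasoning can be illustrated with a short sketch. The IP anonymization step below mirrors what Google documents for the Google Analytics feature (zeroing the last octet of an IPv4 address); all of the values are hypothetical, not data from the case.

```python
# Illustrative sketch with hypothetical values: even with the IP address
# "anonymized" (last octet zeroed, as the Google Analytics feature does for
# IPv4), the remaining data points combine into a distinctive footprint.
import hashlib

def anonymize_ipv4(ip: str) -> str:
    """Zero the last octet of an IPv4 address."""
    octets = ip.split(".")
    octets[-1] = "0"
    return ".".join(octets)

visit = {
    "page": "https://www.example.at/subpage",  # address and subpage visited
    "browser": "Firefox 79.0",
    "os": "Windows 10",
    "screen": "1920x1080",
    "language": "de-AT",
    "timestamp": "2020-08-14T10:12:33+02:00",
    "ip": anonymize_ipv4("203.0.113.87"),      # -> "203.0.113.0"
}

# The relatively stable attributes alone already yield a distinctive footprint...
stable_attributes = ("browser", "os", "screen", "language", "ip")
fingerprint = hashlib.sha256(
    "|".join(visit[k] for k in stable_attributes).encode("utf-8")
).hexdigest()

# ...and the cookie identification number ties each visit (page, timestamp, ...)
# to that footprint, making the visitor's profile "even more unique".
print(fingerprint[:16])
```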

1.3 Controllers and other persons “with lawful means and justifiable effort” will count for the identifiability test

Drilling down even more on the notion of “identifiability” in a dedicated section of the decision, the DPA highlights that in order for the data processed through the cookies at issue to be personal, “it is not necessary that the respondents can establish a personal reference on their own, i.e. that all information required for identification is with them. […] Rather, it is sufficient that anyone, with lawful means and justifiable effort, can establish this personal reference”. Therefore, the DPA took the position that “not only the means of the controller [the website in this case] are to be taken into account in the question of identifiability, but also those of ‘another person’”.

After recalling that the CJEU repeatedly found that “the scope of application of the GDPR is to be understood very broadly” (e.g. C-439/19 B, C-434/16 Nowak, C-553/07 Rijkeboer), the DPA nonetheless stated that in its opinion, the term “anyone” it referred to above, and thus the scope of the definition of personal data, “should not be interpreted so broadly that any unknown actor could theoretically have special knowledge to establish a reference; this would lead to almost any information falling within the scope of application of the GDPR and a demarcation from non-personal data would become difficult or even impossible”.

This being said, the DPA considers that the “decisive factor is whether identifiability can be established with a justifiable and reasonable effort”. In the case at hand, the DPA considers that there are “certain actors who possess special knowledge that makes it possible to establish a reference to the complainant and identify him”. These actors are, from the DPA’s point of view, certainly the provider of the Google Analytics service and, possibly, the US authorities in the national security area. As for the provider of Google Analytics, the DPA highlights, first of all, that the complainant was logged in with his Google account at the time of visiting the website.

The DPA indicates this is a relevant fact only “if one takes the view that the online identifiers cited above must be assignable to a certain ‘face’”. The DPA finds that such an assignment to a specific individual is in any case possible in the case at hand. As such, the DPA states that: “[…] if the identifiability of a website visitor depends only on whether certain declarations of intent are made in the account (user’s Google account – our note), then, from a technical point of view, all possibilities of identifiability are present”, since, as noted by the DPA, otherwise Google “could not comply with a user’s wishes expressed in the account settings for ‘personalization’ of the advertising information received”. It is not immediately clear how the ad preferences expressed by a user in their personal account are linked to the processing of data for Google Analytics (and thus website traffic measurement) purposes, and it seems that this was used in the argumentation to substantiate the claim that the second respondent generally has additional knowledge across its various services that could lead to the identification or the singling out of the website visitor.  

However, following the DPA’s arguments, and on top of the autonomous finding that cookie identification numbers are personal data, it seems that even if the complainant had not been logged into his account, the data processed through the Google Analytics cookies would still have been considered personal. In this context, the DPA “expressly” notes that “the wording of Article 4(1) of the GDPR is unambiguous and is linked to the ability to identify and not to whether identification is ultimately carried out”.

Moreover, “irrespective of the second respondent” – that is, even if Google did not have any possibility or ability to render the complainant identifiable or to single him out – other third parties in this case were considered to have the potential ability to identify the complainant: US authorities.

1.4 Additional information potentially available to US intelligence authorities, taken into account for the identifiability test

Lastly, according to the decision, the US authorities in the national security area “must be taken into account” when assessing the potential of identifiability of the data processed through cookies in this case. The DPA considers that “intelligence services in the US take certain online identifiers, such as the IP address or unique identification numbers, as a starting point for monitoring individuals. In particular, it cannot be ruled out that intelligence services have already collected information with the help of which the data transmitted here can be traced back to the person of the complainant.” 

To show that this is not merely a “theoretical danger”, the DPA relies on the findings of the CJEU in Schrems II with regard to the US legal framework and the “access possibilities” it offers to authorities, and on Google’s Transparency Report, “which proves that data requests are made to [it] by US authorities.” The regulator further decided that even if it is admittedly not possible for the website to check whether such access requests are made in individual cases and with regard to the visitors of the website, “this circumstance cannot be held against affected persons, such as the complainant. Thus, it was ultimately the first respondent as the website operator who, despite publication of the Schrems II judgment, continued to use the Google Analytics tool”. 

Therefore, based on the findings of the Austrian DPA in this case, at least two of the “other persons” contemplated by Recital 26 GDPR – whose lawful means of identification are taken into account when deciding whether data is personal – are the processor of a specific processing operation and the national security authorities that may have access to the data, at least in cases where this access is relevant (as in international data transfers). This latter finding of the DPA raises the question of whether national security agencies in a given jurisdiction may generally be considered by DPAs as actors with “lawful means” and additional knowledge when deciding if a data set links to an “identifiable” person, including in cases where international data transfers are not at issue.

The DPA concluded that the data processed by the Google Analytics cookies is personal data and falls under the scope of the GDPR. Importantly, the cookie identification numbers were found to be personal data by themselves. Additionally, the other data elements potentially collected through cookies together with the identification numbers are also personal data.

2. Data transfers to the US are taking place by placing cookies provided by US-based companies on EU-based websites

Once the supervisory authorities established that the data processed through Google Analytics and, respectively, Stripe cookies, were personal data and were covered by the GDPR or EUDPR respectively, they had to ascertain whether an international transfer of personal data from the EU to the US was taking place in order to see whether the provisions relevant to international data transfers were applicable.

The EDPS was again concise. It stated that because the personal data were processed by two entities located in the US (Stripe and Google LLC) on the EP website, “personal data processed through them were transferred to the US”. The regulator strengthened its finding by stating that this conclusion “is reinforced by the circumstances highlighted by the complainants, according to which all data collected through Google Analytics is hosted (i.e. stored and further processed) in the US”. For this particular finding, the EDPS referred, under footnote 27 of the decision, to the proceedings in Austria “regarding the use of Google Analytics in the context of the 101 complaints filed by noyb on the transfer of data to the US when using Google Analytics”, in an evident indication that the supervisory authorities are coordinating their actions. 

In turn, the Austrian DPA applied the criteria laid out by the EDPB in its draft Guidelines 5/2021 on the relationship between the scope of Article 3 and Chapter V GDPR, and found that all the conditions are met. The administrator of the website is the controller and it is based in Austria, and, as data exporter, it “disclosed personal data of the complainant by proactively implementing the Google Analytics tool on its website and as a direct result of this implementation, among other things, a data transfer to the second respondent to the US took place”. The DPA also noted that the second respondent, in its capacity as processor and data importer, is located in the US. Hence, Chapter V of the GDPR and its rules for international data transfers are applicable in this case. 

However, it should also be highlighted that, as part of fact finding in this case, the Austrian DPA noted that the version of Google Analytics subject to this case was provided by Google LLC (based in the US) until the end of April 2021. Therefore, for the facts of the case which occurred in August 2020, the relevant processor and eventual data importer was Google LLC. But the DPA also noted that since the end of April 2021, Google Analytics has been provided by Google Ireland Limited (based in Ireland). 

One important question that remains for future cases is whether, under these circumstances, the DPA would find that an international data transfer occurred, considering the criteria laid out in the draft EDPB Guidelines 5/2021, which specifically require (at least in the draft version, currently subject to public consultation) that “the data importer is located in a third country”, without any further specifications related to corporate structures or location of the means of processing. 

2.1 In the absence of an adequacy decision, all data transfers to the US based on “additional safeguards”, like SCCs, need supplementary measures 

After establishing that international data transfers occurred from the EU to the US in the cases at hand, the DPAs assessed the lawful ground for transfers used. 

The EDPS noted that EU institutions and bodies “must remain in control and take informed decisions when selecting processors and allowing transfers of personal data outside the EEA”. It followed that, absent an adequacy decision, they “may transfer personal data to a third country only if appropriate safeguards are provided, and on condition that enforceable data subject rights and effective legal remedies for data subjects are available”. Noting that the use of Standard Contractual Clauses (SCCs) or another transfer tool does not substitute for the individual case-by-case assessments that must be carried out in accordance with the Schrems II judgment, the EDPS stated that EU institutions and bodies must carry out such assessments “before any transfer is made” and, where necessary, implement supplemental measures in addition to the transfer tool.

The EDPS recalled some of the key findings of the CJEU in Schrems II, in particular the fact that “the level of protection of personal data in the US was problematic in view of the lack of proportionality caused by mass surveillance programs based on Section 702 of the Foreign Intelligence Surveillance Act (FISA) and Executive Order (EO) 12333 read in conjunction with Presidential Policy Directive (PPD) 28 and the lack of effective remedies in the US essentially equivalent to those required by Article 47 of the Charter”. 

Significantly, the supervisory authority then affirmed that “transfers of personal data to the US can only take place if they are framed by effective supplementary measures in order to ensure an essentially equivalent level of protection for the personal data transferred”. Since the EP did not provide any evidence or documentation about supplementary measures being used on top of the SCCs it referred to in the privacy notice on the website, the EDPS found the transfers to the US to be unlawful.

Similarly, the Austrian DPA in its decision recalled that the CJEU “already dealt” with the legal framework in the US in its Schrems II judgment, as based on the same three legal acts (Section 702 FISA, EO 12333, PPD 28). The DPA merely noted that “it is evident that the second respondent (Google LLC – our note) qualifies as a provider of electronic communications services” within the meaning of FISA Section 702. Therefore, it has “an obligation to provide personally identifiable information to US authorities pursuant to 50 US Code §1881a”. Again, the DPA relied on Google’s Transparency Report to show that “such requests are also regularly made to it by US authorities”. 

Considering the legal framework in the US as assessed by the CJEU, just like the EDPS did, the Austrian DPA also concluded that the mere entering into SCCs with a data importer in the US cannot be assumed to ensure an adequate level of protection. Therefore, “the data transfer at issue cannot be based solely on the standard data protection clauses concluded between the respondents”. Hence, supplementary measures must be adduced on top of the SCCs. The Austrian DPA relied significantly on the EDPB Recommendation 1/2020 on measures that supplement transfer tools when analyzing the available supplementary measures put in place by the respondents. 

2.2 Supplementary measures must “eliminate the possibility of access” of the government to the data, in order to be effective

When analyzing the various measures put in place to safeguard the personal data being transferred, the DPA wanted to ascertain “whether the additional measures taken by the second respondent close the legal protection gaps identified in the CJEU [Schrems II] ruling – i.e. the access and monitoring possibilities of US intelligence services”. Setting this as a target, it went on to analyze the individual measures proposed.

The DPA first examined the contractual and organizational supplementary measures put forward in the case.

The DPA considered that “it is not discernable” to what extent these measures are effective to close the protection gap, taking into account that the CJEU found in the Schrems II judgment that even “permissible (i.e. legal under US law) requests from US intelligence agencies are not compatible with the fundamental right to data protection under Article 8 of the EU Charter of Fundamental Rights”. 

The DPA then turned to the technical supplementary measures put forward.

With regard to encryption as one of the supplementary measures being used, the DPA took into account that a data importer covered by Section 702 FISA, as is the case in the current decision, “has a direct obligation to provide access to or surrender such data”. The DPA considered that “this obligation may expressly extend to the cryptographic keys without which the data cannot be read”. Therefore, it seems that as long as the keys are kept by the data importer and the importer is subject to the US law assessed by the CJEU in Schrems II (FISA Section 702, EO 12333, PPD 28), encryption will not be considered sufficient.

As for the argument that the personal data being processed through Google Analytics is “pseudonymous” data, the DPA rejected it, relying on findings made by the Conference of German DPAs that the use of cookie IDs, advertising IDs, and unique user IDs does not constitute pseudonymization under the GDPR, since these identifiers “are used to make the individuals distinguishable and addressable”, and not to “disguise or delete the identifying data so that data subjects can no longer be addressed” – which the Conference considers to be one of the purposes of pseudonymization.
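The distinction the German DPAs draw can be illustrated with a short sketch (hypothetical identifiers and pages, not data from the case): a stable cookie or advertising ID is used precisely to keep a visitor distinguishable and addressable across visits, rather than to disguise or delete identifying data.

```python
# Illustrative sketch with hypothetical data: a stable cookie/advertising ID
# keeps the visitor distinguishable and addressable across visits. Nothing
# here "disguises or deletes" identifying data, which is why the German DPAs
# do not treat such identifiers as pseudonymization under the GDPR.
from collections import defaultdict

profiles: dict[str, list[str]] = defaultdict(list)

def record_visit(cookie_id: str, page: str) -> None:
    """Accumulate a behavioural profile keyed only by the cookie ID."""
    profiles[cookie_id].append(page)

def address_visitor(cookie_id: str) -> str:
    """'Address' the visitor, e.g. to target content or ads at them."""
    recent = profiles[cookie_id][-3:]
    return f"target visitor {cookie_id} based on {recent}"

cookie_id = "1187598230.1597363200"  # hypothetical stable identifier, no name needed
record_visit(cookie_id, "/loans")
record_visit(cookie_id, "/credit-cards")
print(address_visitor(cookie_id))
```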

Overall, the DPA found that the technical measures proposed were not enough because the respondents did not comprehensively explain (therefore, the respondents had the burden of proof) to what extent these measures “actually prevent or restrict the access possibilities of US intelligence services on the basis of US law”. 

With this finding, highlighted also in the operative part of the decision, the DPA seems to de facto reject the “risk based approach” to international data transfers, which has been specifically invoked during the proceedings. This is a theory according to which, for a transfer to be lawful in the absence of an adequacy decision, it is sufficient to prove the likelihood of the government accessing personal data transferred on the basis of additional safeguards is minimal or reduced in practice for a specific transfer, regardless of the broad authority that the government has under the relevant legal framework to access that data and regardless of the lack of effective redress. 

The Austrian DPA is technically taking the view that it is not sufficient to reduce the risk of access to data in practice, as long as the possibility to access personal data on the basis of US law is actually not prevented, or in other words, not eliminated. This conclusion is apparent also from the language used in the operative part of the decision, where the DPA summarizes its findings as such: “the measures taken in addition to the SCCs … are not effective because they do not eliminate the possibility of surveillance and access by US intelligence agencies”. 

If other DPAs confirm this approach for transfers from the EU to the US in their decisions, the list of potentially effective supplemental measures for transfers of personal data to the US will remain minimal – prima facie, it seems that nothing short of anonymization (per the GDPR standard) or any other technical measure that will effectively and physically eliminate the possibility of accessing personal data by US national security authorities will suffice under this approach. 

A key reminder here is that the list of supplementary measures detailed in the EDPB Recommendation concerns all international data transfers based on additional safeguards, to all third countries in general, in the absence of an adequacy decision. In the decision summarized here, the supplementary measures found to be ineffective concern their ability to cover “gaps” in the level of data protection of the US legal framework, as resulting from findings of the CJEU with regard to three specific legal acts (FISA Section 702, EO 12333 and PPD 28). Therefore, the supplementary measures discussed and their assessment may be different for transfers to another jurisdiction.

2.3 Are data importers liable for the lawfulness of the data transfer?

One of the most consequential findings of the Austrian DPA, which may have an impact on international data transfer cases moving forward, is that “the requirements of Chapter V of the GDPR must be complied with by the data exporter, but not by the data importer”. Under this interpretation, the organizations on the receiving end of a data transfer – at least when they act as a processor for the data exporter, as in the present case – cannot be found in breach of the international data transfer obligations under the GDPR. The main argument used was that “the second respondent (as data importer) does not disclose the personal data of the complainant, but (only) receives them”. As a result, Google was found not to have breached Article 44 GDPR in this case.

However, the DPA did consider that it is necessary to look further, and as part of separate proceedings, into how the second respondent complied with its obligations as a data processor, and in particular the obligation to process personal data on documented instructions from the controller, including with regard to transfers of personal data to a third country or an international organization, as detailed in Article 28(3)(a) and Article 29 GDPR.

3. Sanctions and consequences: Between preemptive deletion of cookies, reprimands and blocking transfers

Another commonality of the two decisions summarized here is that neither of them resulted in a fine. The EDPS issued a reprimand against the European Parliament for several breaches of the EUDPR, including those related to international data transfers “due to its reliance on the Standard Contractual Clauses in the absence of a demonstration that data subjects’ personal data transferred to the US were provided an essentially equivalent level of protection”. It is significant to mention that the EP asked the website service provider to disable both the Google Analytics and Stripe cookies within a matter of days after being contacted by the complainants on October 27, 2020. The cookies at issue were active between September 30, when the website became available, and November 4, 2020.

In turn, the Austrian DPA found that “the Google Analytics tool (at least in the version of August 14, 2020) can thus not be used in compliance with the requirements of Chapter V GDPR”. However, as discussed above, the DPA found that only the website operator – as the data exporter – was in breach of Article 44 GDPR.  The DPA decided not to issue a fine in this case. 

However, the DPA is pursuing a ban on the data transfers, or a similar order against the website, with some procedural complications. In the middle of the proceedings, the Austrian company that was in charge of managing the website transferred the responsibility of operating it to a company based in Germany, so the website is no longer under its control. But since the DPA noted that Google Analytics continued to be implemented on the website at the time of the decision, it resolved to refer the case to the competent German supervisory authority with regard to the possible use of remedial powers against the new operator.

Therefore, stopping the transfer of personal data to the US without appropriate safeguards seems to be the focus in these cases, rather than sanctioning the data exporters. The parties have the possibility to challenge both decisions before their respective competent courts and to request judicial review within a limited period of time, but there are no indications yet whether this will happen.

4. The big picture: 101 complaints and collaboration among DPAs

The decision published by the Austrian DPA is the first one in the 101 complaints that noyb submitted directly to 14 DPAs across Europe (EU and the European Economic Area) at the same time in August 2020 – from Malta, to Poland, to Liechtenstein – with identical legal arguments centered on international data transfers to the US through the use of Google Analytics or Facebook Connect, all against websites of local or national relevance – so most likely these complaints will be considered outside the One-Stop-Shop mechanism.

The bulk of the 101 complaints were submitted to the Austrian DPA (about 50), either immediately under its competence, as in the analyzed case, or as part of the One-Stop-Shop mechanism, where the Austrian DPA acts as the concerned DPA of the jurisdiction where the complainant resides and likely needed to forward the cases to the many lead DPAs of the jurisdictions where the targeted websites have their establishment. This way, even more DPAs will have to make a decision in these cases – from Cyprus, to Greece, to Sweden, Romania and many more. About a month after the identical 101 complaints were submitted, the EDPB decided to create a taskforce to “analyse the matter and ensure a close cooperation among the members of the Board”.

In contrast, the complaint against the European Parliament was not part of this set; it was submitted separately, at a later date, to the EDPS, but relied on similar arguments on the issue of international data transfers to the US through Google Analytics and Stripe cookies. Even though it was not part of the 101 complaints, it is clear that the authorities cooperated or at least communicated, with the EDPS making a direct reference to the Austrian proceedings, as shown above.

In other signs of cooperation, both the Dutch DPA and the Danish DPA have published notices immediately after the publication of the Austrian decision to alert organizations that they may soon issue new guidance in relation to the use of Google Analytics, specifically referring to the Austrian case. Of note, the Danish DPA highlighted that “as a result of the decision of the Austrian DPA” it is now “in doubt whether – and how – such tools can be used in accordance with data protection law, including the rules on transfers of personal data to third countries”. It also called for a common approach of DPAs on this issue: “it is essential that European regulators have a common interpretation of the rules”, since data protection law “intends to promote the internal market”. 

In the end, the DPAs are applying findings from a judgment of the CJEU, which has ultimate authority over the interpretation of EU law that must be applied across all EU Member States. All of this indicates that a series of similar decisions is likely to be published successively in the short to medium term, with little chance of significant variation. This is why the two cases summarized here can be seen as the first two dominoes to fall. 

This domino effect, though, will not only be about the 101 cases and the specific cookies they target. It eventually concerns all US-based service providers and businesses that receive personal data from the EU potentially covered by the broad reach of FISA Section 702 and EO 12333; all EU-based organizations, from website operators to businesses, schools, and public agencies, that use the services provided by the former or engage them as business partners and disclose personal data to them; and it may well affect all EU-based businesses that have offices and subsidiaries in the US and that make personal data available to those entities.

Dispatch from the Global Privacy Assembly: The brave new world of international data transfers

The future of international data transfers is multi-dimensional: it is exploring new territories around the world, featuring binding international agreements for effective enforcement cooperation, and slowly entering the agenda of high-level intergovernmental organizations. All of this surfaced from notable keynotes delivered during the 43rd edition of the Global Privacy Assembly Conference, hosted remotely by Mexico’s data protection authority, INAI, on October 18 and 19.

“The crucial importance of data flows is generally recognized as an inescapable fact”, noted Bruno Gencarelli, Head of Unit for International Data Flows and Protection at the European Commission, at the beginning of his keynote address. Indeed, from the shockwaves sent by the Court of Justice of the EU (CJEU) with the Schrems II judgment in 2020, to the increasingly pronounced data localization push in several jurisdictions around the world, underpinned by the reality that data flows have been at the center of daily life during the pandemic, with remote work, school, global conferences and everything else – the field of international data transfers is more important than ever. Because, as Gencarelli noted, “it is also generally recognized that protection should travel with the data”.

Latin America and Asia Pacific, the “real laboratories” of new data protection rules

Gencarelli then observed that the conversation on international data flows has become much more “global and diverse”, shifting from the “traditional transatlantic debate” to a truly global conversation. “We are seeing a shift to other areas of the world, such as Asia-Pacific and Latin America. This doesn’t mean that the transatlantic dimension is not a very important one, it’s actually a crucial one, but it is far from being the only one”, he said. These remarks come as the US Government and the European Commission have been negotiating for more than a year a framework for data transfers to replace the EU-US Privacy Shield, invalidated by the CJEU in July 2020.

In fact, according to Gencarelli, “Latin America and Asia-Pacific are today the real laboratories for new data protection rules, initiatives and solutions. This brings new opportunities to facilitate data flows with these regions, but also between those regions and the rest of the world”. The European Commission has recently concluded adequacy talks with South Korea, after having created the largest area of free data flows for the EU with Japan, two years ago. 

“You will see more of that in the coming months and years, with other partners in Asia and Latin America”, he added, without specifying which jurisdictions are next in the adequacy pipeline. Earlier in the conference, Jonathan Mendoza, Secretary for Personal Data Protection at INAI, had mentioned that Mexico and Colombia are two of the countries in Latin America that have been engaging with the European Commission on adequacy. 

However, until the European Commission officially communicates about advanced adequacy talks or the renewal of pre-GDPR adequacy decisions, we will not know which jurisdictions those are. In an official Communication from 2017, “Exchanging and protecting personal data in a globalized world”, the Commission announced that, “depending on progress towards the modernization of its data protection laws”, India could be one of those countries, together with countries from Mercosur and countries from the “European neighborhood” (this could potentially refer to countries in the Balkans or on the Southern and Eastern borders, like Moldova, Ukraine or Turkey, for example).

Going beyond “bilateral adequacy”: regional “transfer tools”

“Adequacy” of foreign jurisdictions as a ground to allow data to flow freely has become a standard for international data transfers gaining considerable traction beyond the EU in new legislative data protection frameworks (see, for instance, Articles 33 and 34 of Brazil’s LGPD, Article 34(1)(b) of the Indian Data Protection Bill with regard to transfers of sensitive data, or the plans recently announced by the Australian government to update the country’s Privacy Law, at p. 160). Even where adequacy is not expressly recognized as a ground for transfers, like in China’s Personal Information Protection Law (PIPL), the State still has an obligation to promote “mutual recognition of personal information protection rules, standards etc. with other countries, regions and international organizations”, as laid down in Article 12 of the PIPL.

However, as Gencarelli noted in his keynote, at least from the European Commission’s perspective, “beyond that bilateral dimension work, new opportunities have emerged”. He particularly mentioned “the role regional networks and regional organizations can play in developing international transfer tools.” 

One example that he gave was the model clauses for international data transfers adopted by ASEAN this year, just before the European Commission adopted its new set of Standard Contractual Clauses under the GDPR: “We are building bridges between the two sets of model clauses. (…) Those two sets are not identical, they don’t need to be identical, but they are based on a number of common principles and safeguards. Making them talk to each other, building on that convergence can of course significantly facilitate the life of companies present in ASEAN and in the EU”. 

The convergence of data protection standards and safeguards around the world “has reached a certain critical mass”, according to Gencarelli. This will lead to notable opportunities to cover more than two jurisdictions under some transfer tools: “[they] could cover entire regions of the world and on that aspect too you will see interesting initiatives soon with other regions of the world, for instance Latin America. 

“This new approach to transfers can really have a significant effect by covering two regions, a significant network effect to the benefit of citizens, who see that when the data are transferred to a certain region of the world, they are protected by a high and common level of protection, but also for businesses, since it will help them navigate between the requirements of different jurisdictions.”

Entering the world of high level intergovernmental organizations and international trade agreements

One of the significant features of the new landscape of international data transfers is that it has now entered the agenda of intergovernmental fora, like the G7 and G20, in an attempt to counter data localization tendencies and boost digital trade. “This is no longer only a state to state discussion. New players have emerged. (…) If you think of data protection and data flows, we see it at the top of the agenda of G7 and G20, but also regional networks of data protection authorities in Latin America, in Africa, in Europe”, Gencarelli noted.

One particular initiative in this regard, spearheaded by Japan, was extensively explored by Mieko Tanno, the Chairperson of Japan’s Personal Information Protection Commission (PIPC) in her keynote address at the GPA: the Data Free Flow with Trust initiative. “The legal systems related to data flows (…) differ from country to country reflecting their history, national characteristics and political systems. Given that there is no global data governance discipline, policy coordination in these areas is essential for free flow of data across borders. With that in mind, Japan proposed the idea of data free flow with trust at the World Economic Forum annual meeting in 2019. It was endorsed by the world leaders of the G20 Osaka summit in the same year and we are currently making efforts in realizing the concept of DFFT”, Tanno explained. 

A key characteristic of the DFFT initiative, though, is that it emulates existing legal frameworks in participating jurisdictions and does not seem to propose the creation of new solutions that would enhance the protection of personal data in cross-border processing and the trust needed to allow free flows of data. Two days after the GPA conference took place, the G7 group adopted a set of Digital Trade Principles during their meeting in London, including a section dedicated to “Data Free Flow with Trust”, which confirms this approach.

For instance, the DFFT initiative specifically outsources to the OECD the thorny issue of appropriate safeguards for government access to personal data held by private companies, an issue which underpinned both the first and the second invalidation by the CJEU of an adequacy decision issued by the European Commission for a self-regulatory privacy framework adopted by the US. While the OECD efforts in this respect hit a roadblock this summer, the GPA managed to adopt a resolution during the Closed Session of the conference on Government Access to Personal Data held by the Private Sector for National Security and Public Safety Purposes, which includes substantial principles like transparency, proportionality, independent oversight and judicial redress. 

However, one interesting idea surfaced among the proposals related to DFFT that the PIPC promotes for further consideration in these intergovernmental fora, according to Mieko Tanno: the introduction of a global corporate certification system. No further details about this idea were shared at the GPA, but since the DFFT initiative will continue to make its way through agendas of international fora, we might find out more information soon. 

One final layer of complexity added to the international data transfers debate is the intertwining of data flows with international trade agreements. In his keynote, Bruno Gencarelli spoke of “synergies that can be created between trade instruments on the one hand and data protection mechanisms on the other hand”, and promoted breaking down silos between the two as being very important. This is already happening to a certain degree, as shown by the Chart annexed to this G20 Insights policy brief, on “provisions in recent trade agreements addressing privacy for personal data and consumer protection”. 

An essential question to consider for this approach is, as pointed out by Dr. Clarisse Girot, Director of FPF Asia-Pacific, when reviewing this piece, “how far can we build trust with trade agreements?”. Usually, trade agreements “guarantee an openness that is appropriate to the pre-existing level of trust”, as noted in the G20 Insights policy brief.  

EU will seek a mandate to negotiate international agreements for data protection enforcement cooperation

Enforcement cooperation for the application of data protection rules in cross-border cases is one of the key areas that requires significant improvement, according to Bruno Gencarelli: “When you have a major data breach or a major compliance issue, it simultaneously affects several jurisdictions, hundreds of thousands, millions of users. It makes sense that the regulators who are investigating at the same time the same compliance issues should be able to effectively cooperate. It also makes sense because most of the new modernized privacy laws have a so-called extraterritorial effect”.

Gencarelli also noted that the lack of effectiveness of current arrangements for enforcement cooperation for privacy and data protection law surfaces especially when it is compared to other regulatory areas, like competition and financial supervision. In those areas, enforcers have binding tools that allow “cooperation on the ground, exchange of information in real time, providing mutual assistance to each other, carrying out joint investigations”. 

In this sense, the European Union has plans to create such a binding toolbox for regulators. “The EU will, in the context of the implementation of the GDPR, seek a mandate to negotiate such agreements with a number of international partners”, announced Bruno Gencarelli in his keynote address. 

The more than 130 privacy and supervisory authorities from around the world that are members of the GPA are very keen on enhancing their cooperation and making it permanent, both in policy matters and in enforcement, as is evident from the Resolution on the Assembly’s Strategic Direction for 2021-2023 adopted by the GPA during this year’s Conference, under the leadership of Elizabeth Denham and her team at the UK’s Information Commissioner’s Office. This two-year Strategy proposes concrete action, such as “building skills and capacity among members, particularly in relation to enforcement strategies, investigation processes, cooperation in practice and breach assessment”. The binding toolbox for enforcement cooperation that the EU might promote internationally will without a doubt boost these initiatives. 

In a sign that, indeed, the data protection and privacy debate is increasingly vibrant outside traditional geographies for this field, Mexico’s INAI was voted as the next Chair of the Executive Committee of the GPA and entrusted to carry out the GPA’s Strategy for the next two years. 

Video recordings of all Keynote sessions at this year’s GPA Annual Conference are available on demand on the Conference’s platform for attendees who registered for the event.

  

At the intersection of AI and Data Protection law: Automated Decision-Making Rules, a Global Perspective (CPDP LatAm Panel)

On Thursday, 15 July 2021, the Future of Privacy Forum (FPF) organised a panel during the CPDP LatAm Conference titled ‘At the Intersection of AI and Data Protection law: Automated Decision Making Rules, a Global Perspective’. The aim of the Panel was to explore how existing data protection laws around the world apply to profiling and automated decision-making practices. In light of the European Commission’s recent AI Regulation proposal, it is important to explore how, and to what extent, existing laws already protect individuals’ fundamental rights and freedoms against automated processing activities driven by AI technologies. 

The panel consisted of Katerina Demetzou, Policy Fellow for Global Privacy at the Future of Privacy Forum; Simon Hania, Senior Director and Data Protection Officer at Uber; Prof. Laura Schertel Mendes, Law Professor at the University of Brasilia and Eduardo Bertoni, Representative for the Regional Office for South America, Interamerican Institute of Human Rights. The panel discussion was moderated by Dr. Gabriela Zanfir-Fortuna, Director for Global Privacy at the Future of Privacy Forum.


Data Protection laws apply to ADM Practices in light of specific provisions and/or of their broad material scope

To kick off the conversation, we presented preliminary results of an ongoing project led by the Global Privacy Team at FPF on Automated Decision Making (ADM) around the world. Seven jurisdictions were presented comparatively: five already have a general data protection law in force (EU, Brazil, Japan, South Korea, South Africa), while two have data protection bills expected to become laws in 2021 (China and India).

For the purposes of this analysis, the following provisions are being examined: the definitions of ‘processing operation’ and ‘personal data’ given that they are two concepts essential for defining the material scope of the data protection law; the principles of fairness and transparency and legal obligations and rights that relate to these two principles (e.g., right of access, right to an explanation, right to meaningful information etc.); provisions that specifically refer to ADM and profiling (e.g., Article 22 GDPR). 

The preliminary findings are summarized in the following points:

Uber, Ola and Foodinho Cases: National Courts and DPAs decide on ADM cases on the basis of existing laws

In recent months, Dutch national Courts and the Italian Data Protection Authority have ruled on complaints brought by employees of the ride-hailing companies Uber and Ola and the food delivery company Foodinho, challenging decisions the companies reached with the use of algorithms. Simon Hania summarised the key points of these decisions. It is worth mentioning that all cases arose in the employment context and were all brought back in 2019, which suggests that more outcomes in ADM cases can be expected in the near future. 

The first Uber case concerned the matching of drivers and riders, which, as the Court found, qualifies as a decision based solely on automated processing but does not produce any ‘legal or similarly significant effect’. Therefore, Article 22 GDPR is not applicable. The second Uber case concerned the deactivation of drivers’ accounts following signals of potentially fraudulent behaviour or misconduct. There, the Court held that Article 22 is not applicable because, as the company showed, there is always human intervention before an account is deactivated and the actual final decision is made by a human. 

The third example presented was the Ola case, in which the Court decided that the company’s decision to withhold drivers’ earnings as a penalty for misconduct qualifies as a decision based solely on automated processing that produces a ‘legal or similarly significant effect’, and therefore Article 22 GDPR applies. 

In the last example, Foodinho, the decision-making on how well couriers perform was indeed deemed by the Italian DPA to be based solely on automated means and to produce a significant effect on the data subjects (the couriers). The problem highlighted was the way the performance metrics were established and, specifically, the accuracy of the profiles created: they were not sufficiently accurate given the significance of the effects they would produce. 
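To make the reasoning across these four cases easier to compare, the minimal sketch below encodes the two-pronged Article 22 threshold test (is the decision based solely on automated processing, and does it produce a legal or similarly significant effect?). It is purely illustrative: the data structure, function and field names are ours, not drawn from any of the rulings, and the per-case flags simplify the outcomes summarized above.

```python
# Illustrative sketch (not any court's methodology): Article 22 GDPR is only
# triggered when a decision is BOTH solely automated AND produces a legal or
# similarly significant effect.
from dataclasses import dataclass

@dataclass
class AdmCase:
    name: str
    solely_automated: bool      # no meaningful human intervention
    significant_effect: bool    # legal or similarly significant effect

def article_22_applies(case: AdmCase) -> bool:
    return case.solely_automated and case.significant_effect

cases = [
    AdmCase("Uber driver-rider matching", True, False),   # no qualifying effect
    AdmCase("Uber account deactivation", False, True),    # human in the loop, so the effect prong was not decisive
    AdmCase("Ola withholding of earnings", True, True),   # Article 22 applies
    AdmCase("Foodinho performance scoring", True, True),  # applies; accuracy was the core issue
]

for c in cases:
    print(f"{c.name}: Article 22 applies -> {article_22_applies(c)}")
```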

This last point spurs a discussion on the importance of the principle of data accuracy, which is often overlooked. Having accurate data as the basis for decision-making is crucial in order to avoid discriminatory practices and achieve fairer AI systems. As Simon Hania emphasised, we should have information available that is fit for purpose in order to reach accurate decisions. This suggests that the data minimisation principle should be understood as data rightsizing, rather than as a requirement to simply minimise the information processed before a decision is reached.

LGPD: Brazil’s Data Protection Law and its application to ADM practices

The LGPD, Brazil’s recently passed data protection law, is heavily influenced by the EU GDPR in general, but also specifically on the topic of ADM processing. Article 20 of the LGPD protects individuals against decisions that are made only on the basis of automated processing of personal data, when these decisions “affect their interests”. The wording of this provision seems to suggest a wider protection than the relevant Article 22 of the GDPR which requires that the decision “has a legal effect or significantly affects the data subject”. Additionally, Article 20 LGPD provides individuals with a right to an explanation and with the right to request a review of the decision. 

In her presentation, Laura Mendes highlighted two points that require further clarification: first, it is still unclear how ‘solely automated’ should be defined; second, it is not clear how deep the review of a decision must be, nor whether the review must be performed by a human. Two further provisions are core to the discussion on ADM practices: 

(a) Art 6 IX LGPD, which introduces the principle of non-discrimination as a separate data protection principle. According to it, processing of data shall not take place for “discriminatory, unlawful or abusive purposes”. 

(b) Article 21 LGPD reads “The personal data relating to the regular exercise of rights by the data subjects cannot be used against them.” As Laura Mendes suggested, Article 21 LGPD is a provision with great potential regarding non-discrimination in ADM. 

Latin America & ADM Regulation: there is no homogeneity in Latin American laws but the Ibero-American Network seems to be setting a common tone

In the last part of the panel discussion, a wider picture of the situation in Latin America was presented. It should be clear that Latin America does not have a common, homogeneous approach towards data protection. For example, Argentina has had a data protection law since 2000, for which it obtained an EU adequacy decision; Chile is in the process of adopting a data protection law but still has a long way to go; and Peru, Ecuador and Colombia are trying to modernize their laws. 

The American Convention on Human Rights recognises a right to privacy and a right to intimacy, but there is still no interpretation by the Inter-American Court of Human Rights of either the right to data protection or, specifically, the topic of ADM practices. However, it should be kept in mind that, as was the case with Brazil’s LGPD, the GDPR has strongly influenced Latin America’s approach to data protection. Another common reference for Latin American countries is the Ibero-American Network, which, as Eduardo Bertoni explained in his talk, does not produce hard law but publishes recommendations that are followed by the respective jurisdictions. Regarding specifically the discussion on ADM, Eduardo Bertoni mentioned the following initiatives taken in the Ibero-American space:

Main Takeaways

While there is an ongoing debate around the regulation of AI systems and automated processing in light of the recently proposed EU AI Act, this panel brought attention to existing data protection laws which are equipped with provisions that protect individuals against automated processing operations. The main takeaways of this panel are the following:

Looking ahead, the debate around the regulation of AI systems will continue to be heated, and the protection of fundamental rights and freedoms in light of automated processing operations will remain a top priority. In this debate we should keep in mind that the proposed AI Regulation is being introduced into an already existing system of laws, such as data protection law, consumer law, labour law, etc. It is important to be clear about the reach and the nature of these laws so as to be able to identify the gap that the AI Regulation or any other future proposal comes to fill. This panel highlighted that ADM and automated processing are not unregulated. On the contrary, current laws protect individuals by putting in place binding overarching principles, legal obligations and rights. At the same time, Courts and national authorities have already started enforcing these laws. 

Watch a recording of the panel HERE.

Read more from our Global Privacy series:

Insights into the future of data protection enforcement: Regulatory strategies of European Data Protection Authorities for 2021-2022

Spotlight on the emerging Chinese data protection framework: Lessons learned from the unprecedented investigation of Didi Chuxing

A new era for Japanese Data Protection: 2020 Amendments to the APPI

Image by digital designer from Pixabay

India’s new Intermediary & Digital Media Rules: Expanding the Boundaries of Executive Power in Digital Regulation


Author: Malavika Raghavan

India’s new rules on intermediary liability and regulation of publishers of digital content have generated significant debate since their release in February 2021. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (the Rules) have:

The majority of these provisions were unanticipated, resulting in a raft of petitions filed in High Courts across the country challenging the validity of various aspects of the Rules, including their constitutionality. On 25 May 2021, the three-month compliance period for some new requirements for significant social media intermediaries (so designated by the Rules) expired, with many intermediaries not yet in compliance, opening them up to liability under the Information Technology Act as well as under wider civil and criminal laws. This has reignited debates about the impact of the Rules on business continuity and liability, citizens’ access to online services, privacy and security. 

Following on FPF’s previous blog highlighting some aspects of these Rules, this article presents an overview of the Rules before deep-diving into critical issues regarding their interpretation and application in India. It concludes by taking stock of some of the emerging effects of these new regulations, which have major implications for millions of Indian users, as well as digital services providers serving the Indian market. 

1. Brief overview of the Rules: Two new regimes for ‘intermediaries’ and ‘publishers’ 

The new Rules create two regimes for two different categories of entities: ‘intermediaries’ and ‘publishers’. Intermediaries have been the subject of prior regulations – the Information Technology (Intermediaries guidelines) Rules, 2011 (the 2011 Rules), now superseded by these Rules. However, the category of “publishers” and the related regime created by these Rules did not previously exist. 

The Rules begin with commencement provisions and definitions in Part I. Part II of the Rules applies to intermediaries (as defined in the Information Technology Act 2000 (IT Act)) who transmit electronic records on behalf of others, including online intermediary platforms (like YouTube, WhatsApp, Facebook). The rules in this part primarily flesh out the protections offered by Section 79 of the IT Act, which gives passive intermediaries the benefit of a ‘safe harbour’ from liability for objectionable information shared by third parties using their services – somewhat akin to the protections under section 230 of the US Communications Decency Act. To claim this protection from liability, intermediaries need to undertake certain ‘due diligence’ measures, including informing users of the types of content that may not be shared, and following content take-down procedures (for which safeguards evolved over time through important case law). The new Rules supersede the 2011 Rules and also significantly expand on them, introducing new provisions and additional due diligence requirements that are detailed further in this blog. 

Part III of the Rules applies to a new, previously non-existent category of entities designated as ‘publishers’. This category is further divided into ‘publishers of news and current affairs content’ and ‘publishers of online curated content’. Part III then sets up extensive requirements for publishers to adhere to specific codes of ethics, onerous content take-down requirements, and a three-tier grievance process with appeals lying to an executive Inter-Departmental Committee of Central Government bureaucrats. 

Finally, the Rules contain two provisions relating to content-blocking orders that apply to all entities (i.e. intermediaries and publishers). They lay out a new process by which Central Government officials can issue directions to intermediaries and publishers to delete, modify or block content, either following a grievance process (Rule 15) or through “emergency” blocking orders which may be passed ex parte (Rule 16). These provisions stem from powers to issue directions to intermediaries to block public access to any information through any computer resource (Section 69A of the IT Act). Interestingly, they have been introduced separately from the existing rules for blocking, the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009.

2. Key issues for intermediaries under the Rules

2.1 A new class of ‘social media intermediaries’

The term ‘intermediary’ is a broadly defined term in the IT Act covering a range of entities involved in the transmission of electronic records. The Rules introduce two new sub-categories, being:

Given that a popular messaging app like WhatsApp has over 400 million users in India, the threshold appears to be fairly conservative. The Government may order any intermediary to comply with the same obligations as significant social media intermediaries (SSMIs) (under Rule 6) if its services are adjudged to pose a risk of harm to national security, the sovereignty and integrity of India, India’s foreign relations or public order.

SSMIs have to follow substantially more onerous “additional due diligence” requirements to claim the intermediary safe harbour (including mandatory traceability of message originators and proactive automated screening, as discussed below). These new requirements raise privacy and data security concerns, as they extend beyond the traditional ideas of platform “due diligence”: they potentially expose the content of private communications and in doing so create new privacy risks for users in India.

2.2 Additional requirements for SSMIs: resident employees, mandated message traceability, automated content screening 

Extensive new requirements are set out in the new Rule 4 for SSMIs. 

Provisions that mandate modifications to the technical design of encrypted platforms to enable traceability seem to go beyond merely requiring intermediary due diligence. Instead they appear to draw on separate Government powers relating to interception and decryption of information (under Section 69 of the IT Act). In addition, separate stand-alone rules laying out procedures and safeguards for such interception and decryption orders already exist in the Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009. Rule 4(2) even acknowledges these provisions, raising the question of whether these Rules (relating to intermediaries and their safe harbours) can be used to expand the scope of section 69 or the rules thereunder. 

Proceedings initiated by WhatsApp LLC in the Delhi High Court, and by Free and Open Source Software (FOSS) developer Praveen Arimbrathodiyil in the Kerala High Court, have both challenged the legality and validity of Rule 4(2) on grounds including that it is ultra vires, going beyond the scope of its parent statutory provisions (ss. 79 and 69A) and the intent of the IT Act itself. Substantively, the provision is also challenged on the basis that it would violate users’ fundamental rights, including the right to privacy and the right to free speech and expression, due to the chilling effect that the stripping back of encryption will have.

Though the objective of Rule 4(4), which mandates proactive automated screening of certain content, is laudable (i.e. to limit the circulation of violent or previously removed content), the move towards proactive automated monitoring has raised serious concerns regarding censorship on social media platforms. Rule 4(4) appears to acknowledge the deep tensions that this requirement raises with privacy and free speech concerns, as seen in the provisions that require these screening measures to be proportionate to the free speech and privacy of users, to be subject to human oversight, and to undergo reviews of the automated tools to assess fairness, accuracy, propensity for bias or discrimination, and impact on privacy and security. However, given the vagueness of this wording compared to the trade-off of losing intermediary immunity, scholars and commentators are noting the obvious potential for ‘over-compliance’ and excessive screening out of content. Many (including the petitioner in the Praveen Arimbrathodiyil matter) have also noted that automated filters are not sophisticated enough to differentiate between violent unlawful images and legitimate journalistic material. The concern is that such measures could screen out ‘valid’ speech and expression at large scale, with serious consequences for constitutional rights to free speech and expression, which also protect ‘the rights of individuals to listen, read and receive the said speech’ (Tata Press Ltd v. Mahanagar Telephone Nigam Ltd, (1995) 5 SCC 139). 

Requirements such as the appointment of resident employees and grievance redress mechanisms appear to be aimed at creating more user-friendly networks of intermediaries. However, the imposition of a single set of requirements is especially onerous for smaller or volunteer-run intermediary platforms, which may not have the income streams or staff to provide such a mechanism. Indeed, the petition in the Praveen Arimbrathodiyil matter has challenged certain of these requirements as being a threat to the future of the volunteer-led Free and Open Source Software (FOSS) movement in India, by placing similar requirements on small FOSS initiatives as on large proprietary Big Tech intermediaries.  

Other obligations that stipulate turn-around times for intermediaries include (i) a requirement to remove or disable access to content within 36 hours of receipt of a Government or court order relating to unlawful information on the intermediary’s computer resources (under Rule 3(1)(d)), and (ii) a requirement to provide information within 72 hours of receiving an order from an authorised Government agency undertaking investigative activity (under Rule 3(1)(j)). 

Similar to the concerns with automated screening, there are concerns that the new grievance process could lead to private entities becoming the arbiters of appropriate content and free speech – a position that was specifically rejected in a seminal 2015 Supreme Court decision, which clarified that a Government or court order is needed for content take-downs.  

3. Key issues for the new ‘publishers’ subject to the Rules, including OTT players

3.1 New Codes of Ethics and three-tier redress and oversight system for digital news media and OTT players 

Digital news media and OTT players have been designated as ‘publishers of news and current affairs content’ and ‘publishers of online curated content’ respectively in Part III of the Rules. Each category has then been subjected to a separate Code of Ethics. In the case of digital news media, the Codes applicable to newspapers and cable television have been applied. For OTT players, the Appendix sets out principles regarding the content that can be created and display classifications. To enforce these codes and to address grievances from the public about their content, publishers are now mandated to set up a grievance system which forms the first tier of a three-tier “appellate” system, culminating in an oversight mechanism run by the Central Government with extensive powers of sanction.  

At least five legal challenges have been filed in various High Courts challenging the competence and authority of the Ministry of Electronics & Information Technology (MeitY) to pass the Rules, and the Rules’ validity, namely: (i) in the Kerala High Court, LiveLaw Media Private Limited vs Union of India WP(C) 6272/2021; in the Delhi High Court, three petitions tagged together, being (ii) Foundation for Independent Journalism vs Union of India WP(C) 3125/2021, (iii) Quint Digital Media Limited vs Union of India WP(C) 11097/2021, and (iv) Sanjay Kumar Singh vs Union of India and others WP(C) 3483/2021; and (v) in the Karnataka High Court, Truth Pro Foundation of India vs Union of India and others, W.P. 6491/2021. This is in addition to a fresh petition filed on 10 June 2021, TM Krishna vs Union of India, challenging the entirety of the Rules (both Parts II and III) on the basis that they violate the rights to free speech (under Article 19 of the Constitution) and privacy (including under Article 21 of the Constitution), and that they fail the test of arbitrariness (under Article 14), being manifestly arbitrary and falling foul of principles of delegation of powers. 

Some of the key issues emerging from these Rules in Part III and the challenges to them are highlighted below. 

3.2 Lack of legal authority and competence to create these Rules

There has been substantial debate on the lack of clarity regarding the legal authority of the Ministry of Electronics & Information Technology (MeitY) under the IT Act. These concerns arise at various levels. 

First, there is a concern that the first two tiers of the system (Levels I and II) result in a privatisation of adjudications relating to the free speech and expression of creative content producers – matters which would otherwise be litigated in Courts and Tribunals as free speech cases. As noted by many (including the LiveLaw petition at page 33), this could have the effect of overturning the judicial precedent in Shreya Singhal v. Union of India ((2013) 12 S.C.C. 73), which specifically read down s. 79 of the IT Act to avoid a situation where private entities were the arbiters determining the legitimacy of takedown orders. Second, despite referring to “self-regulation”, this system is subject to executive oversight (unlike the existing models for offline newspapers and broadcasting).

The Inter-Departmental Committee is entirely composed of Central Government bureaucrats, and it may review complaints escalated through the three-tier system or referred directly by the Ministry, following which it can deploy a range of sanctions, from warnings, to mandated apologies, to deleting, modifying or blocking content. This also raises the question of whether the Committee meets the legal requirements for an administrative body undertaking a ‘quasi-judicial’ function, especially one that may adjudicate on matters of rights relating to free speech and privacy. Finally, while the objective of creating some standards and codes for such content creators may be laudable, it is unclear whether such an extensive oversight mechanism with powers of sanction over online publishers can be validly created under the rubric of intermediary liability provisions.  

4. New powers to delete, modify or block information for public access 

As described at the start of this blog, the Rules add new powers for the deletion, modification and blocking of content from intermediaries and publishers. While section 69A of the IT Act (and the Rules thereunder) does include blocking powers for the Government, these powers only exist vis-à-vis intermediaries. Rule 15 expands this power to ‘publishers’. It also provides a new avenue for such orders against intermediaries, outside of the existing rules for blocking information under the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009.

Graver concerns arise from Rule 16, which allows emergency orders blocking information to be passed without giving publishers or intermediaries an opportunity to be heard. There is a provision for such an order to be reviewed by the Inter-Departmental Committee within two days of its issue. 

Both Rules 15 and 16 apply to all entities contemplated in the Rules. Accordingly, they greatly expand executive power and oversight over digital media services in India, including social media, digital news media and OTT on-demand services. 

5. Conclusions and future implications

The new Rules in India have opened up deep questions for online intermediaries and providers of digital media services serving the Indian market. 

For intermediaries, this creates a difficult and even existential choice: the requirements (especially those relating to traceability and automated screening) appear to set an improbably high bar given the reality of their technical systems. However, failure to comply will result not only in the loss of the safe harbour from liability but, as seen in the new Rule 7, will also open them up to punishment under the IT Act and criminal law in India. 

For digital news and OTT players, the consequences of non-compliance and the level of enforcement remain to be understood, especially given the open questions regarding the validity of the legal basis for these rules. Given the numerous petitions filed against the Rules, there is also substantial uncertainty regarding their future, although the Rules themselves have the full force of law at present. 

Overall, it does appear that attempts to create a ‘digital media’ watchdog would be better dealt with in standalone legislation, potentially sponsored by the Ministry of Information and Broadcasting (MIB), which has the traditional remit over such areas. Indeed, the administration of Part III of the Rules has been delegated by MeitY to the MIB, pointing to the genuine split in competence between these Ministries.  

Finally, the potential overlaps with India’s proposed Personal Data Protection Bill (if passed) also create tensions for the future. It remains to be seen whether the provisions on traceability will survive the test of constitutional validity set out in India’s privacy judgment (Justice K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1). Irrespective of that determination, the Rules appear to have some dissonance with the data retention and data minimisation requirements seen in the last draft of the Personal Data Protection Bill, not to mention other obligations relating to Privacy by Design and data security safeguards. Interestingly, although the Bill was released in December 2019, the definition of ‘social media intermediary’ included in an explanatory clause to its section 26(4) closely tracks the definition in Rule 2(w), while also departing from it by carving out certain intermediaries. This is already resulting in moves such as Google’s plea of 2 June 2021 in the Delhi High Court asking for protection from being declared a social media intermediary. 

These new Rules have laid bare the inherent tensions within the realm of digital regulation between the goals of freedom of speech and expression and the right to privacy, on the one hand, and the competing governance objectives of law enforcement (such as limiting the circulation of violent, harmful or criminal content online) and national security, on the other. The ultimate legal effect of these Rules will be determined as much by the outcome of the various petitions challenging their validity as by the enforcement challenges raised by casting such a wide net – one that covers millions of users and thousands of entities, all engaged in creating India’s growing digital public sphere.

Photo credit: Gerd Altmann from Pixabay

Read more Global Privacy thought leadership:

South Korea: The First Case where the Personal Information Protection Act was Applied to an AI System

China: New Draft Car Privacy and Security Regulation is Open for Public Consultation

A New Era for Japanese Data Protection: 2020 Amendments to the APPI

China: New Draft Car Privacy and Security Regulation is Open for Public Consultation


by Chelsey Colbert

The author thanks Hunter Dorwart for his contribution to this text.

The Cyberspace Administration of China (CAC) released a draft regulation on car privacy and data security on May 12, 2021. China has been very active in automated vehicle development and deployment, and last fall it also proposed a draft comprehensive privacy law, which is moving towards adoption, likely by the end of this year.

The draft car privacy and data security regulation (“Several Provisions on the Management of Automobile Data Security”; hereinafter, “draft regulation”) is interesting for those tracking automated vehicle (AV) and privacy regulations around the world and is relevant beyond China – not only due to the size of the Chinese market and its potential impact on all actors in the “connected cars” space present there, but also because dedicated legislation for car privacy and data security is novel for most jurisdictions. In fact, the draft regulation raises several interesting privacy and data protection aspects worthy of further consideration, such as its strict rules on consent, privacy by design, and data localization requirements. The CAC is seeking public comment on the draft, and the deadline for comments is June 11, 2021. 

The draft regulation complements other regulatory developments around connected and automated vehicles and data. For example, on April 29, 2021, the National Information Security Standardization Technical Committee (TC 260), which is jointly administered by the CAC and the Standardization Administration of China, published a draft Standard on Information Security Technology Security Requirements for Data Collected by Connected Vehicles. The Standard sets forth security requirements for data collection to ensure compliance with other laws and facilitate a safe environment for networked vehicles. Standards like this are an essential component of corporate governance in China and notably fill in compliance gaps left in the law. 

The publication of the draft regulation and the draft standard indicate that the Chinese government is turning its attention towards the data and security practices of the connected cars industry. Below we explain the key aspects of this draft regulation, summarize some of the noteworthy provisions, and conclude with the key takeaways for everyone in the car ecosystem. 

Broad scope of covered entities: from OEMs to online ride-hailing companies

The draft regulation aims to strengthen the protection of “personal information” and “important data,” regulate data processing related to cars, and maintain national security and public interests. The scope of application of this draft regulation is fairly broad, both in terms of who it applies to and the types of data it covers. 

The draft regulation applies to “operators” that collect, analyze, store, transmit, query, utilize, delete, or provide overseas (activities collectively referred to as processing) personal information or important data during the design, production, sales, operation, maintenance, and management of cars, “within the territory of the People’s Republic of China.” 

“Operators” are entities that design or manufacture cars, or service institutions such as OEMs (original equipment manufacturers), component and software providers, dealers, maintenance organizations, online car-hailing companies, insurance companies, etc. (Note: The draft regulation includes “etc.,” here and throughout, which appears to mean that it is a non-exhaustive list.)

Covered data: Distinction among “personal information,” “important data,” and “sensitive personal information”

The draft regulation considers three data types, with an emphasis on “personal information” and “important data”, which are defined terms under Article 3. A third type is also mentioned within the draft, at Article 8, and in a separate press release document: “sensitive personal information.”  

Personal information includes data from car owners, drivers, passengers, pedestrians, etc. (a non-exhaustive list), and also includes information from which personal identity can be inferred or which describes personal behavior. This is a broad definition and is notable because it explicitly includes information about passengers and pedestrians. As business models evolve and the ecosystem of players in the car space grows, it has become more important to consider individuals other than just the driver or registered user of the car. The draft regulation appears to use the words “users” and “personal information subjects” when referring to this group of individuals broadly, and also uses “driver,” “owner,” and “passenger” throughout.

The second type of data covered is “important data,” which includes:

The inclusion of this data type is notable because it is defined in addition to “sensitive personal information” and includes data about users and infrastructure (e.g., the car charging network). Article 11 prescribes that when handling important data, operators should report to the provincial cyberspace administration and relevant departments, in advance, the type, scale, scope, storage location and retention period of the data, the purposes for collection, whether it is shared with a third party, etc. (presumably in advance of processing this type of data, but this is something that may need to be clarified).

The third type of data mentioned in the draft regulation is “sensitive personal information,” and this includes vehicle location, driver or passenger audio and video, and data that can be used to determine illegal driving. There are certain obligations for operators processing this type of data (Articles 8 and 16).

Article 8 prescribes that where “sensitive personal information” is collected or provided outside of the vehicle, operators must meet certain obligations:

The definitions of these three types of data mirror similar definitions in other Chinese laws or draft laws currently being considered for adoption, such as the Civil Code, the Personal Information Protection Law, and the Cybersecurity Law. Consistency across these laws indicates a harmonization of China’s emerging data governance regulatory model. 

Obligations based on the Fair Information Practice Principles

Articles 4 – 10 include many of the fair information practice principles, such as purpose specification and data minimization in Article 4 and security safeguards in Article 5, as well as privacy by design (Articles 6(4), 6(5), and 9). There are a few notable provisions worth discussing in more detail which are organized under the following headings below: local processing, transparency and notice, consent and user control, biometric data, annual data security management, and violations and penalties. 

Local (“on device”) processing

Personal information and important data should be processed inside the vehicle, wherever possible (Article 6(1)). Where data processing outside of the car is necessary, operators should ensure the data has been anonymized wherever possible (Article 6(2)).

Transparency and Notice

When processing personal information, the operator is required to give notice of the types of data being collected and provide the contact information for the person responsible for processing user rights (Article 7). This notice can be provided through user manuals, onboard display panels, or other appropriate methods. The notice should include the purpose for collection, the moment that personal information is collected, how users can stop the collection, where and for how long data is stored, and how to delete data stored in the car and outside of the vehicle.

Regarding sensitive personal information (Article 8(3)), the operator is obliged to inform the driver and passengers that this data is being collected through a display panel or a voice in the car. This provision does not include “user manuals” as an example of how to provide notice, which potentially means that this data type is worthy of more active notice than personal information. This is notable because operators cannot rely on notice being given through a privacy notice placed on a website or in the car’s manual.

Consent and User Control, including a two-week deletion deadline

Article 9 requires operators to obtain consent to collect personal information, except where laws do not require consent. This provision notes that consent is often difficult to obtain (e.g., collecting audio and video of pedestrians outside the car). Because of this difficulty, data should only be collected when necessary and should be processed locally in the vehicle. Operators should also employ privacy by design measures, such as de-identification on devices.

Article 8(2) (requirements when collecting sensitive personal information) requires operators to obtain the driver’s consent and authorization each time the driver enters the car. Once the driver leaves the driver’s seat, that consent session has ended, and a new one must begin once the driver gets back into the seat. The driver must be able to stop the collection of this type of data at any time, be able to view and make inquiries about the data collected, and request the deletion of the data (the operator has two weeks to delete the data). It is worth noting that Article 8 includes six subsections, some of which appear to apply only to the driver or owner and not passengers or pedestrians. 

These consent and user control requirements are quite notable and would have a non-trivial impact on the design of the car and the user experience, as well as on the internal operations of the operator. The user experience could suffer if consent and authorization were required each time the driver got into the driver’s seat; a comparable experience is visiting a website and having to close consent-related pop-ups before being able to read or use it on every visit. Furthermore, stopping the collection of location data, video data, and other telematics data (if used to determine illegal driving) could also present safety and functionality risks and cause the car not to operate as intended or safely. These are some of the areas where stakeholders are expected to submit comments during the public consultation. 
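To illustrate how the per-entry consent requirement and the two-week deletion deadline described above might translate into system behaviour, here is a minimal, hypothetical sketch. The class and function names are ours, not the regulation’s: a consent “session” starts when the driver takes the seat, ends when they leave it, and deletion requests are tracked against a 14-day deadline.

```python
# Hypothetical sketch of the consent lifecycle suggested by Article 8(2):
# consent is tied to a single occupancy of the driver's seat, collection of
# sensitive data can be stopped at any time, and deletion requests must be
# honoured within two weeks. Illustrative only, not an implementation of the draft.
from datetime import datetime, timedelta

DELETION_DEADLINE = timedelta(weeks=2)

class SensitiveDataConsentSession:
    def __init__(self):
        self.active = False            # True only after explicit consent
        self.collection_allowed = False

    def driver_enters_seat(self, consent_given: bool):
        # Fresh consent and authorization are needed on every entry.
        self.active = consent_given
        self.collection_allowed = consent_given

    def driver_stops_collection(self):
        # The driver can halt collection of sensitive data at any time.
        self.collection_allowed = False

    def driver_leaves_seat(self):
        # Leaving the seat ends the consent session entirely.
        self.active = False
        self.collection_allowed = False

def deletion_due_by(request_time: datetime) -> datetime:
    """Latest date by which the operator must delete the requested data."""
    return request_time + DELETION_DEADLINE

session = SensitiveDataConsentSession()
session.driver_enters_seat(consent_given=True)  # consent valid for this occupancy only
session.driver_leaves_seat()                    # consent lapses; must be re-obtained
print(deletion_due_by(datetime(2021, 6, 1)))    # -> 2021-06-15 00:00:00
```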

Biometric data

Biometric data is mentioned throughout the draft regulation, as it is implicitly or explicitly included in the definitions of personal information, important data, and sensitive personal information. It is an increasingly common data type collected by cars and deserves special attention, and it is specifically addressed in Article 10, which concerns the biometric data of drivers. Article 10 would require that the biometric data of the driver (e.g., fingerprints, voiceprints, faces, heart rhythms, etc.) only be collected for the convenience of the user or to increase the security of the vehicle. Operators should also provide alternatives to biometrics. 

Data localization

Articles 12-15 and 18 concern data localization. Both personal information and important data should be stored within China, but if it is necessary to store elsewhere, the operator must complete an “outbound security assessment” through the State Cyberspace Administration, and the operator is permitted to send only the data specified in that assessment overseas. The operator is also responsible for overseeing the overseas recipient’s use of the data to ensure appropriate security and for handling all user complaints. 

Annual data security management status

Article 17 places additional obligations on operators to report their annual data security management status to relevant authorities before December 15 of each year when:

  1. They process personal information of more than 100,000 users, or
  2. They process important data. 

Given that this draft regulation applies to passengers and pedestrians in addition to drivers, it would not take long for the threshold of 100,000 users to be met, especially for operators who manage a fleet of cars for rental or ride-hail. Additionally, since the definitions of personal information and important data are so broad, it is likely that many operators would trigger this reporting obligation. The obligations include recording the contact information of the person responsible for data security and handling user rights; recording relevant information about the scale and scope of data processing; recording with whom data is shared domestically; and other security conditions to be specified. If data is transferred overseas, there are additional obligations (Article 18). 
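As a rough illustration of how broadly Article 17’s reporting trigger could sweep, the sketch below simply restates the two conditions listed above as a check; the function name and threshold constant are ours and are illustrative only.

```python
# Minimal sketch of the Article 17 annual reporting trigger: the obligation
# attaches if an operator processes personal information of more than
# 100,000 users OR processes any "important data". Illustrative only.
USER_THRESHOLD = 100_000

def must_report_annually(users_with_personal_info: int,
                         processes_important_data: bool) -> bool:
    return users_with_personal_info > USER_THRESHOLD or processes_important_data

# A fleet operator quickly crosses the user threshold once passengers and
# pedestrians are counted, not just drivers.
print(must_report_annually(250_000, False))  # True
print(must_report_annually(40_000, True))    # True (important data alone triggers it)
print(must_report_annually(40_000, False))   # False
```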

Violations and Penalties

Violation of the regulations would result in punishment in accordance with the “Network Security Law of the People’s Republic of China” and other laws and regulations. Operators may also be held criminally responsible. 

Conclusion 

China’s draft car privacy and security regulation provides relevant information for policymakers and others thinking carefully about privacy and data protection regarding cars. The draft regulation’s scope is very broad and includes many players in the mobility ecosystem beyond OEMs and suppliers (e.g., online car-hailing companies and insurance companies).

With regard to user rights, the draft regulation recognizes that individuals other than the driver will have their personal information processed and extends data protection and user rights to them (e.g., passengers and pedestrians). The draft regulation would apply to three broad categories of data (personal information, important data, and sensitive personal information).

In privacy and data protection laws from the EU to the US, we have continued to see different obligations arise depending on the type or sensitivity of data and how data is used. This underscores the need for organizations to have a complete data map; indeed, it is crucial that all operators in the connected and automated car ecosystem have a sound understanding of what data is being collected from which person and where that data is flowing. 

The draft regulation also highlights the importance of transparency and notice, as well as the challenges of consent and user control. It is a challenge to appropriately notify drivers, passengers, and pedestrians about all of the data types being collected by a vehicle.

Privacy and data protection laws will have a direct impact on the design, user experience, and even the enjoyment and safety of cars. It is crucial that all stakeholders are given the opportunity to provide feedback in the drafting of privacy and data protection laws that regulate data flows in the car ecosystem and that privacy professionals, engineers, and designers become much more comfortable working together to operationalize these rules. 

Image by Tayeb MEZAHDIA from Pixabay 

Check out other blogs in the Global Privacy series:

A New Era for Japanese Data Protection: 2020 Amendments to the APPI

The Right to Be Forgotten is Not Compatible with the Brazilian Constitution. Or is it?

India: Massive Overhaul of Digital Regulation with Strict Rules for Take-down of Illegal Content and Automated Scanning of Online Content

A New Era for Japanese Data Protection: 2020 Amendments to the APPI


Authors: Takeshige Sugimoto, Akihiro Kawashima, Tobyn Aaron from S&K Brussels LPC; Authors can be contacted at [email protected].

The recent amendments to Japan’s data protection law (the Act on the Protection of Personal Information, henceforth the ‘APPI‘) contain a number of new provisions certain to alter – and for many foreign businesses, transform – the ways in which companies conduct business in or with Japan. In addition to greatly expanding data subject rights, most notably, the amendments to the APPI (the ‘2020 Amendments‘): 

(i) eliminate all former restrictions on the APPI’s extraterritorial application; 

(ii) considerably heighten companies’ disclosure and due diligence obligations with respect to overseas data transfers; 

(iii) introduce previously unregulated categories of personal information (each with corresponding obligations for companies), including ‘pseudonymously processed information’ and ‘personally referable information’; and 

(iv) for the first time, mandate notifications for qualifying data breaches.

The 2020 Amendments will be enforced by the Personal Information Protection Commission of Japan (the “PPC”), pursuant to forthcoming PPC guidelines alongside the amended Enforcement Rules for the Act on the Protection of Personal Information (the ‘amended PPC Rules‘) and the amended Cabinet Order to Enforce the Act on the Protection of Personal Information (the ‘amended Cabinet Order‘) (both published on March 24, 2021).

As the 2020 Amendments are set to enter into force on April 1, 2022, Japanese and global companies that conduct business in or with Japan have just under one year to bring their operations into compliance. To facilitate such efforts, this blog post describes those provisions of the 2020 Amendments likely to have the greatest impact on businesses, as well as current events in Japan which will affect their implementation and should inform the manner by which companies address enforcement risks and compliance priorities.

1. LINE Data Transfers to China: A Wake-Up Call for Japan

To appreciate the effect that the 2020 Amendments will have on the Japanese data protection space, one must first consider the current political and societal contexts in Japan in which the 2020 Amendments will be introduced – and enforced – beginning with a recent incident of note involving LINE Corporation. 

In March 2021, headlines across Japan shocked locals: Japan-based messaging app LINE, actively used and trusted by approximately 86 million Japanese citizens, had been transferring users’ personal information, including names, IDs and phone numbers, to a Chinese affiliate. It is neither unusual nor unlawful for Japanese tech companies to outsource certain of their operations, including personal information processing, overseas. But for Japanese nationals, the LINE matter is different for a number of important reasons, not least of which is the Japanese population’s awareness of the Chinese Government’s broad access rights to personal data managed by private-sector companies in China, pursuant to China’s National Intelligence Law.

LINE is not only the most utilized messaging application in Japan; it also occupies a special place in the country’s historical and cultural consciousness. When Japan was hit by the 2011 earthquake, use of voice networks failed and email exchanges were delayed, as citizens struggled to communicate with, and confirm the safety of, their loved ones. And so, LINE was born – a simple messaging and online calling tool to serve as a communications hotline in case of emergency. A decade on, LINE has become the major – and for many the only – means of communication in Japan – particularly in today’s socially-distanced world.

For the Japanese Government too, LINE serves a crucial role: national- and municipal-level government bodies use LINE for official communications, including communications involving sensitive personal information, such as COVID-19 health surveys. News of LINE’s transfer of user data to China, including the potential for access by the Chinese Government, therefore horrified private citizens and public officials alike.

On March 31, 2021, the PPC launched an official investigation into LINE and its parent company, Z Holdings, over their management of personal information. Until such investigation is concluded, whether and to what extent LINE violated the APPI (and in particular, its provisions governing third party access and international transfers) will remain uncertain. Regardless, the impact of this matter on the Japanese data privacy space is already unfolding. In late March, a number of high-ranking Japanese politicians (including Mr. Akira Amari, Chairperson of the Rule-Making Strategy Representative Coalition of the Liberal Democratic Party of Japan) sent the PPC and other relevant Government ministries strongly-worded messages urging immediate action with respect to LINE, and more broadly, calling for a risk assessment to be conducted vis-à-vis all personal information transfers to China by companies in Japan.

Several days later, Japanese media reported that the PPC had requested members of both the KEIDANREN (the Japan Business Federation, comprised of 1,444 representative companies in Japan) and the Japan Association of New Economy (comprised of 534 member companies in Japan) to report their personal information transfer practices involving China, and to detail the privacy protection measures in place with respect to such transfers. For any APPI violations revealed, the PPC will issue a recommendation, potentially followed by an injunctive order, the latter of which carries a criminal penalty (including possible imprisonment) if not implemented.

Importantly, recent political support for stronger data protection measures extends beyond transfers to China. For instance, Mr. Amari has also reportedly called on the PPC to broadly limit permissible overseas transfers of personal information to those countries with data protection standards equivalent to the APPI (a limitation which, if implemented, would greatly surpass restrictions on transfer under both the current APPI and the 2020 Amendments).

Although the PPC has yet to respond, it is evident that both political and popular sentiment in Japan strongly favor enhanced protections for Japanese persons’ personal information. The inevitable outcome of such sentiment, which may be further amplified depending on the PPC’s forthcoming conclusions regarding the LINE matter, will be the increasingly stringent enforcement of the APPI and its 2020 Amendments, and potentially, further amendments thereto. As recent events in Japan demonstrate, this transformation has already begun to take effect. Companies conducting business in or with Japan, whether Japanese or foreign, should therefore pay close attention to the Japanese data privacy space over the course of this year.

2. Broadened Extraterritorial Reach and International Transfer Restrictions

For ‘Personal Information Handling Business Operators’ (henceforth ‘Operators‘, a term used in joint reference to controllers and processors, upon which the APPI imposes the same obligations) arguably the greatest impact of the 2020 Amendments will derive from their drastic revisions to Article 75 (extraterritoriality) and Article 24 (international transfer).

To date, the APPI’s extraterritorial reach has been limited to a handful of its articles, primarily those governing purpose limitation and lawful acquisition of personal information (‘PI‘) by overseas Operators. From April 2022, however, Article 75 of the amended APPI will, without exception, fully bind all private-sector overseas entities, regardless of their size, which process the PI, pseudonymously processed PI or anonymously processed PI of individuals who are in Japan, in connection with supplying goods or services thereto.

With respect to international transfers, Article 24 of the current legislation prohibits the transfer of PI to a ‘third party’ outside of Japan absent the data subject’s prior consent, unless (i) the recipient country has been white-listed by the PPC or (ii) the recipient third party upholds data protection standards equivalent to the APPI (in practice, these would generally be imposed contractually). Otherwise, international transfers may also be conducted pursuant to legal obligation or necessity (for the protection of human life, public interest or governmental cooperation, provided that for each, the data subject’s consent would be difficult to obtain). The APPI’s international transfer mechanisms generally conform to those prescribed by other global data protection regimes, loosely resembling the EU GDPR’s adequacy decisions (with respect to (i) above), and standard contractual clauses or binding corporate rules (with respect to (ii) above, although there are no PPC-provided contractual clauses, and non-binding arrangements such as the APEC CBPR System are PPC-approved).

The 2020 Amendments and amended PPC Rules do not modify the above transfer mechanisms, but they do narrow their scope in two key aspects. First, pursuant to Article 24(2) of the 2020 Amendments, transfers conducted on the basis of data subject consent will henceforth require the transferring Operator (on top of preexisting notification obligations) to inform the data subject in advance as to the name of the recipient country, and the levels of PI protection provided by both that country (assessed using an “appropriate and reasonable method”) and the recipient third party. Absent such information, data subject consent will be rendered uninformed and the transfer, invalid.

Of greater impact on the transferring Operator, however, will be the second modification (pursuant to Article 24(3) of the 2020 Amendments): in the event that an international transfer is conducted in reliance on contractually or otherwise imposed APPI data protection standards (the primary transfer mechanism on which Operators in Japan rely), such contractual safeguards alone are to be rendered insufficient. Going forward, the transferring Operator must, in addition to imposing APPI-equivalent obligations upon a recipient third party, (i) take “necessary action to ensure continuous implementation” of such obligations by the recipient; and (ii) inform the data subject, upon request, regarding the actions the Operator has taken.

With respect to (i) above, the amended PPC Rules interpret “necessary action to ensure continuous implementation” as requiring the transferring Operator to: (1) periodically check the implementation status and content of the APPI-equivalent measures by the recipient third party, and assess (by an “appropriate and reasonable method”) the existence of any foreign laws which might impact such implementation; (2) take necessary and appropriate actions to remedy any obstacles that are found; and (3) suspend all PI transfer to the third-party recipient, should its continuous implementation of the APPI-equivalent measures become difficult.
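As a rough illustration of the three steps above, the following hypothetical Python sketch models the outcome of a periodic review of a recipient third party; the class, field, and function names are assumptions made for this example and do not come from the amended PPC Rules.

```python
# A minimal sketch, assuming a simplified periodic review of a recipient third
# party under the "necessary action to ensure continuous implementation" steps.

from dataclasses import dataclass

@dataclass
class RecipientReview:
    measures_implemented: bool      # (1) APPI-equivalent measures confirmed in place
    conflicting_foreign_law: bool   # (1) a foreign law that could impede implementation
    obstacle_remediable: bool       # (2) can any identified obstacle be remedied?

def periodic_transfer_review(review: RecipientReview) -> str:
    """Decide the transferring Operator's next step after a periodic check."""
    if review.measures_implemented and not review.conflicting_foreign_law:
        return "continue transfers"
    if review.obstacle_remediable:
        return "take necessary and appropriate remedial action"   # step (2)
    return "suspend all PI transfers to this recipient"           # step (3)

print(periodic_transfer_review(
    RecipientReview(measures_implemented=True,
                    conflicting_foreign_law=True,
                    obstacle_remediable=False)))
# -> suspend all PI transfers to this recipient
```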

In addition, following receipt of a data subject’s request for information (pursuant to (ii) above), the amended PPC Rules specify that the transferring Operator must, without undue delay, inform the requesting data subject of each of the following:

(1)  the manner by which the APPI-equivalent measures were established by (or presumably with) the recipient third party (such as a data processing agreement or memorandum of understanding, or in the case of inter-group transfers, a privacy policy);

(2)  details of the APPI-equivalent measures implemented by the recipient third party;

(3)  the frequency and method by which the transferring Operator checked such implementation;

(4)  the name of the recipient country;

(5)  whether any foreign laws may affect the implementation of the APPI-equivalent measures, and a detailed overview of such laws;

(6)  whether any obstacles to implementation exist, and a detailed overview of such obstacles; and

(7)  the measures taken by the transferring Operator upon a finding of such obstacles.

Only if providing the above items to the data subject is likely to ‘significantly hinder’ the Operator’s business operations may the Operator refrain from such disclosure, in whole or in part.

In practice, Operators primarily rely upon contractual safeguards and consent (in that order) to transfer PI outside of Japan. Indeed, the PPC’s list of “adequacy decisions” on which transferring Operators may alternatively rely is significantly shorter than that of the European Commission: to date, only the UK and EEA members have been deemed adequate recipients of a PI transfer from Japan. Therefore, the onerous informational and due diligence obligations incumbent upon Operators from April 2022, which affect precisely these two transfer mechanisms, are certain to impact business operations in Japan. And, given the 2020 Amendments’ unbridled extraterritoriality, this burden will be equally felt overseas. Most importantly, in the wake of the March 2021 LINE matter, compliance with the current and amended APPI, and in particular its overseas transfers restrictions, will be at the top of the PPC’s enforcement priorities.

3. Mandatory Data Breach Notifications

The 2020 Amendments expand the types of security incidents subject to the APPI and, more notably, make data breach notifications mandatory (under the current legislation, notification is required only on a ‘best efforts’ basis). Going forward, Operators will be required – pursuant to Article 22-2 of the 2020 Amendments and the amended PPC Rules – to promptly notify both the PPC and data subjects of the occurrence and/or potential occurrence of any data leakage, loss, damage or other similar situation which poses a ‘high’ risk to the rights and interests of data subjects (henceforth, a ‘breach‘).

The types of breaches which meet this ‘high’ risk threshold, and thus trigger a notification obligation, are described by the amended PPC Rules as those which involve, or potentially involve, any of the following: (i) sensitive (‘special care-required’) PI; (ii) financial injury caused by unauthorized usage; (iii) a wrongful purpose(s) as the cause; or (iv) greater than 1,000 affected data subjects. However, a notification is not required in the event that the Operator implemented ‘necessary measures’ to safeguard the rights and interests of data subjects (such as sophisticated encryption).
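The four triggers and the ‘necessary measures’ exception can be pictured as a simple decision, sketched below in Python for illustration only; the parameter names are invented, and in practice the ‘high’ risk assessment is a legal judgment rather than a boolean check.

```python
# Illustrative sketch of the notification triggers under the amended PPC Rules.
# All names are hypothetical; this is not a substitute for legal analysis.

def breach_requires_notification(
    involves_special_care_pi: bool,      # (i) sensitive ('special care-required') PI
    risk_of_financial_injury: bool,      # (ii) financial injury from unauthorized usage
    wrongful_purpose: bool,              # (iii) a wrongful purpose as the cause
    affected_data_subjects: int,         # (iv) scale of the incident
    adequate_safeguards_in_place: bool,  # e.g., sophisticated encryption of the data
) -> bool:
    if adequate_safeguards_in_place:
        return False  # 'necessary measures' exception
    return (
        involves_special_care_pi
        or risk_of_financial_injury
        or wrongful_purpose
        or affected_data_subjects > 1000
    )

# Example: an unencrypted leak affecting 1,200 data subjects triggers notification.
print(breach_requires_notification(False, False, False, 1200, False))  # True
```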

The amended PPC Rules also stipulate the required content for such notifications, although Operators are granted thirty days to provide details unknown at the time of the initial notice:

(1) overview of the breach;

(2) the types of PI affected or possibly affected by the breach;

(3) the number of data subjects affected or possibly affected by the breach;

(4) causes of the breach;

(5) existence and nature of secondary damage or risks thereof;

(6) status and nature of communications to affected data subjects;

(7) whether and how the breach has been publicized;

(8) measures implemented to prevent a recurrence; and

(9) any additional matters which may serve as a useful reference.

For those Operators ‘entrusted‘ by another Operator with the processing of PI, the 2020 Amendments provide a second option: in lieu of notifying the PPC and data subjects, such “entrusted” Operators may instead alert the “entrusting” Operator as to the breach. In practice, this likely equates to the EU GDPR’s requirement for processors to notify controllers in the event of a breach (although under the 2020 Amendments, direct accountability to the PPC and data subjects is still the default, including for “entrusted” Operators).

In the event of a breach, amended Article 30(5) additionally confers upon data subjects the right to request deletion, suspension of use and suspension of transfer, of affected PI.

4. Expansion of ‘Personal Information’ Concepts and Categories

Another major modification to the APPI is the expanded scope of the types of PI covered. In addition to eliminating the APPI’s differential treatment of temporary PI (retained for up to six months), the 2020 Amendments introduce a new category of information, ‘pseudonymously processed information‘, thereby bringing the Japanese data protection regime one additional step closer to the EU GDPR framework.

As currently drafted, the APPI recognizes only two major types of information: PI and anonymously processed information. Notably, the method of rendering anonymously processed information under the APPI – in contrast to the EU GDPR – need not be technically irreversible (unless such data originates in the UK or EEA and the transfer is based on the European Commission’s adequacy decision on Japan, in which case special PPC-drafted Supplementary Rules do require irreversibility); instead, the APPI endeavors to preserve anonymity by requiring Operators to implement appropriate security measures to prevent reidentification.

Pseudonymously processed information is defined by the 2020 Amendments as information relating to an individual, which cannot identify such individual unless collated with additional information. The stated intention behind the drafters’ introduction of the pseudonymization process is to enable Operators to (i) utilize pseudonymously processed information for internal purposes including business analytics, the development of computational models, etc., and/or (ii) retain rather than delete, for potential future statistical analysis usage, pseudonymously processed information derived from PI which are no longer necessary for the original purpose(s) for which they were collected.

The 2020 Amendments and amended PPC Rules model the pseudonymization process on anonymization, requiring the removal of any (i) description, (ii) unique ‘personal identification code’ (as defined in the APPI), and (iii) information relating to the processing method performed to enable the removal of (i) and (ii) above. The immediate result is the creation, by separation, of two types of information: pseudonymously processed information and ‘removed’ PI, where the latter is the ‘key’ enabling reidentification.
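The separation into pseudonymously processed information and removed PI can be pictured with a toy Python sketch; the record fields and function name below are hypothetical, and the actual processing requirements under the amended PPC Rules are considerably more detailed.

```python
# A toy illustration of the separation described above: producing pseudonymously
# processed information plus the separately held 'removed' PI that acts as the
# re-identification key. Field names are invented for the example.

def pseudonymize(record: dict, identifying_fields: set) -> tuple[dict, dict]:
    """Split a record into (pseudonymized_record, removed_pi_key)."""
    pseudonymized = {k: v for k, v in record.items() if k not in identifying_fields}
    removed_pi = {k: v for k, v in record.items() if k in identifying_fields}
    return pseudonymized, removed_pi

record = {"name": "Taro Yamada", "customer_id": "JP-0042",
          "purchase_total": 18200, "prefecture": "Osaka"}
analytics_view, reid_key = pseudonymize(record, {"name", "customer_id"})
# analytics_view can be used internally (e.g., business analytics);
# reid_key must be stored separately and protected as PI.
print(analytics_view)  # {'purchase_total': 18200, 'prefecture': 'Osaka'}
```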

The removed PI are treated as PI under the 2020 Amendments, and as such are subject to all of the same requirements and restrictions, although Operators in possession of both removed PI and pseudonymously processed information are additionally obligated to provide enhanced security in order to safeguard the integrity of the pseudonymously processed information (pursuant to the amended PPC Rules and amended Article 35-2(2)).

Notably, and in divergence from the EU GDPR approach to pseudonymously processed information, the 2020 Amendments’ rules governing treatment of such information vary according to the Operator involved. With respect to pseudonymously processed information handled by an Operator in simultaneous possession of the removed (and separately handled) PI, amended Article 35-2 stipulates the following specific requirements:

(i) a prohibition of the collation of such information with other data, such as the removed PI, in a manner which could identify data subjects;

(ii) strict application of the principles of purpose limitation and necessity thereto;

(iii) a prohibition on usage of any contact information contained therein to phone, mail, email or otherwise contact data subjects;

(iv) a prohibition of any transfer thereof to third parties (excluding, amongst others, “entrusted” Operators pursuant to Article 23(5)), unless such transfer is permitted by law or regulation (alternatively, the transfer of pseudonymously processed information by data subject consent is permissible if such information are instead handled as PI);

(v) in the event of their acquisition or the intended alteration of their processing purpose, limitation of the Operator’s disclosure obligation to that of notice by publication;

(vi) non-applicability of breach notification obligations pursuant to amended Article 22-2, provided that the removed PI are not also subject to the breach; and

(vii) the elimination of data subjects’ rights regarding their pseudonymously processed information, with the exception of their Article 35 right to receive a prompt and appropriate response to their complaints (subject to the Operator’s best efforts).

In addition to the above, the APPI’s ‘general’ requirements pursuant to Articles 19-22 will apply to pseudonymously processed information handled by an Operator which simultaneously (but separately) possesses the removed PI. Such Operator will be required to:

(i) maintain accuracy of the pseudonymously processed information (for the duration their utilization remains necessary, after which their immediate deletion – alongside the deletion of the removed PI – is required, subject to the Operator’s best efforts);

(ii) implement necessary and appropriate security measures to prevent leakage, loss or damage of the pseudonymously processed information; and

(iii) exercise necessary and appropriate supervision over employees and entrusted persons handling the pseudonymously processed information.  

In contrast, with respect to pseudonymously processed information handled by an Operator which does not simultaneously possess the removed PI, amended Article 35-3 prohibits such Operator from acquiring the removed PI and/or collating the pseudonymously processed information with other information in order to identify data subjects, and limits the applicable provisions of the 2020 Amendments to the following:

(i) the implementation of necessary and appropriate security measures to prevent leakage (a simplified version of Article 20);

(ii) the exercise of necessary and appropriate supervision over employees and entrusted persons handling such information (pursuant to Articles 21 and 22);

(iii) a prohibition on usage of any contact information contained in the pseudonymously processed information to phone, mail, email or otherwise contact data subjects;

(iv) a prohibition of any transfer of such information to third parties (excluding, amongst others, “entrusted” Operators pursuant to Article 23(5)), unless such transfer is permitted by law or regulation (alternatively, the transfer of pseudonymously processed information by data subject consent is permissible if such information are instead handled as PI); and

(v) the elimination of data subjects’ rights regarding their pseudonymously processed information, with the exception of their Article 35 right to receive a prompt and appropriate response to their complaints (subject to the Operator’s best efforts).

In addition to pseudonymously processed information, the 2020 Amendments, pursuant to Article 26-2, introduce an additional, fourth category of information – namely, ‘personally referable information’. This fourth category includes items such as cookies and purchase history, which may not independently be linkable to a specific individual (and thus would not constitute PI) but which could, if transferred to an Operator in possession of additional, related data, become PI. To account for such qualifying transfers, the 2020 Amendments introduce a consent requirement (such as an opt-in cookie banner).

In the case of overseas transfers, the transferring Operator must additionally inform the data subject as to the data protection system and safeguards of the recipient country and third party, as well as take ‘necessary action to ensure continuous implementation’ of APPI-equivalent safeguards by such recipient third party. Unlike for PI, the data subject does not have a right to request additional details regarding the ‘necessary action’ taken by the Operator with respect to an overseas transfer of personally referable information.

5. Preparing for the 2020 Amendments: Next Steps for Japanese and Foreign Operators

Companies conducting business in or with Japan should be mindful of the demanding nature of the 2020 Amendments to the APPI, and the stringency with which the PPC will seek to enforce them – particularly in view of the dismay caused by the LINE matter and the likelihood of efforts by the PPC to avoid similar incidents in the future.

Moreover, as the European Commission finalizes its first review of its 2019 adequacy decision on Japan, the PPC’s interpretative rules and enforcement trends may further intensify, with the aim of bringing Japanese data protection legislation closer to global standards, including the EU GDPR framework. Bearing this in mind, companies – including those not currently subject to the APPI, but which provide goods and/or services to individuals in Japan – would be wise to proactively conduct necessary modifications to their internal data protection policies and mechanisms, in order to ensure operational compliance with the amended APPI by April 2022.

For those Operators involved in international transfers of PI from Japan, absence of a PPC-issued “standard contractual clauses” template renders difficult, and from a compliance standpoint uncertain, any reliance on contractually-imposed APPI-equivalent standards pursuant to amended Article 24(3). However, one potential solution for Operators preparing to rely on this transfer mechanism for overseas PI transfers (excluding to the EEA or UK) may be the European Commission’s revised Standard Contractual Clauses (‘New SCCs‘), which are due to be published in early 2021. Subject to certain necessary modifications (of jurisdictional clauses and so forth), Operators may consider utilizing the New SCCs as a starting point, to bind recipient third parties to the stringent data protection standards and obligations of the 2020 Amendments.

Operators engaged in transferring PI should also be mindful of the 2020 Amendments’ onerous due diligence obligations with respect to overseas third parties. Prior to and during any cross-border engagements involving Japan-origin PI, Operators must actively ensure that their third-party recipients of such PI (including partners, vendors and subcontractors, as well as each of their respective partner, vendor and subcontractor recipients, and so forth) successfully implement, and continuously maintain, APPI-equivalent measures.

The 2020 Amendments’ enhanced disclosure obligations invite data subjects to hold Operators accountable with respect to the preventative and/or reactive measures Operators take – or fail to take – to protect their PI. Operators engaging foreign third parties should therefore consider reviewing and amplifying their due diligence of such entities, in addition to assessing the laws in each recipient country, in order to proactively identify and devise solutions to address potential obstacles to APPI adherence overseas.

The 2020 Amendments’ broadened extraterritorial application will also require non-Japanese companies to modify their internal data breach assessment and notification systems, to ensure that the PPC and data subjects in Japan are appropriately notified in the event of a qualifying breach; and to implement any necessary changes to their data subject communications platforms or data subject rights request forms, to enable data subjects in Japan to successfully exercise their amended APPI rights from April 1, 2022.

Once published, the PPC guidelines to the 2020 Amendments will further clarify (and potentially amplify) Operators’ compliance obligations with respect to each of the topics addressed in this blog post. The PPC’s findings in regard to LINE’s conduct may also have significant bearing on future APPI enforcement trends and risks. Therefore, in addition to implementing necessary measures to ensure operational compliance with the 2020 Amendments, companies processing covered PI and interested data privacy professionals should look out for these items over the next several months.   

Photo Credit: Ben Thai from Pixabay

For more Global Privacy thought leadership, see:

The right to be forgotten is not compatible with the Brazilian Constitution. Or is it?

India: Massive overhaul of digital regulation, with strict rules for take-down of illegal content and automated scanning of online content

Russia: New law requires express consent for making personal data available to the public and for any subsequent dissemination

FPF Hosted a CPDP 2021 Panel on US Privacy Law: The Beginning of a New Era


By Srivats Shankar, FPF Legal Intern

For the 14th annual Computers, Privacy and Data Protection conference, which took place between 27 and 29 January, 2021, FPF hosted a panel of experts to discuss “US Privacy Law: The Beginning of a New Era”, whose recording has just been published. The panel was moderated by Dr. Gabriela Zanfir-Fortuna, who was joined by Anupam Chander, Professor of Law at Georgetown University; Jared Bomberg, Senior Counsel for the Senate Committee on Commerce, Science and Transportation; Stacey Schesser, Office of California Attorney General; and Lydia Parnes, Partner at Wilson Sonsini’s Privacy and Cybersecurity Practice.

Broadly, the panel discussed the events that have prompted the shift towards privacy protection in the US in recent years, including the latest privacy law initiatives at the state and federal level. The discussion addressed how regulators are enforcing current laws and preparing for what’s to come, and how these developments may strengthen the Trans-Atlantic relationship in the digital age.

Professor Anupam Chander discussed the most consequential developments in US privacy law in recent years, which he identified as the passage of the California Consumer Privacy Act (CCPA) in 2018, the Supreme Court decision in Carpenter v. US, and the passage of the California Privacy Rights Act (CPRA) in 2020. According to Professor Chander, these developments will define the law of privacy over the next decade.

Jared Bomberg discussed developments at the federal level in the United States, including the increasing focus by Congress on comprehensive consumer privacy legislation. In the Senate, the two leading proposals are the Consumer Online Privacy Rights Act (COPRA), led by Senator Cantwell (D-WA), and the SAFE DATA Act, led by Senator Wicker (R-MS). Both bills have many cosponsors. Among these and other privacy bills, there is commonality regarding the rights of access, correction, deletion, and portability. Meanwhile, key differences include the existence of a private right of action, the extent to which a federal law would preempt state laws, and the incorporation of fiduciary responsibilities.

Stacey Schesser discussed privacy law in California, including the enactment of the CCPA and companies’ responses to the law. Following the passage of the GDPR, many companies have come to support compliance with the CCPA. California, by virtue of its large population and major economy, has required many businesses across the United States to come into compliance with the CCPA. Schesser noted that her office has seen consumer frustration with opt-out mechanisms and deletion of personal information, alongside challenges with companies interpreting the law in different ways. However, she noted that many companies have complied with the CCPA within the 30-day notice and cure period after being notified of a violation. The initial rollout of Attorney General regulations has attempted to identify the scope of enforcement, especially with reference to unique problems such as dark patterns.

Lydia Parnes discussed the enforcement of privacy law in the US. She observed that the Federal Trade Commission (FTC) has been fairly aggressive in exercising its enforcement powers. Commissioner Slaughter, who became Acting Chairwoman, has promoted the use of civil penalties in privacy cases. These enforcement actions have become “baseline norms” for companies to follow; they affect not just the individual company but the industry at large. Parnes noted that the FTC has limited resources and that enforcement by state agencies would be an effective way to facilitate change.

In the Q&A session, attendees raised issues of global interoperability, agency enforcement, and competition. Professor Anupam Chander emphasized the importance of the Schrems II decision, and the need for the US and Europe to come to another “modus vivendi.” This could be established without a “national” policy on privacy, to protect the information of foreign individuals whose data may be stored in the United States.

In response to a question about enforcement, Jared Bomberg emphasized that agencies like the FTC need more resources and that there is some acceptance that the FTC should continue enforcement in its existing fashion. He further noted that the Attorney General could also supplement and collaborate on enforcement. Bomberg also stressed the need for a private right of action. Market constraints also limit customers’ ability to protect their rights, and the current lack of transparency around this power dynamic has created a situation where customers do not understand what they have signed up for.

In closing, the panelists received a question on the likelihood of seeing a federal privacy law in the next two years. The consensus, as Bomberg put it, was that it could be “100% and 0%.”

Watch the full recording of the panel by following this link.