Workplace Discrimination and Equal Opportunity

Why monitoring cultural diversity in your European workforce is not at odds with GDPR

Author: Prof. Lokke Moerel*

The following is a guest post to the FPF blog from Lokke Moerel, Professor of Global ICT Law at Tilburg University and a lawyer with Morrison & Foerster (Brussels).

The guest blog reflects the opinion of the author only. Guest blog posts do not necessarily reflect the views of FPF.

“It has been said that figures rule the world. Maybe. But I am sure that figures show us whether it is being ruled well or badly.” – Johann Wolfgang von Goethe

1. Introduction

Discrimination persists in today’s labor markets,1 despite EU anti-discrimination and equality laws—such as the Racial Equality Directive—specifically prohibiting practices that put employees at a particular disadvantage based on racial or ethnic origin.2 In a market with an acute scarcity of talent,3 HR departments struggle with how to eliminate workplace discrimination and create an inclusive culture so that they can recruit and support an increasingly diverse workforce. By now, many organizations have adopted policies to promote diversity, equity, and inclusion (DEI), and the need has arisen to monitor and evaluate their DEI efforts.

Without proper monitoring, DEI efforts may well be meaningless or even counterproductive.4 To take a simple example, informal mentoring is known to be an important factor for internal promotions, and informal mentoring is less available to women and minorities.5 Organizations that set up a formal internal mentoring program to address this imbalance will want to monitor whether the program attracts minority participants and achieves its goal of promoting equity. If not, the program may unintentionally exacerbate existing inequalities. Monitoring is therefore required to evaluate whether the mentoring indeed results in more equal promotions across the workforce or whether changes to the program should be made.

Organizations are hesitant to monitor these policies in the EU based on a seemingly persistent myth that the EU General Data Protection Regulation 2016/679 (GDPR) would prohibit such practices. This article shows that it is actually the other way around. Where discrimination, lack of equal opportunity, or pay inequity at the workplace is pervasive, monitoring of DEI data is a prerequisite for employers to be able to comply with employee anti-discrimination and equality laws, and to defend themselves appropriately against any claims.6

For historical reasons,7 the collection of racial or ethnic data is considered particularly sensitive in many EU Member States. EU privacy laws provide for a special regime for collecting sensitive data categories, such as data revealing racial or ethnic origin, disability, and religion, based on the underlying assumption that collecting and processing such data increases the risk of discrimination.

However, where racial or ethnic background is ‘visible’ as a matter of fact to recruiters and managers alike, individuals from minority groups may be discriminated against without any data being recorded. It is therefore only by recording the data that potential existing discrimination may be revealed and bias eliminated from existing practices.8

A similar issue has come to the fore where tools are used that are powered by artificial intelligence (AI). We often see in the news that the deployment of algorithms leads to discriminatory outcomes.9 If self-learning algorithms discriminate, it is not because there is an error in the algorithm but because the data used to train the algorithm are “biased.” It is only when you know which individuals belong to vulnerable groups that bias in the data can be made transparent and algorithms trained properly.10 Here too, it is not the recording of the sensitive data that is wrong; it is humans who discriminate, and the recording of the data detects this bias. Organizations should be aware that the “fairness” principle under the GDPR cannot be achieved by unawareness. In other words, race blind is not race neutral, and unawareness does not equal fairness. That sensitive data may be legitimately collected for these purposes under European data protection law11 is explicitly provided for in the proposed AI Act.12
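
To make the point concrete, below is a minimal sketch (in Python, with entirely hypothetical data and group labels) of the kind of group-level check that only becomes possible once self-identified group membership is recorded; it computes selection rates per group and a simple disparate impact ratio.

```python
from collections import defaultdict

# Entirely hypothetical, self-reported records: (self-identified group, hired?).
# Without the group label, only the overall hire rate is visible and the
# disparity computed below simply cannot be measured.
records = [
    ("majority", True), ("majority", True), ("majority", False), ("majority", True),
    ("minority", False), ("minority", True), ("minority", False), ("minority", False),
]

totals, hires = defaultdict(int), defaultdict(int)
for group, hired in records:
    totals[group] += 1
    hires[group] += hired

rates = {group: hires[group] / totals[group] for group in totals}
print("Selection rate per group:", rates)

# "Four-fifths"-style disparate impact ratio: lowest vs. highest selection rate.
# Values well below 0.8 are commonly treated as a flag for potential bias.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate impact ratio: {ratio:.2f}")
```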

It is therefore not surprising that minority interest groups, which represent the very groups whose privacy is at stake, actively advocate for such collection of data and monitoring. Equally, EU and international institutions unequivocally consider the collection of DEI data indispensable for monitoring and reporting purposes in order to fight discrimination. EU institutions further explicitly confirm that the GDPR should not be considered an obstacle to the collection of DEI data; rather, it establishes the conditions under which collecting and processing such data are allowed.

From 2024 onwards, large companies in the EU will be subject to mandatory disclosure requirements for compliance with environmental, social, and governance (ESG) standards under the upcoming EU Corporate Sustainability Reporting Directive (CSRD). The CSRD requires companies to report on actual or potential adverse impacts on their workforce with regard to equal treatment and opportunities, which are difficult to measure without collecting and monitoring DEI data.

Currently, the regulation of the collection and processing of DEI data is mainly left to the Member States. EU anti-discrimination and equality laws do not impose an obligation on organizations to collect DEI data for monitoring purposes, but neither do they prohibit collecting such data. In the absence of a specific requirement or prohibition, the processing of DEI data is regulated by the GDPR. The GDPR gives the Member States ample discretionary power to provide for legal bases in their national laws to process DEI data for monitoring purposes.

In practice, however, most Member States have not used this opportunity to provide a specific legal basis in their national laws for processing racial or ethnic data for monitoring purposes (with notable exceptions).13 As a consequence, the collection and processing of DEI data for monitoring purposes takes place on a voluntary basis, whereby employees are asked to fill out surveys based on self-identification. This is in line with the GDPR, which provides for a general exception allowing organizations to process DEI data based on the explicit consent of the individuals concerned, provided that the Member States have not excluded this option in their national laws. In practice, Member States have not used this discretionary power either; they have not excluded the possibility of relying on explicit consent for the collection of DEI data. This leaves explicit consent as a valid, but also the only practically viable, option to collect DEI data for monitoring purposes.14 Both human rights frameworks and the GDPR itself facilitate such monitoring, provided there are safeguards to prevent abuse of the relevant data, in accordance with data minimization and privacy-by-design requirements.15 Best practices are now developing, and rightfully so, on how to monitor DEI data while limiting the impact on the privacy of employees. In the literature, collecting racial or ethnic data for monitoring is rightfully described as “a problematic necessity, a process that itself needs constant monitoring.”16

2. Towards a positive duty to monitor for workplace discrimination

Where discrimination in the workplace is pervasive, monitoring of DEI data to quantify discrimination in those workplaces is essential for employers to be able to comply with anti-discrimination and equality laws. As indicated above, there is no general requirement under the Racial Equality Directive to collect, analyze, and use DEI data. The Directive does, however, provide for a shift in the burden of proof.17 Where a complainant establishes facts from which a prima facie case of discrimination can be presumed, it falls to the employer to prove that there has been no breach of the principle of equal treatment. Where workplace discrimination is pervasive, a prima facie case will be easy to make, and it will fall to the employer to disprove any such claim, which will be difficult without any data collection and monitoring. The argument that the GDPR does not allow for processing such data will not relieve the employer of its burden of proof. See, in a similar vein, European Committee of Social Rights, European Roma Rights Centre v. Greece:18

Data collection 

“27. The Committee notes that, in connection with its wish to assess the allegation of the discrimination against Roma made by the complainant organisation, the Government stated until recently that it was unable to provide any estimate whatsoever of the size of the groups concerned. To justify its position, it refers to legal and more specifically constitutional obstacles. The Committee considers that when the collection and storage of personal data [are] prevented for such reasons, but it is also generally acknowledged that a particular group is or could be discriminated against, the authorities have the responsibility for finding alternative means of assessing the extent of the problem and progress towards resolving it that are not subject to such constitutional restrictions.”

Since as early as 1989,19 all relevant EU and international institutions have, with increasing urgency, issued statements that the collection of DEI data for monitoring and reporting purposes is indispensable to the fight against discrimination.20 See, for example, the EU Anti-racism Action Plan 2020‒202521 (the “Action Plan”) in which the European Commission explicitly states:

Accurate and comparable data is essential in enabling policy-makers and the public to assess the scale and nature of discrimination suffered and for designing, adapting, monitoring and evaluating policies. This requires disaggregating data by ethnic or racial origin.22

In the Action Plan, the European Commission notes that equality data remains scarce, “with some member states collecting such data while others consciously avoid this approach.” The Action Plan subsequently sets out significant steps to ensure the collection of reliable and comparable equality data at the European and national levels.23

On the international level, the United Nations (UN) takes an even stronger approach, considering the collection of DEI data that allows for disaggregation across different population groups to be part of governments’ human rights obligations. See the UN 2018 report, “A Human Rights-based approach to data, leaving nobody behind in the 2030 agenda for sustainable development” (the “UN Report”):24

Data collection and disaggregation that allow for comparison of population groups are central to a HRBA [human rights-based approach] to data and form part of States’ human rights obligations.25 Disaggregated data can inform on the extent of possible inequality and discrimination.

The UN Report notes that this was implicit in earlier treaties, but that “more recently adopted treaties make specific reference to the need for data collection and disaggregated statistics. See, for example, Article 31 of the Convention on the Rights of Persons with Disabilities.”26

Many of the reports referred to above explicitly state that the GDPR should not be an obstacle to collecting this data. For example, in 2021 the EU High Level Group on Non-discrimination, Equality and Diversity issued “Guidelines on improving the collection and use of equality data,”27 which explicitly state:

Sometimes data protection requirements are understood as prohibiting collection of personal data such as a person’s ethnic origin, religion or sexual orientation. However, as the explanation below shows the European General Data Protection Regulation (GDPR), which is directly applicable in all EU Member States since May 2018, establishes conditions under which collection and processing of such data [are] allowed.

The UN Special Rapporteur on Extreme Poverty and Human Rights even opined that the European Commission should start an infringement procedure where a Member State continues to misinterpret data protection laws as not permitting data collection on the basis of racial and ethnic origin.28

In light of the above, employers invoking GDPR requirements to avoid collecting DEI data for monitoring purposes increasingly appear to be driven more by a wish to avoid workplace scrutiny than by genuine concern for the privacy of their employees.29 The employees whose privacy is at stake are exactly those who are potentially exposed to discrimination. At the risk of stating the obvious, invoking the GDPR as a prohibition on DEI data collection, with the outcome that organizations avoid or are constrained from detecting discrimination against these groups, runs contrary to the GDPR’s entire purpose. The GDPR is about preserving the privacy of employees while protecting them against discrimination.

Not surprisingly, the minority interest groups that represent the groups whose privacy is actually at stake actively advocate for such collection of data and monitoring.30 If anything, their concern about the collection of DEI data for DEI monitoring purposes is that these groups often do not feel represented in the categorization of the data collected.31 If the categories are too generic or do not allow for splitting out intersecting inequalities (such as being both female and from a minority), specific vulnerable groups may well fall outside the scope of DEI monitoring and therefore outside the scope of potential DEI policy measures. It is widely acknowledged that the setting of the categories may itself reflect bias. A core principle of the relevant human rights frameworks for collecting DEI data is therefore to involve relevant minority stakeholders in a bottom-up process of indicator selection (the human rights principle of participation),32 and to ensure that data collection is based on the principle of self-identification, which requires that surveys always allow for free responses (including no response) as well as for indicating multiple identities (see section 4 on the human rights principle of self-identification).

3. ESG reporting

Under the upcoming CSRD,33 large companies34 will be subject to mandatory disclosure requirements on ESG matters from 2024 onwards (i.e., in their annual reports published in 2025).35 The European Commission is tasked with setting the reporting standards and has asked the European Financial Reporting Advisory Group (EFRAG) to provide recommendations for these standards. In November 2022, EFRAG published the first draft standards. The European Commission is now consulting relevant EU bodies and Member States, before adopting the standards as delegated acts in June 2023. 

One of the draft reporting standards covers reporting on a company’s own workforce (European Sustainability Reporting Standard S1 (ESRS S1)).36 This standard requires a general explanation of the company’s approach to identifying and managing any material actual and potential impacts on its own workforce in relation to equal treatment and opportunities for all, including “gender equality and equal pay for work of equal value” and “diversity.” From the definitions in ESRS S1, it is clear that “equal treatment” requires that there be “no direct or indirect discrimination based on criteria such as gender and racial or ethnic origin”; “equal opportunities” refers to equal and nondiscriminatory access to opportunities for education, training, employment, career development and the exercise of power, without any individuals being disadvantaged on the basis of criteria such as gender and racial or ethnic origin.

ESRS S1 contains a specific chapter on metrics and targets, which requires mandatory public reporting on a set of specific characteristics of a company’s workforce; this set includes gender but not racial or ethnic origin.37 Reading the standards as a whole, however, it is difficult to imagine how companies could report on them without collecting and monitoring DEI data internally.

For example, the general disclosure requirements of ESRS S1 require the company to disclose all of its policies relating to equal treatment and opportunity,38 including:

d) Whether and how these policies are implemented through specific procedures to ensure discrimination is prevented, mitigated, and acted upon once detected, as well as to advance diversity and inclusion in general.

It is difficult to see how companies can report the information required under clause (d), i.e., how their policies are implemented to ensure discrimination is prevented, mitigated, and acted upon once detected, without collecting DEI data.

ESRS S1 further clarifies how disclosures under S1 relate to disclosures under ESRS S2, which includes disclosures where potential impacts on a company’s own workforce affect the company’s strategy and business model(s).

Based on the reporting requirements above, the collection and monitoring of DEI data will be required for mandatory disclosures, which also provides a legal basis under the GDPR for collecting such data, provided that the other provisions of the GDPR, as well as broader human rights principles, are complied with. Before setting out the GDPR requirements, a brief summary is provided of the broader human rights principles that apply to the collection of DEI data for monitoring purposes.

4. Human rights principles

The three main human rights principles in relation to data collection processes are self-identification, participation, and data protection.39 The principle of self-identification requires that people have the option of self-identifying when confronted with a question seeking sensitive personal information about them. As early as 1990, the Committee on the Elimination of Racial Discrimination held that identification as a member of a particular ethnic group “shall, if no justification exists to the contrary, be based upon self-identification by the individual concerned.”40 A personal sense of identity and belonging cannot in principle be restricted or undermined by a government-imposed identity and should not be assigned through imputation or proxy. This entails that all questions on personal identity, whether in surveys or administrative data, should allow for free responses (including no response) as well as multiple identities.41 In addition, owing to the sensitive nature of survey questions on population characteristics, data collectors must take special care to demonstrate to respondents that appropriate data protection and disclosure control measures are in place.42

5. EU data protection law requirements

Collecting racial or ethnic data for monitoring is rightfully described in the literature as “a problematic necessity, a process that itself needs constant monitoring.”43 The collection and use of racial and ethnic data to combat discrimination is not an “innocent” practice.44 Even if performed on an anonymized or aggregated basis, it can contribute to exclusion and discrimination. An example is when politicians argue, based on statistics, that there are “too many” people of a certain category in a country.45

Collection and processing of racial and ethnic data is not illegal in the EU. In general, no Member State imposes an absolute prohibition on collecting this data.46 There is also no general requirement under the Racial Equality Directive to collect, analyze, and use equality data. Obligations to collect racial or ethnic data also do not generally seem to be codified in law in the Member States, with notable exceptions in Finland, Ireland and (pre-Brexit) the UK.47

In the absence of specific prohibitions and specific requirements in EU and Member State law, processing of racial and ethnic data is governed by the GDPR, which provides a special regime for processing “special categories of data” such as data revealing racial or ethnic origin.48/49 

Article 9 of the GDPR prohibits the processing of special categories of data, with notable exceptions. The prohibition does not apply (insofar as relevant here)50 when:

the data subject has given explicit consent to the processing for one or more specified purposes (Article 9(2)(a));

processing is necessary for carrying out obligations and exercising specific rights in the field of employment law, insofar as authorized by EU or Member State law (Article 9(2)(b));

processing is necessary for reasons of substantial public interest, on the basis of EU or Member State law (Article 9(2)(g)); or

processing is necessary for statistical purposes, based on EU or Member State law (Article 9(2)(j)).

For the latter two conditions to apply, provisions must be made in EU or Member State law permitting processing where necessary for substantial public interest or for statistical purposes. In practice, most Member States have not used their discretionary power under the GDPR to provide a specific legal basis in their national law for processing racial or ethnic data for these purposes.51 Member States have, however, also not used the possibility under the GDPR to provide in their national law that the prohibition under Article 9 may not be lifted by consent. This leaves explicit consent as a valid, but also the only practically viable, option to collect DEI data for monitoring purposes.52 This is in line with human rights principles, provided reporting is based on self-identification.

Once the CSRD has been implemented in the national laws of the Member States, collecting DEI data will be required for mandatory ESG disclosures, which will be permitted under Article 9(2)(g) GDPR (reasons of substantial public interest). Where organizations collect this data, the human rights principles set out above should be observed, in particular that reporting should be based on self-identification. In practice, the legal basis of substantial public interest will therefore very much mirror the legal basis of explicit consent, and the safeguards and mitigating measures set out below will equally apply.

5.1 Explicit consent

The requirements for valid consent are strict, especially in the workplace.53 For instance, consent must be ‘freely given’, which is considered problematic in view of the imbalance of power between the employer and the individual.54 The term ‘explicit’ refers to the way consent is expressed by the individual. It means that the data subject must give an express statement of consent. An obvious way to make sure consent is explicit would be to expressly confirm consent in a written statement, an electronic form or in an email.55

For employee consent to be valid, employees need to have a genuinely free choice as to whether or not to provide the information, without any detrimental effects. There must be no downside whatsoever for an employee who refuses to provide consent; a downside would exist if, for example, refusal of consent excluded the employee from any positive action initiatives.56 To ensure that consent is indeed freely given, the voluntary nature of the reporting should be twofold: (1) the act of completing a survey or questionnaire related to one’s racial or ethnic background should be voluntary, and (2) the survey or questionnaire should include options for the employee to respond with (an equivalent of) “I choose not to say.” The individual status of a survey or questionnaire (i.e., completed or not completed), as well as the answers provided, should not be visible to the employer on an individual basis. In practice, this is realized by privacy-by-design measures (see further below).

Note that for consent to be valid, it needs to be accompanied by clear information as to why the data are being collected and how they will be used (consent needs to be “specific and informed”).57 In addition, employees should be informed that consent can be withdrawn at any time and that any withdrawal of consent will not affect the lawfulness of processing prior to the withdrawal.58

When consent is withdrawn, any processing of personal data (to the extent it is identifiable) will need to stop from the moment that the consent is withdrawn. However, where data are collected and processed in the aggregate (see section 5.2 below on privacy-by-design requirements), employees will no longer be identifiable or traceable, and, therefore, any withdrawal of consent will not be effective in relation to data already collected and included in such reports.

5.2 General data protection principles

Obtaining consent does not negate or in any way diminish the data controller’s obligations to observe the principles of processing enshrined in the GDPR, especially Article 5 of the GDPR with regard to fairness, necessity, and proportionality, as well as data quality.59 Employers will further have to comply with principles of privacy-by-design.60 In practice, this means that employers should process only the personal data that they strictly need for their pursued purposes, and in the most privacy-friendly manner. For example, employers can collect DEI data without reference to individual employees (i.e., without employees providing their name or another unique identifier, such as a personnel number or email address). In this manner, employers will comply with data minimization and privacy-by-design requirements, limiting any impact on the privacy of their employees. In practice, we also see employers involving a third-party service provider and asking employees to send the information directly to that provider. The third-party service provider then shares only aggregate information with the employer.

From a technical perspective, a similar segregation of duties can be achieved within the company’s internal HR system (such as Workday or SuccessFactors), whereby data are collected on a de-identified basis and only one or two employees within the diversity function have access to the de-identified DEI data for statistical analysis, subsequently reporting to management on an aggregate basis only (ensuring individual employees cannot be singled out or re-identified).61 This requires customization of HR systems, which is currently underway. Where employers have a works council, the works council will need to give its prior approval for any company policy related to the processing of employee data. The privacy-by-design measures can be verified as part of the works council approval process.
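
As an illustration only, the following sketch (Python; the survey categories and the suppression threshold of five are assumptions, not figures prescribed by the GDPR) shows the kind of aggregate-only reporting such a segregation of duties aims at: de-identified responses are tallied, and categories too small to report safely are suppressed so that individuals cannot be singled out.

```python
from collections import Counter

# De-identified survey responses: no names, personnel numbers or other identifiers
# are stored, and "Prefer not to say" is always an available answer.
responses = (
    ["Group A"] * 5 + ["Group B"] * 6 + ["Group C"] * 2 + ["Prefer not to say"] * 3
)

MIN_CELL_SIZE = 5  # assumed suppression threshold; small cells are not reported


def aggregate_report(answers: list[str], min_cell: int = MIN_CELL_SIZE) -> dict[str, str]:
    """Return only aggregate percentages; suppress categories too small to report safely."""
    counts = Counter(answers)
    total = len(answers)
    report = {}
    for category, count in counts.items():
        if count < min_cell:
            report[category] = f"suppressed (fewer than {min_cell} responses)"
        else:
            report[category] = f"{100 * count / total:.0f}%"
    return report


# Only this aggregate view is shared with management; the raw responses remain
# with the one or two diversity-team members (or an external provider).
print(aggregate_report(responses))
```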

For the sake of completeness, note that where data collection and processing are carried out on a truly anonymous basis, the provisions of the GDPR do not apply.62 The threshold under the GDPR for data to be truly anonymous is, however, very high and is unlikely to be met where employers collect such data from their employees. Applying de-identification techniques such as pseudonymization (e.g., removal or replacement of unique identifiers and names) therefore does not take the data processing outside the scope of the GDPR; rather, such techniques are necessary measures to meet data minimization and privacy-by-design requirements.
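
For illustration, a short sketch of what pseudonymization might look like in this context (Python; the keyed-hash approach, field names, and identifiers are assumptions): the direct identifier is replaced with a keyed hash so analysts do not see identities, yet, as noted above, the data remain within the scope of the GDPR because whoever holds the key or the source records could still re-identify individuals.

```python
import hashlib
import hmac

# Illustrative only: the key would be generated randomly and stored separately
# from the data set (e.g., with the diversity function, not with HR at large).
SECRET_KEY = b"store-me-separately-and-rotate-me"


def pseudonymize(employee_id: str) -> str:
    """Replace a direct identifier with a keyed hash (a pseudonym).

    The result is still personal data under the GDPR: whoever holds the key or
    the original records can link the pseudonym back to the individual.
    """
    return hmac.new(SECRET_KEY, employee_id.encode(), hashlib.sha256).hexdigest()[:16]


record = {"employee_id": "E-1042", "self_identified_group": "Group B"}
pseudonymized = {**record, "employee_id": pseudonymize(record["employee_id"])}
print(pseudonymized)
```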

6. The way forward

It is no longer possible to hide behind the GDPR to avoid collecting DEI data for monitoring purposes. The direction of travel is towards meaningful DEI policies, monitoring, and reporting (such as under the CSRD). Collecting data relating to racial and ethnic origin has been labeled “a problematic necessity, a process that itself needs constant monitoring.” This is the negative way of qualifying DEI data collection and monitoring. A positive, human rights-based approach is that data collection processes should be based on self-identification, participation, and data protection. Where all three principles are safeguarded, the process will be controlled and can be trusted, without being inherently problematic or in need of constant monitoring. The path forward revolves around building trust with the workforce (and their works councils and trade unions). If trust is not already a given, the recommendation is to start small (in less sensitive jurisdictions), engage with works councils or the workforce at large, and, in light of the upcoming CSRD, start now.63

Self-identification: A company requires the full trust of its employees to be able to collect representative DEI data from them based on self-identification. If the introduction of DEI data collection is perceived by employees as abrupt and countercultural, or as a box-ticking exercise unlikely to result in meaningful change, surveys will not be successful. For employees to fill out surveys disclosing sensitive data, they must trust that their employer is serious about its DEI efforts and that data collection and monitoring complement these efforts, in the spirit of the aphorism “We measure what we treasure.” Practice shows that when a certain tipping point is reached, employees are proud to self-identify and contribute to the DEI statistics of their company.

Trust will be undermined if employees do not recognize themselves in any of the pre-defined categories. Proper self-identification entails that any pre-defined categories are relevant to a country’s workforce, allow for free responses (including no response), and allow for identifying with multiple identities. Employees’ trust will be enhanced if the company has put careful thought into the reporting metrics, ensuring that reporting can actually inform where the company should focus interventions to bring about meaningful change. For example, it is important to ensure that reporting metrics are not just outcome-based (tracking demographics without knowing where a problem exists) but also process-based. Process-based metrics can pinpoint problems in employee-management processes such as hiring, evaluation, promotion, and executive sponsorship. If outcome metrics inform a company that it has limited percentages of specific minorities, process metrics may show in which of its processes (or part of a process, e.g., which part of the hiring process) the company needs to focus to bring about meaningful change. Examples of these metrics include the speed at which minorities move up the corporate ladder and salary differentials between different categories in comparable jobs.
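
To sketch the difference between outcome-based and process-based metrics (Python; the data, group labels, and the choice of promotion rate as the process metric are illustrative assumptions), the snippet below computes a headcount share (outcome) alongside a promotion rate per group (process):

```python
from collections import defaultdict

# Hypothetical records per self-identified group: (group, promoted this cycle?).
employees = [
    ("Group A", True), ("Group A", True), ("Group A", False), ("Group A", True),
    ("Group B", False), ("Group B", False), ("Group B", True), ("Group B", False),
]

headcount, promotions = defaultdict(int), defaultdict(int)
for group, promoted in employees:
    headcount[group] += 1
    promotions[group] += promoted

total = sum(headcount.values())

# Outcome-based metric: each group's share of the workforce.
workforce_share = {g: f"{100 * n / total:.0f}%" for g, n in headcount.items()}

# Process-based metric: promotion rate per group, which points to *where* in the
# employee-management process a gap arises rather than only showing that one exists.
promotion_rate = {g: f"{100 * promotions[g] / headcount[g]:.0f}%" for g in headcount}

print("Workforce share:", workforce_share)
print("Promotion rate:", promotion_rate)
```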

Participation: Trust requires an inclusive, bottom-up process whereby employees (and their works councils) have a say in the data collection and monitoring procedure: for example, in setting the categories in a survey so that minority employees can ‘recognize’ themselves in those categories, in setting the reporting metrics so that they can bring about meaningful change, and in setting the data protection safeguards (see below).

Data protection: To gain employees’ trust, data protection principles such as data security, data minimization, and privacy-by-design must be fully implemented. A company will need to submit a data collection and processing protocol to its works council for approval, specifying all organizational, contractual, and technical measures that ensure data are collected on a de-identified basis and that access to the data is limited to one or two employees of the diversity team, solely in order to generate statistics.

Country Reports

Below we provide a summary of the legal bases available under the laws of Belgium, France, Germany, Ireland, Italy, Spain, the Netherlands, and the United Kingdom for employers to collect racial and ethnic background data of their employees for purposes of monitoring their DEI policies (DEI Monitoring Purposes). Note that in all cases the general data processing principles set out in section 5.2 (such as privacy-by-design requirements) also apply; they are not repeated here.

Belgium

Olivia Vansteelant & Damien Stas de Richelle, Laurius

Summary requirements for processing racial and ethnic background data

Under Belgian law, there is neither a specific legal requirement for employers to collect data revealing racial or ethnic origin of their employees, nor is there a general prohibition for employers to collect such data.

Companies with their registered office or at least one operating office in the Brussels-Capital Region can lawfully process personal data on foreign nationality or origin for DEI Monitoring Purposes on the basis of necessity to exercise their labour law rights and obligations. All companies can lawfully process racial or ethnic background data for DEI Monitoring Purposes based on explicit consent of their employees. Employers with a works council should consult their works council before implementing any policy related to processing data revealing racial or ethnic origin of their employees, but no approval from the works council is required by law.

Necessity to exercise labour law rights and obligations (Article 9(2)(b) GDPR). The basis for this exception can be found in the Decision of the Government of the Brussels-Capital Region of 7 May 2009 regarding diversity plans and the diversity label and the Ordonnance of the Government of the Brussels-Capital Region of 4 September 2008 on combatting discrimination and promoting equal treatment in the field of employment.

According to this Decision, companies with their registered office or at least one operating office in the Brussels-Capital Region are entitled to draft a diversity plan to address the issue of discrimination in recruitment and develop diversity at the workplace. No similar regulations currently exist in the Flemish or Walloon regions. Many Flemish NGOs are urging the Flemish Government to work towards a sustainable and inclusive labour market with monitoring and reporting as an important basis for evaluation of diversity. They are asking the Flemish Government to put its full weight behind this before the 2024 elections.

Under this Decision, employers are permitted to analyze the level of diversity amongst their personnel by classifying their workforce into categories, including that of foreign nationality or origin. To classify employees in this category and, hence, collect data on foreign nationality or origin, employers can rely on the necessity to exercise their labour law rights and obligations. It is possible that employers may indirectly collect data revealing racial or ethnic origin due to a possible link with, or inference drawn from, information on nationality or origin. However, the Decision does not cover data revealing racial or ethnic origin, and there would be no condition permitting such collection under Article 9(2)(b) GDPR.

Explicit consent. The Belgian Implementation Act does not expressly exclude the possibility of processing racial or ethnic data based on employees’ consent. For all other purposes of processing racial and ethnic background data, employers can therefore rely on explicit consent and voluntary reporting by employees. We refer to the conditions for explicit consent set out above in section 5.1, as they apply in the same manner to Belgium. To ensure that consent is indeed freely given, the voluntary nature of the reporting for employees should be twofold: (1) the act of completing a survey or questionnaire related to one’s racial or ethnic background should be voluntary and (2) the survey or questionnaire should include options for the employee to respond with (an equivalent of) “I choose not to say.”

France

Héléna Delabarre & Sylvain Naillat, Nomos, Société D’Avocats

Summary requirements for processing racial and ethnic background data

Under French law, the processing of race and ethnicity data is prohibited in principle under (i) a general provision of the French Constitution and (ii) specific provisions of French data protection laws, which is also the public position of the French Data Protection Authority (CNIL). French law does not recognize any categorization of people based on their (alleged) race or ethnicity, and the prohibition on processing race and ethnicity data has been reaffirmed by the French Constitutional Court in a decision related to public studies whose purpose was to measure diversity/minority groups. However, while race and ethnicity data may not be collected or processed, objective criteria relating to geographical and/or cultural origins, such as name, nationality, birthplace, mother tongue, etc., can be considered by employers in order to measure diversity and to fight discrimination.

In a public paper from 201264 (that has not been contradicted since) the CNIL confirmed that employers may collect and process data about objective criteria relating to “origin,” such as the birthplace of the respondent and his/her parents, his/her mother tongue, his/her nationality and that of his/her parents, etc., if such processing is necessary for the purpose of conducting statistical studies aiming at measuring and fighting discrimination. The CNIL also considers that questions about self-identification and how the respondent feels perceived by others can be asked if necessary, in view of the purpose of the data collection and any other questions asked. See the CNIL’s paper:

In accordance with the decision of the Constitutional Court of 15 November 2007 and the insights of the Cahiers du Conseil, studies on the measurement of diversity cannot, without violating Article 1 of the Constitution, be based on the ethnic or racial origins of the persons. Any nomenclature that could be interpreted as ethno-racial reference must therefore be avoided. It is nevertheless possible to approach the criterion of “origin” on the basis of objective data such as the place of birth and the nationality at birth of the respondent and his or her parents, but also, if necessary, on the basis of subjective data relating to how the respondent self-identifies or how the person feels perceived by others.65

Based on the guidance from the CNIL, several public studies have been conducted relying on the collection of information considered permissible by the CNIL, i.e., (i) whether or not respondents felt discriminated against based on their origins or skin colour; (ii) how the respondent self-identifies; and (iii) statistics about the geographical and/or cultural origins of the respondents.66 The provision of any information should be entirely voluntary, and the rules regarding explicit consent in section 5.1 above apply in the same manner to France. Any questions relating to the collection of data regarding geographical and/or cultural origins should be objective, and, in the absence of a need to identify (directly or indirectly) the individuals, the collection process should be entirely anonymous.

Germany

Hanno Timner, Morrison & Foerster

Legal basis for processing racial and ethnic background data

Under German law, there is neither a specific legal requirement for employers to collect racial and ethnic background data of their employees, nor is there a general prohibition for employers to collect such data. 

Employers in Germany can lawfully process racial and ethnic background data for DEI Monitoring Purposes on the basis of (i) necessity to exercise their labour law rights and obligations or (ii) based on explicit consent of their employees. If the employer has a works council, the works council has a co-determination right for the implementation of diversity surveys and questionnaires in accordance with Section 94 of the Works Council Act (Betriebsverfassungsgesetz – “BetrVG”) if the information is not collected anonymously and on a voluntary basis. If the information is collected electronically, the works council may have a co-determination right in accordance with Section 87(1), no. 6 BetrVG.

Necessity to exercise labour law rights or obligations. According to Section 26(3) of the German Federal Data Protection Act (Bundesdatenschutzgesetz – “BDSG”), the processing of racial and ethnic background data in the employment context is only permitted if the processing is necessary for the employer to exercise rights or comply with legal obligations derived from labour law, social security, and social protection law, and there is no reason to believe that the data subject has an overriding legitimate interest in not processing the data. One of the rights of the employer derives from Section 5 of the German General Equal Treatment Act (Allgemeines Gleichbehandlungsgesetz – “AGG”), according to which employers have the right to adopt positive measures to prevent and stop discrimination on the grounds of race or ethnicity. As a precondition to the adoption of such measures, employers may collect data to identify their DEI needs. 

Explicit consent. For all other purposes of processing racial and ethnic background data, employers will have to rely on explicit consent and voluntary reporting by employees. We refer to the conditions for explicit consent set out above in section 5.1, as they apply in the same manner to Germany. Further, Section 26(2) BDSG specifies that the employee’s level of dependence in the employment relationship and the circumstances under which consent was given have to be taken into account when assessing whether an employee’s consent was freely given. According to Section 26(2) BDSG, consent may be freely given, in particular, if it is associated with a legal or economic advantage for the employee, or if the employer and the employee are pursuing the same interests. This can be the case if the collection of data also benefits employees, e.g., if it leads to the establishment of comprehensive DEI management within the employer’s company.

Ireland

Colin Rooney & Alison Peate, Arthur Cox

Summary requirements for processing racial and ethnic background data

Under Irish law, there is neither a specific legal requirement for employers to collect racial and ethnic background data of their employees, nor is there a general prohibition for employers to collect such data.

Explicit consent: Employers in Ireland can lawfully process race and ethnicity data for their own specified purpose based on the explicit consent of employees. It should be noted that the Irish Data Protection Commission has said that in the context of the employment relationship, where there is a clear imbalance of power between the employer and employee, it is unlikely that consent will be given freely. While this does not mean that employers can never rely on consent in relation to the processing of employee data, it does mean that the burden is on employers to prove that consent is truly voluntary, as explained in section 5.1 above. In the context of collecting data relating to an employee’s racial or ethnic background, employers should ensure that employees are given the option to select “prefer not to say”.

Statistical purposes: If the employer intends to process race and ethnicity data solely for statistical purposes, it could rely on Article 9(2)(j) of the GDPR and section 54(c) of the Irish Data Protection Act 2018 (the “2018 Act”), provided that the criteria set out in section 42 of the 2018 Act are met. This allows for race and ethnicity data to be processed where it is necessary and proportionate for statistical purposes and where the employer has complied with section 42 of the 2018 Act. Section 42 requires that: (i) suitable and specific measures are implemented to safeguard the fundamental rights and freedoms of the data subjects in accordance with Article 89 GDPR; (ii) the principle of data minimisation is respected; and (iii) the information is processed in a manner which does not permit identification of the data subjects, where the statistical purposes can be fulfilled in this manner.

Italy

Marco Tesoro, Tesoro and Partners

Summary requirements for processing racial and ethnic background data

The Italian Data Protection Code (IDPC) regulates the processing of personal data falling under Article 9 of the GDPR, providing that the legal basis for the processing of such data must be compliance with a law or regulation (art. 2-ter(1), IDPC) or reasons of relevant public interest. Under Italian law, no specific legal basis has been implemented to process racial or ethnic data for reasons of public interest.

Explicit consent. We refer to the conditions for explicit consent set out above in section 5.1, as they apply in the same manner to Italy. The IDPC does not expressly exclude the possibility of processing racial and ethnicity data based on employees’ consent. Employers wanting to collect and process racial and ethnicity data on the basis of employees’ consent under Art. 9 of the GDPR, however, should ensure that the consent is granted on a free basis and, where possible, involve the trade unions they are associated with (as well as their Works Council, where relevant). The trade unions should be able to i) ascertain and certify that the employees’ consent has been freely given; and ii) ensure that employees are fully aware of their rights and of the consequences of providing such data. In the absence of associated trade unions, employers may inform the local representative of the union associations who signed the collective bargaining agreement (CBA) that applies (if any). Furthermore, employers should ensure that employees are given the option to “prefer not to say.”

Statute of Workers. It is also worth noting that under Italian law, there is a general prohibition on the collection of information not strictly related or needed to assess the employee’s professional capability. Per Article 8, Law 23 May 1970, no. 300 “Statute of Workers,” race and ethnicity data should not be collected or used by employers to impact in any way the decision to hire a candidate or to manage any of the terms of the employment relationship.

Spain

Laura Castillo, Gómez-Acebo & Pombo

Summary requirements for processing racial and ethnic background data

Under the Organic Law 3/2018 of 5th December on the Protection of Personal Data and Guarantee of Digital Rights (SDPL), there is a general prohibition on collecting racial and ethnic background information unless: (i) there is a legal requirement to do so (per Article 9 of the SDPL); or (ii) the employees have provided their explicit consent (although the latter is not without risk).

Fulfilment of a legal requirement. The Comprehensive Law 15/2022 of 12th July, for Equal Treatment and Non-Discrimination (the “Equal Treatment Law”) guarantees and promotes the right to equal treatment and non-discrimination. This Law expressly states that no limitations, segregations, or exclusions may be made based on ethnicity or racial backgrounds, i.e., nobody can be discriminated against on grounds of race or ethnicity. In this context, any positive discrimination measures that have been implemented as a result of the Equal Treatment Law have been included in collective bargaining agreements (CBA) or collective agreements as agreed with the unions or the relevant employee representatives. Where there is a requirement in the CBA to collect race and ethnicity data from employees, employers can do so, as this would constitute a legal requirement. In circumstances where the CBA does not specifically require the collection of this type of information, employers can either seek to include such a provision in the terms of the CBA or a collective agreement and work with the unions or legal representatives to do so, or take an alternative approach and rely on explicit consent, as set out immediately below.

Explicit consent. In principle, an employee’s consent on its own is not sufficient to lift the general prohibition on the processing of sensitive data under the SDPL. However, one of the main aims of the prohibition pursuant to the SDPL is to avoid discrimination. Therefore, if the purpose of collection is to promote diversity, it is arguable (although this has not yet been tested in Spain) that employers can rely on explicit consent, and we refer to the conditions for explicit consent set out above in section 5.1, as they apply in the same manner to Spain. In addition to the conditions in section 5.1, Spanish case law has determined that the employee’s level of dependence within the employment relationship and the circumstances under which consent is given should be considered when assessing whether an employee’s consent is freely given. It is therefore not recommended that employers obtain or process race and ethnicity data of their employees during the recruitment or hiring process, or before the end of the probationary period, unless a CBA regulates this issue in a different manner. Employers should also ensure that employees are given the option to “prefer not to say” and ensure that they are able to prove that consent is genuinely voluntary, as explained in section 5.1 above.

The Netherlands

Marta Hovanesian, Morrison & Foerster

Summary requirements for processing racial and ethnic background data

Under Dutch law, there is neither a specific legal requirement for employers to collect racial and ethnic background data of their employees, nor is there a general prohibition for employers to collect such data. 

Employers in the Netherlands can lawfully process racial and ethnic background data of their employees for DEI Monitoring Purposes on the basis of (i) a substantial public interest provided for under Dutch law or (ii) the explicit consent of the employees. Employers with a works council need to ensure that their works council approves any policy related to the processing of such data.

Substantial public interest. The Netherlands has implemented the conditions of Article 9(2) GDPR for the processing of racial and ethnic background data in the Dutch GDPR Implementation Act (the “Dutch Implementation Act”). More specifically, Article 25 of the Dutch Implementation Act provides that racial and ethnicity background data (limited to country of birth and parents’ or grandparents’ countries of birth) may be processed (on the basis of substantial public interest) if processing is necessary for the purpose of restoring a disadvantaged position of a minority group, and only if the individual has not objected to the processing. Reliance on this condition requires the employer to, among other things, (i) demonstrate that certain groups of people have a disadvantaged position; (ii) implement a wider company policy aimed at restoring this disadvantage; and (iii) demonstrate that the processing of race and ethnicity data is necessary for the implementation and execution of said policy.

Explicit consent. Employers can collect racial and ethnicity background data of their employees for DEI Monitoring Purposes based on explicit consent and voluntary reporting by employees. The conditions for consent set out above in section 5.1 apply in the same manner to the Netherlands. 

Cultural Diversity Barometer. Note that Dutch employers with more than 250 employees have the option to request DEI information from Statistics Netherlands about their own company. Statistics Netherlands, upon the Ministry of Social Affairs and Employment’s request, created the “Cultural Diversity Barometer”. The Barometer allows employers to disclose certain non-sensitive personal data to Statistics Netherlands, which, in turn, will report back to the relevant employers with a statistical and anonymous overview of the company’s cultural diversity (e.g., percentage of employees with a (i) Dutch background, (ii) western migration background, and (iii) non-western migration background). Statistics Netherlands can either provide information about the cultural diversity within the entire organization or within specific departments of the organization (provided that the individual departments have more than 250 employees).

United Kingdom

Annabel Gillham, Morrison & Foerster (UK) LLP

Summary requirements for processing racial and ethnic background data

Under UK law, there is no general prohibition on the collection of employees’ racial or ethnic background data by employers, provided that specific conditions pursuant to Article 9 of the retained version of the GDPR (UK GDPR) are met. It is fair to say that the collection of such data is increasingly common in the UK workplace, with several organizations electing to publish their ethnicity pay gap.[1] In some cases, collection of racial or ethnic background data is a legal requirement. For example, with respect to accounting periods beginning on or after April 1, 2022, certain large listed companies are required to include in their published annual reports a “comply or explain” statement on the achievement of targets for ethnic minority representation on their board [2] and a numerical disclosure on the ethnic background of the board.[3]

Employers in the UK can lawfully process racial and ethnic background data of their employees for DEI Monitoring Purposes where the processing is (i) necessary for reasons of substantial public interest on the basis of UK law [4]; or (ii) carried out with the explicit consent of the employees [5]. Employers should ensure that they check and comply with the provisions of any agreement or arrangement with a works council, trade union or other employee representative body (e.g., relating to approval or consultation rights) when collecting and using such data.

Substantial public interest. Article 9 of the UK GDPR prohibits the processing of special categories of data, with notable exceptions similar to those set out in section 5 above. Schedule 1 to the UK Data Protection Act 2018 (DP Act 2018) sets out specific conditions for meeting the “substantial public interest” ground under Article 9(2)(g) of the UK GDPR. Two conditions are noteworthy in the context of the collection of racial and ethnic background data.

The first is an “equality of opportunity or treatment” condition. This is available where processing of personal data revealing racial or ethnic origin is necessary for the purposes of identifying or keeping under review the existence or absence of equality of opportunity or treatment between groups of people of different racial or ethnic origins with a view to enabling such equality to be promoted or maintained.[6] There are exceptions – the data must not be used for measures or decisions with respect to a particular individual, nor where there is a likelihood of substantial damage or substantial distress to an individual. Individuals have a specific right to object to the collection of their information.

The second condition covers “racial and ethnic diversity at senior levels of organisations”.[7] Organisations may collect personal data revealing racial or ethnic origin where, as part of a process of identifying suitable individuals to hold senior positions (e.g., director, partner or senior manager), the processing is necessary for the purposes of promoting or maintaining diversity in the racial and ethnic origins of individuals holding such positions and the data can reasonably be collected without the consent of the individual. When relying on this condition, organisations should factor in any risk that collecting such data may cause substantial damage or substantial distress to the individual.

In order to rely on either condition set out above, organisations must prepare an “appropriate policy document” outlining the principles set out in the Article 9 UK GDPR conditions and the measures taken to comply with those principles, along with applicable retention and deletion policies.

Explicit consent.[8] The conditions for consent set out in section 5.1 above apply in the same manner to the UK. Consent must be a freely given, specific, informed and unambiguous indication of an employee’s wishes. Therefore, any request for employees to provide racial or ethnic background data should be accompanied by clear information as to why it is being collected and how it will be used for DEI Monitoring Purposes.

[1] Ethnicity pay gap reporting – Women and Equalities Committee (parliament.uk)

[2] At least one board member must be from a minority ethnic background (as defined in the Financial Conduct Authority, Listing Rules and Disclosure Guidance and Transparency Rules (Diversity and Inclusion) Instrument 2022, https://www.handbook.fca.org.uk/instrument/2022/FCA_2022_6.pdf).

[3] Financial Conduct Authority, Listing Rules and Disclosure Guidance and Transparency Rules (Diversity and Inclusion) Instrument 2022, https://www.handbook.fca.org.uk/instrument/2022/FCA_2022_6.pdf.

[4] Article 9(2)(g) UK GDPR.

[5] Article 9(2)(a) UK GDPR.

[6] Paragraph 8, Part 2 of Schedule 1 to the DP Act 2018.

[7] Paragraph 9, Part 2 of Schedule 1 to the DP Act 2018.

[8] Article 9(2)(a) UK GDPR.


* Ms. Moerel thanks Annabel Gillham, a partner at Morrison & Foerster in London, for her valuable input on a previous version of this article.   

1 For recent statistics, see “A Union of equality: EU anti-racism action plan 2020-2025,” https://ec.europa.eu/info/sites/default/files/a_union_of_equality_eu_action_plan_against_racism_2020_-2025_en.pdf, p. 2, referring to a wide range of surveys conducted by the EU Agency for Fundamental Rights (FRA) pointing to high levels of discrimination in the EU, with the highest level in the labor market (29%), both when looking for work and at work.

2 See, specifically, Council Directive 2000/43/EC of June 29, 2000, implementing the principle of equal treatment between persons irrespective of racial or ethnic origin (“Racial Equality Directive”), Official Journal L 180, 19/07/2000 P. 0022–0026. Action to combat discrimination and other types of intolerance at the European level rests on an established EU legal framework, based on a number of provisions of the European Treaties (Articles 2 and 9 of the Treaty on European Union (TEU), Articles 19 and 67(3) of the Treaty on the Functioning of the European Union (TFEU), and the general principles of non-discrimination and equality, also reaffirmed in the EU Charter of Fundamental Rights (in particular, Articles 20 and 21).

3 WEF, Jobs of Tomorrow, Mapping Opportunity in the New Economy, Jan. 2020 (“WEF Report”), WEF_Jobs_of_Tomorrow_2020.pdf (weforum.org). See also here.

4 For a list of examples why diversity policies may fail: https://hbr.org/2016/07/why-diversity-programs-fail. See also Data-Driven Diversity (hbr.org): “According to Harvard Kennedy School’s Iris Bohnet, U.S. companies spend roughly $8 billion a year on DEI training—but accomplish remarkably little. This isn’t a new phenomenon: An influential study conducted back in 2006 by Alexandra Kalev, Frank Dobbin, and Erin Kelly found that many diversity-education programs led to little or no increase in the representation of women and minorities in management.”

5 Research shows that lack of social networks and mentoring and sponsoring is a limiting factor for the promotion of women, but this is even stronger for cultural diversity, due to the lack of a “social bridging network,” a network that allows for connections with other social groups, see Putnam, R.D. (2007) “E Pluribus Unum: Diversity and Community in the Twenty-first Century,” Scandinavian Political Studies, 30 (2), pp. 137‒174. While white men tend to find mentors on their own, women and minorities more often need help from formal programs. Introduction of formal mentoring shows real results: https://hbr.org/2016/07/why-diversity-programs-fail.

6 Quote is from https://hbr.org/2022/03/data-driven-diversity.

7 Historically, there have been cases of misuse of data collected by National Statistical Offices (and others), with extremely detrimental human rights impacts, see Luebke, D. & Milton, S. 1994, “Locating the Victim: An Overview of Census-Taking, Tabulation Technology, and Persecution in Nazi Germany.” IEEE Annals of the History of Computing, Vol. 16 (3). See also W. Seltzer and M. Anderson, “The dark side of numbers: the role of population data systems in human rights abuses,” Social Research, Vol. 68, No. 2 (summer 2001), the authors report that during the Second World War, several European countries, including France, Germany, the Netherlands, Norway, Poland, and Romania, abused population registration systems to aid Nazi persecution of Jews, Gypsies, and other population groups. The Jewish population suffered a death rate of 73 percent in the Netherlands. In the United States, misuse of population data on Native Americans and Japanese Americans in the Second World War is well documented. In the Soviet Union, micro data (including specific names and addresses) were used to target minority populations for forced migration and other human rights abuses. In Rwanda, categories of Hutu and Tutsi tribes introduced in the registration system by the Belgian colonial administration in the 1930s were used to plan and assist in mass killings in 1994.

8 The quote is from the Commission for Racial Equality (2000), Why Keep Ethnic Records? Questions and answers for employers and employees (London, Commission for Racial Equality).

9 In the U.S., for example, “crime prediction tools” proved to discriminate against ethnic minorities. The police stopped and searched more ethnic minorities, and as a result this group also showed more convictions. If you use this data to train an algorithm, the algorithm will allocate a higher risk score to this group. Discrimination by algorithms is therefore a reflection of discrimination already taking place “on the ground”. https://www.cnsnews.com/news/article/barbara-hollingsworth/coalition-predictive-policing-supercharges-discrimination.

10 See L. Moerel, “Algorithms can reduce discrimination, but only with proper data,” Op-ed IAPP Privacy Perspectives, Nov. 16, 2018, https://iapp.org/news/a/algorithms-can-reduce-discrimination-but-only-with-proper-data/.

11 See also the guidance of the UK Information Commissioner (ICO) on AI and data protection, Guidance on AI and data protection | ICO. In earlier publications, I have argued that the specific regime for processing sensitive data under the GDPR is no longer meaningful. Increasingly, it is becoming more and more unclear whether specific data elements are sensitive. Rather, the focus should be on whether the use of such data is sensitive. Processing of racial and ethnic data to eliminate discrimination in the workplace is an example of non-sensitive use, provided that strict technical and organizational measures are implemented to ensure that the data are not used for other purposes. See https://iapp.org/news/a/gdpr-conundrums-processing-special-categories-of-data/ and https://iapp.org/news/a/11-drafting-flaws-for-the-ec-to-address-in-its-upcoming-gdpr-review/.

12 Article 10(2) sub. 5 of the draft AI Act allows for the collection of special categories of data for purposes of bias monitoring, provided that appropriate safeguards are in place, such as pseudonymization.

13 A notable exception is the UK, which long before its exit from the EU, legislated for the collection of racial and ethnic data to meet the requirements of the substantial public interest condition for purposes of both “Equality of opportunity or treatment” and “Racial and ethnic diversity at senior levels” and further provided for criteria for processing ethnic data for statistical purposes, see Schedule 1 to the UK Data Protection Act 2018 (inherited from its predecessor, the Data Protection Act 1998), and the Information Commissioner’s Office Guidance on special category data, 2018. Schedule 1 also provides for specific criteria to meet the requirements of the statistical purposes condition. See here. Another exception is the Netherlands, which allows for limited processing of racial and ethnic data (limited to country of birth and parents’ or grandparents’ countries of birth) for reason of substantial public interest.

14 For example, in the Netherlands, it is generally accepted that collecting DEI data can take place on a voluntary basis. See Dutch Social Economic Council,“Meten is Weten, zicht op effecten van diversiteits- en inclusiebeleid,” Charter Document, pp. 7–10, Dec. 2021. See also the report titled “Het moet wel werken,” p. 30, https://goldschmeding.foundation/wp-content/uploads/Rapport-Het-Moet-Wel-Werken-Vergelijkende-analyse-juli-2021.pdf.

15 The European Handbook on Equality Data (2016) provides a comprehensive overview of how equality data can be collected, https://op.europa.eu/en/publication-detail/-/publication/cd5d60a3-094d-11e7-8a35-01aa75ed71a1/language-en; EU High Level Group on Non-discrimination Equality and Diversity, “Guidelines on improving the collection and use of equality data,” 2021, https://commission.europa.eu/system/files/2022-02/guidance_note_on_the_collection_and_use_of_equality_data_based_on_racial_or_ethnic_origin_final.pdf. See also the reports listed in the previous endnote.

16 Bonnett and Carrington 2000, p. 488.

17 Article 8 Racial Equality Directive.

18 Complaint No. 15/2003, decision on the merits, Dec. 8, 2004, § 27.

19 See, e.g., ECRI General Policy Recommendation No. 4 on national surveys on the experience and perception of discrimination and racism from the point of view of potential victims, adopted on Mar. 6, 1998.

20 See the Report prepared for the European Commission, “Analysis and comparative review of equality data collection practices in the European Union Data collection in the field of ethnicity,” 2017, data_collection_in_the_field_of_ethnicity.pdf (europa.eu), https://op.europa.eu/en/publication-detail/-/publication/cd5d60a3-094d-11e7-8a35-01aa75ed71a1#:~:text=The%20European%20Handbook%20on%20Equality,to%20achieve%20progress%20towards%20equality. For example, the European Handbook on Equality Data initially dating from 2007, already stated that “Monitoring is perhaps the most effective measure an organisation can take to ensure it is in compliance with the equality laws.” The handbook was updated in 2016 and provides a comprehensive overview of how equality data can be collected, https://op.europa.eu/en/publication-detail/-/publication/cd5d60a3-094d-11e7-8a35-01aa75ed71a1/language-en.

21 EU Anti-racism Action Plan 2020‒2025, p. 15.

22 See a longer but similar statement in the 2021 Report of the European Commission evaluating the Racial Equality Directive, p. 14, https://ec.europa.eu/info/sites/default/files/report_on_the_application_of_the_racial_equality_directive_and_the_employment_equality_directive_en.pdf.

23 EU Anti-racism Action Plan 2020-2025, p. 21, under reference to: Niall Crowley, Making Europe more Equal: A Legal Duty? https://www.archive.equineteurope.org/IMG/pdf/positiveequality_duties-finalweb.pdf, which reports on the Member States that already provide for such positive statutory duty. See p. 16 for an overview of explicit preventive duties requiring organizations to take unspecified measures to prevent discrimination, shifting responsibility to act from those experiencing discrimination to employers. They can stimulate the introduction of new organizational policies, procedures and practices on such issues.

24 https://www.ohchr.org/sites/default/files/Documents/Issues/HRIndicators/GuidanceNoteonApproachtoData.pdf

25 For instance, target 17.18 in the 2030 Agenda requests that Social Development Goals indicators are disaggregated by income, gender, age, race, ethnicity, migratory status, disability, geographic location, and other characteristics relevant in national contexts.

26 https://www.ohchr.org/sites/default/files/Documents/Issues/HRIndicators/GuidanceNoteonApproachtoData.pdf, p. 15, footnote 27.

27 EU High Level Group on Non-discrimination Equality and Diversity, “Guidelines on improving the collection and use of equality data,” 2018, p.11, https://commission.europa.eu/system/files/2021-09/en-guidelines-improving-collection-and-use-of-equality-data.pdf.

28 United Nations, End-of-mission statement on Romania, Professor Philip Alston, United Nations Human Rights Council Special Rapporteur on Extreme Poverty and Human Rights, http://www.ohchr.org/EN/NewsEvents/Pages/DisplayNews.aspx?NewsID=16737&LangID=E#sthash.42v5AefT.dpuf.

29 Quote is from Michael O’Flaherty, Director of the Fundamental Rights Agency (FRA), Equality data round table report, 30 December 2021, https://commission.europa.eu/system/files/2021-12/roundtable-equality-data_post-event-report.pdf.

30 See the policy statement on the website of the European Network Against Racism (ENAR), as well as the statement dated 28 September 2022 issued by Equal@Work Partners calling on the EU to implement conducive legal frameworks that will bring operational and legal certainty to organizations willing to implement equality data collection measures on the grounds of race, ethnicity and other related categories, https://www.enar-eu.org/about/equality-data. See also here.

31 For an excellent article on all issues related to ethnic categorization, see Bonnett, A., & Carrington, B. (2000), “Fitting into categories or falling between them? Rethinking ethnic classification,” British Journal of Sociology of Education, 21(4), pp. 487‒500.

32 The Office of the United Nations High Commissioner for Human Rights (OHCHR), “Human Rights Indicators, A Guide to Measurement and Implementation,” 2012 (OHCHR Guide), Chapter III sub. A (Ethical, statistical and human rights considerations in indicator selection), p. 46,  https://www.ohchr.org/sites/default/files/Documents/Publications/Human_rights_indicators_en.pdf.

33 See Corporate Sustainability Reporting Directive (Nov. 10, 2022), at point (4) of Article 1, available here.

34 The new EU sustainability reporting requirements will apply to all large companies that fulfill two of the following three criteria: more than 250 employees, €40 million net revenue, and more than €20 million on the balance sheet, whether listed or not. Lighter reporting standards will apply to small and medium enterprises listed on public markets.

35 See Article 1 (Amendments to Directive 2013/34/EU) sub. (8), which introduces new Chapter 6a Sustainability Reporting Standards, pursuant to which the European Commission will adopt a delegated act, specifying the information that undertakings are to disclose about social and human factors, which include equal treatment and opportunity for all, including diversity provisions.

36 See European Financial Reporting Advisory Group, Draft European Sustainability Reporting Standard S1 Own Workforce, Nov. 2022, here

37 The initial working draft of ESRS S1 published by EFRAG for public consultation did include a public reporting requirement also of the total number of ‘employees belonging to vulnerable groups, where relevant and legally permissible to report’ (see disclosure requirement 11), Download (efrag.org). This requirement was deleted from the draft standards presented by EFRAG to the European Commission.

38 See S1-1 – Policies related to own workforce.

39 OHCHR Guide, p. 46, https://www.ohchr.org/sites/default/files/Documents/Publications/Human_rights_indicators_en.pdf.

40 General Recommendation 8, Membership of racial or ethnic groups based on self-identification, 1990, https://www.legal-tools.org/doc/2503f1/pdf/.

41 https://www.ohchr.org/sites/default/files/Documents/Issues/HRIndicators/GuidanceNoteonApproachtoData.pdf,
pp. 12‒13. See also The EU High Level Group on Non-discrimination Equality and Diversity, “Guidelines on improving the collection and use of equality data,” 2021, p.36, https://commission.europa.eu/system/files/2022-02/guidance_note_on_the_collection_and_use_of_equality_data_based_on_racial_or_ethnic_origin_final.pdf.

42 OHCHR Guide, p. 48.

43 Bonnett and Carrington 2000, p. 488.

44 The Dutch Young Academy, Antidiscrimination data collection in academia: an exploration of survey methodology practices outside of the Netherlands, https://www.dejongeakademie.nl/en/publications/2300840.aspx?t=Antidiscrimination-data-practices-worldwide-and-views-of-students-and-staff-of-colour.

45 See older examples in Bonnett and Carrington 2000.

46 European Commission, Analysis and comparative review of equality data collection practices in the European Union Data collection in the field of ethnicity, p. 14, https://commission.europa.eu/system/files/2021-09/data_collection_in_the_field_of_ethnicity.pdf. Even in France, often seen as a case of absolute prohibition, ethnic data collection is possible under certain exceptions. The same applies to Italy. Italian Workers’ Statute (Italian Law No. 300/1970), Article 8, expressly forbids employers from collecting and using ethnic data to decide whether to hire a candidate and to decide any other aspect of the employment relationship already in place (like promotions). Collecting such data for monitoring workplace discrimination and equal opportunity falls outside the prohibition (provided it is ensured such data cannot be used for other purposes). 

47 Three notable exceptions, Finland, Ireland and the UK (before leaving the EU), place a duty of equality data collection on public bodies as part of their equality planning, see https://ec.europa.eu/info/sites/default/files/data_collection_in_the_field_of_ethnicity.pdf, p. 16. 

48 Data revealing racial or ethnic origin qualifies as a “special category” of personal data under Article 9 of the GDPR. Data on nationality or place of birth of a person or their parents do not qualify as special categories of data and can as a rule be collected without consent of the surveyed respondent. However, if they are used to predict ethnic or racial origin, they become subject to the regime of Article 9 GDPR for processing special categories of data.

49 There is no debate that the summary of requirements here is a correct reflection of the requirements of the GDPR. A similar summary of relevant provisions of the EU High Level Group on Non-discrimination, Equality, and Diversity can be found in its 2018 guidelines, p. 12, https://ec.europa.eu/info/sites/default/files/en-guidelines-improving-collection-and-use-of-equality-data.pdf and further in its 2021 Guidance Note on the collection and use of equality data based on racial or ethnic origin, p. 29 – 31 https://commission.europa.eu/system/files/2022-02/guidance_note_on_the_collection_and_use_of_equality_data_based_on_racial_or_ethnic_origin_final.pdf. See further the guidelines issued by the Dutch Social Economic Council in Dec. 2021, “Meten is Weten, zicht op effecten van diversiteits- en inclusiebeleid,” Charter Document, pp. 7– 10, Dec. 2021; and an earlier report by PWC on assignment of the Dutch government, “Onderzoek Vrijwllige vastlegging van culturele diversiteit,” https://www.rijksoverheid.nl/documenten/publicaties/2017/12/22/onderzoek-vrijwillige-vastlegging-van-culturele-diversiteit.

50 Note that Article 9(2)(b) of the GDPR provides a condition for collecting racial and ethnic data where it “is necessary for the purposes of carrying out the obligations and exercising specific rights of the controller or of the data subject in the field of employment … in so far as it is authorised by Union or Member State law….” There is currently no Union or Member State law that provides for an employer obligation to collect racial or ethnic data for monitoring purposes. However, EU legislators considered this to be a valid exception for Member States to implement in their national laws. Art. 88 of the GDPR states that Member States may, by law or collective agreements, provide for more specific rules to ensure the protection of the rights and freedoms of employees with respect to the processing of employees’ personal data in the employment context. In particular, these rules may be provided for the purposes of, inter alia, equality and diversity in the workplace.

51 See endnote 13.

52 For example, in the Netherlands, it is generally accepted that collecting DEI data can take place on a voluntary basis, see Dutch Social Economic Council, “Meten is Weten, zicht op effecten van diversiteits- en inclusiebeleid,” Charter Document, pp. 7–10, Dec. 2021. See also the report titled “Het moet wel werken,” p. 30, https://goldschmeding.foundation/wp-content/uploads/Rapport-Het-Moet-Wel-Werken-Vergelijkende-analyse-juli-2021.pdf.

53 Article 4(11) of the GDPR defines consent as “any freely given, specific, informed, and unambiguous indication of the data subject’s wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her.”

54 The European Data Protection Board has made it explicit in a series of guidance documents that, for the majority of data processing at work, consent is not a suitable legal basis due to the nature of the relationship between employer and employee. See also Opinion 2/2017 on data processing at work (WP249), paragraph 3.3.1.6.2, at https://edpb.europa.eu/sites/default/files/files/file1/edpb_guidelines_202005_consent_en.pdf.

55 EDPB Guidelines 05/2020 on consent under Regulation 2016/679, adopted on May 4, 2020 (EDPB Guidelines on consent), p. 21,  https://edpb.europa.eu/sites/default/files/files/file1/edpb_guidelines_202005_consent_en.pdf.

56 WP249, paragraph 6.2.

57 EDPB Guidelines on consent, pp. 13–15.

58 Article 7(3) GDPR.

59 EDPB Guidelines on consent, p. 5.

60 Article 25 GDPR.

61 See Article 29 Working Party Opinion 05/2014 on Anonymization Techniques, adopted on 10 April 2014.

62 Recital 26 of the GDPR states the principles of data protection should not apply to anonymous information which does not relate to an identified or identifiable natural person, or which relates to personal data rendered anonymous in such a manner that the data subject is not or no longer identifiable. Recital 26 further states that the GDPR does not concern the processing of such anonymous information, including for statistical or research purposes.

63 For an informative article on the practicalities of implementing data-driven diversity proposals, see Data-Driven Diversity (hbr.org). The distinction between outcome-based and process-based metrics is based on this article.

64 https://www.cnil.fr/sites/default/files/atoms/files/ddd_gui_20120501_egalite_chances_0.pdf.

65 https://www.cnil.fr/sites/default/files/atoms/files/ddd_gui_20120501_egalite_chances_0.pdf, p. 70.

66 This article from the workers’ union “CGT” refers to a study conducted by the “Défenseur des droits” (a public body), which notably co-signed the CNIL’s 2012 paper referred to in the two preceding endnotes: https://www.cgt.fr/actualites/france/interprofessionnel/discriminations/le-defenseur-des-droits-denonce-un-racisme. See also recent studies conducted by the public institutions INED and INSEE: https://teo.site.ined.fr/fichier/s_rubrique/29262/teo2_questionnaire.fr.pdf.

FPF at IAPP’s Europe Data Protection Congress 2022: Global State of Play, Automated Decision-Making, and US Privacy Developments

Authored by Christina Michelakaki, FPF Intern for Global Policy

On November 16 and 17, 2022, the IAPP hosted the Europe Data Protection Congress 2022 – Europe’s largest annual gathering of data protection experts. During the Congress, members of the  Future of Privacy Forum (FPF) team moderated and spoke at three different panels. Additionally, on November 14, FPF hosted the first Women@Privacy awards ceremony at its Brussels office, and on November 15, FPF co-hosted the sixth edition of its annual Brussels Privacy Symposium with the Vrije Universiteit Brussels (VUB)’s Brussels Privacy Hub on the issue of “Vulnerable People, Marginalization, and Data Protection” (event report forthcoming in 2023).

In the first panel for IAPP’s Europe Data Protection Congress, Global Privacy State of Play, Gabriela Zanfir-Fortuna (VP for Global Privacy, Future of Privacy Forum) moderated a conversation on key global trends in data protection and privacy regulation in jurisdictions from Latin America, Asia, and Africa. Linda Bonyo (CEO, Lawyers Hub Africa), Annabel Lee (Director, Digital Policy (APJ) and ASEAN Affairs, Amazon Web Services), and Rafael Zanatta (Director, Data Privacy Brasil Research Association) participated. 

In the second panel, Automated Decision-making and Profiling: Lessons from Court and DPA Decisions, Sebastião Barros Vale (EU Privacy Counsel, Future of Privacy Forum) led a discussion on FPF’s ADM case-law report and impactful cases and relevant concepts for automated decision-making regulation under the GDPR. Ruth Boardman (Partner, Co-head, International Data Protection Practice, Bird & Bird), Simon Hania (DPO, Uber), and Gintare Pazereckaite (Legal Officer, EDPB) participated.

Finally, in the third panel, Perspectives on the Latest US Privacy Developments, Keir Lamont (Senior Counsel, Future of Privacy Forum) participated in a conversation focused on data protection developments at the federal and state level in the United States. Cobun Zweifel-Keegan (Managing Director, D.C., IAPP) moderated it, and Maneesha Mithal (Partner, Privacy and Cybersecurity, Wilson Sonsini Goodrich & Rosati) and Dominique Shelton Leipzig (Partner, Cybersecurity & Data Privacy; Leader, Global Data Innovation & AdTech, Mayer Brown) also participated.

Below is a summary of the discussions in each of the three panels:

1. Global trends and legislative initiatives around the world

In the first panel, Global Privacy State of Play, Gabriela Zanfir-Fortuna stressed that although EU and US developments in privacy and data protection are in the spotlight, the explosion of regulatory action in other regions of the world is very interesting and deserves more attention.

Linda Bonyo touched upon the current movement in Africa where countries are adopting their own data protection laws, primarily inspired by the European model of data protection regulation, since they trust that the GDPR is a global standard and lack the resources to draft policies from scratch. Bonyo also added that the lack of resources and limited expertise are the main reasons why African countries struggle to establish independent Data Protection Authorities (DPAs). She then stressed that the Covid-19 pandemic revived discussions about a continental legal framework to address data flows. Regarding enforcement, she noted that Africa’s approach is “preventative” rather than “punitive.” Bonyo also underlined that it is common for big tech companies to operate outside of the continent and only have a small subsidiary in the African region, rendering local and regional regulatory action less impactful than in other regions.

Annabel Lee offered her view on the very dynamic Asia-Pacific region, noting that the latest trends, especially post-GDPR, include not only the introduction of new GDPR-like laws but also the revision of existing ones. Lee noted, however, that the GDPR is a very complex piece of legislation to “copy,” especially if a country is building its first data protection regime. She then focused on specific jurisdictions, noting that South Korea has overhauled its originally fragmented framework with a more comprehensive one and that Australia will implement a broad extraterritorial element in its revised law. Then Lee stated that when it comes to implementation and interpretation, data protection regimes in the region differ significantly, and countries try to promote harmonization by mutual recognition. With regards to enforcement, she stressed that it is common to see occasional audits and that in certain countries, such as Japan, there is a very strong culture of compliance. She also added that education can play a key role in working towards harmonized rules and enforcement. Lee offered Singapore as an example, where the Personal Data Protection Commission gives companies explanations not only on why they are in breach but also on why they are not in breach.

Rafael Zanatta explained that after years of strenuous discussions, there is an approved data protection legislation in Brazil (LGPD) that has already been in place for a couple of years. The new DPA created by the LGPD will likely ramp up its enforcement duties next year and has, so far, focused on building experimental techniques (to help incentivize associations and private actors to cooperate) and publishing guidelines, namely non-binding rules that will provide future interpretation for cases. Zanatta stressed that Brazil has been experiencing the formalization of autonomous data protection rights with supreme court rulings stating that data protection is a fundamental right different from privacy. He underscored that it will be interesting to see how the private sector applies data protection rights given their horizontal effect and the development of concepts like positive obligations and the collective dimension of rights. He explained that the extraterritorial applicability of Brazil’s law is very similar to the GDPR since companies do not need to operate in Brazil for the law to apply. He also touched upon the influence of Mercosur, a South American trade bloc, in discussions around data protection and the collective rights of the indigenous people of Bolivia in light of the processing of their biometric data. With regards to enforcement, he explained that in Brazil, it is happening primarily through the courts due to Brazil’s unique system where federal prosecutors and public defenders can file class actions.


2. Looking beyond case law on automated decision-making

In the second panel, Automated Decision-making and Profiling: Lessons from Court and DPA Decisions, Sebastião Barros Vale offered an overview of FPF’s ADM Report, noting that it contains analyses of more than 70 DPA decisions and court rulings concerning the application of Article 22 and other related GDPR provisions. He also briefly summarized the Report’s main conclusions. One of the main points he highlighted is that the GDPR covers automated decision-making (ADM) comprehensively beyond Article 22, including through the application of overarching principles like fairness and transparency, rules on lawful grounds for processing, and carrying out Data Protection Impact Assessments (DPIA). 

Ruth Boardman underlined that the FPF Report reveals the areas of the law that are still “foggy” regarding ADM. Boardman also offered her view on the Portuguese DPA decision concerning a university using proctoring software to monitor students’ behavior during exams and detect fraudulent acts. The Portuguese DPA ruled that the Article 22 prohibition applied, given that the human involvement of professors in the decisions to investigate instances of fraud and invalidate exams was not meaningful. Boardman further explained that this case, along with the Italian DPA’s Foodinho case, shows that the human in the loop must have meaningful involvement in the process of making a decision for Article 22 GDPR to be inapplicable. She added that internal guidelines and training provided by the controller may not be definitive factors but can serve as strong indicators of meaningful human involvement. Regarding the concept of “legal or similarly significant effects” (another condition for the application of Article 22 GDPR), Boardman noted the link between such effects and contract law. For example, under national laws transposing the e-Commerce Directive in which adding a product to a virtual basket counts as an offer to the merchant and not as a binding contract, no legal effects are triggered. She also added that meaningful information about the logic behind ADM should include the consequences that data subjects can suffer, and referred to an enforcement notice from the UK’s Information Commissioner’s Office concerning the creation of profiles for direct marketing purposes.

Simon Hania argued that the FPF Report showed the robustness of the EDPB guidelines on ADM and that ADM triggers GDPR provisions that are relevant to fairness and transparency. With regards to the “human in the loop” concept, Hania claimed that it is important to involve multiple humans and ensure that they are properly trained to avoid biased decisions. Then he elaborated on a case concerning Uber’s algorithms that match drivers with clients, where Uber drivers requested access to data to assess whether the matching process was fair. For the Amsterdam District Court, the drivers did not demonstrate how the matching process could have legal or similarly significant effects on them, which meant that drivers did not have enhanced access rights that would only apply if ADM covered by Article 22 GDPR was at stake. However, when ruling on an algorithm used by another ride-hailing company (Ola) to calculate fare deductions based on drivers’ performance, the same Court found that the ADM at issue had significant effects on drivers. For Hania, a closer inspection of the two cases reveals that both ADM schemes affect drivers’ ability to earn or lose remuneration, which highlights the importance of financial impacts when assessing the effects of ADM as per Article 22. He also touched on a decision from the Austrian DPA concerning a company that scored individuals on the likelihood they would belong to certain demographic groups, as the DPA mandated the company to inform individuals about how it calculated their individual scores. For Hania, the case shows that controllers need to explain the reasons behind their automated decisions – regardless of whether they are covered by Article 22 GDPR or not – to comply with the fairness and transparency principles of Article 5 GDPR.

Gintare Pazereckaite noted that the FPF Report is particularly helpful in understanding inconsistencies in how DPAs apply Article 22 GDPR. She then stressed that the interpretation of “solely automated processing” should be done in light of protecting and safeguarding data subjects’ fundamental rights. Pazereckaite also referred to the criteria set out by the EDPB guidelines that clarify the concept of “legal and similarly significant effects.” She added that data protection principles such as accountability and data protection by design play an important role in allowing data subjects to understand how ADM works and what consequences it may bring about. Lastly, Pazereckaite commented on Article 5 of the proposed AI Act, which contains a list of prohibited AI practices, and its importance when an algorithm does not trigger Article 22 GDPR.


3. ADPPA and state laws reshaping the US data protection regime

In the last panel, Perspectives on the Latest US Privacy Developments, Keir Lamont offered an overview of recent US Congressional efforts to enact the American Data Privacy and Protection Act (ADPPA) and outstanding areas of disagreement. For him, the bill would introduce stronger rights and protections than those set forth in existing state-level laws, including a broad scope, strong data minimization provisions, limitations on advertising practices, enhanced privacy-by-design requirements, algorithmic impact assessments, and a private right of action. In contrast, existing state laws typically adhere to the outdated opt-in/opt-out paradigm for establishing individual privacy rights.

Maneesha Mithal explained that in the absence of comprehensive federal privacy legislation, the Federal Trade Commission (FTC) has largely taken on the role of a DPA by virtue of having jurisdiction over a broad range of sectors in the economy and acting as both an enforcement and a rulemaking agency. Mithal explained that the FTC enforces four existing privacy laws in the US and can also take action against both unfair and deceptive trade practices. For example, the FTC can enforce against any statement (irrespective of whether it is in a privacy policy or in the context of user interfaces), material omissions (for example, the FTC has concluded that a company did not inform its clients that it was collecting second-by-second television data and was further sharing it), and unfair practices in the data security area. Mithal pointed out that since the FTC does not have the authority to seek civil penalties for first-time violations, it is trying to introduce additional deterrents by naming individuals (for example, in the case of an alcohol provider, the FTC named the CEO for failing to prioritize security) and is using its power to obtain injunctive relief. For example, in a case where a company was unlawfully using facial recognition systems, the FTC ordered the company to delete any models or algorithms developed using the unlawfully collected data, thus applying the fruit of the poisonous tree theory. Mithal also noted that although the FTC has historically not been active as a rulemaking authority due to procedural issues along with the lack of resources and time considerations, it is initiating a major rulemaking involving “Commercial Surveillance and Lax Data Security Practices.”

Finally, Dominique Shelton Leipzig offered remarks on state-level legislation, focusing on the California Consumer Privacy Act (CCPA) as amended by the California Privacy Rights Act (CPRA), adding that Colorado, Connecticut, Utah, and Virginia have similar laws. She elaborated on the CPRA’s contractual language, comparing California’s categorization of “Businesses,” “Contractors,” “Third Parties,” and “Service Providers” to the GDPR’s distinction between controllers and processors. Shelton Leipzig also explained that the CPRA introduced a highly disruptive model for the ad tech industry, since consumers can opt out of both the sale and the sharing of data. The CPRA also created a new independent rulemaking and enforcement agency, the first in the US focused solely on data protection and privacy. Finally, she addressed the recently enacted California Age-Appropriate Design Code Act, which focuses on the design of internet tools, and stressed that companies are struggling to implement it.


Further reading:

FPF Report: Automated Decision-Making Under the GDPR – A Comprehensive Case-Law Analysis

On May 17, the Future of Privacy Forum launched a comprehensive Report analyzing case-law under the General Data Protection Regulation (GDPR) applied to real-life cases involving Automated Decision Making (ADM). The Report is informed by extensive research covering more than 70 Court judgments, decisions from Data Protection Authorities (DPAs), specific Guidance and other policy documents issued by regulators.

The GDPR has a particular provision applicable to decisions based solely on automated processing of personal data, including profiling, which produces legal effects concerning an individual or similarly affects that individual: Article 22. This provision enshrines one of the “rights of the data subject”, particularly the right not to be subject to decisions of that nature (i.e., ‘qualifying ADM’), which has been interpreted by DPAs as a prohibition rather than a prerogative that individuals can exercise.

However, the GDPR’s protections for individuals against forms of automated decision-making (ADM) and profiling go significantly beyond Article 22. In this respect, there are several safeguards that apply to such data processing activities, notably the ones stemming from the general data processing principles in Article 5, the legal grounds for processing in Article 6, the rules on processing special categories of data (such as biometric data) under Article 9, specific transparency and access requirements regarding ADM under Articles 13 to 15, and the duty to carry out data protection impact assessments in certain cases under Article 35.

This new FPF Report outlines how national courts and DPAs in the European Union (EU)/European Economic Area (EEA) and the UK have interpreted and applied the relevant EU data protection law provisions on ADM so far, both before and after the GDPR became applicable, as well as the notable trends and outliers in this respect. To compile the Report, we have looked into publicly available judicial and administrative decisions and regulatory guidelines across EU/EEA jurisdictions and the UK. It draws from more than 70 cases (19 court rulings and more than 50 enforcement decisions, individual opinions, or general guidance issued by DPAs) spanning 18 EEA Member States, the UK, and the European Data Protection Supervisor (EDPS). To complement the facts of the cases discussed, we have also looked into press releases, DPAs’ annual reports, and media stories.

Examples of ADM and profiling activities assessed by EU courts and DPAs and analyzed in the Report include the algorithmic management of platform workers, automated recruitment and social assistance tools, and creditworthiness assessment algorithms.


Our analysis shows that the GDPR as a whole is relevant for ADM cases and has been effectively applied to protect the rights of individuals in such cases, even in situations where the ADM at issue did not meet the high threshold established by Article 22 GDPR. Among those, we found detailed transparency obligations about the parameters that led to an individual automated decision, a broad reading of the fairness principle to avoid situations of discrimination, and strict conditions for valid consent in cases of profiling and ADM.

Moreover, we found that when enforcers are assessing the threshold of applicability for Article 22 (“solely” automated, and “legal or similarly significant effects”), the criteria they use are increasingly sophisticated.

A recent preliminary ruling request sent by an Austrian court in February 2022 to the Court of Justice of the European Union (CJEU) may soon help clarify these concepts, as well as others related to the information which controllers need to give data subjects about ADM’s underlying logic, significance, and envisaged consequences for the individual.

The findings of this Report may also serve to inform the discussions about pending legislative initiatives in the EU that regulate technologies or business practices that foster, rely on, or relate to ADM and profiling, such as the AI Act, the Consumer Credits Directive, and the Platform Workers Directive.

On May 20, during an FPF roundtable, the authors of the report discussed some of the most impactful decisions analyzed in the Report with prominent European data protection experts. These include cases related to the algorithmic management of platform workers in Italy and the Netherlands, the use of automated recruitment and social assistance tools, and creditworthiness assessment algorithms. The discussion also covered pending questions sent by national courts to the CJEU on matters of algorithmic transparency under the GDPR. View a recording of the conversation here, and download the slides here.

What the Biden Executive Order on Digital Assets Means for Privacy

Author: Dale Rappaneau

Dale Rappaneau is a policy intern at the Future of Privacy Forum and a 3L at the University of Maine School of Law.

On March 9, the Biden Administration issued an Executive Order on “Ensuring Responsible Development of Digital Assets” (“the Order”), published together with an explanatory Fact Sheet. The Order states that the growing adoption of digital assets throughout the economy and the inconsistent controls to mitigate their risks necessitate a new governmental approach to regulating digital assets.

The Order outlines a whole-of-government approach to address a wide range of technological frameworks, including blockchain protocols and centralized systems. The Order frames this approach as an important step toward safeguarding consumers and businesses from illicit activities and potential privacy harms involving digital assets. In particular, it calls for a list of federal agencies and regulators to assess digital assets, consider future action, and ultimately provide reports recommending how to achieve the Order’s numerous policy goals. The Order recognizes the importance of incorporating data and privacy protections into this approach, which indicates that the Administration is actively considering the privacy risks associated with digital assets.

1. Covered Technologies

Definitions

Digital Assets – The Order defines digital assets broadly, including cryptocurrencies, stablecoins, and all central bank digital currencies (CBDCs), regardless of the technology used. The term also refers to any other representations of value or financial instrument issued or represented in a digital form through the use of distributed ledger technology relying on cryptography, such as a blockchain protocol.

CBDC – The Order defines a Central Bank Digital Currency (“CBDC”) as digital money that is a direct liability of the central bank, not of a commercial bank. This definition aligns with the recent Federal Reserve Board CBDC report. A U.S. CBDC could support a faster and more modernized financial system, but it would also raise important policy questions including how it would affect the current rules and regulations of the U.S. financial sector.

Cryptocurrencies – These are digital assets that may operate as a medium of exchange and are recorded through distributed ledger technologies that rely on cryptography. This definition is notable because blockchain is often mistaken as the only form of distributed ledger technology, leading some to believe that all cryptocurrencies require a blockchain. However, the Order defines cryptocurrencies by reference to distributed ledger technology – not blockchain – and seems to cover both mainstream cryptocurrencies utilizing a blockchain (e.g., bitcoin or Ether) and alternative cryptocurrencies built on distributed ledger technology without a blockchain (e.g., IOTA).

Stablecoins – The Order recognizes stablecoins as a category of cryptocurrencies featuring mechanisms aimed at maintaining a stable value. As reported by relevant agencies, stablecoin arrangements may utilize distributed or centralized ledger technology.

Implications of Covered Technologies

From a technical perspective, distributed ledger technologies such as blockchain stand in stark contrast to centralized systems. Blockchain protocols, for example, allow users to conduct financial transactions on a peer-to-peer level, without requiring oversight from the private sector or government. Centralized ledger technology, as used by most credit cards and banks, typically requires a private sector or government actor to facilitate financial transactions. In this environment, the data flows through the actor, who carries obligations to monitor and protect the data.


Despite the technical differences between these approaches, the Order appears to group these very different financial transaction systems into the single umbrella term of digital assets. It does this by including within the scope of the definition of digital assets all CBDCs, even ones utilizing centralized ledger technology, and other assets using distributed ledger technology. This homogenization of technological concepts may indicate that the Administration is seeking a uniform regulatory approach to these technologies.

2. Privacy Considerations of the EO

Section 2 of the Order states the principal policy objectives with respect to digital assets, which include: exploring a U.S. CBDC; ensuring responsible development and use of digital assets and their underlying ledger technologies; and mitigating finance and national security risks posed by the illicit use of digital assets.

Notably, the Administration uses the word “privacy” five times in this section, declaring that digital assets should maintain privacy, shield against arbitrary or unlawful surveillance, and incorporate privacy protections into their architecture. The need to ensure that digital assets preserve privacy raises notable, albeit different, implications for both centralized and decentralized technologies.

Privacy Implications of a United States CBDC

The Order places the highest urgency on research and development into the potential design and deployment of a U.S. CBDC, which would need to be designed to include privacy protections. The Order states that a United States CBDC would be the liability of the Federal Reserve, which is currently experimenting with a number of CBDC system designs, including centralized and decentralized ledger technologies, as well as alternative technologies. Although the Federal Reserve has not chosen a particular system, the monetary authority has listed numerous privacy-related characteristics that should be incorporated into a United States CBDC regardless of the technology used.

First, the Federal Reserve recognizes that a CBDC would generate data about users’ financial transactions in the same ways that commercial banks and nonbanks do today. This may include a user’s name, email address, physical address, know-your-customer (KYC) data, and more. Depending on the design chosen for the CBDC, this data may be centralized under the control of a single entity or distributed across ledgers held by multiple entities or users.

Second, because of the robust rules designed to combat money laundering and the financing of terrorism, a CBDC would need to allow intermediaries to verify the identity of the person accessing it, just as banks and financial institutions currently do. For this reason, the Federal Reserve states that a CBDC would need to safeguard an individual’s privacy while deterring criminal activity.

This intersection between consumer privacy and the transparency needed to monitor criminal activity gets to the heart of the Order. On one hand, a United States CBDC would provide certain data security and privacy protections for consumers under the current rules and regulations imposed on financial institutions. The Gramm-Leach-Bliley Act (GLBA), for example, includes privacy and data security provisions that regulate the collection, use, protection, and disclosure of nonpublic personal information by financial institutions (15 U.S.C.A. §§ 6801 to 6809). But on the other hand, the CBDC would likely require the Federal Reserve, or entrusted intermediaries, to monitor and verify the identity of users to reduce the likelihood of illicit transactions.

It is unclear whether current rules and regulations would apply if the CBDC utilizes distributed ledger technology, given that such rules typically establish their scope via definitions of covered entities using particular data. Because users (and not financial institutions) hold copies of the data ledger under distributed ledger technology systems, pre-existing privacy laws may fail to cover large amounts of data processing and provide adequate safeguards to consumers. In addition, as the next section suggests, it is unclear how monitoring and verification would occur under a CBDC that uses distributed ledger technology. This raises further questions about how policymakers can navigate the intersection of privacy and transaction monitoring.

Privacy Implications of Distributed Ledger Technologies

Distributed ledger technologies often attempt to create an environment where users do not have to reveal their personal information. Transactions under these systems typically do not filter through a singular entity such as the Federal Reserve, but instead happen on a peer-to-peer level, with users directly exchanging digital assets without third-party oversight. In this environment, users can complete transactions using hashed identifiers rather than their own information, and these transactions usually occur without the supervision of a private or government entity. Together, the use of hashed identifiers and the lack of supervision create a digital environment rich in identity-shielding protections.
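To make this concrete, the following is a minimal, hypothetical sketch in Python of how a ledger entry can reference hashed, pseudonymous identifiers derived from users’ public keys rather than names or account numbers. The key generation and transaction format are simplified stand-ins for illustration only; they do not reproduce the address scheme of any actual blockchain.

```python
# Illustrative sketch only (hypothetical): a ledger entry that references
# hashed, pseudonymous identifiers instead of a person's name. This is not
# the address scheme of any real blockchain; real systems use specific key
# types, multi-step hash chains, and encodings.
import hashlib
import secrets

def pseudonymous_address(public_key: bytes) -> str:
    """Derive a pseudonymous identifier by hashing a public key."""
    return hashlib.sha256(public_key).hexdigest()[:40]

# Stand-in "public keys"; real wallets derive these from elliptic-curve key pairs.
alice_pubkey = secrets.token_bytes(33)
bob_pubkey = secrets.token_bytes(33)

# A peer-to-peer transfer recorded on the ledger references only hashed identifiers.
transaction = {
    "from": pseudonymous_address(alice_pubkey),
    "to": pseudonymous_address(bob_pubkey),
    "amount": 0.5,
}
print(transaction)  # no names, emails, or account numbers appear on the ledger
```

The point of the sketch is that the ledger itself records only opaque identifiers; linking an identifier back to a natural person requires information held off the ledger, which is why the Order’s privacy goals and its monitoring goals pull in different directions for these systems.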

However, experts recognize that distributed ledger technologies also create a multitude of financial risks. If users can conduct transactions on a peer-to-peer level without supervision or revealing their identity, they can more easily conduct illicit activities, including money laundering, terror funding, and human and drug trafficking.

The Order acknowledges these benefits and risks. The Fact Sheet prioritizes privacy protections and efforts to combat criminal activities, which indicates that the Order seeks to emphasize the privacy-preserving aspects of new distributed ledger technologies while finding ways to restrict illicit financial activity. Such an emphasis may represent an enhanced governmental effort to address criminal activities in the digital asset landscape while avoiding measures that would create risks to privacy and data protection.

3. Future Action: Privacy and Law Enforcement Equities

The Order’s repeated emphasis on privacy seems to align with the Biden Administration’s current focus on prioritizing privacy and data protection rulemaking. The Order acknowledges both necessary safeguards to combat illicit activities and the need to embed privacy protections in the regulation of digital assets.

The U.S. Department of the Treasury and the Federal Reserve have articulated concerns regarding how bad actors exploit distributed ledger technologies for illicit purposes, and those agencies will likely make recommendations to strengthen government oversight and supervision capabilities. However, the Order’s emphasis on privacy seems to indicate that the Administration wants to ensure privacy protections while also enabling traceability to monitor users, verify identities, and investigate illicit activities.

The question is, will the Administration find a way to preserve the privacy protections of centralized and distributed ledger technology, while also promoting the efficacy of monitoring illicit activities? That answer will likely come once agencies and regulators start providing reports that recommend steps to achieve the Order’s goals. Until then, the answer remains unknown, and entities utilizing cryptocurrencies or other digital assets should stay aware of a possible shift in how the Federal Government regulates the digital asset landscape.

Reading the Signs: the Political Agreement on the New Transatlantic Data Privacy Framework

The President of the United States, Joe Biden, and the President of the European Commission, Ursula von der Leyen, announced last Friday, in Brussels, a political agreement on a new Transatlantic framework to replace the Privacy Shield. 

This is a significant escalation of the topic within Transatlantic affairs, compared to the 2016 announcement of a new deal to replace the Safe Harbor framework. Back then, it was Commission Vice-President Andrus Ansip and Commissioner Vera Jourova who announced at the beginning of February 2016 that a deal had been reached. 

The draft adequacy decision was only published a month after the announcement, and the adequacy decision was adopted 6 months later, in July 2016. Therefore, it should not be at all surprising if another 6 months (or more!) pass before the adequacy decision for the new Framework produces legal effects and can actually support transfers from the EU to the US, especially since the US side still has to adopt at least one Executive Order to provide for the agreed-upon new safeguards.

This means that transfers of personal data from the EU to the US may still be blocked in the following months – possibly without a lawful alternative to continue them – as a consequence of Data Protection Authorities (DPAs) enforcing Chapter V of the General Data Protection Regulation in the light of the Schrems II judgment of the Court of Justice of the EU, either as part of the 101 noyb complaints submitted in August 2020 and slowly starting to be solved, or as part of other individual complaints/court cases. 

After the agreement “in principle” was announced at the highest possible political level, EU Justice Commissioner Didier Reynders doubled down on the point that this agreement is reached “on the principles” for a new framework, rather than on the details of it. Later on he also gave credit to Commerce Secretary Gina Raimondo and US Attorney General Merrick Garland for their hands-on involvement in working towards this agreement. 

In fact, “in principle” became the leitmotif of the announcement, as the first EU Data Protection Authority to react to the announcement was the European Data Protection Supervisor, who wrote that he “Welcomes, in principle”, the announcement of a new EU-US transfers deal – “The details of the new agreement remain to be seen. However, EDPS stresses that a new framework for transatlantic data flows must be sustainable in light of requirements identified by the Court of Justice of the EU”.

Of note, there is no catchy name for the new transfers agreement, which was referred to as the “Trans-Atlantic Data Privacy Framework”. Nonetheless, FPF’s CEO Jules Polonetsky submits the “TA DA!” Agreement, and he has my vote. For his full statement on the political agreement being reached, see our release here.

Some details of the “principles” agreed on were published hours after the announcement, both by the White House and by the European Commission. Below are a couple of things that caught my attention from the two brief Factsheets.

The US has committed to “implement new safeguards” to ensure that SIGINT activities are “necessary and proportionate” (a standard of EU law – see Article 52 of the EU Charter on how the exercise of fundamental rights can be limited) in the pursuit of defined national security objectives. Therefore, the new agreement is expected to address the lack of safeguards for government access to personal data as specifically outlined by the CJEU in the Schrems II judgment.

The US also committed to creating a “new mechanism for the EU individuals to seek redress if they believe they are unlawfully targeted by signals intelligence activities”. This new mechanism was characterized by the White House as having “independent and binding authority”. Per the White House, this redress mechanism includes “a new multi-layer redress mechanism that includes an independent Data Protection Review Court that would consist of individuals chosen from outside the US Government who would have full authority to adjudicate claims and direct remedial measures as needed”. The EU Commission mentioned in its own Factsheet that this would be a “two-tier redress system”. 

Importantly, the White House mentioned in the Factsheet that oversight of intelligence activities will also be boosted – “intelligence agencies will adopt procedures to ensure effective oversight of new privacy and civil liberties standards”. Oversight and redress are different issues and both are equally important – for details, see this piece by Christopher Docksey. However, they tend to be thought of as one and the same, so the fact that they are addressed separately in this announcement is significant.

One of the remarkable things about the White House announcement is that it includes several EU law-specific concepts: “necessary and proportionate”, “privacy, data protection” mentioned separately, “legal basis” for data flows. In another nod to the European approach to data protection, the entire issue of ensuring safeguards for data flows is framed as more than a trade or commerce issue – with references to a “shared commitment to privacy, data protection, the rule of law, and our collective security as well as our mutual recognition of the importance of trans-Atlantic data flows to our respective citizens, economies, and societies”.

Last, but not least, Europeans have always framed their concerns related to surveillance and data protection as fundamental rights concerns. The US also gives a nod to this approach, by referring a couple of times to “privacy and civil liberties” safeguards (thus adding the “civil liberties” dimension) that will be “strengthened”. All of these are positive signs for a “rapprochement” of the two legal systems and are certainly an improvement over the “commerce”-focused approach of the past on the US side.

It should also be noted that the new framework will continue to be a self-certification scheme managed by the US Department of Commerce.

What does all of this mean in practice? As the White House details, this means that the Biden Administration will have to adopt (at least) an Executive Order (EO) that includes all these commitments and on the basis of which the European Commission will draft an adequacy decision.

Thus, there are great expectations in sight following the White House and European Commission Factsheets, and the entire privacy and data protection community is waiting to see further details.

In the meantime, I’ll leave you with an observation made by my colleague, Amie Stepanovich, VP for US Policy at FPF, who highlighted that Section 702 of the Foreign Intelligence Surveillance Act (FISA) is set to expire on December 31, 2023. This presents Congress with an opportunity to act, building on the extensive work done by the US Government in the context of the transatlantic data transfers debate.

Understanding why the first pieces fell in the transatlantic transfers domino

The Austrian DPA and the EDPS decided EU websites placing US cookies breach international data transfer rules 

Two decisions issued by Data Protection Authorities (DPAs) in Europe and published in the second week of January 2022 found that two websites, one run by a contractor of the European Parliament (EP), and the other one by an Austrian company, have unlawfully transferred personal data to the US merely by placing cookies (Google Analytics and Stripe) provided by two US-based companies on the devices of their visitors. Both decisions looked into the transfers safeguards put in place by the controllers (the legal entities responsible for the websites), and found them to be either insufficient – in the case against the EP, or ineffective – in the Austrian case. 

Both decisions affirm that all transfers of personal data from the EU to the US need “supplemental measures” on top of their Article 46 GDPR safeguards, in the absence of an adequacy decision and under the current US legal framework for government access to personal data for national security purposes, as assessed by the Court of Justice of the EU in its 2020 Schrems II judgment. Moreover, the Austrian case indicates that in order to be effective, the supplemental measures adduced to safeguard transfers to the US must “eliminate the possibility of surveillance and access [to the personal data] by US intelligence agencies”, seemingly putting to rest the idea of the “risk based approach” in international data transfers post-Schrems II.

This piece analyzes the two cases comparatively, considering they have many similarities beyond their timing: they both target widely used cookies (Google Analytics, in addition to Stripe in the EP case), they both stem from complaints in which individuals are represented by the Austrian NGO noyb, and it is possible that they will be followed by similar decisions from the other DPAs that received a batch of 101 complaints in August 2020 from the same NGO, relying on identical legal arguments and very similar facts. It walks through the most important findings made by the two regulators, showing how their analyses were in sync and how they likely preface similar decisions for the rest of the complaints.

1. “Personal data” is being “processed” through cookies, even if users are not identified and even if the cookies are thought to be “inactive”

In the first decision, the European Data Protection Supervisor (EDPS) investigated a complaint made by several Members of the European Parliament against a website made available by the EP to its Members and staff in the context of managing COVID-19 testing. The complainants raised concerns with regard to transfers of their personal data to the US through cookies provided by US-based companies (Google and Stripe) and placed on their devices when accessing the COVID-19 testing website. The case was brought under the Data Protection Regulation for EU Institutions (EUDPR), which has identical definitions and overwhelmingly similar rules to the GDPR.

One of the key issues that was analyzed in order for the case to be considered falling under the scope of the EUDPR was whether personal data was being processed through the website by merely placing cookies on the devices of those who accessed it. Relying on its 2016 Guidelines on the protection of personal data processed through Web Services, the EDPS noted in the decision that “tracking cookies, such as the Stripe and Google Analytics cookies, are considered personal data, even if the traditional identity parameters of the tracked users are unknown or have been deleted by the tracker after collection”. It also noted that “all records containing identifiers that can be used to single out users, are considered as personal data under the Regulation and must be treated and protected as such”. 

The EP argued in one of its submissions to the regulator that the Stripe cookie “had never been active, since registration for testing for EU Staff and Members did not require any form of payment”. However, the EP also confirmed that the dedicated COVID-19 testing website, which was built by its contractor, copied code from another website run by the same contractor, and “the parts copied included the code for a cookie from Stripe that was used for online payment for users” of the other website. In its decision, the EDPS highlighted that “upon installation on the device, a cookie cannot be considered ‘inactive’. Every time a user visited [the website], personal data was transferred to Stripe through the Stripe cookie, which contained an identifier. (…) Whether Stripe further processed the data transferred through the cookie is not relevant”. 

With regard to the Google Analytics cookies, the EDPS only notes that the EP (as controller) acknowledged that the cookies “are designed to process ‘online identifiers, including cookie identifiers, internet protocol addresses and device identifiers’ as well as ‘client identifiers’”. The regulator concluded that personal data were therefore transferred “through the above-mentioned trackers”.  

In the second decision, which concerned the use of Google Analytics by a website owned by an Austrian company and targeting Austrian users, the DPA explained in more detail why it found that personal data was being processed by the website through Google Analytics cookies under the GDPR.

1.1 Cookie identification numbers, by themselves, are personal data

The DPA found that the cookies contained identification numbers, including a UNIX timestamp at the end, which shows when a cookie was set. It also noted that the cookies were placed either on the device or the browser of the complainant. The DPA affirmed that relying on these identification numbers makes it possible for both the website and Google Analytics “to distinguish website visitors … and also to obtain information as to whether the visitor is new or returning”. 
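To make the mechanics concrete, the following is a minimal Python sketch of how such a cookie value can be taken apart. It assumes the commonly documented layout of a Google Analytics “_ga” cookie (“GA1.<domain depth>.<random number>.<UNIX timestamp>”); the example value is hypothetical and does not come from the case file.

    from datetime import datetime, timezone

    def parse_ga_cookie(value: str) -> dict:
        # Assumed layout: "GA1.<domain depth>.<random number>.<unix timestamp>".
        # The last two components together form the client ID that is re-sent
        # on every visit from the same browser.
        version, domain_depth, random_part, timestamp = value.split(".")
        return {
            "client_id": f"{random_part}.{timestamp}",
            "cookie_set_at": datetime.fromtimestamp(int(timestamp), tz=timezone.utc),
            "version": version,
            "domain_depth": int(domain_depth),
        }

    # Hypothetical cookie value, for illustration only:
    print(parse_ga_cookie("GA1.2.123456789.1597363200"))

Because the same client ID is re-sent on every visit from the same browser or device, the website and the analytics provider can tell new visitors from returning ones without ever learning a name – which is precisely the distinguishing of visitors the DPA describes.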

In its legal analysis, the DPA noted that “an interference with the fundamental right to data protection … already exists if certain entities take measures – in this case, the assignment of such identification numbers – to individualize website visitors”. Analyzing the “identifiability” component of the definition of “personal data” in the GDPR, and relying on its Recital 26, as well as on Article 29 Working Party Opinion 4/2007 on the concept of “personal data”, the DPA clarified that “a standard of identifiability to the effect that it must also be immediately possible to associate such identification numbers with a specific natural person – in particular with the name of the complainant – is not required” for data thus processed to be considered “personal data”. 

The DPA also recalled that “a digital footprint, which allows devices and subsequently the specific user to be clearly individualized, constitutes personal data”. The DPA concluded that the identification numbers contained in the cookies placed on the complainant’s device or browser are personal data, highlighting their “uniqueness”, their ability to single out specific individuals and rebutting specifically the argument the respondents made that no means are in fact used to link these numbers to the identity of the complainant. 

1.2 Cookie identification numbers combined with other elements are additional personal data

However, the DPA did not stop here and continued at length in the following sections of the decision to underline why placing the cookies at issue when accessing the website constitutes processing of personal data. It noted that the classification as personal data “becomes even more apparent if one takes into account that the identification numbers can be combined with other elements”, like the address and HTML title of the website and the subpages visited by the complainant; information about the browser, operating system, screen resolution, language selection and the date and time of the website visit; the IP address of the device used by the complainant. The DPA considers that “the complainant’s digital footprint is made even more unique following such a combination [of data points]”. 

The “anonymization function of the IP address” – which is a function that Google Analytics provides to users if they wish to activate it – was expressly set aside by the DPA, considering that during fact finding it was shown the function was not correctly implemented by the website at the time of the complaint. However, later in the decision, with regard to the same function and the fact that it was not implemented by the website, the regulator noted that “the IP address is in any case only one of many pieces of the puzzle of the complainant’s digital footprint”, hinting therefore that even if the function had been correctly implemented, it would not necessarily have led to the conclusion that the data being processed was not personal.
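For readers unfamiliar with the feature, this kind of “IP anonymization” is commonly described as truncating the address before storage: the last octet of an IPv4 address (or the last 80 bits of an IPv6 address) is set to zero. The sketch below is illustrative only, not a description of Google’s actual implementation; it also shows why a truncated address can still be “one of many pieces of the puzzle”, since an entire /24 block can narrow a visitor down considerably when combined with the other data points listed above.

    import ipaddress

    def truncate_ip(ip: str) -> str:
        # Zero out the last octet of an IPv4 address, or the last 80 bits of an
        # IPv6 address, as analytics-style "IP anonymization" is usually described.
        addr = ipaddress.ip_address(ip)
        prefix = 24 if addr.version == 4 else 48
        network = ipaddress.ip_network(f"{addr}/{prefix}", strict=False)
        return str(network.network_address)

    print(truncate_ip("203.0.113.77"))                  # -> 203.0.113.0
    print(truncate_ip("2001:db8:85a3::8a2e:370:7334"))  # -> 2001:db8:85a3::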

1.3 Controllers and other persons “with lawful means and justifiable effort” will count for the identifiability test

Drilling down even more on the notion of “identifiability” in a dedicated section of the decision, the DPA highlights that in order for the data processed through the cookies at issue to be personal, “it is not necessary that the respondents can establish a personal reference on their own, i.e. that all information required for identification is with them. […] Rather, it is sufficient that anyone, with lawful means and justifiable effort, can establish this personal reference”. Therefore, the DPA took the position that “not only the means of the controller [the website in this case] are to be taken into account in the question of identifiability, but also those of ‘another person’”.

After recalling that the CJEU repeatedly found that “the scope of application of the GDPR is to be understood very broadly” (e.g. C-439/19 B, C-434/16 Nowak, C-553/07 Rijkeboer), the DPA nonetheless stated that in its opinion, the term “anyone” it referred to above, and thus the scope of the definition of personal data, “should not be interpreted so broadly that any unknown actor could theoretically have special knowledge to establish a reference; this would lead to almost any information falling within the scope of application of the GDPR and a demarcation from non-personal data would become difficult or even impossible”.

This being said, the DPA considers that the “decisive factor is whether identifiability can be established with a justifiable and reasonable effort”. In the case at hand, the DPA considers that there are “certain actors who possess special knowledge that makes it possible to establish a reference to the complainant and identify him”. These actors are, from the DPA’s point of view, certainly the provider of the Google Analytics service and, possibly, the US authorities in the national security area. As for the provider of Google Analytics, the DPA highlights that, first of all, the complainant was logged in with his Google account at the time of visiting the website.

The DPA indicates this is a relevant fact only “if one takes the view that the online identifiers cited above must be assignable to a certain ‘face’”. The DPA finds that such an assignment to a specific individual is in any case possible in the case at hand. As such, the DPA states that: “[…] if the identifiability of a website visitor depends only on whether certain declarations of intent are made in the account (user’s Google account – our note), then, from a technical point of view, all possibilities of identifiability are present”, since, as noted by the DPA, otherwise Google “could not comply with a user’s wishes expressed in the account settings for ‘personalization’ of the advertising information received”. It is not immediately clear how the ad preferences expressed by a user in their personal account are linked to the processing of data for Google Analytics (and thus website traffic measurement) purposes. It seems this argument was used to substantiate the claim that the second respondent generally has additional knowledge across its various services that could lead to the identification or singling out of the website visitor.

However, following the arguments of the DPA, on top of the autonomous finding that cookie identification numbers are personal data, it seems that even if the complainant had not been logged into his account, the data processed through the Google Analytics cookies would still have been considered personal. In this context, the DPA “expressly” notes that “the wording of Article 4(1) of the GDPR is unambiguous and is linked to the ability to identify and not to whether identification is ultimately carried out”.

Moreover, “irrespective of the second respondent” – that is, even if Google admittedly did not have any possibility or ability to render the complainant identifiable or to single him out – other third parties in this case were considered to have the potential ability to identify the complainant: US authorities.

1.4 Additional information potentially available to US intelligence authorities, taken into account for the identifiability test

Lastly, according to the decision, the US authorities in the national security area “must be taken into account” when assessing the potential of identifiability of the data processed through cookies in this case. The DPA considers that “intelligence services in the US take certain online identifiers, such as the IP address or unique identification numbers, as a starting point for monitoring individuals. In particular, it cannot be ruled out that intelligence services have already collected information with the help of which the data transmitted here can be traced back to the person of the complainant.” 

To show that this is not merely a “theoretical danger”, the DPA relies on the findings of the CJEU in Schrems II with regard to the US legal framework and the “access possibilities” it offers to authorities, and on Google’s Transparency Report, “which proves that data requests are made to [it] by US authorities.” The regulator further decided that even if it is admittedly not possible for the website to check whether such access requests are made in individual cases and with regard to the visitors of the website, “this circumstance cannot be held against affected persons, such as the complainant. Thus, it was ultimately the first respondent as the website operator who, despite publication of the Schrems II judgment, continued to use the Google Analytics tool”. 

Therefore, based on the findings of the Austrian DPA in this case, at least two of the “any persons” mentioned in Recital 26 GDPR whose lawful means of identification will be taken into account when deciding whether data is personal are the processor of a specific processing operation, as well as the national security authorities that may have access to that data, at least in cases where this access is relevant (as with international data transfers). This latter finding of the DPA raises the question whether national security agencies in a specific jurisdiction may generally be considered by DPAs as actors with “lawful means” and additional knowledge when deciding if a data set relates to an “identifiable” person, including in cases where international data transfers are not at issue.

The DPA concluded that the data processed by the Google Analytics cookies is personal data and falls under the scope of the GDPR. Importantly, the cookie identification numbers were found to be personal data by themselves. Additionally, the other data elements potentially collected through cookies together with the identification numbers are also personal data.

2. Data transfers to the US are taking place by placing cookies provided by US-based companies on EU-based websites

Once the supervisory authorities established that the data processed through Google Analytics and, respectively, Stripe cookies, were personal data and were covered by the GDPR or EUDPR respectively, they had to ascertain whether an international transfer of personal data from the EU to the US was taking place in order to see whether the provisions relevant to international data transfers were applicable.

The EDPS was again concise. It stated that because the personal data were processed by two entities located in the US (Stripe and Google LLC) on the EP website, “personal data processed through them were transferred to the US”. The regulator strengthened its finding by stating that this conclusion “is reinforced by the circumstances highlighted by the complainants, according to which all data collected through Google Analytics is hosted (i.e. stored and further processed) in the US”. For this particular finding, the EDPS referred, under footnote 27 of the decision, to the proceedings in Austria “regarding the use of Google Analytics in the context of the 101 complaints filed by noyb on the transfer of data to the US when using Google Analytics”, in an evident indication that the supervisory authorities are coordinating their actions. 

In turn, the Austrian DPA applied the criteria laid out by the EDPB in its draft Guidelines 5/2021 on the relationship between the scope of Article 3 and Chapter V GDPR, and found that all the conditions are met. The administrator of the website is the controller and it is based in Austria, and, as data exporter, it “disclosed personal data of the complainant by proactively implementing the Google Analytics tool on its website and as a direct result of this implementation, among other things, a data transfer to the second respondent to the US took place”. The DPA also noted that the second respondent, in its capacity as processor and data importer, is located in the US. Hence, Chapter V of the GDPR and its rules for international data transfers are applicable in this case. 

However, it should also be highlighted that, as part of fact finding in this case, the Austrian DPA noted that the version of Google Analytics subject to this case was provided by Google LLC (based in the US) until the end of April 2021. Therefore, for the facts of the case which occurred in August 2020, the relevant processor and eventual data importer was Google LLC. But the DPA also noted that since the end of April 2021, Google Analytics has been provided by Google Ireland Limited (based in Ireland). 

One important question that remains for future cases is whether, under these circumstances, the DPA would find that an international data transfer occurred, considering the criteria laid out in the draft EDPB Guidelines 5/2021, which specifically require (at least in the draft version, currently subject to public consultation) that “the data importer is located in a third country”, without any further specifications related to corporate structures or location of the means of processing. 

2.1 In the absence of an adequacy decision, all data transfers to the US based on “additional safeguards”, like SCCs, need supplementary measures 

After establishing that international data transfers occurred from the EU to the US in the cases at hand, the DPAs assessed the lawful ground for transfers used. 

The EDPS noted that EU institutions and bodies “must remain in control and take informed decisions when selecting processors and allowing transfers of personal data outside the EEA”. It followed that, absent an adequacy decision, they “may transfer personal data to a third country only if appropriate safeguards are provided, and on condition that enforceable data subject rights and effective legal remedies for data subjects are available”. Noting that the use of Standard Contractual Clauses (SCCs) or another transfer tool does not substitute for the individual case-by-case assessments that must be carried out in accordance with the Schrems II judgment, the EDPS stated that EU institutions and bodies must carry out such assessments “before any transfer is made”, and, where necessary, they must implement supplemental measures in addition to the transfer tool.

The EDPS recalled some of the key findings of the CJEU in Schrems II, in particular the fact that “the level of protection of personal data in the US was problematic in view of the lack of proportionality caused by mass surveillance programs based on Section 702 of the Foreign Intelligence Surveillance Act (FISA) and Executive Order (EO) 12333 read in conjunction with Presidential Policy Directive (PPD) 28 and the lack of effective remedies in the US essentially equivalent to those required by Article 47 of the Charter”. 

Significantly, the supervisory authority then affirmed that “transfers of personal data to the US can only take place if they are framed by effective supplementary measures in order to ensure an essentially equivalent level of protection for the personal data transferred”. Since the EP did not provide any evidence or documentation about supplementary measures being used on top of the SCCs it referred to in the privacy notice on the website, the EDPS found the transfers to the US to be unlawful.

Similarly, the Austrian DPA in its decision recalled that the CJEU “already dealt” with the legal framework in the US in its Schrems II judgment, as based on the same three legal acts (Section 702 FISA, EO 12333, PPD 28). The DPA merely noted that “it is evident that the second respondent (Google LLC – our note) qualifies as a provider of electronic communications services” within the meaning of FISA Section 702. Therefore, it has “an obligation to provide personally identifiable information to US authorities pursuant to 50 US Code §1881a”. Again, the DPA relied on Google’s Transparency Report to show that “such requests are also regularly made to it by US authorities”. 

Considering the legal framework in the US as assessed by the CJEU, just like the EDPS did, the Austrian DPA also concluded that the mere entering into SCCs with a data importer in the US cannot be assumed to ensure an adequate level of protection. Therefore, “the data transfer at issue cannot be based solely on the standard data protection clauses concluded between the respondents”. Hence, supplementary measures must be adduced on top of the SCCs. The Austrian DPA relied significantly on the EDPB Recommendation 1/2020 on measures that supplement transfer tools when analyzing the available supplementary measures put in place by the respondents. 

2.2 Supplementary measures must “eliminate the possibility of access” of the government to the data, in order to be effective

When analyzing the various measures put in place to safeguard the personal data being transferred, the DPA wanted to ascertain “whether the additional measures taken by the second respondent close the legal protection gaps identified in the CJEU [Schrems II] ruling – i.e. the access and monitoring possibilities of US intelligence services”. Setting this as a target, it went on to analyze the individual measures proposed.

The contractual and organizational supplementary measures considered in the case:

The DPA considered that “it is not discernable” to what extent these measures are effective to close the protection gap, taking into account that the CJEU found in the Schrems II judgment that even “permissible (i.e. legal under US law) requests from US intelligence agencies are not compatible with the fundamental right to data protection under Article 8 of the EU Charter of Fundamental Rights”. 

The technical supplementary measures considered were:

With regard to encryption as one of the supplementary measures being used, the DPA took into account that a data importer covered by Section 702 FISA, as is the case in the current decision, “has a direct obligation to provide access to or surrender such data”. The DPA considered that “this obligation may expressly extend to the cryptographic keys without which the data cannot be read”. Therefore, it seems that as long as the keys are kept by the data importer and the importer is subject to the US law assessed by the CJEU in Schrems II (FISA Section 702, EO 12333, PPD 28), encryption will not be considered sufficient.
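The key-custody point can be illustrated with a minimal sketch using the third-party “cryptography” Python package (all names and data below are hypothetical): encryption only closes the gap if the party that can be compelled to disclose the data does not also hold the key.

    from cryptography.fernet import Fernet, InvalidToken

    # Key generated and retained by the EU data exporter only.
    exporter_key = Fernet.generate_key()
    ciphertext = Fernet(exporter_key).encrypt(b"visitor_id=123456789.1597363200")

    # The importer stores only the ciphertext. Without the exporter-held key,
    # it cannot surrender readable data, whatever its legal obligations.
    try:
        Fernet(Fernet.generate_key()).decrypt(ciphertext)  # wrong key
    except InvalidToken:
        print("unreadable without the exporter-held key")

    # If the importer also holds exporter_key (as with ordinary server-side
    # encryption at rest), the measure adds nothing against compelled access,
    # which is the gap the DPA points to.
    print(Fernet(exporter_key).decrypt(ciphertext))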

As for the argument that the personal data being processed through Google Analytics is “pseudonymous” data, the DPA rejected it relying on findings made by the Conference of German DPAs that the use of cookie IDs, advertising IDs, and unique user IDs does not constitute pseudonymization under the GDPR, since these identifiers “are used to make the individuals distinguishable and addressable”, and not to “disguise or delete the identifying data so that data subjects can no longer be addressed” – which the Conference considers to be one of the purposes of pseudonymization.
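The singling-out point can be shown in a short sketch (all identifiers below are hypothetical): replacing a direct identifier with a stable token, such as a hash or a cookie ID, still allows the same person’s records to be linked and acted upon, which is why, in the reasoning recalled by the DPA, such identifiers fall short of pseudonymization in the sense described.

    import hashlib

    def stable_token(direct_identifier: str) -> str:
        # Replace a direct identifier with a stable token: the name disappears,
        # but the token stays the same for the same person on every record.
        return hashlib.sha256(direct_identifier.encode()).hexdigest()[:16]

    visits = [
        {"user": stable_token("alice@example.com"), "page": "/pricing"},
        {"user": stable_token("bob@example.com"), "page": "/jobs"},
        {"user": stable_token("alice@example.com"), "page": "/checkout"},
    ]

    # The same visitor can still be singled out, profiled and "addressed":
    alice = stable_token("alice@example.com")
    print([v["page"] for v in visits if v["user"] == alice])  # -> ['/pricing', '/checkout']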

Overall, the DPA found that the technical measures proposed were not enough because the respondents did not comprehensively explain (therefore, the respondents had the burden of proof) to what extent these measures “actually prevent or restrict the access possibilities of US intelligence services on the basis of US law”. 

With this finding, highlighted also in the operative part of the decision, the DPA seems to de facto reject the “risk based approach” to international data transfers, which has been specifically invoked during the proceedings. This is a theory according to which, for a transfer to be lawful in the absence of an adequacy decision, it is sufficient to prove the likelihood of the government accessing personal data transferred on the basis of additional safeguards is minimal or reduced in practice for a specific transfer, regardless of the broad authority that the government has under the relevant legal framework to access that data and regardless of the lack of effective redress. 

The Austrian DPA is effectively taking the view that it is not sufficient to reduce the risk of access to data in practice, as long as the possibility to access personal data on the basis of US law is not actually prevented or, in other words, eliminated. This conclusion is also apparent from the language used in the operative part of the decision, where the DPA summarizes its findings as follows: “the measures taken in addition to the SCCs … are not effective because they do not eliminate the possibility of surveillance and access by US intelligence agencies”.

If other DPAs confirm this approach for transfers from the EU to the US in their decisions, the list of potentially effective supplemental measures for transfers of personal data to the US will remain minimal – prima facie, it seems that nothing short of anonymization (per the GDPR standard) or any other technical measure that will effectively and physically eliminate the possibility of accessing personal data by US national security authorities will suffice under this approach. 

A key reminder here is that the list of supplementary measures detailed in the EDPB Recommendation concerns all international data transfers based on additional safeguards, to all third countries in general, in the absence of an adequacy decision. In the decision summarized here, the supplementary measures found to be ineffective concern their ability to cover “gaps” in the level of data protection of the US legal framework, as resulting from findings of the CJEU with regard to three specific legal acts (FISA Section 702, EO 12333 and PPD 28). Therefore, the supplementary measures discussed and their assessment may be different for transfers to another jurisdiction.

2.3 Are data importers liable for the lawfulness of the data transfer?

One of the most consequential findings of the Austrian DPA that may have an impact on international data transfers cases moving forward is that “the requirements of Chapter V of the GDPR must be complied with by the data exporter, but not by the data importer”. Under this interpretation, organizations on the receiving end of a data transfer – at least when they act as a processor for the data exporter, as in the present case – cannot be found in breach of the international data transfer obligations under the GDPR. The main argument used was that “the second respondent (as data importer) does not disclose the personal data of the complainant, but (only) receives them”. As a result, Google was found not to be in breach of Article 44 GDPR in this case.

However, the DPA did consider that it is necessary to look further, and as part of separate proceedings, into how the second respondent complied with its obligations as a data processor, and in particular the obligation to process personal data on documented instructions from the controller, including with regard to transfers of personal data to a third country or an international organization, as detailed in Article 28(3)(a) and Article 29 GDPR.

3. Sanctions and consequences: Between preemptive deletion of cookies, reprimands and blocking transfers

Another commonality of the two decisions summarized is that neither of them resulted in a fine. The EDPS issued a reprimand against the European Parliament for several breaches of the EUDPR, including those related to international data transfers “due to its reliance on the Standard Contractual Clauses in the absence of a demonstration that data subjects’ personal data transferred to the US were provided an essential equivalent level of protection”. It is worth noting that the EP asked the website service provider to disable both the Google Analytics and Stripe cookies within days of being contacted by the complainants on October 27, 2020. The cookies at issue were active between September 30, when the website became available, and November 4, 2020.

In turn, the Austrian DPA found that “the Google Analytics tool (at least in the version of August 14, 2020) can thus not be used in compliance with the requirements of Chapter V GDPR”. However, as discussed above, the DPA found that only the website operator – as the data exporter – was in breach of Article 44 GDPR.  The DPA decided not to issue a fine in this case. 

However, the DPA is pursuing a ban on the data transfers, or a similar order against the website, albeit with some procedural complications. In the middle of the proceedings, the Austrian company that was in charge of managing the website transferred the responsibility of operating it to a company based in Germany, so the website is no longer under its control. But since the DPA noted that Google Analytics continued to be implemented on the website at the time of the decision, it resolved to refer the case to the competent German supervisory authority with regard to the possible use of remedial powers against the new operator.

Therefore, stopping the transfer of personal data to the US without appropriate safeguards seems to be the focus in these cases, rather than sanctioning the data exporters. The parties have the possibility to challenge both decisions before their respective competent courts and request judicial review within a limited period of time, but there are no indications yet whether this will happen.

4. The big picture: 101 complaints and collaboration among DPAs

The decision published by the Austrian DPA is the first one in the 101 complaints that noyb submitted directly to 14 DPAs across Europe (EU and the European Economic Area) at the same time in August 2020, from Malta, to Poland, to Liechtenstein, with identical legal arguments centered on international data transfers to the US through the use of Google Analytics or Facebook Connect, and all against websites of local or national relevance – so most likely these complaints will be considered outside the One-Stop-Shop mechanism.

The bulk of the 101 complaints were submitted to the Austrian DPA (about 50), either immediately under its competence, as in the analyzed case, or as part of the One-Stop-Shop mechanism where the Austrian DPA acts as the concerned DPA from the jurisdiction where the complainant resides, and likely needed to forward the cases to the many lead DPAs in the jurisdictions where the targeted websites have their establishment. This way, even more DPAs will have to make a decision in these cases – from Cyprus, to Greece, to Sweden, Romania and many more. About a month after the identical 101 complaints were submitted, the EDPB decided to create a taskforce to “analyse the matter and ensure a close cooperation among the members of the Board”.

In contrast, the complaint against the European Parliament was not part of this set; it was submitted separately, at a later date, to the EDPS, but relied on similar arguments on the issue of international data transfers to the US through Google Analytics and Stripe cookies. Even though it was not part of the 101 complaints, it is clear that the authorities indeed cooperated or communicated, with the EDPS making a direct reference to the Austrian proceedings, as shown above.

In other signs of cooperation, both the Dutch DPA and the Danish DPA have published notices immediately after the publication of the Austrian decision to alert organizations that they may soon issue new guidance in relation to the use of Google Analytics, specifically referring to the Austrian case. Of note, the Danish DPA highlighted that “as a result of the decision of the Austrian DPA” it is now “in doubt whether – and how – such tools can be used in accordance with data protection law, including the rules on transfers of personal data to third countries”. It also called for a common approach of DPAs on this issue: “it is essential that European regulators have a common interpretation of the rules”, since data protection law “intends to promote the internal market”. 

In the end, the DPAs are applying findings from a judgment of the CJEU, which has ultimate authority in the interpretation of EU law that must be applied across all EU Member States. All this indicates that a series of similar decisions will likely be published in succession in the short to medium term, with little chance of significant variation. This is why the two cases summarized here can be seen as the first two pieces to fall in the domino chain.

This domino, though, will not only be about the 101 cases and the specific cookies they target – it eventually concerns all US-based service providers and businesses that receive personal data from the EU potentially covered by the broad reach of FISA Section 702 and EO 12333; all EU-based organizations, from website operators, to businesses, schools, and public agencies, that use the services provided by the former or engage them as business partners, and disclose personal data to them; and it may well affect all EU-based businesses that have offices and subsidiaries in the US and that make personal data available to these entities.

Dispatch from the Global Privacy Assembly: The brave new world of international data transfers

The future of international data transfers is multi-dimensional, exploring new territories around the world, featuring binding international agreements for effective enforcement cooperation and slowly entering the agenda of high level intergovernmental organizations. All this surfaced from notable keynotes delivered during the 43rd edition of the Global Privacy Assembly Conference, hosted remotely by Mexico’s data protection authority, INAI, on October 18 and 19.  

“The crucial importance of data flows is generally recognized as an inescapable fact”, noted Bruno Gencarelli, Head of Unit for International Data Flows and Protection at the European Commission, at the beginning of his keynote address. Indeed, from the shockwaves sent by the Court of Justice of the EU (CJEU) with the Schrems II judgment in 2020, to the increasingly pronounced data localization push in several jurisdictions around the world, underpinned by the reality that data flows are at the center of daily lives during the pandemic with remote work, school, global conferences and everything else – the field of international data transfers is more important than ever. Because, as Gencarelli noted, “it is also generally recognized that protection should travel with the data”.

Latin America and Asia Pacific, the “real laboratories” of new data protection rules

Gencarelli then observed that the conversation on international data flows has become much more “global and diverse”, technically shifting from the “traditional transatlantic debate” to a truly global conversation. “We are seeing a shift to other areas of the world, such as Asia-Pacific and Latin America. This doesn’t mean that the transatlantic dimension is not a very important one, it’s actually a crucial one, but it is far from being the only one”, he said. These remarks come as the US Government and the European Commission have been negotiating for more than a year a framework for data transfers to replace the EU-US Privacy Shield, invalidated by the CJEU in July 2020.  

In fact, according to Gencarelli, “Latin America and Asia-Pacific are today the real laboratories for new data protection rules, initiatives and solutions. This brings new opportunities to facilitate data flows with these regions, but also between those regions and the rest of the world”. The European Commission has recently concluded adequacy talks with South Korea, after having created the largest area of free data flows for the EU with Japan, two years ago. 

“You will see more of that in the coming months and years, with other partners in Asia and Latin America”, he added, without specifying which jurisdictions are next in the adequacy pipeline. Earlier in the conference, Jonathan Mendoza, Secretary for Personal Data Protection at INAI, had mentioned that Mexico and Colombia are two of the countries in Latin America that have been engaging with the European Commission for adequacy.

However, before the European Commission officially communicates about advanced adequacy talks or renewal of pre-GDPR adequacy decisions, we will not know what those jurisdictions are. In an official Communication from 2017, “Exchanging and protecting personal data in a globalized world”, the Commission announced that, “depending on progress towards the modernization of its data protection laws”, India could be one of those countries, together with countries from Mercosur and countries from the “European neighborhood” (this could potentially refer to countries in the Balkans or the Southern and Eastern borders, like Moldova, Ukraine or Turkey, for example).

Going beyond “bilateral adequacy”: regional “transfer tools”

“Adequacy” of foreign jurisdictions as a ground to allow data to flow freely has become a standard for international data transfers gaining considerable traction beyond the EU in new legislative data protection frameworks (see, for instance, Articles 33 and 34 of Brazil’s LGPD, Article 34(1)(b) of the Indian Data Protection Bill with regard to transfers of sensitive data, or the plans recently announced by the Australian government to update the country’s Privacy Law, at p. 160). Even where adequacy is not expressly recognized as a ground for transfers, like in China’s Personal Information Protection Law (PIPL), the State still has an obligation to promote “mutual recognition of personal information protection rules, standards etc. with other countries, regions and international organizations”, as laid down in Article 12 of the PIPL.

However, as Gencarelli noted in his keynote, at least from the European Commission’s perspective, “beyond that bilateral dimension work, new opportunities have emerged”. He particularly mentioned “the role regional networks and regional organizations can play in developing international transfer tools.” 

One example that he gave was the model clauses for international data transfers adopted by ASEAN this year, just before the European Commission adopted its new set of Standard Contractual Clauses under the GDPR: “We are building bridges between the two sets of model clauses. (…) Those two sets are not identical, they don’t need to be identical, but they are based on a number of common principles and safeguards. Making them talk to each other, building on that convergence can of course significantly facilitate the life of companies present in ASEAN and in the EU”. 

The convergence of data protection standards and safeguards around the world “has reached a certain critical mass”, according to Gencarelli. This will lead to notable opportunities to cover more than two jurisdictions under some transfer tools: “[they] could cover entire regions of the world and on that aspect too you will see interesting initiatives soon with other regions of the world, for instance Latin America. 

This new approach to transfers can really have a significant effect by covering two regions, a significant network effect to the benefit of citizens, who see that when the data are transferred to a certain region of the world, they are protected by a high and common level of protection, but also for businesses, since it will help them navigate between the requirements of different jurisdictions.”

Entering the world of high level intergovernmental organizations and international trade agreements

One of the significant features of the new landscape of international data transfers is that it has now entered the agenda of intergovernmental fora, like the G7 and G20, in an attempt to counter data localization tendencies and boost digital trade. “This is no longer only a state to state discussion. New players have emerged. (…) If you think of data protection and data flows, we see it at the top of the agenda of G7 and G20, but also regional networks of data protection authorities in Latin America, in Africa, in Europe”, Gencarelli noted.

One particular initiative in this regard, spearheaded by Japan, was extensively explored by Mieko Tanno, the Chairperson of Japan’s Personal Information Protection Commission (PIPC) in her keynote address at the GPA: the Data Free Flow with Trust initiative. “The legal systems related to data flows (…) differ from country to country reflecting their history, national characteristics and political systems. Given that there is no global data governance discipline, policy coordination in these areas is essential for free flow of data across borders. With that in mind, Japan proposed the idea of data free flow with trust at the World Economic Forum annual meeting in 2019. It was endorsed by the world leaders of the G20 Osaka summit in the same year and we are currently making efforts in realizing the concept of DFFT”, Tanno explained. 

A key characteristic of the DFFT initiative, though, is that it emulates existing legal frameworks in participating jurisdictions and does not seem to propose the creation of new solutions that would enhance the protection of personal data in cross-border processing and the trust needed to allow free flows of data. Two days after the GPA conference took place, the G7 group adopted a set of Digital Trade Principles during their meeting in London, including a section dedicated to “Data Free Flow with Trust”, which confirms this approach.

For instance, the DFFT initiative specifically outsources to the OECD solving the thorny issue of appropriate safeguards for government access to personal data held by private companies, which underpins both the first and second invalidation by the CJEU of an adequacy decision issued by the European Commission for a self-regulatory privacy framework adopted by the US. While the OECD efforts in this respect hit a roadblock during this summer, the GPA managed to adopt a resolution during the Closed Session of the conference on Government Access to Personal Data held by the Private Sector for National Security and Public Safety Purposes, which includes substantial principles like transparency, proportionality, independent oversight and judicial redress. 

However, one interesting idea surfaced among the proposals related to DFFT that the PIPC promotes for further consideration in these intergovernmental fora, according to Mieko Tanno: the introduction of a global corporate certification system. No further details about this idea were shared at the GPA, but since the DFFT initiative will continue to make its way through agendas of international fora, we might find out more information soon. 

One final layer of complexity added to the international data transfers debate is the intertwining of data flows with international trade agreements. In his keynote, Bruno Gencarelli spoke of “synergies that can be created between trade instruments on the one hand and data protection mechanisms on the other hand”, and promoted breaking down silos between the two as being very important. This is already happening to a certain degree, as shown by the Chart annexed to this G20 Insights policy brief, on “provisions in recent trade agreements addressing privacy for personal data and consumer protection”. 

An essential question to consider for this approach is, as pointed out by Dr. Clarisse Girot, Director of FPF Asia-Pacific, when reviewing this piece, “how far can we build trust with trade agreements?”. Usually, trade agreements “guarantee an openness that is appropriate to the pre-existing level of trust”, as noted in the G20 Insights policy brief.  

EU will seek a mandate to negotiate international agreements for data protection enforcement cooperation

Enforcement cooperation for the application of data protection rules in cross-border cases is one of the key areas that requires significant improvement, according to Bruno Gencarelli: “When you have a major data breach or a major compliance issue, it simultaneously affects several jurisdictions, hundreds of thousands, millions of users. It makes sense that the regulators who are investigating at the same time the same compliance issues should be able to effectively cooperate. It also makes sense because most of the new modernized privacy laws have a so-called extraterritorial effect”.

Gencarelli also noted that the lack of effectiveness of current arrangements for enforcement cooperation for privacy and data protection law surfaces especially when it is compared to other regulatory areas, like competition and financial supervision. In those areas, enforcers have binding tools that allow “cooperation on the ground, exchange of information in real time, providing mutual assistance to each other, carrying out joint investigations”. 

In this sense, the European Union has plans to create such a binding toolbox for regulators. “The EU will, in the context of the implementation of the GDPR, seek a mandate to negotiate such agreements with a number of international partners”, announced Bruno Gencarelli in his keynote address. 

The more than 130 privacy and supervisory authorities from around the world that are members of the GPA are very keen on enhancing their cooperation and making it more permanent, both in policy matters and enforcement, as is evident from the Resolution on the Assembly’s Strategic Direction for 2021-2023 adopted by the GPA during this year’s Conference, under the leadership of Elizabeth Denham and her team at the UK’s Information Commissioner’s Office. This two-year Strategy proposes concrete action, such as “building skills and capacity among members, particularly in relation to enforcement strategies, investigation processes, cooperation in practice and breach assessment”. The binding toolbox for enforcement cooperation that the EU might promote internationally will without a doubt boost these initiatives.

In a sign that, indeed, the data protection and privacy debate is increasingly vibrant outside traditional geographies for this field, Mexico’s INAI was voted as the next Chair of the Executive Committee of the GPA and entrusted to carry out the GPA’s Strategy for the next two years. 

Video recordings of all Keynote sessions at this year’s GPA Annual Conference are available On Demand on the Conference’s platform for the attendees that had registered for the event.


At the intersection of AI and Data Protection law: Automated Decision-Making Rules, a Global Perspective (CPDP LatAm Panel)

On Thursday, 15 July 2021, the Future of Privacy Forum (FPF) organised a panel at the CPDP LatAm Conference titled ‘At the Intersection of AI and Data Protection law: Automated Decision Making Rules, a Global Perspective’. The aim of the panel was to explore how existing data protection laws around the world apply to profiling and automated decision making practices. In light of the European Commission’s recent AI Regulation proposal, it is important to explore how, and to what extent, existing laws already protect individuals’ fundamental rights and freedoms against automated processing activities driven by AI technologies.

The panel consisted of Katerina Demetzou, Policy Fellow for Global Privacy at the Future of Privacy Forum; Simon Hania, Senior Director and Data Protection Officer at Uber; Prof. Laura Schertel Mendes, Law Professor at the University of Brasilia; and Eduardo Bertoni, Representative for the Regional Office for South America, Interamerican Institute of Human Rights. The panel discussion was moderated by Dr. Gabriela Zanfir-Fortuna, Director for Global Privacy at the Future of Privacy Forum.


Data Protection laws apply to ADM Practices in light of specific provisions and/or of their broad material scope

To kick-off the conversation, we presented preliminary results of an ongoing project led by the Global Privacy Team at FPF on Automated Decision Making (ADM) around the world. Seven jurisdictions were presented comparatively, among which five already have a general data protection law in force (EU, Brazil, Japan, South Korea, South Africa), while two jurisdictions have data protection bills expected to become laws in 2021 (China and India).

For the purposes of this analysis, the following provisions are being examined: the definitions of ‘processing operation’ and ‘personal data’, given that these are two concepts essential for defining the material scope of the data protection law; the principles of fairness and transparency, and the legal obligations and rights that relate to these two principles (e.g., the right of access, the right to an explanation, the right to meaningful information, etc.); and provisions that specifically refer to ADM and profiling (e.g., Article 22 GDPR).

The preliminary findings are summarized in the following points:

Uber, Ola and Foodinho Cases: National Courts and DPAs decide on ADM cases on the basis of existing laws

In recent months, Dutch national courts and the Italian Data Protection Authority have ruled on complaints brought by employees of the ride-hailing companies Uber and Ola and the food delivery company Foodinho challenging the companies’ decisions reached with the use of algorithms. Simon Hania summarised the key points of these decisions. It is important to mention that all cases arose in the employment context and were all submitted back in 2019, which means that more outcomes of ADM cases may be expected in the near future.

The first Uber case concerned the matching of drivers and riders which, as the Court judged, qualifies as decision-making based solely on automated means, but one that does not lead to any ‘legal or similarly significant effect’. Therefore, Article 22 GDPR is not applicable. The second Uber case concerned the deactivation of drivers’ accounts due to signals of potentially fraudulent behaviour or misconduct by the drivers. There, the Court judged that Article 22 is not applicable because, as the company proved, there is always human intervention before an account is deactivated and the actual final decision is made by a human.

The third example presented was the Ola case, in which the Court decided that the company’s practice of withholding drivers’ money as a penalty for misconduct qualifies as a decision based solely on automated means, producing a ‘legal or similarly significant effect’, and therefore Article 22 GDPR applies.

In the last example, Foodinho, the decision-making on how well couriers perform was indeed deemed to be based solely on automated means, and it produced a significant effect on the data subjects (the couriers). The problem highlighted was the way the performance metrics were established, and specifically the accuracy of the profiles created: they were not sufficiently accurate given the significance of the effect they would produce.

This last point spurs discussion on the importance of the principle of data accuracy, which is often overlooked. Having accurate data as the basis for decision-making is crucial in order to avoid discriminatory practices and achieve fairer AI systems. As Simon Hania emphasised, we should have information available that is fit for purpose in order to reach accurate decisions. This suggests that the data minimisation principle should be understood as data rightsizing, rather than as a requirement to simply minimise the information processed to reach a decision.

LGPD: Brazil’s Data Protection Law and its application to ADM practices

The LGPD, Brazil’s recently passed data protection law, is heavily influenced by the EU GDPR in general, but also specifically on the topic of ADM processing. Article 20 of the LGPD protects individuals against decisions that are made only on the basis of automated processing of personal data, when these decisions “affect their interests”. The wording of this provision seems to suggest a wider protection than the corresponding Article 22 of the GDPR, which requires that the decision “has a legal effect or significantly affects the data subject”. Additionally, Article 20 LGPD provides individuals with a right to an explanation and with the right to request a review of the decision.

In her presentation, Laura Mendes highlighted two points that require further clarification: first, it is still unclear what the definition of “solely automated” is; second, it is not clear what the degree of review of the decision should be, and whether the review must be performed by a human. There are two provisions core to the discussion on ADM practices:

(a) Article 6(IX) LGPD, which introduces non-discrimination as a separate data protection principle. Under this principle, personal data may not be processed for “discriminatory, unlawful or abusive purposes”.

(b) Article 21 LGPD, which reads: “The personal data relating to the regular exercise of rights by the data subjects cannot be used against them.” As Laura Mendes suggested, this is a provision with great potential regarding non-discrimination in ADM.

Latin America & ADM Regulation: there is no homogeneity in Latin American laws but the Ibero-American Network seems to be setting a common tone

In the last part of the panel discussion, a wider picture of the situation in Latin America was presented. Latin America does not have a common, homogeneous approach to data protection. For example, Argentina has had a data protection law since 2000, for which it obtained an EU adequacy decision; Chile is in the process of adopting a data protection law but still has a long way to go; and Peru, Ecuador and Colombia are trying to modernize their laws.

The American Convention on Human Rights recognises a right to privacy and a right to intimacy, but the Inter-American Court of Human Rights has not yet interpreted either the right to data protection or, more specifically, the topic of ADM practices. However, it should be kept in mind that, as was the case with Brazil’s LGPD, the GDPR has strongly influenced Latin America’s approach to data protection. Another common reference for Latin American countries is the Ibero-American Network which, as Eduardo Bertoni explained in his talk, does not produce hard law but publishes recommendations that are followed by the respective jurisdictions. Regarding the discussion on ADM specifically, Eduardo Bertoni mentioned the following initiatives taken in the Ibero-American space:

Main Takeaways

While there is an ongoing debate around the regulation of AI systems and automated processing in light of the recently proposed EU AI Act, this panel drew attention to existing data protection laws that already contain provisions protecting individuals against automated processing operations. The main takeaways of this panel are the following:

Looking ahead, the debate around the regulation of AI systems will continue to be heated, and the protection of fundamental rights and freedoms in the face of automated processing will remain a top priority. In this debate we should keep in mind that the proposed AI Regulation is being introduced into an already existing system of laws, such as data protection law, consumer law and labour law. It is important to be clear about the reach and nature of these laws in order to identify the gaps that the AI Regulation, or any other future proposal, is meant to fill. This panel highlighted that ADM and automated processing are not unregulated. On the contrary, current laws protect individuals by putting in place binding overarching principles, legal obligations and rights, and Courts and national authorities have already started enforcing these laws.

Watch a recording of the panel HERE.

Read more from our Global Privacy series:

Insights into the future of data protection enforcement: Regulatory strategies of European Data Protection Authorities for 2021-2022

Spotlight on the emerging Chinese data protection framework: Lessons learned from the unprecedented investigation of Didi Chuxing

A new era for Japanese Data Protection: 2020 Amendments to the APPI

Image by digital designer from Pixabay

India’s new Intermediary & Digital Media Rules: Expanding the Boundaries of Executive Power in Digital Regulation


Author: Malavika Raghavan

India’s new rules on intermediary liability and regulation of publishers of digital content have generated significant debate since their release in February 2021. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (the Rules) have:

The majority of these provisions were unanticipated, resulting in a raft of petitions filed in High Courts across the country challenging the validity of various aspects of the Rules, including their constitutionality. On 25 May 2021, the three-month compliance period for some of the new requirements for significant social media intermediaries (so designated by the Rules) expired without many intermediaries being in compliance, opening them up to liability under the Information Technology Act as well as wider civil and criminal laws. This has reignited debates about the impact of the Rules on business continuity and liability, citizens’ access to online services, privacy and security.

Following on from FPF’s previous blog highlighting some aspects of these Rules, this article presents an overview of the Rules before deep-diving into critical issues regarding their interpretation and application in India. It concludes by taking stock of some of the emerging effects of these new regulations, which have major implications for millions of Indian users as well as for digital service providers serving the Indian market.

1. Brief overview of the Rules: Two new regimes for ‘intermediaries’ and ‘publishers’ 

The new Rules create two regimes for two different categories of entities: ‘intermediaries’ and ‘publishers’. Intermediaries have been the subject of prior regulation, namely the Information Technology (Intermediaries guidelines) Rules, 2011 (the 2011 Rules), now superseded by these Rules. The category of ‘publishers’, however, and the related regime created by these Rules did not previously exist.

The Rules begin with commencement provisions and definitions in Part I. Part II of the Rules applies to intermediaries (as defined in the Information Technology Act 2000 (IT Act)) that transmit electronic records on behalf of others, including online intermediary platforms (such as YouTube, WhatsApp and Facebook). The rules in this part primarily flesh out the protections offered in Section 79 of the IT Act, which gives passive intermediaries the benefit of a ‘safe harbour’ from liability for objectionable information shared by third parties using their services, somewhat akin to the protections under section 230 of the US Communications Decency Act. To claim this protection from liability, intermediaries need to undertake certain ‘due diligence’ measures, including informing users of the types of content that cannot be shared and operating content take-down procedures (for which safeguards evolved over time through important case law). The new Rules supersede the 2011 Rules and also significantly expand on them, introducing new provisions and additional due diligence requirements that are detailed further in this blog.

Part III of the Rules applies to a new, previously non-existent category of entities designated as ‘publishers’, further classified into the subcategories of ‘publishers of news and current affairs content’ and ‘publishers of online curated content’. Part III then sets up extensive requirements for publishers to adhere to specific codes of ethics, onerous content take-down requirements, and a three-tier grievance process with appeals lying to an Executive Inter-Departmental Committee of Central Government bureaucrats.

Finally, the Rules contain two provisions relating to content-blocking orders that apply to all entities (i.e. intermediaries and publishers). They lay out a new process by which Central Government officials can issue directions to intermediaries and publishers to delete, modify or block content, either following a grievance process (Rule 15) or through “emergency” blocking orders which may be passed ex parte (Rule 16). These provisions stem from the Government’s power to issue directions to intermediaries to block public access to any information through any computer resource (Section 69A of the IT Act). Interestingly, they have been introduced separately from the existing rules for blocking, the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009.

2. Key issues for intermediaries under the Rules

2.1 A new class of ‘social media intermediaries’

The term ‘intermediary’ is broadly defined in the IT Act, covering a range of entities involved in the transmission of electronic records. The Rules introduce two new sub-categories:

Given that a popular messaging app like WhatsApp has over 400 million users in India, the threshold appears to be fairly conservative. The Government may order any intermediary to comply with the same obligations as significant social media intermediaries (SSMIs) (under Rule 6) if its services are adjudged to pose a risk of harm to national security, the sovereignty and integrity of India, India’s foreign relations or public order.

SSMIs have to follow substantially more onerous “additional due diligence” requirements to claim the intermediary safe harbour (including mandatory traceability of message originators and proactive automated screening, as discussed below). These new requirements raise privacy and data security concerns: they extend beyond traditional notions of platform “due diligence”, potentially expose the content of private communications, and in doing so create new privacy risks for users in India.

2.2 Additional requirements for SSMIs: resident employees, mandated message traceability, automated content screening

Extensive new requirements are set out in the new Rule 4 for SSMIs. 

Provisions mandating modifications to the technical design of encrypted platforms to enable traceability seem to go beyond merely requiring intermediary due diligence. Instead, they appear to draw on separate Government powers relating to the interception and decryption of information (under Section 69 of the IT Act). In addition, separate stand-alone rules laying out procedures and safeguards for such interception and decryption orders already exist in the Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009. Rule 4(2) even acknowledges these provisions, raising the question of whether these Rules (relating to intermediaries and their safe harbours) can be used to expand the scope of section 69 or the rules thereunder.

Proceedings initiated by WhatsApp LLC in the Delhi High Court and by Free and Open Source Software (FOSS) developer Praveen Arimbrathodiyil in the Kerala High Court have both challenged the legality and validity of Rule 4(2), on grounds including that it is ultra vires, going beyond the scope of its parent statutory provisions (sections 79 and 69A) and the intent of the IT Act itself. Substantively, the provision is also challenged on the basis that it would violate users’ fundamental rights, including the right to privacy and the right to free speech and expression, due to the chilling effect that the stripping back of encryption would have.

Though the objective of the provision is laudable (i.e. to limit the circulation of violent or previously removed content), the move towards proactive automated monitoring has raised serious concerns regarding censorship on social media platforms. Rule 4(4) appears to acknowledge the deep tensions this requirement creates with privacy and free speech, as seen in the provisions requiring that these screening measures be proportionate to the free speech and privacy interests of users, be subject to human oversight, and be accompanied by reviews of automated tools to assess fairness, accuracy, propensity for bias or discrimination, and impact on privacy and security. However, given the vagueness of this wording compared to the trade-off of losing intermediary immunity, scholars and commentators have noted the obvious potential for ‘over-compliance’ and excessive screening out of content. Many (including the petitioner in the Praveen Arimbrathodiyil matter) have also noted that automated filters are not sophisticated enough to differentiate between violent unlawful images and legitimate journalistic material. The concern is that such measures could lead to large-scale screening out of ‘valid’ speech and expression, with serious consequences for the constitutional rights to free speech and expression, which also protect ‘the rights of individuals to listen, read and receive the said speech‘ (Tata Press Ltd v. Mahanagar Telephone Nigam Ltd, (1995) 5 SCC 139).

Such requirements appear to be aimed at creating more user-friendly networks of intermediaries. However, imposing a single set of requirements is especially onerous for smaller or volunteer-run intermediary platforms, which may not have the income streams or staff to provide such a mechanism. Indeed, the petition in the Praveen Arimbrathodiyil matter has challenged certain of these requirements as a threat to the future of the volunteer-led Free and Open Source Software (FOSS) movement in India, since they place the same requirements on small FOSS initiatives as on large proprietary Big Tech intermediaries.

Other obligations stipulating turn-around times for intermediaries include (i) a requirement to remove or disable access to content within 36 hours of receipt of a Government or court order relating to unlawful information on the intermediary’s computer resources (under Rule 3(1)(d)), and (ii) a requirement to provide information within 72 hours of receiving an order from an authorised Government agency undertaking investigative activity (under Rule 3(1)(j)).

Similar to the concerns with automated screening, there are concerns that the new grievance process could lead to private entities becoming the arbiters of appropriate content and free speech, a position that was specifically reversed in a seminal 2015 Supreme Court decision, which clarified that a Government or court order was needed for content takedowns.

3. Key issues for the new ‘publishers’ subject to the Rules, including OTT players

3.1 New Codes of Ethics and three-tier redress and oversight system for digital news media and OTT players 

Digital news media and OTT players have been designated as ‘publishers of news and current affairs content’ and ‘publishers of online curated content’ respectively in Part III of the Rules. Each category has then been made subject to a separate Code of Ethics. In the case of digital news media, the codes applicable to newspapers and cable television have been applied. For OTT players, the Appendix sets out principles regarding the content that can be created and display classifications. To enforce these codes and to address grievances from the public about their content, publishers are now mandated to set up a grievance system, which forms the first tier of a three-tier “appellate” system culminating in an oversight mechanism by the Central Government with extensive powers of sanction.

At least five legal challenges have been filed in various High Courts challenging the competence and authority of the Ministry of Electronics & Information Technology (MeitY) to pass the Rules, as well as their validity, namely: (i) in the Kerala High Court, LiveLaw Media Private Limited vs Union of India WP(C) 6272/2021; in the Delhi High Court, three petitions tagged together, being (ii) Foundation for Independent Journalism vs Union of India WP(C) 3125/2021, (iii) Quint Digital Media Limited vs Union of India WP(C) 11097/2021, and (iv) Sanjay Kumar Singh vs Union of India and others WP(C) 3483/2021; and (v) in the Karnataka High Court, Truth Pro Foundation of India vs Union of India and others, W.P. 6491/2021. This is in addition to a fresh petition filed on 10 June 2021, TM Krishna vs Union of India, challenging the entirety of the Rules (both Parts II and III) on the basis that they violate the rights of free speech (Article 19 of the Constitution) and privacy (including under Article 21 of the Constitution), and that they fail the test of arbitrariness (under Article 14), being manifestly arbitrary and falling foul of the principles of delegation of powers.

Some of the key issues emerging from these Rules in Part III and the challenges to them are highlighted below. 

3.2 Lack of legal authority and competence to create these Rules

There has been substantial debate on the lack of clarity regarding the legal authority of the Ministry of Electronics & Information Technology (MeitY) under the IT Act. These concerns arise at various levels. 

First, there is a concern that Levels I and II result in a privatisation of adjudications relating to the free speech and expression of creative content producers, which would otherwise be litigated in Courts and Tribunals as matters of free speech. As noted by many (including the LiveLaw petition at page 33), this could have the effect of overturning the judicial precedent in Shreya Singhal v. Union of India ((2013) 12 S.C.C. 73), which specifically read down s 79 of the IT Act to avoid a situation where private entities were the arbiters determining the legitimacy of takedown orders. Second, despite referring to “self-regulation”, this system is subject to executive oversight (unlike the existing models for offline newspapers and broadcasting).

The Inter-Departmental Committee is composed entirely of Central Government bureaucrats, and it may review complaints escalated through the three-tier system or referred directly by the Ministry, following which it can deploy a range of sanctions, from warnings, to mandating apologies, to deleting, modifying or blocking content. This raises the question of whether the Committee meets the legal requirements for an administrative body undertaking a ‘quasi-judicial’ function, especially one that may adjudicate on matters relating to free speech and privacy. Finally, while the objective of creating some standards and codes for such content creators may be laudable, it is unclear whether such an extensive oversight mechanism with powers of sanction over online publishers can be validly created under the rubric of intermediary liability provisions.

4. New powers to delete, modify or block information for public access 

As described at the start of this blog, the Rules add new powers for the deletion, modification and blocking of content from intermediaries and publishers. While section 69A of the IT Act (and the rules thereunder) does include blocking powers for the Government, those powers exist only vis-à-vis intermediaries. Rule 15 expands this power to ‘publishers’. It also provides a new avenue for such orders to intermediaries, outside of the existing rules for blocking information under the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009.

Graver concerns arise from Rule 16, which allows emergency orders blocking information to be passed, including without giving publishers or intermediaries an opportunity to be heard. There is a provision for such an order to be reviewed by the Inter-Departmental Committee within two days of its issue.

Rules 15 and 16 both apply to all entities contemplated in the Rules. Accordingly, they greatly expand executive power and oversight over digital media services in India, including social media, digital news media and OTT on-demand services.

5. Conclusions and future implications

The new Rules in India have opened up deep questions for online intermediaries and providers of digital media services serving the Indian market. 

For intermediaries, this creates a difficult and even existential choice: the requirements (especially those relating to traceability and automated screening) appear to set an improbably high bar given the reality of their technical systems. However, failure to comply results not only in the loss of the safe harbour from liability but, as seen in the new Rule 7, also opens them up to punishment under the IT Act and criminal law in India.

For digital news and OTT players, the consequences of non-compliance and the level of enforcement remain to be understood, especially given the open questions regarding the validity of the legal basis for creating these rules. Given the numerous petitions filed against the Rules, there is also substantial uncertainty regarding their future, although the Rules themselves have the full force of law at present.

Overall, it does appear that attempts to create a ‘digital media’ watchdog would be better dealt with in standalone legislation, potentially sponsored by the Ministry of Information and Broadcasting (MIB), which has the traditional remit over such areas. Indeed, the administration of Part III of the Rules has been delegated by MeitY to MIB, pointing to the genuine split in competence between these Ministries.

Finally, potential overlaps with India’s proposed Personal Data Protection Bill (if passed) create further tensions for the future. It remains to be seen whether the provisions on traceability will survive the test of constitutional validity set out in India’s privacy judgement (Justice K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1). Irrespective of this determination, the Rules appear to have some dissonance with the data retention and data minimisation requirements seen in the last draft of the Personal Data Protection Bill, not to mention other obligations relating to privacy by design and data security safeguards. Interestingly, although the Bill was released in December 2019, the definition of ‘social media intermediary’ included in an explanatory clause to its section 26(4) closely tracks the definition in Rule 2(w), while departing from it by carving out certain intermediaries from the definition. This is already resulting in moves such as Google’s plea of 2 June 2021 in the Delhi High Court asking for protection from being declared a social media intermediary.

These new Rules have laid bare the inherent tensions within digital regulation between the goals of freedom of speech and expression and the right to privacy, on the one hand, and competing governance objectives of law enforcement (such as limiting the circulation of violent, harmful or criminal content online) and national security, on the other. The ultimate legal effect of these Rules will be determined as much by the outcome of the various petitions challenging their validity as by the enforcement challenges raised by casting such a wide net, one that covers millions of users and thousands of entities, all engaged in creating India’s growing digital public sphere.

Photo credit: Gerd Altmann from Pixabay

Read more Global Privacy thought leadership:

South Korea: The First Case where the Personal Information Protection Act was Applied to an AI System

China: New Draft Car Privacy and Security Regulation is Open for Public Consultation

A New Era for Japanese Data Protection: 2020 Amendments to the APPI

China: New Draft Car Privacy and Security Regulation is Open for Public Consultation


by Chelsey Colbert

The author thanks Hunter Dorwart for his contribution to this text.

The Cyberspace Administration of China (CAC) released a draft regulation on car privacy and data security on May 12, 2021. China has been very active in automated vehicle development and deployment, and last fall it also proposed a comprehensive draft privacy law, which is moving towards adoption, likely by the end of this year.

The draft car privacy and data security regulation (“Several Provisions on the Management of Automobile Data Security”; hereinafter, the “draft regulation”) is interesting for those tracking automated vehicle (AV) and privacy regulations around the world and is relevant beyond China, not only due to the size of the Chinese market and its potential impact on all actors in the “connected cars” space operating there, but also because dedicated legislation for car privacy and data security is novel for most jurisdictions. In fact, the draft regulation raises several interesting privacy and data protection aspects worthy of further consideration, such as its strict rules on consent, privacy by design, and data localization requirements. The CAC is seeking public comment on the draft, and the deadline for comments is June 11, 2021.

The draft regulation complements other regulatory developments around connected and automated vehicles and data. For example, on April 29, 2021, the National Information Security Standardization Technical Committee (TC 260), which is jointly administered by the CAC and the Standardization Administration of China, published a draft Standard on Information Security Technology Security Requirements for Data Collected by Connected Vehicles. The Standard sets forth security requirements for data collection to ensure compliance with other laws and facilitate a safe environment for networked vehicles. Standards like this are an essential component of corporate governance in China and notably fill in compliance gaps left in the law. 

The publication of the draft regulation and the draft standard indicate that the Chinese government is turning its attention towards the data and security practices of the connected cars industry. Below we explain the key aspects of this draft regulation, summarize some of the noteworthy provisions, and conclude with the key takeaways for everyone in the car ecosystem. 

Broad scope of covered entities: from OEMs to online ride-hailing companies

The draft regulation aims to strengthen the protection of “personal information” and “important data,” regulate data processing related to cars, and maintain national security and public interests. The scope of application of this draft regulation is fairly broad, both in terms of who it applies to and the types of data it covers. 

The draft regulation applies to “operators” that collect, analyze, store, transmit, query, utilize, delete, and provide overseas (activities collectively referred to as “processing”) personal information or important data during the design, production, sales, operation, maintenance, and management of cars “within the territory of the People’s Republic of China.”

“Operators” are entities that design or manufacture cars, or service institutions such as OEMs (original equipment manufacturers), component and software providers, dealers, maintenance organizations, online car-hailing companies, insurance companies, etc. (Note: The draft regulation includes “etc.,” here and throughout, which appears to mean that it is a non-exhaustive list.)

Covered data: Distinction among “personal information,” “important data,” and “sensitive personal information”

The draft regulation covers three data types, with an emphasis on “personal information” and “important data,” which are defined terms under Article 3. A third type, “sensitive personal information,” is mentioned in the draft at Article 8 and in a separate press release document.

Personal information includes data from car owners, drivers, passengers, pedestrians, etc. (a non-exhaustive list) and also includes information from which personal identity can be inferred or personal behavior described. This is a broad definition and is notable because it explicitly includes information about passengers and pedestrians. As business models evolve and the ecosystem of players in the car space grows, it has become more important to consider individuals other than just the driver or registered user of the car. The draft regulation appears to use the words “users” and “personal information subjects” when referring to this group of individuals broadly, and also uses “driver,” “owner,” and “passenger” throughout.

The second type of data covered is “important data,” which includes:

The inclusion of this data type is notable because it is defined in addition to “sensitive personal information” and includes data about users and infrastructure (e.g., the car charging network). Article 11 prescribes that when handling important data, operators should report in advance to the provincial cyberspace administration and relevant departments the type, scale, scope, storage location and retention period of the data, the purposes of collection, whether it is shared with a third party, etc. (“in advance” presumably means in advance of processing this type of data, but this is something that may need to be clarified).

The third type of data mentioned in the draft regulation is “sensitive personal information,” and this includes vehicle location, driver or passenger audio and video, and data that can be used to determine illegal driving. There are certain obligations for operators processing this type of data (Articles 8 and 16).

Article 8 prescribes that where “sensitive personal information” is collected or provided outside of the vehicle, operators must meet certain obligations:

The definitions of these three types of data mirror similar definitions in other Chinese laws or draft laws currently being considered for adoption, such as the Civil Code, the Cybersecurity Law, and the draft Personal Information Protection Law. Consistency across these laws indicates a harmonization of China’s emerging data governance regulatory model.

Obligations based on the Fair Information Practice Principles

Articles 4-10 include many of the fair information practice principles, such as purpose specification and data minimization in Article 4, security safeguards in Article 5, and privacy by design (Articles 6(4), 6(5), and 9). A few notable provisions are worth discussing in more detail; they are organized under the following headings: local processing, transparency and notice, consent and user control, biometric data, annual data security management, and violations and penalties.

Local (“on device”) processing

Personal information and important data should be processed inside the vehicle, wherever possible (Article 6(1)). Where data processing outside of the car is necessary, operators should ensure the data has been anonymized wherever possible (Article 6(2)).

Transparency and Notice

When processing personal information, the operator is required to give notice of the types of data being collected and to provide the contact information of the person responsible for handling user rights (Article 7). This notice can be provided through user manuals, onboard display panels, or other appropriate methods. The notice should include the purpose of collection, when personal information is collected, how users can stop the collection, where and for how long data is stored, and how to delete data stored in the car and outside of the vehicle.

Regarding sensitive personal information (Article 8(3)), the operator is obliged to inform the driver and passengers that this data is being collected, through a display panel or a voice prompt in the car. This provision does not include “user manuals” as an example of how to provide notice, which suggests that this data type warrants more active notice than ordinary personal information. This is notable because operators cannot rely on notice being given through a privacy notice placed on a website or in the car’s manual.

Consent and User Control, including a two-week deletion deadline

Article 9 requires operators to obtain consent to collect personal information, except where laws do not require consent. This provision notes that consent is often difficult to obtain (e.g., collecting audio and video of pedestrians outside the car). Because of this difficulty, data should only be collected when necessary and should be processed locally in the vehicle. Operators should also employ privacy by design measures, such as de-identification on devices.

Article 8(2) (requirements when collecting sensitive personal information) requires operators to obtain the driver’s consent and authorization each time the driver enters the car. Once the driver leaves the driver’s seat, that consent session ends, and a new one must begin once the driver gets back into the seat. The driver must be able to stop the collection of this type of data at any time, to view and make inquiries about the data collected, and to request the deletion of the data (the operator has two weeks to delete it). It is worth noting that Article 8 includes six subsections, some of which appear to apply only to the driver or owner and not to passengers or pedestrians.

These consent and user control requirements are quite notable and would have a non-trivial impact on the design of the car, the user experience, and the internal operations of the operator. Requiring consent and authorization each time the driver gets into the driver’s seat could negatively impact the user experience; a comparable experience is visiting a website and having to dismiss consent pop-ups before being able to read or use the site on every visit. Furthermore, stopping the collection of location data, video data, and other telematics data (if used to determine illegal driving) could also present safety and functionality risks and cause the car not to operate as intended or safely. These are some of the areas where stakeholders are expected to submit comments in the public consultation.
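To make the mechanics concrete, here is a minimal, purely illustrative sketch of how an in-vehicle system might model the per-entry consent session and the two-week deletion deadline described above. It assumes hypothetical class, method, and field names and is not based on any actual implementation or SDK:

```python
from datetime import datetime, timedelta

# Purely illustrative sketch of Article 8(2)-style per-entry consent sessions and
# the two-week deletion deadline. Class and method names are hypothetical.

DELETION_DEADLINE = timedelta(days=14)   # "the operator has two weeks to delete it"

class SensitiveDataConsent:
    def __init__(self):
        self.session_active = False      # consent lasts only for the current driving session
        self.deletion_requests = []      # list of (request_time, data_reference)

    def driver_entered_seat(self, consent_given: bool):
        # A new consent session must be started each time the driver enters the car.
        self.session_active = consent_given

    def driver_left_seat(self):
        # Leaving the driver's seat ends the consent session.
        self.session_active = False

    def may_collect_sensitive_data(self) -> bool:
        # Location, in-cabin audio/video, etc. may only be collected while consent is active.
        return self.session_active

    def request_deletion(self, data_reference: str, now: datetime):
        # The driver can request deletion at any time.
        self.deletion_requests.append((now, data_reference))

    def overdue_deletions(self, now: datetime):
        # Deletion requests older than two weeks that have not yet been honoured.
        return [ref for (t, ref) in self.deletion_requests if now - t > DELETION_DEADLINE]
```

Under such a model, sensitive data collection would simply pause whenever no consent session is active, and any deletion request older than fourteen days would be flagged for the operator to act upon.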

Biometric data

Biometric data is mentioned throughout the draft regulation, as it is implicitly or explicitly included in the definitions of personal information, important data, and sensitive personal information. It is also an increasingly common data type collected by cars and deserves special attention. Article 10, which deals specifically with drivers’ biometric data, would require that such data (e.g., fingerprints, voiceprints, faces, heart rhythms) only be collected for the convenience of the user or to increase the security of the vehicle, and that operators also provide alternatives to biometrics.

Data localization

Articles 12-15 and 18 concern data localization. Both personal information and important data should be stored within China; if it is necessary to store data elsewhere, the operator must complete an “outbound security assessment” through the State Cyberspace Administration, and the operator is permitted to send overseas only the data specified in that assessment. The operator is also responsible for overseeing the overseas recipient’s use of the data to ensure appropriate security, and for handling all user complaints.

Annual data security management status

Article 17 places additional obligations on operators to report their annual data security management status to relevant authorities before December 15 of each year when:

  1. They process personal information of more than 100,000 users, or
  2. They process important data. 

Given that this draft regulation applies to passengers and pedestrians in addition to drivers, it would not take long for the threshold of 100,000 users to be met, especially for operators who manage a fleet of cars for rental or ride-hailing. Additionally, since the definitions of personal information and important data are so broad, many operators would likely trigger this reporting obligation. The obligations include recording the contact information of the person responsible for data security and for handling user rights; recording relevant information about the scale and scope of data processing; recording with whom data is shared domestically; and other security conditions to be specified. If data is transferred overseas, there are additional obligations (Article 18).

Violations and Penalties

Violations of the regulation would be punished in accordance with the “Network Security Law of the People’s Republic of China” and other laws and regulations. Operators may also be held criminally responsible.

Conclusion 

China’s draft car privacy and security regulation provides relevant information for policymakers and others thinking carefully about privacy and data protection regarding cars. The draft regulation’s scope is very broad and includes many players in the mobility ecosystem beyond OEMs and suppliers (e.g., online car-hailing companies and insurance companies).

With regard to user rights, the draft regulation recognizes that other individuals, in addition to the driver, will have their personal information processed, and it provides data protection and user rights to these individuals (e.g., passengers and pedestrians). The draft regulation would apply to three broad categories of data (personal information, important data, and sensitive personal information).

In privacy and data protection laws from the EU to the US, we have continued to see different obligations arise depending on the type or sensitivity of data and how data is used. This underscores the need for organizations to have a complete data map; indeed, it is crucial that all operators in the connected and automated car ecosystem have a sound understanding of what data is being collected from which person and where that data is flowing. 

The draft regulation also highlights the importance of transparency and notice, as well as the challenges of consent and user control. It is a challenge to appropriately notify drivers, passengers, and pedestrians about all of the data types being collected by a vehicle.

Privacy and data protection laws will have a direct impact on the design, user experience, and even the enjoyment and safety of cars. It is crucial that all stakeholders are given the opportunity to provide feedback in the drafting of privacy and data protection laws that regulate data flows in the car ecosystem and that privacy professionals, engineers, and designers become much more comfortable working together to operationalize these rules. 

Image by Tayeb MEZAHDIA from Pixabay 

Check out other blogs in the Global Privacy series:

A New Era for Japanese Data Protection: 2020 Amendments to the APPI

The Right to Be Forgotten is Not Compatible with the Brazilian Constitution. Or is it?

India: Massive Overhaul of Digital Regulation with Strict Rules for Take-down of Illegal Content and Automated Scanning of Online Content