FPF Paper, “The Thin Red Line …,” Receives the Council of Europe’s 2023 Stefano Rodotà Award

On Friday, June 16th, members of the FPF team joined the 44th Plenary meeting of the Council of Europe’s Committee of Convention 108 in Strasbourg, France, to accept a tremendous research honor. On this occasion, Katerina Demetzou, Senior Counsel for Global Privacy, Dr. Gabriela Zanfir-Fortuna, VP for Global Privacy, and Sebastião Barros Vale, former Senior Counsel for Europe at FPF, received the 2023 Stefano Rodotà Award in the category of ‘best academic article’ for their paper, “The Thin Red Line: Refocusing Data Protection Law on Automated Decision-Making, A Global Perspective with Lessons from Case-Law.” Demetzou and Barros Vale were present in Strasbourg during the Plenary meeting to present the paper and accept the award.

The Council of Europe (CoE), founded in 1949, is an international organization with 46 Member States and 6 Observer States. All Council of Europe Member States have signed up to the European Convention on Human Rights (ECHR), a treaty designed to protect human rights, democracy, and the rule of law. The European Court of Human Rights (ECtHR) oversees the implementation of the ECHR in the Member States.

Demetzou, Barros Vale, and Zanfir-Fortuna are honored to receive this recognition at the birthplace of the CoE’s historic 1981 treaty, the Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data. Convention 108 was the first legally binding international instrument in data protection, and the CoE adopted the modernized Convention 108+ in 2018. A year later, the Convention 108+ Committee established the Stefano Rodotà Award to honor the memory and legacy of Stefano Rodotà (1933-2017), a leading Italian professor and politician and one of the founding fathers of data protection law in Europe. The award recognizes precedent-setting and innovative research in the field of data protection. President Rodotà’s dedicated academic, intellectual, and political influence lives on through the authors’ global exploration of the data protection instruments that safeguard individuals against harms from automated decision-making in emerging technologies.

“The Thin Red Line” analyzes the legal protection that data protection law provides to individuals who are subjected to automated decision-making (ADM) on the basis of their personal data. To that end, the authors dedicate the first part of their research to an analysis of European Data Protection Authorities’ (DPAs) enforcement of the GDPR in ADM cases. The second section looks to Brazil, Mexico, Argentina, Colombia, China, and South Africa to explore protections against harmful ADM found in non-EU general data protection laws. The article concludes that even in cases where a processing operation does not meet the high threshold set by Article 22 GDPR (‘solely by automated means’), DPAs have nonetheless made use of an array of legal principles, rights, and obligations to protect individuals against ADM practices. With the exception of Colombia, the non-EU jurisdictions studied each have a specific ADM provision. In all cases studied, the general data protection laws provide a broad material scope, such that any automated processing operation, whether solely automated or not, is regulated by the relevant provisions. Additionally, all laws studied include strong transparency and fairness requirements.

While the debate on a European regulation for AI is ongoing, the paper aims to contribute to the discussion by highlighting that, in cases where algorithms and AI systems process personal data, the GDPR is enforceable and protects individuals. Despite extensive legal scholarship on Article 22 GDPR, FPF’s experts identified a gap in the previous literature through their global examination of existing enforcement actions and interpretations from regulators.

After the award ceremony, Demetzou was especially grateful for “the Committee’s warmth, as well as their committed understanding and appreciation for our research.” Dr. Zanfir-Fortuna underscored the importance of the article’s findings while reflecting on emerging AI regulatory trends: “Data protection law has proved to be one of the most relevant existing legal frameworks to deal with the risks posed by the mass deployment of new AI tools. Existing legal obligations related to processing of personal data, on all continents, are stringent and more pressing than possible future AI legislation, as they are immediately applicable to existing AI systems.” The authors hope this intervention, as well as the paper’s global scan, will support researchers and policymakers in understanding how existing data protection law protects against potential harms from algorithms and AI systems.

For more, read “The Thin Red Line: Refocusing Data Protection Law on ADM, A Global Perspective with Lessons from Case-Law.”


Nigeria’s New Data Protection Act, Explained

On June 12, 2023, the President of Nigeria signed the Data Protection Bill into law following a successful third reading at the Senate and the House of Representatives. The Data Protection Act, 2023 (the Act) enjoyed both executive and legislative support and marks an important milestone in Nigeria’s nearly two-decade journey towards a comprehensive data protection law. Renewed efforts towards a comprehensive law began in September 2022, when the National Commissioner of the Nigeria Data Protection Bureau (NDPB), now the Nigeria Data Protection Commission (NDPC), announced that the office would seek legal support for a new law as part of the Nigeria Digital Identification for Development Project. The drafting of the law was followed by a validation process conducted in October 2022. After validation, the draft Bill was submitted to the Federal Executive Council for approval, which paved the way for its transmission to the National Assembly. The 2022 Data Protection Bill was introduced in both houses of Nigeria’s bicameral legislature as the Nigeria Data Protection Bill, 2023. The Act took effect upon the President’s signature.

The Act provides for data protection principles that are common to many international data protection frameworks. It defines “personal data” broadly, and it includes legal obligations for “data controllers” and “data processors,” defined similarly to the majority of data protection laws around the world. While the structure and content of the Act align with other international data protection frameworks, the Act contains notable unique provisions, discussed in the sections that follow.

Prior to the introduction of the Act, Nigeria’s data protection landscape was governed by the Nigeria Data Protection Regulation, 2019 (NDPR) and the Nigeria Data Protection Regulation 2019: Implementation Framework (Implementation Framework). However, the need to fill gaps in the NDPR, to create a legal foundation for the existing data protection body, and to satisfy a precondition for the rollout of a national digital identification program required a new legislative framework. Nevertheless, the NDPR and its Implementation Framework remain in force alongside the Act. Under Section 64(2)(f), all existing regulatory instruments, including regulations, directives, and authorizations issued by the National Information Technology Development Agency (NITDA) or the NDPB, remain in force as if they were issued by the Commission until they expire or are repealed, replaced, re-enacted, or altered. Per Section 63 of the Act, the new law takes precedence in any instance of conflict with pre-existing provisions.

1. Covered Actors: Novel Categories of Data Controllers and Processors

The Act applies to the processing of personal data by data controllers, data processors, and third parties, which may be individuals, private entities, or public entities that process personal data. A data controller is defined as an individual, private entity, public Commission or agency, or any other body which, alone or jointly with others, determines the purposes and means of the processing of personal data. A data processor is defined as an individual, private entity, public authority, or any other body who or which processes personal data on behalf of or at the direction of a data controller or another data processor. The Act does not define third parties. 

The Act introduces a novel category of “data controllers and processors of major importance.” A data controller or processor of major importance is defined as one that “is domiciled, resident in, or operating in Nigeria and processes or intends to process personal data of more than such number of data subjects who are within Nigeria, as the Commission may prescribe, or such other class of data controller or data processor that is processing personal data of particular value or significance to the economy, society, or security of Nigeria as the Commission may designate.”

While the practical thresholds of this definition are set to be further clarified by the Commission, they will be based on the number of data subjects whose data are processed and on the value or significance of the processed data. This categorization has commonalities with the EU Digital Services Act’s designation of entities as Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) and may be used to create unique and additional obligations for such controllers and processors. The Act currently requires qualifying entities to meet special registration requirements, appoint a data protection officer, and face higher penalty amounts for violations. Future obligations will relate to the filing of compliance returns (Section 61(2)(g)), as well as any others that may be prescribed through regulations later issued by the Commission.

2. Covered Data: Broad Categories of Sensitive Personal Data 

The Act covers both personal and sensitive personal data. It defines personal data as “any information relating to an individual, who can be identified or is identifiable, directly or indirectly, by reference to an identifier such as a name, an identification number, location data, an online identifier or one or more factors specific to the physical, physiological, genetic, psychological, cultural, social, or economic identity of that individual.” The definition closely tracks Article 4(1) of the GDPR.

It further defines sensitive personal data as personal data relating to an individual’s:

Section 30(2) of the Act envisions a broad, flexible definition for sensitive personal data by authorizing the Commission to prescribe further categories of sensitive personal data. The Act also prohibits the processing of sensitive personal data unless specified conditions are met. Notable allowances for processing of sensitive personal data include:

These rules closely track Article 9 GDPR’s restrictions on the processing of “special category” data. Unlike the GDPR, which envisions situations where the prohibition on processing sensitive personal data may not be lifted on the basis of consent, under Nigeria’s Act the consent exception is unavailable only where a data subject has given and then withdrawn such consent. Additionally, the Act applies an “explicit consent” requirement only to the potential sharing of sensitive personal data by a “foundation, association, or other not-for-profit body with charitable, educational, literary, artistic, philosophical, religious, or trade union purposes,” while the GDPR’s Article 9(2)(a) requires “explicit consent” for all consent-based processing of special category data. However, the Act does permit the Commission to create additional regulations that may apply to the processing of sensitive personal data, including regulations expanding the categories of sensitive personal data, adding grounds for processing such data, and specifying safeguards to be applied.

3. Territorial Application: Broad Extraterritorial Application of the Act

Section 2(2)(c) of the Act contains broad extraterritorial authority, covering any processing of the personal data of a data subject in Nigeria by controllers or processors not established in Nigeria. This provision does not consider the nature of the processing being conducted, unlike frameworks such as the GDPR, which includes a “targeting” criterion.

Exemptions from Application of the Act: Increased Protections for Exempted Processing Activities 

Section 3 of the Act provides several exemptions from the broader application of the law and makes room for the Commission to expand the processing activities that may be exempted. Processing personal data “solely for personal or household purposes” is exempt, as long as such processing does not violate a data subject’s right to privacy. This is a stark difference from laws such as the GDPR, which wholly exempts processing of personal data by a natural person in the course of a purely personal or household activity, regardless of whether it touches on the person’s right to privacy. There are therefore instances where personal data processing activities of a non-professional and non-commercial nature may fall under the ambit of the law. The rationale for this condition is not clear. Other exemptions, which remove the relevant processing from most of the obligations under Part V of the Act, include processing by law enforcement for the prevention, investigation, detection, or prosecution of a crime; processing for the prevention or control of a national public health emergency; processing for national security or public interest purposes; and processing necessary for the establishment, exercise, or defense of legal claims. Exempt entities must still comply with some specific provisions under Part V, including:

While the Act reserves for the Commission the authority to prescribe additional exemptions, it includes a greater number of protections for exempt processing activities than the 2022 Bill did. In addition to the above-mentioned provisions that exempt entities must comply with, the Act empowers the Commission to issue a Guidance Note on the legal safeguards and best practices for exempted data controllers and processors where such processing violates or is likely to violate Sections 24 and 25 of the law. Some exemptions have been narrowed relative to the 2022 Bill: entities that were exempted from complying with provisions under the 2022 Bill must now comply with the above-mentioned provisions for exempt entities under the 2023 Act, as well as those relating to data security and cross-border data transfers.

4. Obligations of Data Controllers and Processors: Novel Registration Requirements for Data Controllers and Processors of Major Importance 

Some of the Act’s obligations for data controllers and processors are novel, while others have been maintained from the NDPR. 

Data Controllers and Processors of “Major Importance” 

The designation of “data controllers and processors of major importance” and the Commission’s authority to classify and regulate such entities is a key new development. Section 44 of the Act sets out the process and timelines to which such entities must adhere, including registering with the Commission within six months after the commencement of the Act, or upon meeting the statutory criteria for qualifying as a data controller or processor of major importance. Additionally, the Act empowers the Commission to exempt classes of data controllers and processors of major importance from registration where it considers that such registration is unnecessary or disproportionate. The criteria for exemption may be stipulated through Regulations by the Commission.

Another special obligation for controllers and processors of major importance is the requirement to appoint a Data Protection Officer (DPO), which is imposed by Section 32 on such entities only. This requirement substantially differs from the NDPR and the Implementation Framework; under the NDPR, every data controller must appoint a DPO (Article 4.1.2), while the Implementation Framework stipulates conditions for such an appointment (3.4.1). 

Other important obligations of all data controllers and processors include:

Compliance with Data Protection Principles

Data controllers and processors are responsible for complying with the principles provided in the Act. The principles are similar to the FIPPs-based sets found in many comprehensive data protection regimes, but they also include a duty of care as a principle for controllers and processors (Section 24(3)). Specifically, both controllers and processors owe “a duty of care” with respect to data processing, which is linked to demonstrating accountability for compliance with the other principles provided by the Act.

Filing of Audit Reports

As discussed in greater detail below, controllers and processors must seek the services of a data protection compliance organization (DPCO) to perform a data protection audit, among other obligations. As the Act does not create new criteria for entities required to conduct such audits, provisions under the NDPR and Implementation Framework remain in force. While the Implementation Framework provides that the authority may carry out scheduled audits or perform spot checks, the common practice is for controllers and processors that process personal data of more than 2000 data subjects in 12 months to engage a DPCO to conduct annual audits on their behalf. This practice is expected to continue. 
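As a purely illustrative aid (not drawn from the text of the Act or the NDPR beyond the figures cited above), the following minimal Python sketch encodes the annual audit practice just described; the function name and parameters are hypothetical.

```python
def annual_dpco_audit_expected(data_subjects_last_12_months: int) -> bool:
    """Common NDPR-era practice: controllers and processors that process the
    personal data of more than 2,000 data subjects within 12 months engage a
    licensed data protection compliance organization (DPCO) for an annual audit."""
    AUDIT_THRESHOLD = 2_000
    return data_subjects_last_12_months > AUDIT_THRESHOLD


# Example: an entity that processed the personal data of 3,500 data subjects
print(annual_dpco_audit_expected(3_500))  # True -> an annual DPCO audit is expected
```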

Provision of Information to a Data Subject Prior to Collection of Personal Data

Where a data controller collects personal data directly or indirectly from a data subject, they must supply the data subject with the following information prior to collection:

Where personal data is collected indirectly, a controller may be exempted from providing this information to a data subject if it had already been provided or where it would involve a disproportionate effort (Section 27(2)). The transparency obligations imposed by Section 27(1) (listed above) shall form part of the content of a privacy policy that a controller is obliged to have under the law and must be expressed in “clear, concise, transparent, intelligible, and easily accessible form.” In providing this information, a controller is obligated to take into account the intended class of data subjects. This implies that a privacy notice may need to be adjusted to cater to, among other issues, the literacy levels and language differences among data subjects.

Conducting a Data Protection Impact Assessment

Section 28(1) mandates a data controller to conduct a data protection impact assessment (DPIA) prior to the processing of personal data where such processing is likely to result in a high risk to the rights and freedoms of a data subject. The Act does not specify the period within which a DPIA must be conducted prior to such processing. Laws such as Kenya’s Data Protection Act require a DPIA to be conducted 60 days prior to the processing; this obligation may be clarified under future Regulations.

The Commission may designate, by Regulation or Directive, categories of processing or persons that automatically trigger the requirement to conduct a DPIA. To qualify, a DPIA must include: 

Overseeing the Conduct of Data Processors and Sub-Processors

Controllers engaging processors, or processors engaging sub-processors, must take “reasonable measures” to ensure that the engaged party complies with the requirements of the Act set out in Section 29(1). These measures must take the form of a written agreement and ensure that the engaged party:

Data Security and Data Breach Notification Requirements

Controllers and processors shall be required to implement security measures and safeguards for personal data. The level of such measures shall take into account several factors, including:

The measures that controllers and processors may implement are further described under Section 39(2), including pseudonymization, encryption, and periodic assessments of risks to processing systems.

Where a data breach occurs that affects a data processor, the processor will be required to notify the data controller or processor that engaged it as soon as the breached party becomes aware of the incident, and must respond to information requests regarding the breach (Section 40(1)).

Where a data controller suffers a breach that is likely to cause a risk to the rights and freedoms of data subjects as defined by Section 40(7), several steps are required, including:

The requirements for communications to the Commission and to affected data subjects also differ. Communication to the Commission should be as detailed as possible and include a description of the nature of the breach, while notice to data subjects should be in plain and clear language and include steps to take to mitigate any adverse effects. Section 40(4) highlights the common information that should be present in both cases, such as the name and contact details of a point of contact for the data controller. Information relating to a breach may be provided in a phased manner, where it is impossible to provide all information in a single communication.

5. Lawful Grounds for Processing Personal Data, Consent Requirements, and Children’s Personal Data

The Act provides for six lawful grounds for processing personal data similar to those under the GDPR, including:

Consent Requirements

The Act requires that consent be freely given, specific, informed, and unambiguous, which is similar to the consent requirements under the NDPR and Implementation Framework. The Act prohibits implied consent, i.e., the inference of consent from a data subject’s inactivity or the use of pre-ticked boxes. This corresponds with most of the consent provisions under the Implementation Framework, except that the Framework provides exceptions for consent relating to cookies: the Framework (5.6) provides that consent for cookies may be implied from continued browsing of a website and does not mandate explicit consent. This effectively limits the extent of the consent to direct marketing that is required under 5.3.1(a) of the Implementation Framework.

Children’s Privacy

The Act expands the protections accorded to children and persons lacking legal capacity compared to the NDPR and its Implementation Framework. It increases the age threshold under which a data subject is considered a “child” to 18 years, in alignment with the Nigeria Child Rights Act (which, however, not all states have domesticated), and in contrast to the Implementation Framework, which categorizes a child as a person under 13 years of age. The Act also includes specific consent requirements for children and persons lacking the legal capacity to consent. While the NDPR and Implementation Framework are silent on whom to obtain such consent from, under the Act consent shall be obtained explicitly from parents or legal guardians (Section 31(1)). To effect this, the Act requires controllers and processors to adopt consent verification mechanisms. To guarantee stronger privacy protections for children, the Commission will create Regulations to guide the processing of the personal data of children of 13 years and above in the course of their use of online products and services.

However, there are instances where a controller or processor may process the personal data of children and persons lacking legal capacity without the consent of a parent or legal guardian, such as: 

Further Protection for Processing of Personal Data Relating to Children and Persons Lacking Legal Capacity 

In addition to the consent requirements, the Act further requires controllers and processors to adopt age verification mechanisms. Age verification is required “where feasible,” taking into consideration available technology. Presentation of any government-approved identification documents will be permitted as a verification mechanism. 

6. Data Subject Rights: Robust Rights with No Implementation Mechanisms for Data Subjects and Narrow Restrictions on Exercise of Rights

The Act provides for data subject rights, which data controllers and processors must comply with prior to and during the processing of personal data, including the rights to: 

The Act does not provide comprehensive mechanisms for implementing these rights, such as parameters and modalities for responding to data subject requests. However, the Implementation Framework (2.3.2(c)) requires controllers to inform data subjects of the method for withdrawing consent before obtaining it. The Act states that “a controller should ensure that it is as easy for the data subject to withdraw as it is to give consent.”

The Act does not provide general restrictions/limits to the rights except for specific cases such as:

7. Cross Border Data Transfers: Broad Grounds for Transfers of Personal Data as well as Parliamentary Authorizations to Protect Data Sovereignty

The Act establishes as a rule that personal data should not be transferred outside of Nigeria, allowing for two exceptions. First, personal data can be transferred when the recipient of the personal data (the data importer) is subject either to (1) a law, (2) Binding Corporate Rules (‘BCRs’), (3) contractual clauses, (4) a Code of Conduct, or (5) a certification mechanism that “affords an adequate level of protection” to that provided by the Act. In the absence of such adequate protection through one of the enumerated means, personal data can also be transferred outside of Nigeria in exceptional situations, listed in Section 43 and mapping precisely to the set of derogations under Article 49 GDPR (consent of the individual, or for the performance of a contract, among others).

Controllers are under an obligation to keep a record of the legal basis for transferring personal data outside Nigeria, as well as to record “the adequacy of protection,” according to the criteria described in detail under Section 42 of the Act. This wording suggests that the adequacy of the means of transfers used can be validly assessed by each controller. This is a departure from other existing adequacy regimes, which usually require an official body to declare a specific jurisdiction adequate. 

The Commission is tasked with issuing guidelines on how to assess the adequacy of a particular means of transfer under the criteria established by Section 42 of the Act. This section explains that an adequate level of protection means “upholding principles that are substantially similar [emphasis added] to the conditions for processing personal data” under the Act. The criteria relevant for adequacy include “access of a public authority to personal data,” potentially complicating such assessments in line with the broader global debate on “government access to data held by private companies.”

Of note, the Act gives the Commission the possibility to determine whether “a country, region, or specified sector within a country, or standard contractual clauses, affords an adequate level of protection.” In this sense, it is important to recall that the NDPR and Annex C of the Implementation Framework already provide a whitelist of 41 countries whose laws are considered adequate. Interestingly, the Act specifically allows the Commission to make an adequacy determination under Nigerian law based on an adequacy decision “made by a competent authority of other jurisdictions,” if such adequacy is based on criteria similar to those listed in Section 42 of the Act. This opens the door for Nigeria to potentially recognize adequacy decisions made by foreign bodies, like the European Commission, creating a functional “adequacy network effect.” The Commission is also empowered to approve BCRs, Codes of Conduct, and certification mechanisms for data transfers.

Finally, and particularly interesting in the context of emerging certification frameworks like the Global Cross Border Privacy Rules (CBPR) framework, the Act requires that any specific “international, multinational cross-border data transfer codes, rules, or certification mechanisms” relating to data subject protection or data sovereignty must be approved by the National Assembly of Nigeria. This provision on data sovereignty aligns with the Nigeria National Data Strategy, 2022, which incorporates data sovereignty as one of its enabling pillars. Under the Strategy, data sovereignty will facilitate data residency and ensure that data is treated in accordance with national laws and regulations.

In this sense, the Act also empowers the Commission to “designate categories of personal data that are subject to additional specified restrictions on transfer to another country.” This designation would be based on “the nature” of such personal data and on “risks” to data subjects. This provision opens the door to potential future data localization requirements for specific categories of personal data. 

8. Enforcement: Legal Foundation for the Nigeria Data Protection Bureau, Creation of a Governing Council and Expected Regulations

Establishment of the Commission

Originally created through an Executive Order in February 2022, the NDPB has now been renamed the “Nigeria Data Protection Commission” and will operate as an independent and impartial body overseeing the Act’s implementation and enforcement. Previously, data protection enforcement in Nigeria was conducted under the auspices of the National Information Technology Development Agency (NITDA). However, concerns that NITDA lacked the powers to oversee data protection in the country may have necessitated the creation of a new agency. The Commission functions as a successor agency, and all persons engaged in the activities of the Commission shall, upon enactment of the Act, have the same rights, powers, and remedies held by the NDPB before the commencement of the law (Section 64(1)). All regulatory instruments issued by NITDA, including the NDPR, remain in force and have the same weight as if they had been issued by the Commission until they expire or are repealed, replaced, re-enacted, or altered (Section 64(2)(f)).

Functions and Powers of the Commission  

Some of the key functions and powers of the Commission include:

  1. Accrediting, Licensing, and Registering Suitable Bodies to Provide Data Protection Compliance Services (Section 5(c)).

Section 28 of the Act provides the Commission with the power to delegate the duty to monitor, audit, and report on compliance with the law to licensed data protection compliance organizations. This model was introduced under the NDPR and allows the data protection authority to delegate some functions under existing regulations to monitor, audit, and report on compliance by data controllers and data processors. Detailed provisions on the operation of DPCOs can be found under the NDPR and Implementation Framework and shall continue to apply to controllers and processors. 

  2. Designating, Registering, and Collecting Fees from Data Controllers and Processors of Major Importance (Section 5(d)).

Following the successful registration of a controller or processor of major importance, the Commission is tasked with publishing a register of such registrants on its website. The Commission is also expected to prescribe fees and levies to be paid by this class of controllers and processors.

  3. Participating in international fora and engaging with national and regional authorities responsible for data protection to develop efficient strategies for the regulation of cross-border transfers of personal data (Section 5(j)).

Currently, the Commission’s predecessor, the NDPB, continues to fulfill this mandate, as seen in its recent participation in initiatives such as the Cross Border Privacy Rules Forum.

  4. Issuing Regulations, Rules, Directives, and Guidance.

The Commission is expected to develop certain regulations as prescribed under the law and as detailed above, including in relation to designating new categories of sensitive data, adequate steps for data breach notification, conducting DPIAs, or issuing data localization regulations for specific categories of personal data.  

Other functions of the Commission include promoting public awareness and understanding of personal data protection, the rights and obligations imposed under the law, and the risks to personal data; receiving complaints alleging violations of the Act or subsidiary legislation; and ensuring compliance with national and international personal data protection obligations and good practice.

In a bid to ensure that the Commission’s services are accessible beyond urban areas, the Commission may establish offices in other parts of Nigeria (Section 3(b)). This should help raise awareness of data protection across the country.

The Commission will be governed by a “Governing Council” (the Council), whose members will be appointed by the President on the recommendation of the Minister, on a part-time basis, drawn from the public and private sectors to serve a term of five years, renewable once. An exception applies to the National Commissioner, who will serve as the Secretary to the Council.

The Council is tasked with providing overall policy direction for the affairs of the Commission; approving strategic plans, action plans, budgets, and support programs submitted by the National Commissioner; and providing advice and counsel to the National Commissioner.

9. Offenses, Sanctions, and Compensation: Higher Penalties for Data Controllers and Processors of Major Importance 

The Act provides a data subject who has suffered injury, loss, or harm arising from a violation of the law with a private right of action that allows recovery of damages in a civil proceeding. Where a controller or processor violates the provisions of the Act or subsidiary legislation, the Commission may issue a compliance order requiring them to take specific measures to remedy the situation within a specified period as well as inform them of their right to a judicial review. The Commission may also impose an enforcement order or a sanction. In issuing an enforcement order or a sanction, the Commission may:

However, it is not clear from the Act what conditions trigger an enforcement order, a sanction, and thus a penalty or other such measure. Under laws such as Kenya’s Data Protection Act (Section 62), failure to comply with the requirements of an enforcement order (referred to as a compliance order under the Nigerian Act) triggers a penalty notice. The Act also does not specify the period within which complaints must be heard and concluded.

The penalty amount depends on whether the violator is a data controller or processor of major importance. Penalties against data controllers or processors of major importance shall be the higher of N10,000,000 (approximately 22,000 USD) or 2% of the annual gross revenue of the preceding financial year. Penalties against other data controllers and processors shall be the higher of N2,000,000 (approximately 4,300 USD) or 2% of the annual gross revenue of the preceding financial year.
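To make the penalty arithmetic concrete, here is a minimal, purely illustrative Python sketch of the maximum penalty calculation described above; the function name and parameters are hypothetical, while the fixed naira amounts and the 2% rate reflect the figures summarized in this section.

```python
def max_penalty_ngn(annual_gross_revenue_ngn: float, major_importance: bool) -> float:
    """Maximum penalty under the Act as summarized above: the higher of the fixed
    naira amount (N10,000,000 for controllers/processors of major importance,
    N2,000,000 otherwise) or 2% of the preceding financial year's gross revenue."""
    fixed_amount = 10_000_000 if major_importance else 2_000_000
    return max(fixed_amount, 0.02 * annual_gross_revenue_ngn)


# Example: a controller of major importance with N2 billion in annual gross revenue
print(max_penalty_ngn(2_000_000_000, major_importance=True))  # 40,000,000.0 (2% exceeds N10m)
```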

The Commission is empowered to issue regulations that establish new offenses and that impose penalties not exceeding those prescribed under the Act (Section 56(3)).

Conclusion 

As Nigeria continues to make its mark within the global digital economy and rapidly expand its technology ecosystem, this Act represents a continued focus on protecting the personal data of Nigerian citizens, in alignment with common internationally accepted principles of data protection. 

However, the Act contains unique provisions that should not be overlooked, including a new classification of data controllers and processors “of major importance” and specific obligations attached to them, as well as broader protections for exempt processing activities. Overall, the Act represents a significant step in Nigerian data protection and notably resolves the long-running dispute regarding the identity and institutional authority of Nigeria’s primary data protection regulator. 

Unveiling China’s Generative AI Regulation

Authors: Yirong Sun and Jingxian Zeng

The following is a guest post to the FPF blog by Yirong Sun, research fellow at the Guarini Institute for Global Legal Studies at NYU School of Law: Global Law & Tech, and Jingxian Zeng, research fellow at the University of Hong Kong Philip K. H. Wong Centre for Chinese Law. The guest blog reflects the opinion of the authors only. Guest blog posts do not necessarily reflect the views of FPF.

The Draft Measures for the Management of Generative AI Services (the “Draft Measures”) were released on April 11, 2023, and their comment period closed on May 10. Public statements by industry participants and legal experts provided insight into the likely content of their comments. It is now the turn of China’s cyber super-regulator, the Cyberspace Administration of China (“CAC”), to consider these comments and likely produce a revised text.

This blog analyzes the provisions and implications of the Draft Measures. It covers the Draft Measures’ scope of application, how they apply to the development and deployment lifecycle of generative AI systems, and how they deal with the ability of generative AI systems to “hallucinate” (that is, produce inaccurate or baseless output). It also highlights potential developments and contextual points about the Draft Measures that industry and observers should pay attention to.

The Draft Measures aim to protect the “collective” interests of “the public” within the territory of the People’s Republic of China (PRC) in relation to the management of generative AI services. The primary risk foreseen by the CAC is the potential use of the novel technology to manipulate public opinion and fuel social mobilization by spreading sensitive or false information. The Draft Measures also seek to tackle issues arising from high-profile societal events, such as data leaks, fraud, privacy breaches, and intellectual property infringements, as well as overseas incidents widely reported in Chinese media, including defamation and extreme cases of suicide following interactions with AI chatbots. Notably, the Draft Measures set high standards for data authenticity and impose safeguards for personal information and user input. They also mandate the disclosure of information that may affect users’ trust and the provision of guidance on using the service rationally.

Meanwhile, concerns have arisen that the Draft Measures may slow down the development of generative AI-based products and services by Chinese tech giants. Companies providing services based on generative AI, including those provided through application programming interfaces (“APIs”), are all subject to stringent requirements in the Draft Measures. The Draft Measures thus concern not only those who have the means to train their own models, but also smaller businesses that want to leverage open-source pre-trained models to deliver services. In this regard, the Draft Measures are likely to present compliance challenges in the open-source context.

While this blog focuses on the Draft Measures, it is important to note that industrial policies from both central and local governments in China also exert substantial influence over the sector. Critically, the task of promoting AI advancement amid escalating concerns is overseen by authorities other than the CAC, such as the Ministry of Science and Technology (“MST”) and the Ministry of Industry and Information Technology (“MIIT”). Recently, the China Academy of Information and Communications Technology (“CAICT”), a research institute affiliated with the MIIT, introduced China’s first-ever industry standards [1] for assessing generative AI products. These agencies, through both competition and coordination, will play a significant role alongside the CAC in the regulation of generative AI.

1. Notable aspects of the Draft Measures’ scope of application: Definition of “public” and extraterritorial application

Ambiguity in the definition of “public”

The Draft Measures regulate all generative AI-based services offered to “the public within the PRC territory.” [2] This scope of application diverges from existing Chinese laws and regulations, where intended service recipients are not usually considered. For instance, regulations targeting deep synthesis and recommendation algorithms both apply to the provision of services using these technologies regardless of whether the recipients are individuals, businesses, or “the public.” Looking at its context, Article 6 of the Draft Measures suggests that generative AI-based services have the potential to shape public opinion or stimulate social mobilization, essentially highlighting their impact on “the public.” This new development thus likely signifies the CAC’s goal to prioritize the protection of wider societal interests over individual ones, such as privacy or intellectual property, which could be protected under previous regulations.

However, the Draft Measures leave “the public (公众)” undefined. This gives rise to ambiguity as to the scope of application for the Draft Measures. For example, would a service licensed exclusively to a Chinese private entity for in-house use fall in the scope? How about a service accessible only to certain public institutes but not to the unaffiliated, or one customized for individual clients who each receive a unique product derived from a common foundation model, or simply an open-source model that is ready to download and install?

Extraterritorial application

The new approach also suggests a more extensive extraterritorial reach. Regardless of where a service is provided, as long as the public within the PRC territory has access to it, the Draft Measures apply. To avoid being subject to Chinese law, OpenAI, for example, has reportedly begun blocking users based in mainland China. This development could further restrict Chinese users’ access to overseas generative AI services, especially since even before the Draft Measures were released, most Chinese users’ access to such services was already geo-blocked, either by the service providers themselves (e.g., by requiring a foreign telephone number for registration) or by the Chinese government through enforcement measures. At the same time, the scale of China’s user market and its involvement in AI development render it a “vital” jurisdiction in terms of AI regulation. OpenAI’s CEO has recently called for collaboration with China to counter AI risks, a trend we may see more of in the future.

2. The Draft Measures adopt a compliance approach based on the lifecycle of generative AI systems 

The Draft Measures are targeted at “providers” of generative AI-based services

The Draft Measures take the approach of regulating generative AI-based service providers. As per Article 5, “providers (提供者)” are those “using generative AI to offer services such as chat, text, image, audio generation; including providing programmable interface and other means which support others to themselves generate text, images, audio, etc.” The obligations are as follows:

Incentivizing providers to allocate risk upstream to developers

By imposing lifecycle compliance obligations on the end-providers, the Draft Measures create incentives for end-providers to allocate risks to upstream developers through mechanisms like contracts. Whether the parties can distribute their rights and obligations fairly and efficiently depends on various factors, such as the resources available to them and the presence of asymmetric information among them. To better direct this “private ordering” with significant social implications, the EU has planned to create non-binding standard contractual clauses based on each party’s level of control in the AI value chain. The CAC’s stance in this new and fast-moving area remains to be seen.

The Draft Measures pose potential challenges for deploying open-source generative AI systems

Open-source models raise a related but distinct issue. Open-source communities are currently developing highly capable large language models (“LLMs”), and businesses have compelling commercial incentives to adopt them, as training a model from scratch is relatively hard. However, many open-source models are released without full disclosure of their training datasets, for reasons such as the extensive effort required for data cleaning and privacy issues, especially when user data is involved. Adding to this complexity is the fact that open-source LLMs are not typically trained in isolation. Rather, they form a modification chain in which models build on top of each other, with modifications made by different contributors. Consequently, for those using open-source models, several obligations in the Draft Measures become difficult or even impossible to fulfill, including pre-launch assessment, post-launch retraining, and information disclosure.

3. The Draft Measures target the “hallucination” of generative AI systems

The Draft Measures describe generative AI as “technologies generating text, image, audio, video, code, or other such content based on algorithms, models, or rules.” In contrast to the EU’s new compromise text on rules for generative AI, which adopts a technical definition of “foundation models,” the Draft Measures focus on the technology’s function, regardless of its underlying mechanisms. Moreover, according to Article 6 of the Draft Measures, generative AI-based services automatically fall under the scope of the Regulations for the Security Assessment of Internet Information Services Having Public Opinion Properties or Social Mobilization Capacity, which mandate a security assessment. A group of seven Chinese scholars has proposed removing this provision and applying the security assessment only to services that actually possess these properties.

The Draft Measures contain provisions targeted at ensuring accuracy throughout the developmental lifecycle of generative AI systems. These echo the CAC’s primary concern that the technology could be misused to generate and disseminate misinformation. Article 7(4) of the Draft Measures stipulates that providers must guarantee the “veracity, accuracy, objectivity, and diversity” of the training data. Article 4(4) of the Draft Measures requires that all generated content be “true and accurate,” and that providers of generative AI-based products and services put measures in place to “prevent the generation of false information.” Such providers are responsible for filtering out any non-compliant material and preventing its regeneration within three months (Article 15). However, industry representatives and legal practitioners in China have raised concerns about the baseline and technical feasibility of ensuring data authenticity, given the use of open internet information and synthetic data in the development of generative AI.

4. Looking Ahead

The CAC is expected to refine the Draft Measures after gathering public feedback. The final version and subsequent promulgation may be influenced by a broader set of contextual factors. We believe the following aspects also warrant consideration:

[1] Major Chinese players in the AI industry are forming interest groups to channel their influence on policymakers. For example, China’s industry standards for generative AI were drafted by over 40 entities, including tech companies such as Baidu, SenseTime, Xiaomi, and NetEase. SenseTime also launched an open platform for AI safety governance to shape practices around AI regulatory issues such as cybersecurity, traceability, and IP protection.

[2] A widely circulated translation of Article 2 states: “These Measures apply to the research, development, and use of products with generative AI functions, and to the provision of services to the public within the territory of the People’s Republic of China.” However, we believe this is misleading. A more accurate reading of the original Chinese text and its context suggests that “the provision of services to the public” is a cumulative requirement rather than a separate one.

[3] The Draft Measures seem to exhibit technical sophistication in their terminology. In Articles 7 and 17, the data compliance obligation is split into two phases: pre-training and optimization. However, the choice of terminology is peculiar, as the prevailing terms in machine learning are pre-training and fine-tuning. Optimization is typically employed to describe a stage within the training process, often used in conjunction with forward and backward propagation.

First Japan Privacy Symposium Convening G7 Regulators Focuses on Global Trends and Enforcement Priorities

The Future of Privacy Forum (FPF), a global non-profit focused on data protection and privacy, and S&K Brussels LPC will jointly present the first edition of the Japan Privacy Symposium on June 22, 2023. The event will convene in Tokyo, bringing together leaders in the Japanese privacy community with data protection and privacy regulators from across the globe.

The event coincides with the G7 Data Protection Authorities and Privacy Commissioners’ Summit, and the Symposium will focus on key issues in AI governance and data protection law, the future of adtech, global cooperation, and enforcement trends. The line-up of speakers includes Ms. Rebecca Kelly Slaughter (Commissioner, U.S. Federal Trade Commission), Dr. Wojciech Wiewiórowski (European Data Protection Supervisor), Mr. Philippe Dufresne (Federal Privacy Commissioner, Canada), Ms. Ginevra Cerrina Feroni (Vice President of the Garante, Italy), and Mr. John Edwards (Information Commissioner, UK), with a keynote address from Mr. Shuhei Ohshima (Commissioner, Japan’s Personal Information Protection Commission).

“We’re excited to co-host this valuable event that will bring together data protection and privacy regulators from around the world alongside the Japanese privacy community,” Gabriela Zanfir-Fortuna, FPF’s Vice President for Global Privacy, said. “Data protection and privacy regulators from the G7 economies are meeting in Tokyo to strategize about coordinated approaches to tackle the challenges raised by the advancement of new technologies fueled by data and their impact on society, people, and economy. This Symposium offers a forum for the regulators and the Japanese data protection and privacy community members to exchange ideas, share an overview of the state of play in global regulation and strategize for the future.”

Takeshige Sugimoto, Managing Director and Partner at S&K Brussels LPC, FPF’s Senior Fellow for Global Privacy, and Co-Founder and Board Member of Japan DPO Association, added: “S&K Brussels is delighted to co-host the inaugural Japanese privacy symposium to bring together esteemed privacy and data protection leaders from G7 countries.  Opportunities for collaboration in the global data protection and privacy community are vital, and we hope that the Japan Privacy Symposium will set the stage for important participation and dialogue for years to come.”

FPF is focused on expanding its international reach in Asia, having opened its Asia-Pacific office in Singapore in August 2021 and announced Josh Lee Kok Thong as the new FPF APAC Managing Director last July.

For more information about the event, the agenda, and speakers, visit the FPF site.

###

About Future of Privacy Forum (FPF)

The Future of Privacy Forum (FPF) is a global non-profit organization that brings together academics, civil society, government officials, and industry to evaluate the societal, policy, and legal implications of data use, identify the risks and develop appropriate protections.

FPF believes technology and data can benefit society and improve lives if the right laws, policies, and rules are in place. FPF has offices in Washington D.C., Brussels, Singapore, and Tel Aviv. Follow FPF on Twitter and LinkedIn.

About S&K Brussels LPC

S&K Brussels LPC, which opened in Brussels, Belgium in 2019, is a Japanese law firm composed of Japanese-qualified and foreign-qualified lawyers whose main practice area is data protection and privacy law in five jurisdictions: the EU, UK, US, China, and Japan. The firm focuses on “Future Proof” efforts to anticipate the shape of future regulations, including AI regulation and other data-related regulations closely related to data protection legislation.

The Future of Privacy Forum (FPF), a global non-profit focused on data protection and privacy, and S&K Brussels LPC will jointly hold the first Japan Privacy Symposium on June 22, 2023. The event will take place in Tokyo, bringing together leaders of the Japanese privacy community and data protection and privacy regulators from around the world.

Held alongside the G7 Data Protection Authorities and Privacy Commissioners’ Summit, the Symposium will bring together leaders of the Japanese privacy community and data protection and privacy regulators from around the world to discuss key issues in AI governance and data protection law, the future of adtech, global cooperation, and enforcement trends. Panelists will include Ms. Rebecca Kelly Slaughter (Commissioner, U.S. Federal Trade Commission), Dr. Wojciech Wiewiórowski (European Data Protection Supervisor), Mr. Philippe Dufresne (Federal Privacy Commissioner of Canada), Ms. Ginevra Cerrina Feroni (Vice President of the Garante, Italy’s data protection authority), and Mr. John Edwards (Information Commissioner, UK Information Commissioner’s Office), with a keynote address from Mr. Shuhei Ohshima (Commissioner, Japan’s Personal Information Protection Commission).

Gabriela Zanfir-Fortuna, FPF’s Vice President for Global Privacy, said: “We are delighted to co-host this valuable event, which brings together data protection and privacy regulators from around the world and the Japanese privacy community. Data protection and privacy regulators from the G7 economies are gathering in Tokyo to strategize on coordinated approaches to the challenges posed by the advancement of new data-driven technologies and their impact on society, people, and the economy. This Symposium offers a forum for regulators and members of Japan’s data protection and privacy community to exchange views, share the state of play in global regulation, and strategize for the future.”

Takeshige Sugimoto, Managing Director and Partner at S&K Brussels LPC, FPF Senior Fellow for Global Privacy, and Co-Founder and Board Member of the Japan DPO Association, said: “S&K Brussels is delighted to co-host the inaugural Japan Privacy Symposium, which brings together esteemed privacy and data protection leaders from the G7 countries. Opportunities for collaboration in the global data protection and privacy community are vital, and we hope that the Japan Privacy Symposium will set the stage for important participation and dialogue for years to come.”

FPF is focused on expanding its international reach in Asia, having opened its Asia-Pacific office (FPF APAC) in Singapore in August 2021 and announced the appointment of Josh Lee Kok Thong as the new FPF APAC Managing Director last July.

For more information about the event, the agenda, and speakers, please visit the FPF website.

###

About the Future of Privacy Forum (FPF)

The Future of Privacy Forum (FPF) is a global non-profit organization that brings together academics, civil society, government officials, and industry to evaluate the societal, policy, and legal implications of data use, identify risks, and develop appropriate protections. FPF believes that technology and data can benefit society and improve lives if the right laws, policies, and rules are in place. FPF has offices in Washington D.C., Brussels, Singapore, and Tel Aviv. Follow FPF on Twitter and LinkedIn.

About S&K Brussels LPC

S&K Brussels LPC, which opened in Brussels, Belgium in 2019, is a Japanese law firm composed of Japanese-qualified and foreign-qualified lawyers whose main practice area is data protection law in five jurisdictions: the EU, UK, US, China, and Japan. The firm focuses on “Future Proof” efforts to anticipate the shape of future regulations, including AI regulation and other data-related regulations closely related to data protection legislation.

FPF at CPDP 2023: Covering Hot Topics, from Data Protection by Design and by Default, to International Data Transfers and Machine Learning

At this year’s annual Computers, Privacy and Data Protection (CPDP) conference in Brussels, several Future of Privacy Forum (FPF) staff took part in panels organized by FPF as well as by academic, industry, and civil society groups. This blog post provides a brief overview of these events; CPDP will publish recordings of them shortly.

May 24: EU Commission and ASEAN launch Joint Guide to Model Clauses for Data Transfers

On the conference’s first day, FPF Vice President for Global Privacy Gabriela Zanfir-Fortuna joined a panel on the GDPR’s effectiveness organized by the Haifa Center for Law and Technology (Faculty of Law, University of Haifa), alongside Tal Zarsky, Dean and Professor of Law at the University of Haifa’s Faculty of Law; Raphael Gellert, assistant professor in ICT and private law at Radboud University; Sam Jungyun Choi, associate in the technology regulatory group of Covington & Burling LLP; and Amit Ashkenazi, research student and adjunct lecturer on cyber law and policy at the University of Haifa. The panel contributed to current reflections on the effectiveness of the GDPR, five years after it became applicable, by focusing on challenges arising from the regulatory design it sets forth. Gabriela noted that data protection law is much broader than the GDPR, as the right to the protection of personal data is protected as a fundamental right at the constitutional level in the EU. She also stressed that ongoing application of the GDPR has catalyzed greater societal interest in law, technology, and data protection rights and concepts.


Photo: CPDP Panel on Exploring the Many Faces of the GDPR – in Search of Effective Data Protection Regulation, 5/24/2023

Later that day, Gabriela moderated a panel organized by the European Commission, which served as a platform to launch the “Joint Guide to ASEAN Model Contractual Clauses and EU Standard Contractual Clauses.” The Guide identifies commonalities between the two sets of model clauses and aims to “assist companies present in both jurisdictions with their compliance efforts under both sets of clauses.” The panel included Denise Wong, Deputy Commissioner-designate of the Personal Data Protection Commission Singapore and Assistant Chief Executive-designate of the Infocomm Media Development Authority; Alisa Vekeman of the European Commission’s International Affairs and Data Flow team; and Philipp Raether, Group Chief Privacy Officer at Allianz. The panelists noted that model clauses are the most used mechanism for international data transfers and that efforts like the Joint Guide are a promising solution for a global regime underpinning flows of personal data across jurisdictions while providing safeguards for individuals and their data. Officials on the panel noted that the Guide is just the first step in this EU-ASEAN collaboration on model clauses, adding that a set of best practices from companies that use both is expected in the near future.

To wrap up the first day of the conference, FPF's Policy Counsel for Global Privacy, Katerina Demetzou, joined a panel on the constitutionalization of data rights in the Global South, along with Mariana Marques Rielli, Institutional Development Coordinator at Data Privacy Brazil; Laura Schertel Mendes, Professor of Law at the University of Brasilia (UnB) and at the Brazilian Institute for Development, Education and Research (IDP), and Senior Visiting Researcher at the Goethe-Universität Frankfurt am Main with the Capes/Alexander von Humboldt Fellowship; and Risper Onyango, advocate of the High Court of Kenya, currently serving as Digital Policy Lead in the Digital Economy Department at the Lawyers Hub. In her intervention, Katerina explored how Data Protection Authorities in Europe have applied the GDPR to emotion recognition AI systems and to Generative AI. Her examples emphasized that discussions about AI governance and regulation should examine how existing data protection law already applies to these systems and develop in response to gaps in those legal frameworks.


Photo: Panel on From Theory to Practice: Digital Constitutionalism and Data Justice in Movement in the Global South, 5/24/2023

May 25: High Level Discussion Spurred by FPF’s Data Protection by Design and by Default Case-Law Report

On May 17, FPF launched a comprehensive Report on the enforcement of the EU's GDPR Data Protection by Design and by Default (DPbD&bD) obligations, which are outlined in GDPR Article 25. The Report is informed by extensive research covering more than 92 decisions from Data Protection Authorities (DPAs) and national courts, as well as specific guidance and other policy documents issued by regulators.

On May 25, FPF organized a panel moderated by the Report’s co-author Christina Michelakaki, FPF Policy Fellow for Global Privacy, on the enforcement of Article 25 GDPR and the uptake of Privacy Enhancing Technologies (PETs). Marit Hansen, State Data Protection Commissioner of Land Schleswig-Holstein, Jaap-Henk Hoepman, Professor, Radboud University Nijmegen/University of Groningen, Cameron Russell, Primary Privacy Advisor on Global Payments Matters at eBay, and Stefano Leucci, Legal and Technology expert at the European Data Protection Supervisor joined the panel. The speakers offered their perspectives on the Article 25 GDPR enforcement, delving into topics such as the interrelation between dark patterns and by default settings, the role of Article 25 GDPR in preventing harms from AI systems, and the maturity of PETs. 

Photos: CPDP workshop on State-of-Play of Privacy Preserving Machine Learning (PPML), and CPDP Panel on the Enforcement of Data Protection by Design & Default: Consequences for the Uptake of Privacy-Enhancing Technologies, 5/25/2023


Photo: CPDP Panel on the Enforcement of Data Protection by Design & Default: Consequences for the Uptake of Privacy-Enhancing Technologies, 5/25/2023

FPF's Managing Director for Europe, Rob van Eijk, organized and facilitated a workshop exploring how to clear the path toward alternative solutions for processing (personal) data with machine learning. Four data scientists joined the workshop: Lindsay Carignan (Holistic AI), Nigel Kingsman (Holistic AI), Victor Ruehle (Microsoft Research), and Reza Shokri (National University of Singapore). The group introduced an easy-to-understand privacy auditing framework that quantitatively measures privacy risks in ML systems, while also exploring the relationship between bias and regulation in legislation such as the EU AI Act. You can watch the recording of the workshop here.

The same day, Rob also joined a panel on PETs, consumer protection, and the online ads ecosystem with Marek Steffen Jansen, Privacy Policy Lead – EMEA/Global at Google, Anthony Chavez, VP of Product Management of Google, Marie-Paule Benassi, lawyer, economist, data scientist, and Head of Enforcement of Consumer Law and Redress at the European Commission, Stefan Hanloser, VP Data Protection Law at ProSiebenSat.1 Media SE, and Christian Reimsbach-Kounatze, Information Economist and Policy Analyst at the OECD Directorate for Science, Technology and Innovation. You can watch the recording of the panel here.

May 26: Reflections on automation, compliance and data protection law

Finally, Gabriela participated in a day-long "philosopher's seminar" on compliance and automation in data protection law, organized by CPDP, ALTEP-DP, and COHUBICOL under the leadership of Prof. Mireille Hildebrandt, which will feed into a series of published research papers later in 2023.

While celebrating the fifth anniversary of the GDPR becoming applicable, at a pivotal moment of growth and change for emerging technologies, CPDP 2023 in Brussels gave the FPF team an extraordinary opportunity to engage with and facilitate collaborative dialogues with leading academics, technologists, policy experts, and regulators.

New FPF Report: Unlocking Data Protection by Design and by Default: Lessons from the Enforcement of Article 25 GDPR

On May 17, the Future of Privacy Forum launched a new report on enforcement of the EU’s GDPR Data Protection by Design and by Default (DPbD&bD) obligations, which are outlined in GDPR Article 25. The Report draws from more than 92 data protection authority (DPA) cases, court rulings, and guidelines from 16 EEA member states, the UK, and the EDPB to provide an analysis of enforcement trends regarding Article 25. The identified cases cover a spectrum of personal data processing activities, from accessing online services and platforms, to tools for educational and employment contexts, to “emotion recognition” AI systems for customer support, and many more.

The Report aims to explore the effectiveness of the DPbD&bD obligations in practice, informed by how DPAs and courts have enforced Article 25. For instance, we analyze whether DPAs and courts find breaches of Article 25 without links to other infringements of the Regulation, and which provisions enforcers most often apply together with Article 25, including the general data protection principles and the data security requirements under Article 32. We also look at what controls and controller behavior are and are not deemed sufficient to comply with Article 25.

The GDPR’s DPbD&bD provisions in Article 25 oblige controllers to: 1) adopt technical and organizational measures (TOMs) that, by design, implement data protection principles into data processing and protect the rights of individuals whose personal data is processed; and 2) ensure that only personal data necessary for each specific purpose is processed. Given the breadth of these obligations, it has been argued that Article 25 makes the GDPR “stick” by bridging the gap between its legal text and practical implementation. GDPR’s DPbD&bD obligations are seen as a tool to enhance accountability for data controllers, implement data protection effectively, and add emphasis to the proactive implementation of data protection safeguards.

Our analysis of the enforcement, and ultimately the effectiveness, of Article 25 is all the more important given the increasing development and deployment of novel technologies involving very complex personal data processing, like Generative AI, and rising data protection concerns. Understanding how Article 25 obligations manifest in practice, and the requirements of DPbD&bD, may prove essential for the next technological age.

This Report outlines and explores the key elements of GDPR Article 25, including the:

Additionally, we analyze the individual concepts of “by Design” and “by Default,” identify divergent enforcement trends, and explore three common applications of Article 25 (direct marketing, privacy preservation and Privacy Enhancing Technologies (PETs), and EdTech). This Report also includes a number of Annexes that seek to provide more information on the specific cases analyzed and a comparative overview of DPA enforcement actions. 

Our analysis determines that European DPAs diverge in how they interpret the preventive nature of Article 25 GDPR. Some are reluctant to find violations in cases of isolated incidents or where Article 5 GDPR principles are not violated, while others apply Article 25 preventively before further GDPR breaches or even planned data processing. Our research also finds that most DPAs are reluctant to specify appropriate protective measures and to explicitly outline the role of PETs. Ultimately, the Report shows that despite the novelty of Article 25, and the criticism surrounding its vague and abstract wording, it is a frequent source of some of the highest GDPR fines, highlighting the need for organizations to maintain a firm grasp over the concepts of DPbD&bD.

Vietnam’s Personal Data Protection Decree: Overview, Key Takeaways, and Context

Author: Kat MH Hille

The following is a guest post to the FPF blog from Kat MH Hille, an attorney with expertise in corporate, aviation, and data protection law. She graduated with a J.D. from the University of Iowa, School of Law, and has extensive experience practicing law in both the United States and Vietnam (contact: https://www.linkedin.com/in/katmhh/). The guest blog reflects the opinion of the author only. Guest blog posts do not necessarily reflect the views of FPF.

On April 17, 2023, the Vietnamese Government promulgated the Decree on Personal Data Protection (Decree), which was initially published as a draft on February 9, 2021 and went through several revisions. Before the Decree's issuance, personal data protection in Vietnam was governed by 19 different laws and regulations, resulting in a fragmented legal framework. The Decree aims to fill these gaps and provide a comprehensive and uniform approach to personal data protection in Vietnam, extending safeguards for personal data to over 97 million people.

This post provides an overview of the Decree, including key dates, context, legal effects, and requirements, and how they compare with other comprehensive data protection law regimes around the world. Building on this foundation, it then highlights certain key provisions and notable features of the Decree that warrant attention, including:

These provisions will be discussed in detail below.

1. Overview

The Decree is significant despite its lower status in Vietnam’s hierarchy of laws

As personal data protection is a new and developing area of law in Vietnam, Vietnam's first legislative instrument on personal data protection takes the form of a "decree," which ranks lower in Vietnam's statutory hierarchy than a code or law and is the result of executive action. A benefit of enacting a decree is that it can be done more easily, without the need for approval from the National Assembly. Nevertheless, the Vietnamese Government's goal is ultimately to enact a comprehensive and robust law for effective and enforceable personal data protection in 2024, according to a Decision issued by the Prime Minister in January 2022.

However, the Decree’s status means that in the event of conflicting regulations on the same issue, codes and laws would take precedence over the Decree. That said, the Decree remains the first comprehensive personal data protection regulation in Vietnam. Despite its lower legal status,  the Decree still carries significant weight and impact in regulating personal data protection in Vietnam, and those who fail to comply with its provisions will still face legal consequences.

The Decree incorporates a unique blend of global standards and Vietnamese characteristics

Like other data protection laws inspired by the European Union (EU)’s General Data Protection Regulation (GDPR), the Decree sets out the responsibilities of organizations and individuals that process personal data, as well as the rights of  individuals over their personal data. 

However, the Decree also includes unique provisions that are specific to Vietnam's context, such as a prohibition on the sale and purchase of personal data through any means, unless otherwise provided by law (Article 3.4), which may have significant consequences for data brokers and other businesses engaged in the commodification of personal data. Additionally, organizing the collection, transfer, purchase, or sale of personal data without the data subject's consent, as well as establishing software systems or implementing technical measures for these purposes, constitutes a violation of the Decree.

The Decree introduces the concept of “Personal Data Controllers and Processors,” which are entities or individuals that function both as Personal Data Controllers and Personal Data Processors. This definition is unique to the Decree and distinguishes it from other data protection laws around the world that typically only recognize the separate categories of Personal Data Controllers and Personal Data Processors. While the inclusion of Personal Data Controllers and Processors is meant to provide greater clarity and precision in defining the roles and responsibilities of different actors involved in personal data processing, it may actually add unnecessary complexity to the already complex landscape of privacy laws. This is because a single entity could be classified as both a Personal Data Controller and a Personal Data Processor depending on the specific definition being used, making it difficult to navigate and comply with the requirements of different privacy laws across different jurisdictions.

Further, the enacted Decree does not include a specific fine structure for violation of the Decree (the 2021 draft of the Decree proposed specific fines for single violations of the Decree, including fines of up to 5% of a personal data processor’s revenue for the most serious violations). Rather, the enacted Decree outlines a general provision that violators may be subject to disciplinary action, administrative penalties, or criminal prosecution, depending on the seriousness of the offense. 

Furthermore, compared with the 2021 draft of the Decree, the final Decree does not provide for the establishment of a personal data protection commission to enforce the regulation. Rather, the Decree assigns responsibility for enforcing its requirements to an existing agency within the Ministry of Public Security (MPS), the Cybersecurity and High-Tech Crime Prevention Department (A05).

While the MPS will need to clarify key provisions in subsequent regulations, the Decree creates the first comprehensive foundation to govern data processing activities in Vietnam. The Decree will take effect on July 1, 2023, giving organizations just over two months to make the necessary adjustments to their business and operations in order to comply with the new regulations. Significant aspects of the Decree are explored below in greater detail.

2. The (extra)territorial scope introduces a nationality criterion for covered entities

The Decree applies to Vietnamese agencies, organizations, and individuals (whether based within or outside of Vietnam), and to foreign agencies, organizations, and individuals that are either based in Vietnam or that are based overseas and directly participate in or are otherwise involved in personal data processing activities in Vietnam. 

Note that “personal data processing” covers a wide range of activities in relation to personal data, including collection, recording, analysis, verification, storage, alteration, disclosure, combination, access, retrieval, erasure, encryption, decryption, copying, sharing, transmission, provision, transfer, and deletion, as well as other related actions (Article 2.7).

There is still ambiguity as to the distinction between being “involved in” and “directly participating in” personal data processing activities, as well as the level of involvement with such activities that would bring a party within the scope of the Decree. Clarity on these issues through further regulations or guidance would be useful, especially considering that many third-party service providers or software vendors may arguably have some involvement in processing personal data.

3. The Decree recognizes a slightly different set of covered actors than other data protection laws

The Decree covers four categories of parties who process personal data: Personal Data Controllers (PDCs), Personal Data Processors (PDPs), Personal Data Controllers and Processors (PDCPs), and Third Parties (TPs).

In recognizing a distinction between controllers and processors, the final Decree removes ambiguity that was present in the 2021 draft of the Decree, which only provided for two categories of actors: personal data processors and third parties.

4. New processing principles, such as “no sale and purchase of personal data by any means”

The Decree outlines eight principles that govern data processing activities, which are similar to those recognized by the GDPR, including lawfulness, transparency of processing, purpose limitation, data minimization, accuracy, storage limitation, and appropriate measures to ensure the security of personal data. However, there are some notable differences.

Sale or Purchase of Personal Data: The Decree takes a more stringent stance than the GDPR by explicitly prohibiting the sale and purchase of personal data in any form, unless otherwise permitted by law. However, another provision in the Decree states that the act of "setting up software systems, technical measures or organization of the … purchase and sale of personal data without the consent of the data subject" is a violation (Article 22). Read together, the two provisions appear to imply that purchase or sale with the data subject's consent could be permissible. Given this ambiguity, further clarification is needed.

This stringent prohibition is a direct response to the numerous cases of personal data misuse that have occurred in Vietnam in recent years, including identity theft, financial fraud, intrusive advertising, and the exploitation of vulnerable individuals. A report showed that in 2022 alone, more than 17 million pieces of personal data were illegally harvested and sold for fraud, and that each personal data entry was traded 987 times per day. However, the inclusion of a strict prohibition may conversely have a significant impact on industries that rely heavily on the use of personal data to drive innovation and business growth. It is possible that future circulars or guidelines may provide more clarity on this issue, including potential exceptions or allowances for certain use cases.

Notwithstanding this broad prohibition, PDCs and PDCPs may still share personal data with others if they obtain the data subject’s consent to do so, except when such sharing could harm national defense, national security, or public order and safety or could affect the safety or physical or mental health of others (Article 14). However, business entities and individuals providing marketing, product launching, and advertising services may only utilize personal data of their customers collected through their own business activities for conducting such services, if they obtain the data subject’s consent (Article 21).

Purpose Limitation: The Decree imposes a stricter purpose limitation compared to the GDPR, which allows for additional processing if it is compatible with the original purpose. Under the Decree, personal data can only be processed for the specific purposes that have been “registered” or “declared” by the PDC, PDP, PDCP, or TP. This requires these entities to ensure that their data processing activities do not deviate from or expand upon the registered and declared purposes. However, it is important to note that the Decree does not provide any guidance on how processing purposes are to be registered.

5. Covered data: broad definition of sensitive personal data, and stricter accountability rules for its processing

The Decree provides a broad definition of personal data, aligned with other comprehensive data protection laws. It defines personal data as any information expressed in the form of symbols, text, digits, images, sounds, or similar forms in an electronic environment that is associated with a particular natural person or helps identify a particular natural person. Personally identifiable information means any information that is formed from the activities of an individual and that, when used with other maintained data and information, can identify such particular natural person.

The Decree categorizes personal data into two groups: basic personal data and sensitive personal data, and includes an additional set of rules for the latter. 

Basic personal data includes the following forms of personal data:

Sensitive personal data is defined as personal data related to an individual’s privacy, a breach of which would directly affect the individual’s legitimate rights and interests. 

The Decree provides a non-exhaustive list of types of personal data that would be considered sensitive, including:

The list of sensitive personal data provided is more extensive than the GDPR’s definition of sensitive personal data. It includes types of data such as customer information from financial institutions and location data obtained through location services. As non-cash transactions and targeted advertising become increasingly prevalent in Vietnam, these types of data are frequently collected by most businesses. As a result, a wider range of entities, including small and medium businesses, may be subject to sensitive personal data protection requirements due to the broad scope of the list.

The Decree imposes more stringent protection measures for sensitive personal data than for basic personal data. For instance, regulated entities that process sensitive personal data must specifically notify data subjects of any processing of their sensitive personal data. Organizations covered by the Decree must also designate a department within their organization and appoint an officer who will be responsible for overseeing the protection of sensitive personal data and communicating with the A05.

Nevertheless, it is important to note that small, medium, and start-up enterprises are given a grace period of two years from their establishment to comply with these sensitive data requirements, unless they are directly engaged in processing personal data (Article 43). To qualify for the exemption, in accordance with Decree No. 80/2021/ND-CP (2021) on Elaboration of Articles of the Law on Provision of Assistance for Small and Medium Enterprises, companies in the agriculture, forestry, aquaculture, industrial, and construction sectors must have fewer than 200 employees and either annual revenue below 200 billion Vietnamese dong (approximately 8.7 million USD) or total capital below 100 billion Vietnamese dong (approximately 4.3 million USD), while commercial and service sector companies must have fewer than 100 employees and either annual revenue below 300 billion Vietnamese dong (approximately 13 million USD) or total capital below 100 billion Vietnamese dong (approximately 4.3 million USD).
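As a rough illustration of how these thresholds fit together, the following is a minimal sketch of one possible reading of the grace-period conditions; the sector groupings, function name, and the treatment of the revenue-or-capital alternative are illustrative assumptions, not an official eligibility test.

```python
# Hedged sketch of the SME grace-period thresholds described above
# (Article 43 of the Decree read with Decree No. 80/2021/ND-CP).
# Sector groupings and the reading of the revenue/capital alternative
# are illustrative assumptions, not an official test.

BILLION_VND = 1_000_000_000

# sector group -> (max employees, revenue cap in VND, capital cap in VND)
THRESHOLDS = {
    "agriculture_forestry_aquaculture_industry_construction": (200, 200 * BILLION_VND, 100 * BILLION_VND),
    "commerce_services": (100, 300 * BILLION_VND, 100 * BILLION_VND),
}


def appears_to_qualify(sector_group: str,
                       employees: int,
                       annual_revenue_vnd: int,
                       total_capital_vnd: int,
                       directly_processes_personal_data: bool) -> bool:
    """Rough check of whether an enterprise falls within the 2-year grace period."""
    if directly_processes_personal_data:
        # The exemption does not cover enterprises directly engaged in processing personal data.
        return False
    max_employees, revenue_cap, capital_cap = THRESHOLDS[sector_group]
    # Read here as: headcount cap plus either the revenue cap or the capital cap.
    return employees < max_employees and (
        annual_revenue_vnd < revenue_cap or total_capital_vnd < capital_cap
    )


# Example: an 80-employee services company with 150 billion VND in annual revenue
print(appears_to_qualify("commerce_services", 80, 150 * BILLION_VND, 120 * BILLION_VND, False))  # True
```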

6. Legal bases for processing personal data: no “legitimate interests,” but introducing “publicly disclosed” personal data

The Decree recognizes six legal bases for processing personal data, namely:

Additionally, under Article 18 of the Decree, competent governmental agencies may obtain personal data from audio and video recording activities in public places without the consent of data subjects. However, when conducting recording activities, the authorized agencies and organizations are responsible for informing data subjects that they are being recorded.

Notably, the Decree does not provide a "legitimate interests" lawful ground like the GDPR. Nevertheless, legitimate interests are recognized in other provisions of the Decree. In particular, Article 8 stipulates "Prohibited Acts," including processing personal data to create information that affects the "legitimate rights and interests of other organizations and individuals."

As for “valid consent”, there are several conditions that must be met when obtaining it, pursuant to Article 11 of the Decree:

The given consent remains valid until it is withdrawn by the data subject or until a competent state agency requests otherwise in writing. PDCs and PDCPs bear the burden of proof in case of a dispute regarding the lack of consent from a data subject. 

Data subjects may request to withdraw their consent to processing of their personal data (Article 12). When a data subject does so, the PDC or PDCP must inform the data subject of any potential negative consequences or harms from the withdrawal of consent.

If the data subject still wishes to proceed, all parties involved in processing the personal data, including the PDC or PDCP and any PDPs or TPs, must cease processing the personal data. There is no set time frame for fulfilling this obligation, but it should be done within a reasonable period of time. 

The withdrawal of consent must be in a format that can be printed, copied in written form, or verified electronically. The withdrawal of consent shall not render unlawful any data processing activities that were lawfully performed based on the consent given prior to the withdrawal.

7. The rights of the data subject include transparency and control rights, but also rights to legal remedies

Article 9 of the Decree provides data subjects with 11 rights over their personal data, which are linked to corresponding obligations on entities that process personal data:

Note that all of these rights are subject to exceptions provided by the Decree or other relevant laws.

7.1. Transparency requirements include detailed notices and access rights on a tight deadline

According to Articles 11 and 13, before processing a data subject's personal data, a PDC or PDCP must provide a notification to the data subject containing the following information:

However, such notification is not required when personal data is being processed by a competent state authority or if the data subjects have been fully informed of, and have given valid consent to, the processing of their personal data.

Data subjects have the right to request that PDCs and PDCPs provide them with a copy of their personal data or share a copy of their personal data to a third party acting on their behalf (Article 14). The PDC or PDCP must fulfill such a request within 72 hours of receiving it. 

The request must be submitted in the Vietnamese language and made in a standardized format as set out in the Appendix to the Decree. The request must include the requester's full name; residential address; national identification number, citizen identification card number, or passport number; fax number, telephone number, and email address (if any); and the form of access and the reason and purpose for requesting the personal data. The data subject must also specify the name of the document, file, or record to which their request pertains (Article 14.6). This requirement can impose a significant burden on data subjects, as they may not always be fully aware of which documents or records contain their personal data. Additionally, the complexity of data processing can further complicate matters and make it difficult for the data subject to identify the relevant documents.

It is important to note that, unlike the GDPR, the Decree does not require a PDC or PDCP to provide data subjects with comprehensive information about the processing of their personal data in a concise, transparent, intelligible, and easily accessible form, using clear and plain language. 

Moreover, there are certain circumstances in which a PDC or PDCP are not required to provide the data subject with a copy of their personal data. These include where:

7.2. The Decree provides for an absolute right to object to processing, as well as correction and deletion rights

A PDC or PDCP must promptly fulfill a data subject's request to access their personal data, to correct their personal data themselves, or to have their personal data corrected, according to Article 15.

A PDP and any third party are authorized to edit the personal data of a data subject only after obtaining written consent from the PDC or PDCP and ensuring that the data subject has given their consent.

If the PDC or PDCP is unable to fulfill the request due to technical or other reasons, the PDC or PDCP must notify the data subject within 72 hours. 

If a data subject requests that the processing of their personal data be restricted or otherwise objects to the processing of their personal data, the PDC or PDCP must respond to the request within 72 hours of receiving it (Article 9). 

One important difference from the GDPR is that the Decree does not provide any exceptions to this requirement. Under the GDPR, a controller may be able to demonstrate compelling legitimate grounds that override the interests, rights, and freedoms of the data subject, or may be able to claim that the data is needed for the establishment, exercise, or defense of legal claims.

According to Article 16, the PDC or PDCP must delete personal data about a data subject within 72 hours of a request by the data subject, if:

Personal data shall be deleted irretrievably by the PDC, PDCP, PDP, and/or TP if it was processed for improper purposes or the consented purpose(s) has been fulfilled, if storage is no longer necessary, or if the entity responsible for the data has dissolved or terminated business operations due to legal reasons.

Like the GDPR, the Decree recognizes certain exceptions to the right to delete personal data, such as where:

However, unlike the GDPR, personal data that has been lawfully made available to the public is also exempt from the right to deletion (Article 18). As a result, the PDC or PDCP may reject a data subject’s request to delete personal data that has become public, regardless of whether there are any other lawful grounds for retaining such data. This differs from the GDPR, which does not provide exceptions based solely on the public availability of data.

8. Obligations of Controllers and Processors, from written processing agreements to data security and accountability obligations

PDPs are under an obligation to only receive personal data from a PDC after signing an agreement on data processing with the PDC and only process the data within the scope of that agreement (Article 39). The Decree also provides that personal data must be deleted or returned to the PDC upon completion of the data processing.

8.1. Data security and data breach notification requirements

The Decree sets out dedicated data security requirements for PDCs. For instance, Article 38 requires them to implement organizational and technical measures, as well as appropriate security and confidentiality measures, to ensure that personal data processing activities are conducted lawfully. They must also review and update these measures as necessary, and record and store a log of the system's personal data processing activities.

Appropriate security measures are also relevant in the PDC – PDP relationship, as PDCs must select a suitable PDP for specific tasks and only work with a PDP that has in place appropriate protection measures. Interestingly, both PDCs and PDPs have a distinct obligation to cooperate with the MPS and competent state agencies by providing information for investigation and processing of any violations of the laws and regulations on personal data protection. Organizations and individuals involved in personal data processing must implement measures to protect personal data and prevent unauthorized collection of personal data from their systems and service devices. Article 22 of the Decree also prohibits the use of software systems, technical measures, or the organization of activities for the unauthorized collection, transfer, purchase, or sale of personal data without the consent of the data subject.

Under Article 23 of the Decree, in the event of a violation of personal data protection regulations, both the PDC and the PDP, or the PDCP, are required to promptly inform the A05. The notification must be made no later than 72 hours after the violation occurred. If the notification is delayed, the reason for the delay must be provided. The current wording of the Decree is broad, and without further clarification and guidance it could be interpreted as requiring a notification for any violation of the Decree, not just for data breaches.

The notification must include a detailed description of the violation, such as the time, location, act, organization or individual involved, types and amount of personal data affected, contact details of those responsible for protecting personal data, potential consequences and damages of the violation, and measures taken to resolve or minimize harm. If it is not feasible to provide a complete notification at once, it can be done incrementally or progressively.

However, Decree 13 does not provide a specific procedure for A05 to handle complaints related to personal data protection violations. Further guidance or clarifications may be issued in the future.

8.2. “Impact Assessment Reports” that have to be made available for inspection

Article 24 of the Decree requires PDCs and PDCPs to compile an impact assessment report (IAR) from the commencement of personal data processing and make the report available for inspection by the A05 within 60 days thereafter.

The IAR must contain:

PDPs are also required to compile an IAR. However, the required content is slightly different, reflecting the difference in roles between PDCs/PDCPs and PDPs. For instance, the Decree requires a PDP to provide a description of the processing activities and the types of personal data processed, rather than stating the purpose(s) for processing the data.

9. Cross-Border Data Transfers have a legal definition and a registration requirement

Article 25 of the Decree defines a cross-border transfer of personal data as:

This definition includes the:

In the absence of further specification and relying on a literal reading of the wording in Article 25, a possible interpretation of this definition is that processing outside of Vietnam the personal data of Vietnamese citizens who live outside Vietnam would also qualify as a cross-border data transfer under the Decree. If this interpretation is correct, it would mean that all foreign organizations or individuals processing personal data outside of Vietnam would be subject to the Decree’s “cross-border data transfer” requirements even if there is no actual border of Vietnam involved, insofar as they process the personal data of Vietnamese citizens. It should be noted that the scope of the Decree, as stipulated in Article 1.2, only applies to foreign agencies, organizations, and individuals that are in Vietnam or that directly participate or are involved in the personal data processing activities in Vietnam. This ambiguity may be clarified in a guidance document in the future.

Before a covered entity may transfer personal data out of Vietnam, the Decree requires that the entity must:

The DTA must contain the following information:

In light of the consent disclosure required as part of the DTA and in the absence of further regulatory guidance, it seems that consent is the only basis for cross-border transfers. In addition to all requirements for a valid consent, in the context of cross-border transfers, the consent shall include a clear explanation of the feedback mechanism and the available procedures for lodging complaints in the event of incidents or requests, ensuring a comprehensive understanding for the individuals involved.

The MPS will conduct inspection of the DTA annually unless a violation, data incident, or leakage occurs. The MPS may cease transfers in cases where:

It should be noted that data localization is separately governed under Decree No. 53/2022/ND-CP, which implements the Law on Cybersecurity. The decree applies to both domestic and foreign companies operating in Vietnam’s cyberspace, specifically those providing telecom, internet, and value-added services that collect, analyze, or process private information or data related to their service users. According to the decree, these companies must store the data locally and have a physical presence in Vietnam. They are also required to retain the data for a minimum of 24 months. The types of personal data subject to localization include “(i) personal information of cyberspace service users in Vietnam in the form of symbols, letters, numbers, images, sounds, or equivalences to identify an individual; (ii) data generated by cyberspace service users in Vietnam, including account names, service usage timestamps, credit card information, email addresses, IP addresses from the last login or logout session, and registered phone numbers linked to accounts or data; (iii) data concerning the relationships of cyberspace service users in Vietnam, such as friends and groups with whom these users have connected or interacted.” (Article 26, Decree 53). The governing authority responsible for these regulations is A05 as well.

However, it remains unclear from the provided information whether personal data falling within the scope of Decree 53 can be transferred cross-border after fulfilling all requirements, including obtaining valid consents from data subjects. It is possible that the regulations are strictly interpreted to prohibit cross-border transfers for such types of data.

10. Specific Requirements for Children's Personal Data

Like the GDPR, Article 20 of the Decree provides special protection for children's personal data, with a focus on safeguarding their rights and best interests. However, the age threshold for obtaining valid consent differs between the two laws. In Vietnam, the Decree requires the consent of a parent or legal guardian as well as of children aged seven or older, while under the GDPR only individuals aged 16 or older may give consent independently for the processing of their personal data.

It is important to note that in Vietnam, children under the age of 16 are not considered to have legal  capacity, meaning that they cannot legally enter into contracts on their own behalf except in exceptional cases. As such, the effect of the child’s consent absent that of a parent or legal guardian is not entirely clear, although the requirement to obtain consent from the child was likely included in the Decree to reflect the child’s opinion on the processing of their personal data.

PDCs, PDPs, PDCPs, and TPs must verify the age of children before processing their personal data. However, the Decree does not explicitly provide for an age verification process. Processing of children's personal data must cease, and the personal data must be deleted irretrievably, where:

The Decree states that only the child’s parent or legal guardian can withdraw consent for the processing of the child’s data, leaving it unclear whether the child can revoke their consent and have their data deleted if they wish to do so.

Conclusion

Vietnam's new Decree on Personal Data Protection marks a significant milestone in protecting personal data in the country. The Decree introduces key concepts and principles of personal data protection and sets out specific requirements for data controllers and processors. It also establishes a regulatory framework for obtaining consent for data processing activities, cross-border data transfers, and children's data protection, which can contribute to safeguarding the privacy and security of individuals' personal data.

While the Decree addresses many of the current challenges facing personal data protection in Vietnam, there are still gaps that need to be addressed in forthcoming guiding documents, including the lack of a specific procedure for handling complaints related to personal data protection violations, the conflicting provisions on the sale of personal data, the need for clear guidelines and requirements for cross-border data transfers, and a more defined fine structure. Forthcoming guidance should also address automated processing and establish rules for biometric data. As Vietnam continues to develop its data protection laws, it is important for the law to address key issues such as automated personal data processing, biometrics and facial recognition, global data transfer baseline standards, and the need to balance business development with data protection.

In conclusion, the country’s commitment to personal data protection and privacy is a crucial step in the digital age. As Vietnam continues to strengthen its data protection framework, it will be interesting to see how it aligns with, and how it contributes to emerging frameworks in the region and around the world.

Editors: The success of this article would not have been possible without the dedicated efforts of Dominic Paulger, Josh Lee Kok Thong, and Isabella Perera, as well as the tremendous encouragement of Dr. Gabriela Zanfir-Fortuna from the Future of Privacy Forum.

Analysis of a Decade of Israeli Judicial Decisions Related to Data Protection (2012-2022)

Adv. Rivki Dvash with the assistance of Mr. Guy Zomer1

Background

The Future of Privacy Forum’s office in Tel Aviv (Israel Technology Policy Institute – ITPI) sought to examine the judicial decisions in civil actions under Israel’s Privacy Law, which includes rules that regulate data protection. We examined the extent to which the general public demands protection of the right to privacy through judicial proceedings. We also analyzed the privacy and data protection issues that concern the public enough to appeal to the court, as well as identified any patterns in the appeals.

It is important to note that there is an inherent contradiction in taking civil action to remedy privacy and data protection violations, since appealing to judicial bodies brings attention to and publicly catalogs the disputes.2 As such, there is sometimes an interest in not pursuing these matters in order to prevent additional publication or exposure of information that could increase the harm of the initial violation of privacy. Accordingly, the data gathered in this analysis does not necessarily reflect the full extent of the public's interest in protecting privacy, but rather the cases in which individuals chose to seek judicial remedy under the Privacy Law.

In order to examine all of these cases, we asked Mr. Guy Zomer of Octopus – Public Information for All (R.A.) – which works to make public information, including that related to judicial proceedings, accessible through the Tolaat Hamishpat – to compile all the rulings since 2012 that mention privacy violations and retrieve relevant metadata for our analysis.

The overview below highlights the information and insights gathered from the metadata.

Methodology

Collection of rulings from the Nevo website

In order to locate rulings related to privacy violations, we queried all published rulings issued from January 1, 2012, to December 31, 2022 to find those that included reference to Section 2 of the Privacy Law, 5741-1981 (from now on referred to as “the Law”), which defines an invasion of privacy and what constitutes a civil tort (and a criminal offense). The dataset only includes rulings issued in ordinary courts (magistrate, district, and supreme), and not those issued in special courts such as the Family Court and the Labor Court.

Initial screening

Since we wanted to concentrate on civil proceedings to discover common patterns, we removed criminal judgments and appeal proceedings from the dataset. We also chose to examine and compare decisions related to class actions separately from other civil proceedings.

We identified a total of 293 judgments issued in civil lawsuits and 29 judgments in class actions that referred to privacy violations.

Data collection

The dataset of civil claim decisions related to privacy violations initially contained only primary data, such as the opening and closing dates of proceedings and the amount of the claims. We then added the following secondary data (a sketch of how such a coded dataset can be tabulated follows the list):

  1. The additional grounds in the civil lawsuit (defamation,  spam, etc.), if any;
  2. The specific grounds for which the claim was filed (in other words, which subsection of Section 2 is used), even if the court did not recognize the requested cause or all the grounds for which the claim was filed;
  3. The relationship between the plaintiff and the defendant (neighbor, employer-employee3, family, etc.);
  4. Whether the plaintiff claimed concrete damage or compensation without proof of damage;
  5. Whether the court recognized defense claims (this refers to the acceptance of defense claims in a judicial decision, and not to the fact that the defending party raised them);
  6. Who won the lawsuit;
  7. The amount of compensation mandated due to the violation of privacy;
  8. The amount of expenses that have been mandated; and
  9. The total amount of compensation that was mandated, including expenses or other grounds.
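As a rough illustration of how a coded dataset like this lends itself to the aggregate findings below, here is a minimal pandas sketch; the column names and toy values are illustrative assumptions and do not reproduce the actual research data or pipeline.

```python
# Minimal sketch (not the actual research pipeline): tabulating a coded
# dataset of rulings. Column names and rows are illustrative assumptions.
import pandas as pd

rulings = pd.DataFrame([
    {"year": 2015, "ground": "tracking/harassment", "relationship": "neighbor",
     "plaintiff_won": True, "compensation_nis": 12_000},
    {"year": 2018, "ground": "photographing", "relationship": "consumer",
     "plaintiff_won": False, "compensation_nis": 0},
    {"year": 2021, "ground": "media exposure", "relationship": "consumer",
     "plaintiff_won": True, "compensation_nis": 25_000},
])

# Share of judgments per cause of action (cf. finding 6 below)
ground_share = (rulings["ground"].value_counts(normalize=True) * 100).round(1)

# Plaintiff win rate and mean award in winning cases (cf. finding 10 below)
win_rate = rulings["plaintiff_won"].mean() * 100
mean_award = rulings.loc[rulings["plaintiff_won"], "compensation_nis"].mean()

print(ground_share)
print(f"plaintiff win rate: {win_rate:.1f}%, mean award: {mean_award:.0f} NIS")
```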

We examined class action cases separately from civil lawsuits because class actions focus on potential harm to a group of people rather than to an individual, and because the monetary compensation is structured differently, with three components: individual winnings, group winnings, and lawyer fees, which are higher than is usually customary and serve as an incentive to file class actions.

Preliminary research findings

1) It should be noted that the data we examined relate only to published judgments. We do not know the number of relevant claims in which the proceedings were halted for various reasons (such as a settlement, the plaintiff not pursuing the proceedings, or a closed-door proceeding). Given that there is no labeling of privacy protection procedures in Net HaMishpat (the computerized system for managing court cases in Israel), it is impossible to locate such information.

2) There is a small number of verdicts related to privacy violations, with only several dozen privacy cases yearly. In comparison, in 2019, about 200,000 cases were closed in the Magistrates' and District Courts.4 Furthermore, in 2020, about 192,400 cases were closed in these courts.5 In other words, judgments in matters of privacy in Israel are a negligible percentage of all civil proceedings.


3) We looked at the approximate weight of published privacy violation claims as a percentage of total published civil lawsuits over several years to see whether there are any patterns. Although this method is not statistically accurate, it is still useful to examine the variable ratio between all judgments and privacy judgments published in Nevo.

However, even in the test mentioned above, we could not identify a clear trend, as seen below.

[Chart: published privacy-related judgments as a share of all published civil judgments, by year]

Findings

Civil Lawsuits

1. In all the cases, except for one,7 the plaintiffs preferred to claim compensation without proof of damage under section 29A of the Law.

2. The most common issue in civil lawsuits is the photographing of a person and the placing of cameras in public, and sometimes even private, spaces, accounting for 5.1% of claims.

3. We did not find any civil lawsuits for torts arising from privacy violations related to databases. The initial assumption was that such claims would be found in class actions (see below).

4. Civil lawsuits for privacy violations were generally connected to legal claims for other torts. Fewer than 20% of the claims filed for privacy violations were filed on the privacy ground alone (17%).

5. 19.8% of plaintiffs chose to file their claim in “Small Claims Courts,” which allow for relatively quick and no-frills compensation in an amount limited to up to NIS 36,400 (roughly USD 10,000).

6. The main ground for civil lawsuits is the "spying on or tracing of a person," or other harassment. This ground appears in 36.9% of civil court rulings. To put the dominance of this ground in context, the second most common ground (photographing a person without their permission) is cited in only 16% of all judgments.


7. The most common relationship between plaintiffs and defendants is a consumer relationship (24%) or a dispute between neighbors (21.8%). Citizens' claims against the authorities account for 8.9% of all claims, with the leading cause of action for this type of relationship being a breach of the confidentiality obligation established by the Law (40%).

8. Although privacy violations arising from media exposure create significant harm due to the broad exposure of information, only a low percentage of filed claims concern this type of violation (7.5%). Additionally, claims over this type of violation are always accompanied by other claims, such as defamation. Generally, defamation claims appear alongside privacy violation claims (52%).

9. 9.9% of privacy claims also involved spam claims filed under Section 30A of the Communications Law. This finding is interesting because during the legislative process for spam regulations, it was determined that they should be incorporated into the Communications Law instead of the Privacy Law. Regardless, even in decisions that recognized both privacy and spam violations, the compensation amounts remained extremely low (no more than a few thousand shekels).

10. In most cases (57.3%), the plaintiff won the claim, compared to 34.4% of cases in which the defendant won (in the remaining claims, there was no definitive decision). However, a deeper examination of these cases shows that compensation was awarded specifically for the privacy violation in only 46.7% of them. In other words, sometimes the plaintiff won the case, but not on the grounds of the privacy violation, or general compensation was awarded without specific reference to the privacy violation.


11. In almost a quarter of the rulings (24.5%), the court recognized legal defense protections under the Law.9 The most frequently recognized protection (40.3%) is that of "legitimate personal interest" (section 18(2)(c)).

Class Actions

12. Class actions related to privacy violations (29 cases) account for a small number of all class actions (6,493 cases). However, their relative share (about 0.45%) is larger than the share of civil privacy violation claims among all civil claims (about 0.09%). This larger relative share is even more significant given that privacy violation class actions in Israel are a more limited tool than civil lawsuits, since class actions can only apply to the specific types of claims listed in the second addendum to the Class Actions Law, 5766-2006.10

13. Most of the class actions that include grounds for privacy violations are also related to consumer protection.

14. Spam violations constitute the additional (or, more precisely, the primary) ground in a significant share of privacy violation class actions (69%). Four cases (15.4%) also raised the issue of registering the databases that are the subject of the claims.11 Furthermore, in four cases (15.4%), it was claimed that the information security of the databases in question was compromised.

15. In 17.2% of privacy violation cases, the court rejected the motion to certify the class action.

16. Of the 29 cases in which a judgment was given (including rejections of the motion to certify a class action), the court approved a settlement in 41.4% of cases and approved the plaintiff's motion to certify in 37.9% of cases.


17. 69.2% of claims ended in favor of the plaintiff, and only about 26.9% of the decisions favored the defendant, with plaintiffs liable for expenses in only four cases (15.4%).


Conclusion

Despite the difficulty in getting clear insights into privacy violation civil lawsuits and class actions due to the scarcity of rulings in this area, it is still necessary to examine these decisions.

The small number of claims in this area may indicate the public’s lack of interest in exercising its right to compensation when privacy violations occur. Part of this disinterest is likely due to the desire to prevent additional publication or exposure of information that could increase the harm from the initial privacy violation.  Interestingly, the larger amount of privacy violation class actions as a percentage of all class action lawsuits (compared to civil lawsuits) indicates that given a larger financial incentive and decreased risk of exposure of individuals’ personal information, the desire to file lawsuits may increase. This tentative hypothesis is supported by the higher numbers of class action and civil lawsuits related to spam violations, both of which have high compensation potential and do not reveal additional personal information about plaintiffs. However, given the small absolute number of both class action and civil lawsuits related to privacy violations, more research is needed to fully examine the motivations of plaintiffs.

Even with the small number of claims, there are still several interesting findings, including clarity about the types of privacy violations that concern the public. For example, it is evident that plaintiffs mostly bring claims related to neighbor disputes and the placement of surveillance cameras in public spaces. The research also shows that despite the higher potential for privacy violations by state authorities, and the greater harm from violations of the database-related provisions of the Law, there are almost no lawsuits concerning these issues. One potential hypothesis for the lack of these claims is that there are power gaps between citizens and authorities, as well as between data subjects and database owners, that disincentivize lawsuits. Although class actions can strengthen the power of the consumer, they still require proof of damage and cannot be filed against the state.

In conclusion, it is impossible to point to a change or a clear trend of citizens exercising their right to privacy in civil lawsuits over the past decade.

Editor: Isabella Perera

This text has been translated and adapted into English from the original report published on January 30, 2023, available in Hebrew following this link.


1 Thanks to Adv. Limor Shmerling-Magazanik, former Director of ITPI, for her comments on this report.

2 In Israel, the default is that legal proceedings are published stating the parties’ names.

3 It should be noted that even in civil proceedings in ordinary courts (not the Labor Court), we still found claims related to employee-employer relationships.

4 See Annual Report 2019 – Court Administration (in Hebrew), pp. 25 and 37. In the district courts, 8,278 civil cases were closed, and in magistrates’ courts, 191,444 such cases were closed.

5 See Annual Report 2020 – Court Administration (in Hebrew), pp. 25 and 37. In the district courts, 7,578 civil cases were closed, and in magistrates’ courts, 184,874 such cases were closed.

6 We did not include 2022 because there was a change in the classification of cases in civil lawsuits that altered how the selected group was sampled.

7 Civil Action (Magistrate court – Haifa) 54043-11-12 Naor v. Clal Pension and Provident Fund Ltd. (11/4/2014) (in Hebrew), in which the plaintiff lost.

8 As of January 2023.

9 Section 18 of the Privacy Law.

10 Such as dealers, banking corporations, financial services providers, etc.

11 In Israel, there is still an obligation to register databases.

Workplace Discrimination and Equal Opportunity

Why monitoring cultural diversity in your European workforce is not at odds with GDPR

Author: Prof. Lokke Moerel*

The following is a guest post to the FPF blog from Lokke Moerel, Professor of Global ICT Law at Tilburg University and a lawyer with Morrison & Foerster (Brussels).

The guest blog reflects the opinion of the author only. Guest blog posts do not necessarily reflect the views of FPF.

“It has been said that figures rule the world. Maybe. But I am sure that figures show us whether it is being ruled well or badly.” – Johann Wolfgang von Goethe

I. Introduction

It is a known fact that discrimination persists in today's labor markets,1 despite EU anti-discrimination and equality laws—such as the Racial Equality Directive—specifically prohibiting practices that put employees at a particular disadvantage based on racial or ethnic origin.2 In a market with an acute scarcity of talent,3 we see HR departments struggle with how to eliminate workplace discrimination and create an inclusive culture in order to recruit and support an increasingly diverse workforce. By now, many organizations have adopted policies to promote diversity, equity, and inclusion (DEI), and the need has arisen to monitor and evaluate their DEI efforts.

Without proper monitoring, DEI efforts may well be meaningless or even counterproductive.4 To take a simple example, informal mentoring is known to be an important factor for internal promotions, and informal mentoring is less available to women and minorities.5 Organizations setting up a formal internal mentoring program to address this imbalance will want to monitor whether the program is attracting minority participants and achieving its goal of promoting equity. If not, the program may unintentionally exacerbate existing inequalities. Monitoring is therefore required to evaluate whether the mentoring indeed results in more equal promotions across the workforce or whether changes to the program should be made.


Organizations are hesitant to monitor these policies in the EU based on a seemingly persistent myth that the EU General Data Protection Regulation 2016/679 (GDPR) would prohibit such practices. This article shows that it is actually the other way around. Where discrimination, lack of equal opportunity, or pay inequity at the workplace is pervasive, monitoring of DEI data is a prerequisite for employers to be able to comply with employee anti-discrimination and equality laws, and to defend themselves appropriately against any claims.6

For historic reasons,7 the collection of racial or ethnic data is considered particularly sensitive in many EU member states. EU privacy laws provide for a special regime to collect sensitive data categories such as data revealing racial or ethnic origin, disability, and religion, based on the underlying assumption that collecting and processing such data increases the risk of discrimination.

However, where racial or ethnic background is ‘visible’ as a matter of fact to recruiters and managers alike, individuals from minority groups may be discriminated against without any data being recorded. It is therefore only by recording the data that potential existing discrimination may be revealed and bias eliminated from existing practices.8


A similar issue has come to the fore where tools powered by artificial intelligence (AI) are used. We often see in the news that the deployment of algorithms leads to discriminatory outcomes.9 If self-learning algorithms discriminate, it is not because there is an error in the algorithm; it is because the data used to train the algorithm are “biased.” It is only when you know which individuals belong to vulnerable groups that bias in the data can be made transparent and algorithms trained properly.10 Here too, it is not the recording of the sensitive data that is wrong; it is humans who discriminate, and the recording of the data detects this bias. Organizations should be aware that the “fairness” principle under the GDPR cannot be achieved by unawareness. In other words, race blind is not race neutral, and unawareness does not equal fairness. That sensitive data may be legitimately collected for these purposes under European data protection law11 is explicitly provided for in the proposed AI Act.12
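To make the argument concrete, here is a minimal sketch in Python, assuming hypothetical, self-reported group labels and screening outcomes (the data, names, and the ~0.8 benchmark are illustrative assumptions, not drawn from any source cited above). It shows why the group label must exist in the data at all: without it, the selection-rate gap below simply cannot be computed.

```python
# Minimal sketch (hypothetical data and labels): measuring selection-rate
# disparity in a screening tool's output. Without the group label, the gap is invisible.
from collections import defaultdict

# Each record: (self-identified group, 1 if shortlisted by the screening tool, else 0)
outcomes = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals, selected = defaultdict(int), defaultdict(int)
for group, shortlisted in outcomes:
    totals[group] += 1
    selected[group] += shortlisted

rates = {g: selected[g] / totals[g] for g in totals}
print("Selection rate per group:", rates)

# Disparate-impact ratio: lowest selection rate divided by highest.
# A common informal benchmark treats a ratio below ~0.8 as a signal to investigate further.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate-impact ratio: {ratio:.2f}")
```

The same calculation applied to training data, hiring funnels, or model outputs is what allows bias to be detected and corrected; removing the group label does not remove the bias, it only hides it.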

It is therefore not surprising that minority interest groups, which represent the groups whose privacy is actually at stake, actively advocate for such data collection and monitoring. Equally, EU and international institutions unequivocally consider collection of DEI data indispensable for monitoring and reporting purposes in order to fight discrimination. EU institutions further explicitly confirm that the GDPR should not be considered an obstacle preventing the collection of DEI data, but instead establishes conditions under which collecting and processing of such data are allowed.

From 2024 onwards, large companies in the EU will be subject to mandatory disclosure requirements for compliance with environmental, social, and governance (ESG) standards under the upcoming EU Corporate Sustainability Reporting Directive (CSRD). The CSRD requires companies to report on actual or potential adverse impacts on their workforce with regard to equal treatment and opportunities, which are difficult to measure without collecting and monitoring DEI data.

Currently, the regulation of collection and processing of DEI data is mainly left to the Member States. EU anti-discrimination and equality laws do not impose an obligation on organizations to collect DEI data for monitoring purposes, but neither do they prohibit collecting such data. In the absence of a specific requirement or prohibition, the processing of DEI data is regulated by the GDPR. The GDPR provides ample discretionary powers for the Member States to provide for legal bases in their national laws to process DEI data for monitoring purposes. In practice, however, most Member States have not used the opportunity under the GDPR to provide for a specific legal basis in their national laws for processing racial or ethnic data for monitoring purposes (with notable exceptions).13 As a consequence, collection and processing of DEI data for monitoring purposes takes place on a voluntary basis, whereby employees are asked to fill out surveys based on self-identification. This is in line with the GDPR, which provides for a general exception allowing organizations to process DEI data based on the explicit consent of the individuals concerned, provided that the Member States have not excluded this option in their national laws. In practice, Member States have not used this discretionary power; they have not excluded the possibility of relying on explicit consent for the collection of DEI data. This leaves explicit consent as a valid, but also the only practically viable, option to collect DEI data for monitoring purposes.14 Both human rights frameworks and the GDPR itself facilitate such monitoring, provided there are safeguards to protect against abuse of the relevant data, in accordance with data minimization and privacy-by-design requirements.15 We now see best practices developing as to how to monitor DEI data while limiting the impact on the privacy of employees, and rightfully so. In the literature, collecting racial or ethnic data for monitoring is rightfully described as “a problematic necessity, a process that itself needs constant monitoring.”16

2. Towards a positive duty to monitor for workplace discrimination

Where discrimination in the workplace is pervasive, monitoring of DEI data to quantify discrimination in those workplaces is essential for employers to be able to comply with anti-discrimination and equality laws. As indicated above, there is no general requirement under the Racial Equality Directive to collect, analyze, and use DEI data. The Directive does, however, provide for a shift in the burden of proof.17 Where a complainant establishes facts from which a prima facie case of discrimination can be presumed, it falls to the employer to prove that there has been no breach of the principle of equal treatment. Where workplace discrimination is pervasive, a prima facie case will be easy to make, and it will fall to the employer to disprove any such claim, which will be difficult without any data collection and monitoring. The argument that the GDPR does not allow for processing such data will not relieve the employer of its burden of proof. See, in a similar vein, the European Committee of Social Rights in European Roma Rights Centre v. Greece:18

Data collection 

27. The Committee notes that, in connection with its wish to assess the allegation of the discrimination against Roma made by the complainant organisation, the Government stated until recently that it was unable to provide any estimate whatsoever of the size of the groups concerned. To justify its position, it refers to legal and more specifically constitutional obstacles. The Committee considers that when the collection and storage of personal data [are] prevented for such reasons, but it is also generally acknowledged that a particular group is or could be discriminated against, the authorities have the responsibility for finding alternative means of assessing the extent of the problem and progress towards resolving it that are not subject to such constitutional restrictions.

Since as early as 1989,19 all relevant EU and international institutions have, with increasing urgency, issued statements that the collection of DEI data for monitoring and reporting purposes is indispensable to the fight against discrimination.20 See, for example, the EU Anti-racism Action Plan 2020‒202521 (the “Action Plan”) in which the European Commission explicitly states:

Accurate and comparable data is essential in enabling policy-makers and the public to assess the scale and nature of discrimination suffered and for designing, adapting, monitoring and evaluating policies. This requires disaggregating data by ethnic or racial origin.22

In the Action Plan, the European Commission notes that equality data remains scarce, “with some member states collecting such data while others consciously avoid this approach.” The Action Plan subsequently sets out significant steps to ensure the collection of reliable and comparable equality data at the European and national levels.23


On the international level, the United Nations (UN) takes an even stronger approach, considering the collection of DEI data that allows for disaggregation across different population groups to be part of governments’ human rights obligations. See the UN 2018 report, “A Human Rights-based approach to data, leaving nobody behind in the 2030 agenda for sustainable development” (the “UN Report”):24

Data collection and disaggregation that allow for comparison of population groups are central to a HRBA [human rights-based approach] to data and form part of States’ human rights obligations.25 Disaggregated data can inform on the extent of possible inequality and discrimination.

The UN Report notes that this was implicit in earlier treaties, but that “more recently adopted treaties make specific reference to the need for data collection and disaggregated statistics. See, for example, Article 31 of the Convention on the Rights of Persons with Disabilities.”26

Many of the reports referred to above explicitly state that the GDPR should not be an obstacle for collecting this data. For example, in 2021 the EU High Level Group on Non-discrimination Equality and Diversity issued “Guidelines on improving the collection and use of equality data,”27 which explicitly state:

Sometimes data protection requirements are understood as prohibiting collection of personal data such as a person’s ethnic origin, religion or sexual orientation. However, as the explanation below shows the European General Data Protection Regulation (GDPR), which is directly applicable in all EU Member States since May 2018, establishes conditions under which collection and processing of such data [are] allowed.

The UN Special Rapporteur on Extreme Poverty and Human Rights even opined that the European Commission should start an infringement procedure where a Member State continues to misinterpret data protection laws as not permitting data collection on the basis of racial and ethnic origin.28


In light of the above, employers who invoke GDPR requirements to avoid collecting DEI data for monitoring purposes increasingly appear to be driven more by a wish to avoid workplace scrutiny than by genuine concern for the privacy of their employees.29 The employees whose privacy is at stake are exactly those who are potentially exposed to discrimination. At the risk of stating the obvious, invoking the GDPR as a prohibition on DEI data collection, with the result that organizations avoid or are prevented from detecting discrimination against these groups, runs contrary to the GDPR’s entire purpose. The GDPR is about preserving the privacy of employees while protecting them against discrimination.


Not surprisingly, the minority interest groups who represent the groups whose privacy is actually at stake actively advocate for such data collection and monitoring.30 If anything, their concern with the collection of DEI data for DEI monitoring purposes is that these groups often do not feel represented in the categorization of the data collected.31 If the categories are too generic or do not allow intersecting inequalities (such as being both female and from a minority) to be split out, specific vulnerable groups may well fall outside the scope of DEI monitoring and therefore outside the scope of potential DEI policy measures. It is widely acknowledged that the setting of the categories may itself reflect bias. A core principle of the relevant human rights frameworks for collecting DEI data is therefore to involve relevant minority stakeholders in a bottom-up process of indicator selection (the human rights principle of participation),32 and to ensure that data collection is based on the principle of self-identification, which requires that surveys always allow for free responses (including no response) as well as for indicating multiple identities (see section 4 on the human rights principle of self-identification).

3. ESG reporting

Under the upcoming CSRD,33 large companies34 will be subject to mandatory disclosure requirements on ESG matters from 2024 onwards (i.e., in their annual reports published in 2025).35 The European Commission is tasked with setting the reporting standards and has asked the European Financial Reporting Advisory Group (EFRAG) to provide recommendations for these standards. In November 2022, EFRAG published the first draft standards. The European Commission is now consulting relevant EU bodies and Member States, before adopting the standards as delegated acts in June 2023. 

One of the draft reporting standards provides for a standard on reporting on a company’s own workforce (European Sustainability Reporting Standard S1 (ESRS S1)).36 This standard requires a general explanation of the company’s approach to identifying and managing any material, actual, and potential impacts on its own workforce in relation to equal treatment and opportunities for all, including “gender equality and equal pay for work of equal value” and “diversity.” From the definitions in ESRS S1, it is clear that “equal treatment” requires that there be “no direct or indirect discrimination based on criteria such as gender and racial or ethnic origin”; “equal opportunities” refers to equal and nondiscriminatory access to opportunities for education, training, employment, career development and the exercise of power, without any individuals being disadvantaged on the basis of criteria such as gender and racial or ethnic origin.

ESRS S1 provides a specific chapter on metrics and targets, which requires mandatory public reporting metrics on a set of specific characteristics of a company’s workforce; these include gender, but not racial or ethnic origin.37 Reading the standards together, however, it is difficult to imagine how companies could report on them without collecting and monitoring DEI data internally.

For example, the general disclosure requirements of ESRS S1 require the company to disclose all of its policies relating to equal treatment and opportunity,38 including:

d) Whether and how these policies are implemented through specific procedures to ensure discrimination is prevented, mitigated, and acted upon once detected, as well as to advance diversity and inclusion in general.

It is difficult to see how companies can report the information required under clause (d), namely how the policies are implemented to ensure discrimination is prevented, mitigated, and acted upon once detected, without collecting DEI data.

ESRS S1 further clarifies how disclosures under S1 relate to disclosures under ESRS S2, which includes disclosures where potential impacts on a company’s own workforce affect the company’s strategy and business model(s).


Based on the reporting requirements above, collecting and monitoring DEI data will be required for mandatory disclosures. This also provides a legal basis under the GDPR for collecting such data, provided the other provisions of the GDPR, as well as broader human rights principles, are complied with. Before setting out the GDPR requirements, a brief summary is provided of the broader human rights principles that apply to the collection of DEI data for monitoring purposes.

4. Human rights principles

The three main human rights principles in relation to data collection processes are self-identification, participation, and data protection.39 The principle of self-identification requires that people should have the option of self-identifying when confronted with a question seeking sensitive personal information related to them. As early as 1990, the Committee on the Elimination of Racial Discrimination held that identification as a member of a particular ethnic group “shall, if no justification exists to the contrary, be based upon self-identification by the individual concerned.”40 A personal sense of identity and belonging cannot in principle be restricted or undermined by a government-imposed identity and should not be assigned through imputation or proxy. This entails that all questions on personal identity, whether in surveys or administrative data, should allow for free responses (including no response) as well as multiple identities.41 Also, owing to the sensitive nature of survey questions on population characteristics, special care is required by data collectors to demonstrate to respondents that appropriate data protection and disclosure control measures are in place.42

5. EU data protection law requirements

Collecting racial or ethnic data for monitoring is rightfully described in the literature as “a problematic necessity, a process that itself needs constant monitoring.”43 The collection and use of racial and ethnic data to combat discrimination is not an “innocent” practice.44 Even if performed on an anonymized or aggregated basis, it can contribute to exclusion and discrimination. An example is when politicians argue, based on statistics, that there are “too many” people of a certain category in a country.45

Collection and processing of racial and ethnic data is not illegal in the EU. In general, no Member State imposes an absolute prohibition on collecting this data.46 There is also no general requirement under the Racial Equality Directive to collect, analyze, and use equality data. Obligations to collect racial or ethnic data also do not generally seem to be codified in law in the Member States, with notable exceptions in Finland, Ireland and (pre-Brexit) the UK.47

In the absence of specific prohibitions and specific requirements in EU and Member State law, processing of racial and ethnic data is governed by the GDPR, which provides a special regime for processing “special categories of data” such as data revealing racial or ethnic origin.48/49 

Article 9 of the GDPR prohibits the processing of special categories of data, subject to a number of exceptions. Insofar as relevant here,50 the prohibition does not apply when the data subject has given explicit consent (Article 9(2)(a)); when processing is necessary for carrying out obligations or exercising rights in the field of employment law (Article 9(2)(b)); when processing is necessary for reasons of substantial public interest (Article 9(2)(g)); or when processing is necessary for statistical purposes (Article 9(2)(j)).

For the substantial public interest and statistical purposes conditions to apply, provisions must be made in EU or Member State law permitting processing where necessary for those purposes. In practice, most Member States have not used their discretionary power under the GDPR to provide a specific legal basis in their national law for processing racial or ethnic data for these purposes.51 Member States have, however, also not used the possibility under the GDPR to provide in their national law that the prohibition under Article 9 may not be lifted by consent. This leaves explicit consent as a valid, but also the only practically viable, option to collect DEI data for monitoring purposes.52 This is in line with human rights principles, provided reporting is based on self-identification.

Once the CSRD has been implemented in the national laws of the Member States, collecting DEI data will be required for mandatory ESG disclosures, which will be permitted under Article 9(2)(g) GDPR (reasons of substantial public interest). Where organizations collect this data, the human rights principles set out above should be observed, in particular that reporting should be based on self-identification. In practice, the legal basis of substantial public interest will therefore very much mirror the legal basis of explicit consent, and the safeguards and mitigating measures set out below will equally apply.

5.1 Explicit consent

The requirements for valid consent are strict, especially in the workplace.53 For instance, consent must be ‘freely given’, which is considered problematic in view of the imbalance of power between the employer and the individual.54 The term ‘explicit’ refers to the way consent is expressed by the individual: the data subject must give an express statement of consent. An obvious way to make sure consent is explicit is to expressly confirm it in a written statement, an electronic form, or an email.55

For employee consent to be valid, employees need to have a genuinely free choice as to whether or not to provide the information, without any detrimental effects. There must be no downside whatsoever to refusing consent, which would be the case if, for example, refusal excluded the employee from any positive action initiatives.56 To ensure that consent is indeed freely given, the voluntary nature of the reporting should be twofold: (1) the act of completing a survey or questionnaire related to one’s racial or ethnic background should be voluntary, and (2) the survey or questionnaire should include options for the employee to respond with (an equivalent of) “I choose not to say.” The individual status of a survey or questionnaire (i.e., completed or not completed), as well as the answers provided, should not be visible to the employer on an individual basis. In practice, this is realized through privacy-by-design measures (see further below).

Note that for consent to be valid, it needs to be accompanied with clear information as to why it is being collected and how it will be used (consent needs to be “specific and informed”).57 In addition, employees should be informed that consent can be withdrawn at any time and that any withdrawal of consent will not affect the lawfulness of processing prior to the withdrawal.58

When consent is withdrawn, any processing of personal data (to the extent it is identifiable) will need to stop from the moment that the consent is withdrawn. However, where data are collected and processed in the aggregate (see section 5.2 below on privacy-by-design requirements), employees will no longer be identifiable or traceable, and, therefore, any withdrawal of consent will not be effective in relation to data already collected and included in such reports.

5.2 General data protection principles

Obtaining consent does not negate or in any way diminish the data controller’s obligations to observe the principles of processing enshrined in the GDPR, especially Article 5 of the GDPR with regard to fairness, necessity, and proportionality, as well as data quality.59 Employers will further have to comply with principles of privacy-by-design.60 In practice, this means that employers should only process the personal data that they strictly need for their pursued purposes, and in the most privacy-friendly manner. For example, employers can collect DEI data without reference to individual employees (i.e., without employees providing their name or another unique identifier, such as a personnel number or email address). In this manner, employers will comply with data minimization and privacy-by-design requirements, limiting any impact on the privacy of their employees. In practice, we also see that employers involve a third-party service provider and ask employees to send the information directly to that provider. The third-party service provider subsequently shares only aggregate information with the employer.

From a technical perspective, it is possible to achieve a similar segregation of duties within the company’s internal HR system (like Workday or SuccessFactors), whereby data are collected on a de-identified basis and only one or two employees within the diversity function have access to de-identified DEI data for statistical analysis and subsequently report to management on an aggregate basis only (ensuring individual employees cannot be singled out or re-identified).61 This requires customization of HR systems, which is currently underway. Where employers have a works council, the works council will need to provide its prior approval for any company policy related to the processing of employee data. As part of the works council approval process, the privacy-by-design measures can be verified.
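As an illustration of the aggregate-only reporting described above, the following minimal Python sketch (the survey data and the minimum cell size are hypothetical assumptions; any real threshold would be agreed with the works council) shows how responses collected without any identifier can be turned into statistics while suppressing categories too small to report without risking that individuals are singled out.

```python
# Minimal sketch (hypothetical survey data): reporting DEI survey results on an
# aggregate basis only, suppressing any category below a minimum cell size so that
# individual employees cannot be singled out. Responses are collected without names or IDs.
from collections import Counter

MIN_CELL_SIZE = 5  # illustrative threshold; to be agreed with the works council

# Responses collected without any identifier; "prefer_not_to_say" is a valid answer.
responses = [
    "group_a", "group_a", "group_b", "prefer_not_to_say", "group_a",
    "group_b", "group_a", "group_a", "group_b", "group_b", "group_b",
]

counts = Counter(responses)
total = len(responses)

report = {}
for category, n in counts.items():
    # Report a percentage only for sufficiently large cells; suppress the rest.
    report[category] = f"{n / total:.0%}" if n >= MIN_CELL_SIZE else f"< {MIN_CELL_SIZE}"

print(report)
```

Whether run by a third-party provider or by a segregated diversity function inside the HR system, the point is the same: only the aggregate output, never the individual responses, reaches management.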

For the sake of completeness, note that where data collection and processing are carried out on a truly anonymous basis, the provisions of the GDPR would not apply.62 The threshold under the GDPR for data to be truly anonymous is, however, very high and is unlikely to be met where employers collect such data from their employees. Applying de-identification techniques such as pseudonymization (e.g., removal or replacement of unique identifiers and names) therefore does not take the data processing outside the scope of the GDPR; rather, such techniques are necessary measures to meet data minimization and privacy-by-design requirements.

6. The way forward

It is no longer possible to hide behind the GDPR to avoid collecting DEI data for monitoring purposes. The direction of travel is towards meaningful DEI policies, monitoring, and reporting (such as under the CSRD). Collecting data relating to racial and ethnic origin has been labeled “a problematic necessity, a process that itself needs constant monitoring.” This is the negative way of qualifying DEI data collection and monitoring. A positive, human rights-based approach is that data-collection processes should be based on self-identification, participation, and data protection. Where all three principles are safeguarded, the process will be controlled and can be trusted, without being inherently problematic or in need of constant monitoring. The path forward revolves around building trust with the workforce (and their works councils and trade unions). If trust is not already a given, the recommendation is to start small (in less sensitive jurisdictions), engage with works councils or the workforce at large, and, in light of the upcoming CSRD, start now.63

Self-identification: A company requires the full trust of its employees to be able to collect representative DEI data from them based on self-identification. If the introduction of DEI data collection is perceived by employees as abrupt and countercultural, or as a box-ticking exercise unlikely to result in meaningful change, surveys will not be successful. For employees to fill out surveys disclosing sensitive data, they need to trust that their employer is serious about its DEI efforts and that data collection and monitoring complement these efforts, in the spirit of the aphorism “We measure what we treasure.” Practice shows that once a certain tipping point is reached, employees are proud to self-identify and contribute to the DEI statistics of their company.

Trust will be undermined if employees do not recognize themselves in any pre-defined categories. Proper self-identification entails that any pre-defined categories are relevant to a country’s workforce, allow for free responses (including no response), and allow for identifying with multiple identities. Employees’ trust will be enhanced if the company has put careful thought into the reporting metrics, ensuring that reporting can actually inform where the company can focus interventions to bring about meaningful change. For example, it is important to ensure that reporting metrics are not just outcome-based (tracking demographics without knowing where a problem exists) but also process-based. Process-based metrics can pinpoint problems in employee-management processes such as hiring, evaluation, promotion, and executive sponsorship. If outcome metrics inform a company that it has low percentages of specific minorities, process metrics may show in which part of its processes (or part of a process, e.g., which part of the hiring process) the company needs to focus to bring about meaningful change. Examples of such metrics include the speed at which minorities move up the corporate ladder and salary differentials between different categories in comparable jobs (a brief illustrative sketch follows after the data protection paragraph below).

Participation: Trust requires an inclusive, bottom-up process whereby employees (and their works councils) have a say in the data collection and monitoring procedure. This applies, for example, to setting the categories in a survey, to ensure minority employees can ‘recognize’ themselves in those categories; to setting the reporting metrics, to ensure these may bring about meaningful change; and to setting the data protection safeguards (see below).

Data protection: To gain employees’ trust, data protection principles such as data security, data minimization, and privacy-by-design must be fully implemented. A company will need to submit a data collection and processing protocol to its works council and obtain its approval, specifying all organizational, contractual, and technical measures ensuring that data are collected on a de-identified basis and that access to the data is limited to one or two employees of the diversity team, solely in order to generate statistics.
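Returning to the outcome-based versus process-based metrics discussed under “Self-identification” above, the following minimal Python sketch (hypothetical, de-identified records; group labels and figures are assumptions for illustration only) shows how a headcount share (outcome metric) and a promotion rate (process metric) can be computed side by side from the same de-identified data.

```python
# Minimal sketch (hypothetical, de-identified records): comparing an outcome metric
# (headcount share per group) with a process metric (promotion rate per group).
records = [
    # (self-identified group, promoted this review cycle?)
    ("group_a", True), ("group_a", False), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

groups = {g for g, _ in records}
for g in sorted(groups):
    in_group = [promoted for grp, promoted in records if grp == g]
    share = len(in_group) / len(records)        # outcome metric: headcount share
    promo_rate = sum(in_group) / len(in_group)  # process metric: promotion rate
    print(f"{g}: headcount share {share:.0%}, promotion rate {promo_rate:.0%}")
```

Equal headcount shares with unequal promotion rates, for instance, point to the promotion process rather than to recruitment as the place to intervene.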

Country Reports

Below we provide a summary of the legal bases available under the laws of Belgium, France, Germany, Ireland, Italy, Spain, the Netherlands, and the United Kingdom for employers to collect racial and ethnic background data of their employees for purposes of monitoring their DEI policies (DEI Monitoring Purposes). Note that in all cases the general data protection principles (such as privacy-by-design requirements) set out in section 5.2 also apply, but they are not repeated here.

Belgium

Olivia Vansteelant & Damien Stas de Richelle, Laurius

Summary requirements for processing racial and ethnic background data

Under Belgian law, there is neither a specific legal requirement for employers to collect data revealing racial or ethnic origin of their employees, nor is there a general prohibition for employers to collect such data.

Companies with their registered office or at least one operating office in the Brussels-Capital Region can lawfully process personal data on foreign nationality or origin for DEI Monitoring Purposes on the basis of necessity to exercise their labour law rights and obligations. All companies can lawfully process racial or ethnic background data for DEI Monitoring Purposes based on explicit consent of their employees. Employers with a works council should consult their works council before implementing any policy related to processing data revealing racial or ethnic origin of their employees, but no approval from the works council is required by law.

Necessity to exercise labour law rights and obligations (Article 9(2)(b) GDPR). The basis for this exception can be found in the Decision of the Government of the Brussels-Capital Region of 7 May 2009 regarding diversity plans and the diversity label and the Ordonnance of the Government of the Brussels-Capital Region of 4 September 2008 on combatting discrimination and promoting equal treatment in the field of employment.

According to this Decision, companies with their registered office or at least one operating office in the Brussels-Capital Region are entitled to draft a diversity plan to address the issue of discrimination in recruitment and develop diversity at the workplace. No similar regulations currently exist in the Flemish or Walloon regions. Many Flemish NGOs are urging the Flemish Government to work towards a sustainable and inclusive labour market with monitoring and reporting as an important basis for evaluation of diversity. They are asking the Flemish Government to put its full weight behind this before the 2024 elections.

Under this Decision, employers are permitted to analyze the level of diversity amongst their personnel by classifying their workforce into categories, including that of foreign nationality or origin, and, to that end, to collect data on foreign nationality or origin. It is possible that employers may indirectly collect data revealing racial or ethnic origin due to a possible link with, or inference drawn from, information on nationality or origin. However, the Decision does not cover data revealing racial or ethnic origin, and there would therefore be no condition permitting such collection under Article 9(2)(b) GDPR.

Explicit consent. The Belgian Implementation Act does not expressly exclude the possibility of processing racial or ethnic data based on employees’ consent. For all other purposes of processing racial and ethnic background data, employers can therefore rely on explicit consent and voluntary reporting by employees. We refer to the conditions for explicit consent set out above in section 5.1, as they apply in the same manner to Belgium. To ensure that consent is indeed freely given, the voluntary nature of the reporting for employees should be twofold: (1) the act of completing a survey or questionnaire related to one’s racial or ethnic background should be voluntary and (2) the survey or questionnaire should include options for the employee to respond with (an equivalent of) “I choose not to say.”

France

Héléna Delabarre & Sylvain Naillat, Nomos, Société D’Avocats

Summary requirements for processing racial and ethnic background data

Under French law, the processing of race and ethnicity data is prohibited in principle under (i) a general provision of the French Constitution and (ii) specific provisions of French data protection laws, which is also the public position of the French Data Protection Authority (CNIL). French law does not recognize any categorization of people based on their (alleged) race or ethnicity, and the prohibition on processing race and ethnicity data has been reaffirmed by the French Constitutional Court in a decision concerning public studies whose purpose was to measure diversity/minority groups. However, while race and ethnicity data may not be collected or processed, objective criteria relating to geographical and/or cultural origins, such as name, nationality, birthplace, mother tongue, etc., can be considered by employers in order to measure diversity and to fight against discrimination.

In a public paper from 201264 (that has not been contradicted since) the CNIL confirmed that employers may collect and process data about objective criteria relating to “origin,” such as the birthplace of the respondent and his/her parents, his/her mother tongue, his/her nationality and that of his/her parents, etc., if such processing is necessary for the purpose of conducting statistical studies aiming at measuring and fighting discrimination. The CNIL also considers that questions about self-identification and how the respondent feels perceived by others can be asked if necessary, in view of the purpose of the data collection and any other questions asked. See the CNIL’s paper:

In accordance with the decision of the Constitutional Court of 15 November 2007 and the insights of the Cahiers du Conseil, studies on the measurement of diversity cannot, without violating Article 1 of the Constitution, be based on the ethnic or racial origins of the persons. Any nomenclature that could be interpreted as ethno-racial reference must therefore be avoided. It is nevertheless possible to approach the criterion of “origin” on the basis of objective data such as the place of birth and the nationality at birth of the respondent and his or her parents, but also, if necessary, on the basis of subjective data relating to how the respondent self-identifies or how the person feels perceived by others.65

Based on the guidance from the CNIL, several public studies have been conducted relying on the collection of information considered permissible by the CNIL, i.e., (i) whether or not respondents felt discriminated against based on their origins or skin colour; (ii) how the respondent self-identifies; and (iii) statistics about the geographical and/or cultural origins of the respondents.66 The provision of any information should be entirely voluntary, and the rules regarding explicit consent in section 5.1 above apply in the same manner to France. Any questions relating to the collection of data regarding geographical and/or cultural origins should be objective and, in the absence of a need to identify (directly or indirectly) the individuals, the collection process should be entirely anonymous.

Germany

Hanno Timner, Morrison & Foerster

Summary requirements for processing racial and ethnic background data

Under German law, there is neither a specific legal requirement for employers to collect racial and ethnic background data of their employees, nor is there a general prohibition for employers to collect such data. 

Employers in Germany can lawfully process racial and ethnic background data for DEI Monitoring Purposes on the basis of (i) necessity to exercise their labour law rights and obligations or (ii) based on explicit consent of their employees. If the employer has a works council, the works council has a co-determination right for the implementation of diversity surveys and questionnaires in accordance with Section 94 of the Works Council Act (Betriebsverfassungsgesetz – “BetrVG”) if the information is not collected anonymously and on a voluntary basis. If the information is collected electronically, the works council may have a co-determination right in accordance with Section 87(1), no. 6 BetrVG.

Necessity to exercise labour law rights or obligations. According to Section 26(3) of the German Federal Data Protection Act (Bundesdatenschutzgesetz – “BDSG”), the processing of racial and ethnic background data in the employment context is only permitted if the processing is necessary for the employer to exercise rights or comply with legal obligations derived from labour law, social security, and social protection law, and there is no reason to believe that the data subject has an overriding legitimate interest in not processing the data. One of the rights of the employer derives from Section 5 of the German General Equal Treatment Act (Allgemeines Gleichbehandlungsgesetz – “AGG”), according to which employers have the right to adopt positive measures to prevent and stop discrimination on the grounds of race or ethnicity. As a precondition to the adoption of such measures, employers may collect data to identify their DEI needs. 

Explicit consent. For all other purposes of processing racial and ethnic background data, employers will have to rely on explicit consent and voluntary reporting by employees. We refer to the conditions for explicit consent set out above in section 5.1, as they apply in the same manner to Germany. Further, Section 26(2) BDSG specifies that the employee’s level of dependence in the employment relationship and the circumstances under which consent was given have to be taken into account when assessing whether an employee’s consent was freely given. According to Section 26(2) BDSG, consent may be freely given, in particular, if it is associated with a legal or economic advantage for the employee, or if the employer and the employee are pursuing the same interests. This can be the case if the collection of data also benefits employees, e.g., if it leads to the establishment of comprehensive DEI management within the employer’s company.

Ireland

Colin Rooney & Alison Peate, Arthur Cox

Summary requirements for processing racial and ethnic background data

Under Irish law, there is neither a specific legal requirement for employers to collect racial and ethnic background data of their employees, nor is there a general prohibition for employers to collect such data.

Explicit consent: Employers in Ireland can lawfully process race and ethnicity data for their own specified purpose based on the explicit consent of employees. It should be noted that the Irish Data Protection Commission has said that in the context of the employment relationship, where there is a clear imbalance of power between the employer and employee, it is unlikely that consent will be given freely. While this does not mean that employers can never rely on consent in relation to the processing of employee data, it does mean that the burden is on employers to prove that consent is truly voluntary, as explained in section 5.1 above. In the context of collecting data relating to an employee’s racial or ethnic background, employers should ensure that employees are given the option to select “prefer not to say”.

Statistical purposes: If the employer intends to process race and ethnicity data solely for statistical purposes, it could rely on Article 9(2)(j) of the GDPR and section 54(c) of the Irish Data Protection Act 2018 (the “2018 Act”), provided that the criteria set out in section 42 of the 2018 Act are met. This allows for race and ethnicity data to be processed where it is necessary and proportionate for statistical purposes and where the employer has complied with section 42 of the 2018 Act. Section 42 requires that: (i) suitable and specific measures are implemented to safeguard the fundamental rights and freedoms of the data subjects in accordance with Article 89 GDPR; (ii) the principle of data minimisation is respected; and (iii) the information is processed in a manner which does not permit identification of the data subjects, where the statistical purposes can be fulfilled in this manner.

Italy

Marco Tesoro, Tesoro and Partners

Summary requirements for processing racial and ethnic background data

The Italian Data Protection Code (IDPC) regulates the processing of personal data under Article 9 of the GDPR, stating that the legal basis for the processing of such data must be to comply with a law or regulation (art. 2-ter(1), IDPC), or for reasons of relevant public interest. Under Italian law, no specific legal basis has been implemented to process racial or ethnic data for reasons of public interest.

Explicit consent. We refer to the conditions for explicit consent set out above in section 5.1, as they apply in the same manner to Italy. The IDPC does not expressly exclude the possibility of processing racial and ethnicity data based on employees’ consent. Employers wanting to collect and process racial and ethnicity data on the basis of employees’ consent under Art. 9 of the GDPR, however, should ensure that the consent is granted on a free basis and, where possible, involve the trade unions they are associated with (as well as their Works Council, where relevant). The trade unions should be able to i) ascertain and certify that the employees’ consent has been freely given; and ii) ensure that employees are fully aware of their rights and of the consequences of providing such data. In the absence of associated trade unions, employers may inform the local representative of the union associations who signed the collective bargaining agreement (CBA) that applies (if any). Furthermore, employers should ensure that employees are given the option to “prefer not to say.”

Statute of Workers. It is also worth noting that under Italian law, there is a general prohibition on the collection of information not strictly related or needed to assess the employee’s professional capability. Per Article 8, Law 23 May 1970, no. 300 “Statute of Workers,” race and ethnicity data should not be collected or used by employers to impact in any way the decision to hire a candidate or to manage any of the terms of the employment relationship.

Spain

Laura Castillo, Gómez-Acebo & Pombo

Summary requirements for processing racial and ethnic background data

Under the Organic Law 3/2018 of 5th December on the Protection of Personal Data and Guarantee of Digital Rights (SDPL), there is a general prohibition on collecting racial and ethnic background information unless: (i) there is a legal requirement to do so (per Article 9 of the SDPL); or (ii) the employees have provided their explicit consent (although the latter is not without risk).

Fulfilment of a legal requirement. The Comprehensive Law 15/2022 of 12th July, for Equal Treatment and Non-Discrimination (the “Equal Treatment Law”) guarantees and promotes the right to equal treatment and non-discrimination. This Law expressly states that no limitations, segregations, or exclusions may be made based on ethnicity or racial backgrounds, i.e., nobody can be discriminated against on grounds of race or ethnicity. In this context, any positive discrimination measures that have been implemented as a result of the Equal Treatment Law have been included in collective bargaining agreements (CBA) or collective agreements as agreed with the unions or the relevant employee representatives. Where there is a requirement in the CBA to collect race and ethnicity data from employees, employers can do so, as this would constitute a legal requirement. In circumstances where the CBA does not specifically require the collection of this type of information, employers can either seek to include such a provision in the terms of the CBA or a collective agreement and work with the unions or legal representatives to do so, or take an alternative approach and rely on explicit consent, as set out immediately below.

Explicit consent. In principle, an employee’s consent on its own is not sufficient to lift the general prohibition on the processing of sensitive data under the SDPL. However, one of the main aims of the prohibition pursuant to the SDPL is to avoid discrimination. Therefore, if the purpose of collection is to promote diversity, it is arguable (although this has not yet been tested in Spain) that employers can rely on explicit consent; we refer to the conditions for explicit consent set out above in section 5.1, as they apply in the same manner to Spain. In addition to the conditions in section 5.1, Spanish case law has determined that the employee’s level of dependence within the employment relationship and the circumstances under which consent is given should be considered when assessing whether an employee’s consent is freely given. It is therefore not recommended that employers obtain or process race and ethnicity data of their employees during the recruitment or hiring process, or before the end of the probationary period, unless a CBA regulates this issue in a different manner. Employers should also ensure that employees are given the option to select “prefer not to say” and ensure that they are able to prove that consent is genuinely voluntary, as explained in section 5.1 above.

The Netherlands

Marta Hovanesian, Morrison & Foerster

Summary requirements for processing racial and ethnic background data

Under Dutch law, there is neither a specific legal requirement for employers to collect racial and ethnic background data of their employees, nor is there a general prohibition for employers to collect such data. 

Employers in the Netherlands can lawfully process racial and ethnic background data of their employees for DEI Monitoring Purposes on the basis of (i) a substantial public interest recognized under Dutch law or (ii) the explicit consent of the employees. Employers with a works council need to ensure their works council approves any policy related to processing such data.

Substantial public interest. The Netherlands has implemented the conditions of Article 9(2) GDPR for the processing of racial and ethnic background data in the Dutch GDPR Implementation Act (the “Dutch Implementation Act”). More specifically, Article 25 of the Dutch Implementation Act provides that racial and ethnicity background data (limited to country of birth and parents’ or grandparents’ countries of birth) may be processed (on the basis of substantial public interest) if processing is necessary for the purpose of restoring a disadvantaged position of a minority group, and only if the individual has not objected to the processing. Reliance on this condition requires the employer to, among other things, (i) demonstrate that certain groups of people have a disadvantaged position; (ii) implement a wider company policy aimed at restoring this disadvantage; and (iii) demonstrate that the processing of race and ethnicity data is necessary for the implementation and execution of said policy.

Explicit consent. Employers can collect racial and ethnicity background data of their employees for DEI Monitoring Purposes based on explicit consent and voluntary reporting by employees. The conditions for consent set out above in section 5.1 apply in the same manner to the Netherlands. 

Cultural Diversity Barometer. Note that Dutch employers with more than 250 employees have the option to request DEI information from Statistics Netherlands about their own company. Statistics Netherlands, upon the Ministry of Social Affairs and Employment’s request, created the “Cultural Diversity Barometer”. The Barometer allows employers to disclose certain non-sensitive personal data to Statistics Netherlands, which, in turn, will report back to the relevant employers with a statistical and anonymous overview of the company’s cultural diversity (e.g., percentage of employees with a (i) Dutch background, (ii) western migration background, and (iii) non-western migration background). Statistics Netherlands can either provide information about the cultural diversity within the entire organization or within specific departments of the organization (provided that the individual departments have more than 250 employees).

United Kingdom

Annabel Gillham, Morrison & Foerster (UK) LLP

Summary requirements for processing racial and ethnic background data

Under UK law, there is no general prohibition on the collection of employees’ racial or ethnic background data by employers, provided that specific conditions pursuant to Article 9 of the retained version of the GDPR (UK GDPR) are met. It is fair to say that the collection of such data is increasingly common in the UK workplace, with several organizations electing to publish their ethnicity pay gap.[1] In some cases, collection of racial or ethnic background data is a legal requirement. For example, with respect to accounting periods beginning on or after April 1, 2022, certain large listed companies are required to include in their published annual reports a “comply or explain” statement on the achievement of targets for ethnic minority representation on their board [2] and a numerical disclosure on the ethnic background of the board.[3]

Employers in the UK can lawfully process racial and ethnic background data of their employees for DEI Monitoring Purposes where the processing is (i) necessary for reasons of substantial public interest on the basis of UK law [4]; or (ii) carried out with the explicit consent of the employees [5]. Employers should ensure that they check and comply with the provisions of any agreement or arrangement with a works council, trade union or other employee representative body (e.g., relating to approval or consultation rights) when collecting and using such data.

Substantial public interest. Article 9 of the retained version of the UK GDPR prohibits the processing of special categories of data, with notable exceptions similar to those set out in section 5 above. Schedule 1 to the UK Data Protection Act 2018 (DP Act 2018) sets out specific conditions for meeting the “substantial public interest” ground under Article 9(2)(g) of the UK GDPR. Two conditions are noteworthy in the context of the collection of racial and ethnic background data.

The first is an “equality of opportunity or treatment” condition. This is available where processing of personal data revealing racial or ethnic origin is necessary for the purposes of identifying or keeping under review the existence or absence of equality of opportunity or treatment between groups of people of different racial or ethnic origins with a view to enabling such equality to be promoted or maintained.[6] There are exceptions – the data must not be used for measures or decisions with respect to a particular individual, nor where there is a likelihood of substantial damage or substantial distress to an individual. Individuals have a specific right to object to the collection of their information.

The second condition covers “racial and ethnic diversity at senior levels of organisations”.[7] Organisations may collect personal data revealing racial or ethnic origin where, as part of a process of identifying suitable individuals to hold senior positions (e.g., director, partner or senior manager), the processing is necessary for the purposes of promoting or maintaining diversity in the racial and ethnic origins of individuals holding such positions and the data can reasonably be collected without the consent of the individual. When relying on this condition, organisations should factor in any risk that collecting such data may cause substantial damage or substantial distress to the individual.

In order to rely on either condition set out above, organisations must prepare an “appropriate policy document” outlining the principles set out in the Article 9 UK GDPR conditions and the measures taken to comply with those principles, along with applicable retention and deletion policies.

Explicit consent.[8] The conditions for consent set out in section 5.1 above apply in the same manner to the UK. Consent must be a freely given, specific, informed and unambiguous indication of an employee’s wishes. Therefore, any request for employees to provide racial or ethnic background data should be accompanied by clear information as to why it is being collected and how it will be used for DEI Monitoring Purposes.

[1] Ethnicity pay gap reporting – Women and Equalities Committee (parliament.uk)

[2] At least one board member must be from a minority ethnic background (as defined in the Financial Conduct Authority, Listing Rules and Disclosure Guidance and Transparency Rules (Diversity and Inclusion) Instrument 2022, https://www.handbook.fca.org.uk/instrument/2022/FCA_2022_6.pdf).

[3] Financial Conduct Authority, Listing Rules and Disclosure Guidance and Transparency Rules (Diversity and Inclusion) Instrument 2022, https://www.handbook.fca.org.uk/instrument/2022/FCA_2022_6.pdf.

[4] Article 9(2)(g) UK GDPR.

[5] Article 9(2)(a) UK GDPR.

[6] Paragraph 8, Part 2 of Schedule 1 to the DP Act 2018.

[7] Paragraph 9, Part 2 of Schedule 1 to the DP Act 2018.

[8] Article 9(2)(a) UK GDPR.


* Ms. Moerel thanks Annabel Gillham, a partner at Morrison & Foerster in London, for her valuable input on a previous version of this article.   

1 For recent statistics, see “A Union of equality: EU anti-racism action plan 2020-2025,” https://ec.europa.eu/info/sites/default/files/a_union_of_equality_eu_action_plan_against_racism_2020_-2025_en.pdf, p. 2, referring to a wide range of surveys conducted by the EU Agency for Fundamental Rights (FRA) pointing to high levels of discrimination in the EU, with the highest level in the labor market (29%), both when looking for work and at work.

2 See, specifically, Council Directive 2000/43/EC of June 29, 2000, implementing the principle of equal treatment between persons irrespective of racial or ethnic origin (“Racial Equality Directive”), Official Journal L 180, 19/07/2000 P. 0022–0026. Action to combat discrimination and other types of intolerance at the European level rests on an established EU legal framework, based on a number of provisions of the European Treaties (Articles 2 and 9 of the Treaty on European Union (TEU), Articles 19 and 67(3) of the Treaty on the Functioning of the European Union (TFEU)), and the general principles of non-discrimination and equality, also reaffirmed in the EU Charter of Fundamental Rights (in particular, Articles 20 and 21).

3 WEF, Jobs of Tomorrow, Mapping Opportunity in the New Economy, Jan. 2020 (“WEF Report”), WEF_Jobs_of_Tomorrow_2020.pdf (weforum.org). See also here.

4 For a list of examples why diversity policies may fail: https://hbr.org/2016/07/why-diversity-programs-fail. See also Data-Driven Diversity (hbr.org): “According to Harvard Kennedy School’s Iris Bohnet, U.S. companies spend roughly $8 billion a year on DEI training—but accomplish remarkably little. This isn’t a new phenomenon: An influential study conducted back in 2006 by Alexandra Kalev, Frank Dobbin, and Erin Kelly found that many diversity-education programs led to little or no increase in the representation of women and minorities in management.”

5 Research shows that a lack of social networks, mentoring, and sponsorship is a limiting factor in the promotion of women, and even more so for culturally diverse employees, due to the lack of a “social bridging network,” a network that allows for connections with other social groups; see Putnam, R.D. (2007) “E Pluribus Unum: Diversity and Community in the Twenty-first Century,” Scandinavian Political Studies, 30 (2), pp. 137‒174. While white men tend to find mentors on their own, women and minorities more often need help from formal programs. The introduction of formal mentoring shows real results: https://hbr.org/2016/07/why-diversity-programs-fail.

6 Quote is from https://hbr.org/2022/03/data-driven-diversity.

7 Historically, there have been cases of misuse of data collected by National Statistical Offices (and others), with extremely detrimental human rights impacts; see Luebke, D. & Milton, S. 1994, “Locating the Victim: An Overview of Census-Taking, Tabulation Technology, and Persecution in Nazi Germany,” IEEE Annals of the History of Computing, Vol. 16 (3). See also W. Seltzer and M. Anderson, “The dark side of numbers: the role of population data systems in human rights abuses,” Social Research, Vol. 68, No. 2 (Summer 2001), in which the authors report that during the Second World War, several European countries, including France, Germany, the Netherlands, Norway, Poland, and Romania, abused population registration systems to aid Nazi persecution of Jews, Gypsies, and other population groups; the Jewish population in the Netherlands suffered a death rate of 73 percent. In the United States, misuse of population data on Native Americans and Japanese Americans in the Second World War is well documented. In the Soviet Union, micro data (including specific names and addresses) were used to target minority populations for forced migration and other human rights abuses. In Rwanda, the categories of Hutu and Tutsi introduced into the registration system by the Belgian colonial administration in the 1930s were used to plan and assist in the mass killings of 1994.

8 The quote is from the Commission for Racial Equality (2000), Why Keep Ethnic Records? Questions and answers for employers and employees (London, Commission for Racial Equality).

9 In the U.S., for example, “crime prediction tools” proved to discriminate against ethnic minorities. The police stopped and searched more ethnic minorities, and as a result this group also showed more convictions. If you use this data to train an algorithm, the algorithm will allocate a higher risk score to this group. Discrimination by algorithms is therefore a reflection of discrimination already taking place “on the ground”. https://www.cnsnews.com/news/article/barbara-hollingsworth/coalition-predictive-policing-supercharges-discrimination.

10 See L. Moerel, “Algorithms can reduce discrimination, but only with proper data,” Op-ed IAPP Privacy Perspectives, Nov. 16, 2018, https://iapp.org/news/a/algorithms-can-reduce-discrimination-but-only-with-proper-data/.

11 See also the guidance of the UK Information Commissioner (ICO) on AI and data protection, Guidance on AI and data protection | ICO. In earlier publications, I have argued that the specific regime for processing sensitive data under the GDPR is no longer meaningful: it is becoming increasingly unclear whether specific data elements are sensitive as such; rather, the focus should be on whether the use of such data is sensitive. The processing of racial and ethnic data to eliminate discrimination in the workplace is an example of non-sensitive use, provided that strict technical and organizational measures are implemented to ensure that the data are not used for other purposes. See https://iapp.org/news/a/gdpr-conundrums-processing-special-categories-of-data/ and https://iapp.org/news/a/11-drafting-flaws-for-the-ec-to-address-in-its-upcoming-gdpr-review/.

12 Article 10(2) sub. 5 of the draft AI Act allows for the collection of special categories of data for purposes of bias monitoring, provided that appropriate safeguards are in place, such as pseudonymization.

13 A notable exception is the UK, which, long before its exit from the EU, legislated for the collection of racial and ethnic data to meet the requirements of the substantial public interest condition for purposes of both “Equality of opportunity or treatment” and “Racial and ethnic diversity at senior levels,” and further provided criteria for processing ethnic data for statistical purposes; see Schedule 1 to the UK Data Protection Act 2018 (inherited from its predecessor, the Data Protection Act 1998), and the Information Commissioner’s Office Guidance on special category data, 2018. Schedule 1 also provides specific criteria to meet the requirements of the statistical purposes condition. See here. Another exception is the Netherlands, which allows for limited processing of racial and ethnic data (limited to country of birth and parents’ or grandparents’ countries of birth) for reason of substantial public interest.

14 For example, in the Netherlands, it is generally accepted that collecting DEI data can take place on a voluntary basis. See Dutch Social Economic Council, “Meten is Weten, zicht op effecten van diversiteits- en inclusiebeleid,” Charter Document, pp. 7–10, Dec. 2021. See also the report titled “Het moet wel werken,” p. 30, https://goldschmeding.foundation/wp-content/uploads/Rapport-Het-Moet-Wel-Werken-Vergelijkende-analyse-juli-2021.pdf.

15 The European Handbook on Equality Data (2016) provides a comprehensive overview of how equality data can be collected, https://op.europa.eu/en/publication-detail/-/publication/cd5d60a3-094d-11e7-8a35-01aa75ed71a1/language-en; EU High Level Group on Non-discrimination Equality and Diversity, “Guidelines on improving the collection and use of equality data,” 2021, https://commission.europa.eu/system/files/2022-02/guidance_note_on_the_collection_and_use_of_equality_data_based_on_racial_or_ethnic_origin_final.pdf. See also the reports listed in the previous endnote.

16 Bonnett and Carrington 2000, p. 488.

17 Article 8 Racial Equality Directive.

18 Complaint No. 15/2003, decision on the merits, Dec. 8, 2004, § 27.

19 See, e.g., ECRI General Policy Recommendation No. 4 on national surveys on the experience and perception of discrimination and racism from the point of view of potential victims, adopted on Mar. 6, 1998.

20 See the report prepared for the European Commission, “Analysis and comparative review of equality data collection practices in the European Union: Data collection in the field of ethnicity,” 2017, data_collection_in_the_field_of_ethnicity.pdf (europa.eu), https://op.europa.eu/en/publication-detail/-/publication/cd5d60a3-094d-11e7-8a35-01aa75ed71a1. For example, the European Handbook on Equality Data, initially dating from 2007, already stated that “Monitoring is perhaps the most effective measure an organisation can take to ensure it is in compliance with the equality laws.” The handbook was updated in 2016 and provides a comprehensive overview of how equality data can be collected, https://op.europa.eu/en/publication-detail/-/publication/cd5d60a3-094d-11e7-8a35-01aa75ed71a1/language-en.

21 EU Anti-racism Action Plan 2020‒2025, p. 15.

22 See a longer but similar statement in the 2021 Report of the European Commission evaluating the Racial Equality Directive, p. 14, https://ec.europa.eu/info/sites/default/files/report_on_the_application_of_the_racial_equality_directive_and_the_employment_equality_directive_en.pdf.

23 EU Anti-racism Action Plan 2020-2025, p. 21, with reference to Niall Crowley, Making Europe more Equal: A Legal Duty?, https://www.archive.equineteurope.org/IMG/pdf/positiveequality_duties-finalweb.pdf, which reports on the Member States that already provide for such a positive statutory duty. See p. 16 for an overview of explicit preventive duties requiring organizations to take unspecified measures to prevent discrimination, shifting the responsibility to act from those experiencing discrimination to employers. Such duties can stimulate the introduction of new organizational policies, procedures, and practices on these issues.

24 https://www.ohchr.org/sites/default/files/Documents/Issues/HRIndicators/GuidanceNoteonApproachtoData.pdf

25 For instance, target 17.18 of the 2030 Agenda requests that Sustainable Development Goal indicators be disaggregated by income, gender, age, race, ethnicity, migratory status, disability, geographic location, and other characteristics relevant in national contexts.

26 https://www.ohchr.org/sites/default/files/Documents/Issues/HRIndicators/GuidanceNoteonApproachtoData.pdf, p. 15, footnote 27.

27 EU High Level Group on Non-discrimination Equality and Diversity, “Guidelines on improving the collection and use of equality data,” 2018, p.11, https://commission.europa.eu/system/files/2021-09/en-guidelines-improving-collection-and-use-of-equality-data.pdf.

28 United Nations, End-of-mission statement on Romania, Professor Philip Alston, United Nations Human Rights Council Special Rapporteur on Extreme Poverty and Human Rights, http://www.ohchr.org/EN/NewsEvents/Pages/DisplayNews.aspx?NewsID=16737&LangID=E#sthash.42v5AefT.dpuf.

29 Quote is from Michael O’Flaherty, Director of the Fundamental Rights Agency (FRA), Equality data round table report, 30 December 2021, https://commission.europa.eu/system/files/2021-12/roundtable-equality-data_post-event-report.pdf.

30 See the policy statement on the website of the European Network Against Racism (ENAR), as well as the statement dated 28 September 2022 issued by Equal@Work Partners calling on the EU to implement conducive legal frameworks that will bring operational and legal certainty to organizations willing to implement equality data collection measures on the grounds of race, ethnicity, and other related categories, https://www.enar-eu.org/about/equality-data. See also here.

31 For an excellent article on all issues related to ethnic categorization, see Bonnett, A., & Carrington, B. (2000), “Fitting into categories or falling between them? Rethinking ethnic classification,” British Journal of Sociology of Education, 21(4), pp. 487‒500.

32 The Office of the United Nations High Commissioner for Human Rights (OHCHR), “Human Rights Indicators, A Guide to Measurement and Implementation,” 2012 (OHCHR Guide), Chapter III sub. A (Ethical, statistical and human rights considerations in indicator selection), p. 46,  https://www.ohchr.org/sites/default/files/Documents/Publications/Human_rights_indicators_en.pdf.

33 See Corporate Sustainability Reporting Directive (Nov. 10, 2022), at point (4) of Article 1, available here.

34 The new EU sustainability reporting requirements will apply to all large companies that fulfill two of the following three criteria: more than 250 employees, more than €40 million in net revenue, and more than €20 million on the balance sheet, whether listed or not. Lighter reporting standards will apply to small and medium enterprises listed on public markets.

35 See Article 1 (Amendments to Directive 2013/34/EU) sub. (8), which introduces a new Chapter 6a, Sustainability Reporting Standards, pursuant to which the European Commission will adopt a delegated act specifying the information that undertakings are to disclose about social and human factors, which include equal treatment and opportunity for all, including diversity provisions.

36 See European Financial Reporting Advisory Group, Draft European Sustainability Reporting Standard S1 Own Workforce, Nov. 2022, here.

37 The initial working draft of ESRS S1 published by EFRAG for public consultation also included a public reporting requirement for the total number of ‘employees belonging to vulnerable groups, where relevant and legally permissible to report’ (see disclosure requirement 11), Download (efrag.org). This requirement was deleted from the draft standards presented by EFRAG to the European Commission.

38 See S1-1 – Policies related to own workforce.

39 OHCHR Guide, p. 46, https://www.ohchr.org/sites/default/files/Documents/Publications/Human_rights_indicators_en.pdf.

40 General Recommendation 8, Membership of racial or ethnic groups based on self-identification, 1990, https://www.legal-tools.org/doc/2503f1/pdf/.

41 https://www.ohchr.org/sites/default/files/Documents/Issues/HRIndicators/GuidanceNoteonApproachtoData.pdf, pp. 12‒13. See also the EU High Level Group on Non-discrimination Equality and Diversity, “Guidelines on improving the collection and use of equality data,” 2021, p. 36, https://commission.europa.eu/system/files/2022-02/guidance_note_on_the_collection_and_use_of_equality_data_based_on_racial_or_ethnic_origin_final.pdf.

42 OHCHR Guide, p. 48.

43 Bonnett and Carrington 2000, p. 488.

44 The Dutch Young Academy, Antidiscrimination data collection in academia: an exploration of survey methodology practices outside of the Netherlands, https://www.dejongeakademie.nl/en/publications/2300840.aspx?t=Antidiscrimination-data-practices-worldwide-and-views-of-students-and-staff-of-colour.

45 See older examples in Bonnett and Carrington 2000.

46 European Commission, Analysis and comparative review of equality data collection practices in the European Union: Data collection in the field of ethnicity, p. 14, https://commission.europa.eu/system/files/2021-09/data_collection_in_the_field_of_ethnicity.pdf. Even in France, often seen as a case of absolute prohibition, ethnic data collection is possible under certain exceptions. The same applies to Italy: Article 8 of the Italian Workers’ Statute (Italian Law No. 300/1970) expressly forbids employers from collecting and using ethnic data to decide whether to hire a candidate or to decide any other aspect of an employment relationship already in place (such as promotions), but collecting such data for monitoring workplace discrimination and equal opportunity falls outside the prohibition (provided it is ensured that such data cannot be used for other purposes).

47 Three notable exceptions, Finland, Ireland and the UK (before leaving the EU), place a duty of equality data collection on public bodies as part of their equality planning, see https://ec.europa.eu/info/sites/default/files/data_collection_in_the_field_of_ethnicity.pdf, p. 16. 

48 Data revealing racial or ethnic origin qualifies as a “special category” of personal data under Article 9 of the GDPR. Data on nationality or place of birth of a person or their parents do not qualify as special categories of data and can as a rule be collected without consent of the surveyed respondent. However, if they are used to predict ethnic or racial origin, they become subject to the regime of Article 9 GDPR for processing special categories of data.

49 There is no debate that the summary of requirements here is a correct reflection of the requirements of the GDPR. A similar summary of the relevant provisions by the EU High Level Group on Non-discrimination, Equality, and Diversity can be found in its 2018 guidelines, p. 12, https://ec.europa.eu/info/sites/default/files/en-guidelines-improving-collection-and-use-of-equality-data.pdf, and in its 2021 Guidance Note on the collection and use of equality data based on racial or ethnic origin, pp. 29–31, https://commission.europa.eu/system/files/2022-02/guidance_note_on_the_collection_and_use_of_equality_data_based_on_racial_or_ethnic_origin_final.pdf. See further the guidelines issued by the Dutch Social Economic Council, “Meten is Weten, zicht op effecten van diversiteits- en inclusiebeleid,” Charter Document, pp. 7–10, Dec. 2021; and an earlier report by PwC commissioned by the Dutch government, “Onderzoek Vrijwillige vastlegging van culturele diversiteit,” https://www.rijksoverheid.nl/documenten/publicaties/2017/12/22/onderzoek-vrijwillige-vastlegging-van-culturele-diversiteit.

50 Note that Article 9(2)(b) of the GDPR provides a condition for collecting racial and ethnic data where it “is necessary for the purposes of carrying out the obligations and exercising specific rights of the controller or of the data subject in the field of employment … in so far as it is authorised by Union or Member State law….” There is currently no Union or Member State law that provides for an employer obligation to collect racial or ethnic data for monitoring purposes. However, EU legislators considered this to be a valid exception for Member States to implement in their national laws. Art. 88 of the GDPR states that Member States may, by law or collective agreements, provide for more specific rules to ensure the protection of the rights and freedoms of employees with respect to the processing of employees’ personal data in the employment context. In particular, these rules may be provided for the purposes of, inter alia, equality and diversity in the workplace.

51 See endnote 13.

52 For example, in the Netherlands, it is generally accepted that collecting DEI data can take place on a voluntary basis; see Dutch Social Economic Council, “Meten is Weten, zicht op effecten van diversiteits- en inclusiebeleid,” Charter Document, pp. 7–10, Dec. 2021. See also the report titled “Het moet wel werken,” p. 30, https://goldschmeding.foundation/wp-content/uploads/Rapport-Het-Moet-Wel-Werken-Vergelijkende-analyse-juli-2021.pdf.

53 Article 4(11) of the GDPR defines consent as “any freely given, specific, informed, and unambiguous indication of the data subject’s wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her.”

54 The European Data Protection Board has made explicit in a series of guidance documents that, for the majority of data processing at work, consent is not a suitable legal basis due to the nature of the relationship between employer and employee. See also Opinion 2/2017 on data processing at work (WP249), paragraph 3.3.1.6.2, at https://edpb.europa.eu/sites/default/files/files/file1/edpb_guidelines_202005_consent_en.pdf.

55 EDPB Guidelines 05/2020 on consent under Regulation 2016/679, adopted on May 4, 2020 (EDPB Guidelines on consent), p. 21,  https://edpb.europa.eu/sites/default/files/files/file1/edpb_guidelines_202005_consent_en.pdf.

56 WP249, paragraph 6.2.

57 EDPB Guidelines on consent, pp. 13–15.

58 Article 7(3) GDPR.

59 EDPB Guidelines on consent, p. 5.

60 Article 25 GDPR.

61 See Article 29 Working Party Opinion 05/2014 on Anonymization Techniques, adopted on 10 April 2014.

62 Recital 26 of the GDPR states that the principles of data protection should not apply to anonymous information, namely information which does not relate to an identified or identifiable natural person, or personal data rendered anonymous in such a manner that the data subject is not or no longer identifiable. Recital 26 further states that the GDPR does not concern the processing of such anonymous information, including for statistical or research purposes.

63 For an informative article on the practicalities of implementing data-driven diversity proposals, see Data-Driven Diversity (hbr.org). The distinction between outcome-based and process-based metrics is based on this article.

64 https://www.cnil.fr/sites/default/files/atoms/files/ddd_gui_20120501_egalite_chances_0.pdf.

65 https://www.cnil.fr/sites/default/files/atoms/files/ddd_gui_20120501_egalite_chances_0.pdf, p. 70.

66 This article from the workers’ union CGT refers to a study conducted by the Défenseur des droits (a public body), which notably co-signed the CNIL’s 2012 paper referred to in endnotes 64 and 65 above: https://www.cgt.fr/actualites/france/interprofessionnel/discriminations/le-defenseur-des-droits-denonce-un-racisme. See also recent studies conducted by the public institutions INED and INSEE: https://teo.site.ined.fr/fichier/s_rubrique/29262/teo2_questionnaire.fr.pdf.

FPF at IAPP’s Europe Data Protection Congress 2022: Global State of Play, Automated Decision-Making, and US Privacy Developments

Authored by Christina Michelakaki, FPF Intern for Global Policy

On November 16 and 17, 2022, the IAPP hosted the Europe Data Protection Congress 2022 – Europe’s largest annual gathering of data protection experts. During the Congress, members of the Future of Privacy Forum (FPF) team moderated and spoke at three different panels. Additionally, on November 14, FPF hosted the first Women@Privacy awards ceremony at its Brussels office, and on November 15, FPF co-hosted the sixth edition of its annual Brussels Privacy Symposium with the Vrije Universiteit Brussel (VUB)’s Brussels Privacy Hub on the issue of “Vulnerable People, Marginalization, and Data Protection” (event report forthcoming in 2023).

In the first panel for IAPP’s Europe Data Protection Congress, Global Privacy State of Play, Gabriela Zanfir-Fortuna (VP for Global Privacy, Future of Privacy Forum) moderated a conversation on key global trends in data protection and privacy regulation in jurisdictions from Latin America, Asia, and Africa. Linda Bonyo (CEO, Lawyers Hub Africa), Annabel Lee (Director, Digital Policy (APJ) and ASEAN Affairs, Amazon Web Services), and Rafael Zanatta (Director, Data Privacy Brasil Research Association) participated. 

In the second panel, Automated Decision-making and Profiling: Lessons from Court and DPA Decisions, Sebastião Barros Vale (EU Privacy Counsel, Future of Privacy Forum) led a discussion on FPF’s ADM case-law report and impactful cases and relevant concepts for automated decision-making regulation under the GDPR. Ruth Boardman (Partner, Co-head, International Data Protection Practice, Bird & Bird), Simon Hania (DPO, Uber), and Gintare Pazereckaite (Legal Officer, EDPB) participated.

Finally, in the third panel, Perspectives on the Latest US Privacy Developments, Keir Lamont (Senior Counsel, Future of Privacy Forum) participated in a conversation focused on data protection developments at the federal and state level in the United States. Cobun Zweifel-Keegan (Managing Director, D.C., IAPP) moderated the panel, and Maneesha Mithal (Partner, Privacy and Cybersecurity, Wilson Sonsini Goodrich & Rosati) and Dominique Shelton Leipzig (Partner, Cybersecurity & Data Privacy; Leader, Global Data Innovation & AdTech, Mayer Brown) also participated.

Below is a summary of the discussions in each of the three panels:

1. Global trends and legislative initiatives around the world

In the first panel, Global Privacy State of Play, Gabriela Zanfir-Fortuna stressed that although EU and US developments in privacy and data protection are in the spotlight, the explosion of regulatory action in other regions of the world is very interesting and deserves more attention.

Linda Bonyo touched upon the current movement in Africa, where countries are adopting their own data protection laws, primarily inspired by the European model of data protection regulation, since they regard the GDPR as a global standard and lack the resources to draft policies from scratch. Bonyo also added that the lack of resources and limited expertise are the main reasons why African countries struggle to establish independent Data Protection Authorities (DPAs). She then stressed that the Covid-19 pandemic revived discussions about a continental legal framework to address data flows. Regarding enforcement, she noted that the approach in Africa is more “preventative” than “punitive.” Bonyo also underlined that it is common for big tech companies to operate from outside the continent and only have a small subsidiary in the African region, rendering local and regional regulatory action less impactful than in other regions.

Annabel Lee offered her view on the very dynamic Asia-Pacific region, noting that the latest trends, especially post-GDPR, include not only the introduction of new GDPR-like laws but also the revision of existing ones. Lee noted, however, that the GDPR is a very complex piece of legislation to “copy,” especially if a country is building its first data protection regime. She then focused on specific jurisdictions, noting that South Korea has overhauled its originally fragmented framework with a more comprehensive one and that Australia will implement a broad extraterritorial element in its revised law. Lee then stated that when it comes to implementation and interpretation, data protection regimes in the region differ significantly, and countries try to promote harmonization through mutual recognition. With regard to enforcement, she stressed that occasional audits are common and that in certain countries, such as Japan, there is a very strong culture of compliance. She also added that education can play a key role in working towards harmonized rules and enforcement. Lee offered Singapore as an example, where the Personal Data Protection Commission gives companies explanations not only on why they are in breach but also on why they are not in breach.

Rafael Zanatta explained that after years of strenuous discussions, Brazil has an approved data protection law (LGPD) that has already been in place for a couple of years. The new DPA created by the LGPD will likely ramp up its enforcement duties next year and has, so far, focused on building experimental techniques (to help incentivize associations and private actors to cooperate) and publishing guidelines, namely non-binding rules that will guide the future interpretation of cases. Zanatta stressed that Brazil has been experiencing the formalization of autonomous data protection rights, with supreme court rulings stating that data protection is a fundamental right distinct from privacy. He underscored that it will be interesting to see how the private sector applies data protection rights given their horizontal effect and the development of concepts like positive obligations and the collective dimension of rights. He explained that the extraterritorial applicability of Brazil’s law is very similar to the GDPR, since companies do not need to operate in Brazil for the law to apply. He also touched upon the influence of Mercosur, a South American trade bloc, in discussions around data protection, as well as the collective rights of the indigenous people of Bolivia in light of the processing of their biometric data. With regard to enforcement, he explained that in Brazil it happens primarily through the courts, due to Brazil’s unique system in which federal prosecutors and public defenders can file class actions.


2. Looking beyond case law on automated decision-making

In the second panel, Automated Decision-making and Profiling: Lessons from Court and DPA Decisions, Sebastião Barros Vale offered an overview of FPF’s ADM Report, noting that it contains analyses of more than 70 DPA decisions and court rulings concerning the application of Article 22 and other related GDPR provisions. He also briefly summarized the Report’s main conclusions. One of the main points he highlighted is that the GDPR covers automated decision-making (ADM) comprehensively beyond Article 22, including through the application of overarching principles like fairness and transparency, rules on lawful grounds for processing, and obligations to carry out Data Protection Impact Assessments (DPIAs).

Ruth Boardman underlined that the FPF Report reveals the areas of the law that are still “foggy” regarding ADM. Boardman also offered her view on the Portuguese DPA decision concerning a university using proctoring software to monitor students’ behavior during exams and detect fraudulent acts. The Portuguese DPA ruled that the Article 22 prohibition applied, given that the human involvement of professors in the decisions to investigate instances of fraud and invalidate exams was not meaningful. Boardman further explained that this case, along with the Italian DPA’s Foodinho case, shows that the human in the loop must have meaningful involvement in the process of making a decision for Article 22 GDPR to be inapplicable. She added that internal guidelines and training provided by the controller may not be definitive factors but can serve as strong indicators of meaningful human involvement. Regarding the concept of “legal or similarly significant effects,” another condition for the application of Article 22 GDPR, Boardman noted the link between such effects and contract law. For example, under national laws transposing the e-Commerce Directive in which adding a product to a virtual basket counts as an offer to the merchant and not as a binding contract, no legal effects are triggered. She also added that meaningful information about the logic behind ADM should include the consequences that data subjects can suffer, and referred to an enforcement notice from the UK’s Information Commissioner’s Office concerning the creation of profiles for direct marketing purposes.

Simon Hania argued that the FPF Report showed the robustness of the EDPB guidelines on ADM, and that ADM triggers GDPR provisions relevant to fairness and transparency. With regard to the “human in the loop” concept, Hania claimed that it is important to involve multiple humans and ensure that they are properly trained to avoid biased decisions. He then elaborated on a case concerning Uber’s algorithms that match drivers with clients, where Uber drivers requested access to data to assess whether the matching process was fair. For the Amsterdam District Court, the drivers did not demonstrate how the matching process could have legal or similarly significant effects on them, which meant that the drivers did not have the enhanced access rights that would only apply if ADM covered by Article 22 GDPR was at stake. However, when ruling on an algorithm used by another ride-hailing company (Ola) to calculate fare deductions based on drivers’ performance, the same Court found that the ADM at issue did have significant effects on drivers. For Hania, a closer inspection of the two cases reveals that both ADM schemes affect drivers’ ability to earn or lose remuneration, which highlights the importance of financial impacts when assessing the effects of ADM under Article 22. He also touched on a decision from the Austrian DPA concerning a company that scored individuals on the likelihood that they belonged to certain demographic groups, in which the DPA required the company to inform individuals about how it calculated their individual scores. For Hania, the case shows that controllers need to explain the reasons behind their automated decisions, regardless of whether they are covered by Article 22 GDPR, to comply with the fairness and transparency principles of Article 5 GDPR.

Gintare Pazereckaite noted that the FPF Report is particularly helpful in understanding inconsistencies in how DPAs apply Article 22 GDPR. She then stressed that the interpretation of “solely automated processing” should be done in light of protecting and safeguarding data subjects’ fundamental rights. Pazereckaite also referred to the criteria set out in the EDPB guidelines that clarify the concept of “legal and similarly significant effects.” She added that data protection rules such as accountability and data protection by design play an important role in allowing data subjects to understand how ADM works and what consequences it may bring about. Lastly, Pazereckaite commented on Article 5 of the proposed AI Act, which contains a list of prohibited AI practices, and its importance when an algorithm does not trigger Article 22 GDPR.


3. The ADPPA and state laws reshaping the US data protection regime

In the last panel, Perspectives on the Latest US Privacy Developments, Keir Lamont offered an overview of recent US Congressional efforts to enact the American Data Privacy and Protection Act (ADPPA) and of outstanding areas of disagreement. In his view, the bill would introduce stronger rights and protections than those set forth in existing state-level laws, including a broad scope, strong data minimization provisions, limitations on advertising practices, enhanced privacy-by-design requirements, algorithmic impact assessments, and a private right of action. In contrast, existing state laws typically adhere to the outdated opt-in/opt-out paradigm for establishing individual privacy rights.

Maneesha Mithal explained that in the absence of comprehensive federal privacy legislation, the Federal Trade Commission (FTC) has largely taken on the role of a DPA by virtue of having jurisdiction over a broad range of sectors in the economy and acting as both an enforcement and a rulemaking agency. Mithal explained that the FTC enforces four existing privacy laws in the US and can also take action against both unfair and deceptive trade practices. For example, the FTC can take action against deceptive statements (irrespective of whether they appear in a privacy policy or in user interfaces), material omissions (for example, the FTC concluded that a company did not inform its clients that it was collecting second-by-second television data and sharing it further), and unfair practices in the data security area. Mithal pointed out that since the FTC does not have the authority to seek civil penalties for first-time violations, it is trying to introduce additional deterrents by naming individuals (for example, in the case of an alcohol provider, the FTC named the CEO for failing to prioritize security) and by using its power to obtain injunctive relief. For example, in a case where a company was unlawfully using facial recognition systems, the FTC ordered the company to delete any models or algorithms developed using the unlawfully collected data, thus applying a “fruit of the poisonous tree” theory. Mithal also noted that although the FTC has historically not been active as a rulemaking authority due to procedural issues, along with the lack of resources and time considerations, it is initiating a major rulemaking on “Commercial Surveillance and Lax Data Security Practices.”

Finally, Dominique Shelton Leipzig offered remarks on state-level legislation, focusing on the California Consumer Privacy Act (CCPA) as amended by the California Privacy Rights Act (CPRA) and adding that Colorado, Connecticut, Utah, and Virginia have similar laws. She elaborated on the CPRA’s contractual language, comparing California’s categorization of “Businesses,” “Contractors,” “Third Parties,” and “Service Providers” to the GDPR’s distinction between controllers and processors. Shelton Leipzig also explained that the CPRA introduced a highly disruptive model for the ad tech industry, since consumers can opt out of both the sale and the sharing of data. The CPRA also created a new independent rulemaking and enforcement agency, the first in the US focused solely on data protection and privacy. Finally, she addressed the recently enacted California Age-Appropriate Design Code Act, which focuses on the design of internet tools, and stressed that companies are struggling to implement it.


Further reading: