On November 15, 2022, the Future of Privacy Forum (FPF) and the Brussels Privacy Hub (BPH) of Vrije Universiteit Brussel (VUB) jointly hosted the sixth edition of the Brussels Privacy Symposium on the topic of “Vulnerable People, Marginalization, and Data Protection.” Participants explored the extent to which data protection and privacy laws, including the EU’s General Data Protection Regulation (GDPR) and Brazil’s General Data Protection Law (LGPD), safeguard and empower vulnerable and marginalized people. Participants also debated how to balance the right to privacy with the need to process sensitive personal information to uncover and prevent bias and marginalization. Stakeholders discussed whether prohibiting the processing of personal data related to vulnerable people serves as a protection mechanism or, on the contrary, potentially deepens bias.
The event also marked the launch of VULNERA, the International Observatory on Vulnerable People in Data Protection, coordinated by the Brussels Privacy Hub and the Future of Privacy Forum. The observatory aims to promote a mature debate on the multifaceted connotations surrounding the notions of human “vulnerability” and “marginalization” existing in the data protection and privacy domains.
The Symposium opened with short introductory remarks by Jules Polonetsky, FPF’s CEO, and Gianclaudio Malgieri, Associate Professor at Leiden eLaw and BPH’s Co-Director. Polonetsky stressed the importance of understanding that privacy increasingly intersects with other rights and issues. Malgieri offered an overview of VULNERA and encouraged participants to reflect on important questions, such as whether data protection law could serve as a means to address human vulnerabilities and marginalization.
Throughout the day, there were two keynote addresses, by Scott Skinner-Thompson, Associate Professor at the University of Colorado Boulder, and Hera Hussain, Founder and CEO of Chayn, a nonprofit providing online resources for survivors of gender-based violence. These were followed by three panel discussions, a brainstorming exercise with the Symposium’s attendees in four breakout sessions, and closing remarks delivered by FPF’s Vice President for Global Privacy, Gabriela Zanfir-Fortuna, and the European Data Protection Supervisor (EDPS), Wojciech Wiewiórowski.
This Report outlines some of the most noteworthy points raised by speakers during the day-long Symposium. It is divided into seven sections: this general introduction; the ensuing section, which covers the remarks made during the keynote speeches; the next three, which summarize the discussions held during the panels; the sixth, which touches on the exchanges among audience members during the breakout sessions; and the seventh and final section, which provides highlights of the EDPS’s closing remarks.
Iowa Senate Advances Comparatively Weak Consumer Privacy Bill
By Keir Lamont & Mercedes Subhani
Update: On March 28, Governor Kim Reynolds signed SF 262 into law, making Iowa the 6th state to enact a baseline consumer privacy framework.
Lawmakers in Iowa are considering the adoption of a new consumer privacy framework that would fall far short of comparable state privacy laws in terms of consumer rights, business obligations, and enforcement. On Monday, March 6th, the Iowa Senate passed SF 262, an Act relating to consumer data protection, by a 47-0 vote. Companion legislation, HF 346, is currently eligible for a vote in the Iowa House.
Iowa is one of several states currently giving serious consideration to the enactment of privacy legislation, demonstrating a commendable focus on the protection of consumer data. However, while modeled after frameworks adopted in other states, the Iowa bill nevertheless diverges significantly from the most protective state privacy laws.
In order to help stakeholders and policymakers assess Iowa’s privacy proposal, the Future of Privacy Forum is releasing a chart comparing SF 262 to the Connecticut Data Protection Act, which currently stands as one of the strongest and most interoperable state approaches for establishing privacy rights and protections.
At a high level, Iowa’s privacy proposal contains the following protections and notable omissions as compared to bills with similar models:
Instead of requiring that controllers obtain affirmative, opt-in consent for the collection and processing of consumers’ sensitive personal data, Iowa businesses would only need to provide notice and an opportunity to opt-out.
The Iowa bill would establish consumer rights to access, delete, and in certain cases, port their personal information, but does not grant a right to correct inaccurate personal information or to exercise these rights through authorized agents.
The Iowa bill creates a consumer right to opt-out of the “sale” of personal data (narrowly defined as exchanges for “monetary consideration”). It does not create an opt-out right for significant profiling decisions or clearly establish a right to opt-out of targeted advertising.
The Iowa bill would require businesses to disclose their data processing practices and to protect the security of consumer data, but it would not require businesses to conduct risk assessments or adhere to data minimization and use limitation standards.
The Iowa bill would provide for exclusive enforcement authority by the State Attorney General; businesses would have a 90-day right to “cure” any and all alleged violations of the Act.
FPF Files Comments with the National Telecommunications and Information Administration (NTIA) on Privacy, Equity, and Civil Rights
On March 6, the Future of Privacy Forum filed comments with the National Telecommunications and Information Administration (NTIA) in response to their request for comment on privacy, equity, and civil rights.
NTIA’s “Listening Sessions on Privacy, Equity, and Civil Rights” drew attention to the well-documented and ongoing history of data-driven discrimination in the digital economy. When the digital economy functions properly, all individuals, regardless of race, gender, or other historically discriminated-against attributes, are able to equally access the benefits of technology, including better access to education and employment opportunities, while trusting that their personal data is protected from misuse. Individuals and communities can benefit from digital tools that provide important services regarding education, employment, housing, credit, insurance, and government benefits. In addition, society as a whole benefits from diverse individuals contributing different perspectives, ideas, and objectives without cause to fear discrimination or harm.
But when the digital economy reinforces human bias rather than combats discrimination, individuals suffer concrete harms, including artificially limited educational opportunities, reduced access to jobs and financial services, and more. At the same time, misuse of personal information can contribute to biased outcomes, undermine trust in digital services, or both.
FPF’s comments urge NTIA to include these three recommendations in its forthcoming report to advance protections for data privacy, equity, and civil rights in the US:
1. Support Congressional efforts to pass a comprehensive federal privacy law that includes limitations on data collection, heightened safeguards for sensitive data use, support for socially beneficial research, and protections for civil rights, including protections regarding automated decision-making that has legal or similarly significant effects.
2. Support the administration in promoting a common approach to privacy, AI, and civil rights among executive agencies, both in the agencies’ guidance to private entities and internally in their processing of personal information and technology procurement processes.
3. Continue proactive engagement and meaningful consultation with marginalized groups in conversations regarding privacy and automated decision-making across the administration and federal agencies.
Race Equity, Surveillance in the Post-Roe Era, and Data Protection Frameworks in the Global South are Major Topics During This Year’s Privacy Papers for Policymakers Event
Author: Randy Cantz, U.S. Policy Intern, Ethics and Data in Research and former Communications Intern at FPF
The Future of Privacy Forum (FPF) hosted a Capitol Hill event honoring 2022’s must-read privacy scholarship at the 13th annual Privacy Papers for Policymakers Awards ceremony. This year’s event featured an opening keynote by FTC Commissioner Alvaro Bedoya as well as facilitated discussions with the winning authors: Anita Allen, Anupam Chander, Eunice Park, Pawel Popiel, Laura Schwartz-Henderson, Rebecca Kelly Slaughter, and Kate Weisburd. Experts from academia, industry, and government, including Olivier Sylvain, Neil Chilson, Amanda Newman, Gabriela Zanfir-Fortuna, Maneesha Mithal, and Chaz Arnett, moderated these policy discussions.
Stephanie Wong, FPF’s Elise Berkower Fellow, provided welcome remarks and emceed the night, noting this was the first time the award ceremony had been held in person after two years of virtual events due to the pandemic. Ms. Wong noted she was excited to present leading privacy research relevant to Congress, federal agencies, and international data protection authorities (DPAs).
FPF’s Stephanie Wong
In his keynote, Commissioner Bedoya celebrated privacy by highlighting some of the finest minds working on relevant technology policy issues. Commissioner Bedoya noted that events like Privacy Papers for Policymakers bring together the “brightest minds in technology speaking with the brightest minds in Congress” to determine which issues deserve special sectoral treatment. He concluded his remarks by highlighting the aspects of the winning authors’ work that he finds most salient: Professor Park’s work forces us to reckon with the reality that our health information cannot be confined to a neat box that lives at our doctor’s office; Professor Weisburd’s work rebuts the fallacy that surveillance is passive rather than a physical, lived experience, and shows that surveillance is a form of punishment; Professor Allen is a luminary who underscores the importance of seeing surveillance not merely as data collection but through its differential use; Professor Popiel and Laura Schwartz-Henderson grapple with the challenges faced by DPA regulators; Professors Anupam Chander and Paul Schwartz reveal a divergence between international trade and privacy norms; and Commissioner Slaughter’s paper describes the harms of algorithmic bias and how to protect consumers from algorithmic harms.
FTC Commissioner Alvaro Bedoya
Following Commissioner Bedoya’s keynote address, the event shifted to discussions between the winning authors and expert discussants. The 2022 PPPM Digest includes summaries of the papers and more information about the authors.
Professor Anita Allen (University of Pennsylvania Carey Law School) kicked off the first discussion of the night with Olivier Sylvain (Senior Advisor at the Federal Trade Commission) by talking about her paper, Dismantling the “Black Opticon”: Privacy, Race Equity, and Online Data-Protection Reform. Professor Allen’s paper analyzes how African-Americans endure discriminatory oversurveillance, discriminatory exclusion, and discriminatory predation, which she calls the “Black Opticon,” and also highlights the role of privacy’s unequal distribution. During her conversation, Professor Allen discussed discriminatory surveillance and oversurveillance of the poor, why race-neutral policies aren’t enough to address discrimination, how commercial practices and reforms are unequally distributed along racial lines, and how the right to privacy emerged in the U.S.’s founding era, when the concept of privacy was used to protect slave owners and, more recently, those who commit domestic abuse under the same guise of privacy.
Professor Anita Allen and Olivier Sylvain
Next, Professor Anupam Chander (Georgetown University Law Center) discussed his paper, Privacy and/or Trade, with Neil Chilson (Stand Together). Professor Chander’s paper, co-written with Professor Paul Schwartz of the UC Berkeley School of Law, explores the conflict between international privacy and trade. At the outset, Mr. Chilson outlined the paper’s four contributions to privacy law and policy: an analysis of the long history of trade and privacy; an account of the rapid growth of international regulation; the argument that privacy and trade are not necessarily exclusive and can both affect humanitarian issues when we think about rights; and the observation that the paper does not merely describe the problem but offers a potential path to fixing it through a regulatory regime. The conversation centered on privacy and transborder data flows, the concept that “privacy is not bananas,” and the practical effect of kicking the proverbial can down the road to other countries, amongst other salient topics.
Professor Anupam Chander and Neil Chilson
Professor Eunice Park (Western State College of Law) discussed her paper, Reproductive Health Care Data Free or For Sale: Post-Roe Surveillance and the “Three Corners” of Privacy Legislation Needed, with Amanda Newman (Office of Congresswoman Sara Jacobs). Professor Park’s paper recommends placing overdue limits on state surveillance to protect the privacy of personal health data. Ms. Newman acknowledged that state criminalization has fallen hardest on marginalized communities post-Roe and that the availability of reproductive health data on the open market has made the issue of privacy more concrete and topical. Professor Park explained why the post-Roe era is harsher than the 1960s pre-Roe era, the disconnect between legal protection and digital reality in the context of the First Amendment, and the three corners of protections she recommends: an inclusive definition of reproductive healthcare data, a substantive prohibition on certain practices, and procedural protections that would require a warrant for any type of information that could criminalize an individual.
Professor Eunice Park and Amanda Newman
During the next panel, Dr. Pawel Popiel (University of Pennsylvania Annenberg School for Communication) and Laura Schwartz-Henderson (Internews) discussed their paper, Understanding the Challenges Data Protection Regulators Face: A Global Struggle Towards Implementation, Independence, & Enforcement, with FPF’s Dr. Gabriela Zanfir-Fortuna. Their paper examines the challenges facing DPAs in Africa and Latin America and identifies two prominent factors as key obstacles to effective data protection oversight: resource constraints and threats to independence. The co-authors discussed why they pursued this research and why it is valuable, how they chose the data they worked with and what the key challenges were, how a lack of independence can create cross-cutting challenges across organizations, and why independence is so important. They also stressed the importance of reconciling local values and needs with baseline data protection drawn from the European model.
Dr. Pawel Popiel, Laura Schwartz-Henderson, and Dr. Gabriela Zanfir-Fortuna
Next, FTC Commissioner Rebecca Kelly Slaughter discussed her paper, Algorithms and Economic Justice: A Taxonomy of Harms and a Path Forward for the Federal Trade Commission, with Maneesha Mithal (Wilson Sonsini Goodrich & Rosati). Commissioner Slaughter’s paper provides a taxonomy of algorithmic harms that portend injustice, describes her view of how the Commission’s existing tools can and should be aggressively applied to thwart injustice, and explores how new legislation or an FTC rulemaking could help structurally address the harms generated by algorithmic decision-making. Commissioner Slaughter reiterated it is important that “we’re not awed by the concept of algorithmic decision making, into thinking it’s an amazing technology.” Commissioner Slaughter discussed consumer protection and “AI snake oil,” the privacy harms of generative AI, potential harms to kids, and the importance of transparency and explainability to consumers.
FTC Commissioner Rebecca Kelly Slaughter and Maneesha Mithal
In the evening’s final presentation, Professor Kate Weisburd (George Washington University Law School) was joined by Professor Chaz Arnett (University of Maryland Carey Law School) to discuss her paper, Punitive Surveillance. Professor Weisburd’s paper analyzes “punitive surveillance” practices, which allow government officials, law enforcement, and companies to track, record, share, and analyze the location, biometric data, and other metadata of thousands of people on probation and parole. Professor Weisburd noted how there has been a “shift to digitizing and datifying the carceral state, entrenching and deepening carceral surveillance measures.” Professor Weisburd then discussed the concept of punitive surveillance, the disconnect between how e-monitors are presented in popular culture and how they play out in practice, and what policymakers could do to connect more with this subject matter.
Professor Kate Weisburd and Professor Chaz Arnett
FPF CEO Jules Polonetsky closed the event by thanking the audience, including a special shout-out to Professor Paul Ohm for helping inspire this conference thirteen years ago and to FPF’s Stephanie Wong for making the event happen.
FPF CEO Jules Polonetsky
Thank you to Commissioner Alvaro Bedoya and Honorary Co-Hosts Congresswoman Diana DeGette and Senator Ed Markey, Co-Chairs of the Congressional Privacy Caucus. We would also like to thank our winning authors, expert discussants, those who submitted papers, and event attendees for their thought-provoking work and support. We hope to see you next year!
ITPI Law and Technology Policy Paper Competition In Collaboration with the Israel Ministry of Justice
Last month (January 11, 2023), the Israel Tech Policy Institute held the final event of the ‘Law and Technology Policy Paper Competition,’ attended by Adv. Sharon Afek, Deputy Attorney General (Director of the Consulting and Legislative Division), and Saar Abramovich, a patent examiner from the Patent Office. Students from the four law faculties of universities and academic colleges in Israel that reached the final stage of the competition also participated: Sapir Academic College, the University of Haifa, Bar-Ilan University, and the Ono Academic College.
The winning students from Sapir Academic College were awarded a prize worth NIS 5,000 by the Ministry of Justice. The winning paper was also translated into English – courtesy of the Israel Technology Policy Institute.
The competition’s research question was: should artificial intelligence be recognized as an “inventor” for the purposes of patent law, or as a “patent owner”? The question was examined against the background of the legal challenges and complexities that arise from the need to regulate the encounter between smart technologies and existing law.
The competition judges included:
Adv. Ayelet Feldman, Consulting and Legislation (Economic Department) Israel Ministry of Justice
Adv. Cedric Sabbah, Director, International Cybersecurity and IT Law, Office of the Deputy Attorney General (International Law), Israel Ministry of Justice
Adv. Asa Kling, Partner at Naschitz, Brandes, Amir & Co., Chair of IP Practice
Adv. Amit Ashkenazi, Law and Technology Expert, Former Head of the Legal Department at Israel National Cyber Directorate
Dr. Asaf Weiner, Managing Director of Research, Policy, and Government Affairs at ISOC-IL | Professor of Internet Law & Policy at TAU and BGU
The competition generated great interest among scholars, academics, and government officials, and it will continue next year.
To read the Hebrew-language policy papers from the four law faculties that reached the final stage of the competition, click here.
The English version of the winning paper by Sapir Academic College, written by Uri Avigal, Galit Cohen, Sharon Pustilnik, and Matt Weiser under the supervision of Dr. Sharon Bar-Ziv, is available here.
New Report Highlights LGBTQ+ Student Views on School Technology and Privacy
The Future of Privacy Forum and LGBT Tech outline recommendations for schools and districts to balance inclusion and student safety in technology use.
Today, the Future of Privacy Forum (FPF), a global non-profit focused on privacy and data protection, and LGBT Tech, a national, nonpartisan group of LGBT organizations, academics, and high technology companies, released a new report, Student Voices: LGBTQ+ Experiences in the Connected Classroom. The report builds on FPF and LGBT Tech research, including interviews with recent high school graduates who identify as LGBTQ+, to gather firsthand accounts of how student monitoring impacted their feelings of privacy and safety at school.
Student Voices: LGBTQ+ Experiences in the Connected Classroom
“LGBTQ+ students are at greater risk for their online activity being flagged or blocked by some content filtering and monitoring systems, and hearing first-hand about their experiences was incredibly important and informative,” said Jamie Gorosh, FPF Youth & Education Privacy policy counsel and an author of the report. “We hope that the recommendations outlined in this report can help schools and districts develop policies that better reflect the views of LGBTQ+ students and ultimately create a safe learning environment for all students.”
LGBTQ+ Experiences in the Connected Classroom analyzes the concerns that arose in conversations with LGBTQ+ students regarding school technologies and privacy, including concerns about identity, parent/caregiver access to data, health information, and student safety. The report then outlines a series of recommendations and actionable steps for schools and districts to address those concerns and others, including to:
Improve transparency and communication with the school community about what technology is being used and how the district is using it, including by sharing with students and parents a clear monitoring policy, the name of the monitoring vendor, a list of websites that will be blocked (and how a student can contest a decision to block a site), and the school officials who will have access to content flagged by monitoring technology.
Carefully consider how and when monitoring technology is deployed, and develop safeguards for how information is used and shared, including by evaluating the efficacy of the product before purchasing, considering what content the monitoring system reviews and who has access to notifications, and understanding the vendor’s policies and procedures for sharing information with law enforcement.
Consider legislative or policy reforms establishing best practices for protecting LGBTQ+ students and student data privacy, such as a student safety exception amendment to FERPA or, at the state level, legislation that emulates a 2014 California law requiring students and parents to be notified and given the opportunity to publicly comment before a district can gather information from students’ social media accounts.
Invest in more robust mental health services and support for LGBTQ+ students, as well as training for administrators to ensure they are able to meet the unique needs of these students. “If schools develop programming specifically aimed at providing resources and mental health care to LGBTQ+ students and create an overall environment of acceptance and inclusivity, that may make all the difference,” the report concludes.
“Our hope would be that every young LGBTQ+ individual would have a supportive family home when it comes to their sexual orientation or gender identity, but unfortunately, over 60% of youth in a recent Trevor Project study did not feel their home was an LGBTQ+ affirming space,” said Christopher Wood, Executive Director, LGBT Tech. “Their public school, its faculty, and technology is most likely the closest line of support that young LGBTQ+ person has access to. With a wave of anti-LGBTQ legislation being passed across the US, it is more important than ever that school monitoring technology is not further inflicting harm on young LGBTQ+ individuals seeking information and support. We hope this report can help schools and districts identify areas for improvement and develop policies that create a safe, supportive learning environment for all students, especially those who might identify as LGBTQ+.”
Knowledge is Power: The Future of Privacy Forum launches FPF Training Program
“An investment in knowledge always pays the best interest” –Ben Franklin
Let’s make 2023 the year we invest in ourselves, our teams, and the knowledge needed to best navigate this dynamic world of privacy and data protection. I am fortunate to know many of you who will read this blog post, but for those who I don’t know, please allow me to introduce myself. My name is Alyssa Rosinski, FPF Senior Director of Business Development, and I am leading the relaunch of the FPF Training Program.
Perhaps when you think of FPF, the first thing that comes to mind is the deep in-house subject matter expertise on topics such as Artificial Intelligence, Ad Tech, Legislation (both US and Global), Biometrics and De-Identification, to name a few. Or you think of the brilliant infographics used to explain complex privacy and technology issues in a digestible format. Maybe it’s the deep-dive reports that you wait for to help you understand the intricacies of the ever-evolving privacy and data protection landscape.
If FPF training didn’t make it to your list, I’m here to tell you it should. The FPF Training Program combines all of these strengths and delivers them to you and your teams in an interactive, educational training session. The same subject matter experts you rely on to help you understand the relevant laws, policies, underlying technologies, and business practices develop and teach our training.
Are you looking to understand the evolution of online advertising and how data is used for targeted advertising purposes? Our Fundamentals of Online Advertising was designed with you in mind. Are you curious about the ethical ramifications of how Biometrics are used? FPF experts will explore how Biometrics work and weigh in on the social and ethical considerations of this technology during our Biometrics – Head to Toe training session. Perhaps your team needs to brush up on the technical and legal definitions of De-Identification and the risks associated with re-identification. Look no further than our De-Identification training.
The FPF Training Program is intentionally designed to provide you and your teams with training that lasts no more than two hours and can be done virtually or in-person, in a public training setting or as a closed-door in-house training, all culminating with a Certificate of Completion from Credly, qualifying for CPE credits. During our public training sessions, there’s an open dialogue allowing you to discuss relevant issues with your peers and the trainer as you work your way through the session.
In-house training sessions provide a closed-door forum for collaborative discussions between internal stakeholders and the trainer, making the training session highly relevant to those who attend. Our training is for those of you who play a role in developing policies for your organization, are working with clients on complex privacy issues, or for those of you who are interested in emerging technologies and their implications for privacy and data protection.
Joining one of our training sessions will pay dividends. Check out our public training calendar to reserve a spot for one of our upcoming training sessions, or email me at [email protected] if you’re looking to bring your team together for private, in-house training. I look forward to working with you and your teams this year and participating in the investment you make in professional development.
Evolving Enforcement Priorities in Times of Debate – Overview of Regulatory Strategies of European Data Protection Authorities for 2023 and Beyond
At a time when the effectiveness of the EU General Data Protection Regulation (GDPR) enforcement model is being challenged by the European Parliament, Data Protection Authorities (DPAs), civil society, and policymakers, the European Data Protection Board (EDPB) has launched several initiatives to reform the way DPAs work together. These include aligning DPAs’ national enforcement strategies, selecting cases of strategic importance for regulators to pursue in a coordinated fashion, and launching yearly coordinated enforcement initiatives dedicated to specific topics. The European Commission also seems to agree that cooperation between DPAs should be improved, as its Work Program for 2023 plans “to harmonize some national procedural aspects.”
On the other hand, the majority of national DPAs have recently reported a shortage of adequate human and financial resources, which impacts the performance of their supervisory duties. Nonetheless, they seem increasingly willing to cooperate closely with other DPAs – both bilaterally and multilaterally – as well as with regulators from other fields. They also seem ready to ramp up investigatory and sanctioning efforts, both on their own initiative and following individuals’ complaints. In this regard, it is noteworthy that the nine highest administrative fines since the GDPR became applicable in May 2018 were issued between July 2021 and December 2022.
Since the 2021 edition of the Report, most DPAs have published their 2021 and 2022 annual reports, as well as new short- and long-term strategies. These documents shed light on the areas to which DPAs are likely to devote significant regulatory efforts and resources for guidance creation, awareness-raising, and enforcement actions.
For this year’s Report, FPF compiled and analyzed these new strategic documents, describing where different DPAs’ priorities share common trends and where they notably diverge. The Report also contains links to and translated summaries of strategic documents from nine DPAs in Belgium (BE), the Czech Republic (CZ), Denmark (DK), Estonia (EE), France (FR), Ireland (IE), Spain (ES), Sweden (SE), and the United Kingdom (UK). The analysis also includes documents published by the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS). These documents should be read together with other complementary multi-year strategies that FPF included in its 2020 and 2021 reports.
Table 1 – Overview of strategic and operational topics per DPA/jurisdiction
Some of the main conclusions include:
The EDPB is expected to publish its harmonized views on topics that were included in its 2021-2022 Work Programme but that did not result in concrete guidance at the time of writing. Such themes include the “legitimate interests” lawful ground, scientific research, children’s personal data, and several emerging technologies, such as blockchain, novel techniques for anonymization and pseudonymization, cloud computing, AI/machine learning, digital identity, Internet of Things (IoT), and payment methods.
DPAs intend to clarify several data processing activities and aspects of the data protection framework through guidelines and public-facing activities, including the completion of DPIAs, the dissemination of information about data subjects’ rights, and the upskilling of DPOs and other privacy professionals.
DPAs will also seek to promote the approval and adoption of Codes of Conduct (CoCs) and certification mechanisms as ways to enable organizations to easily demonstrate compliance with the EU’s privacy acquis.
DPAs like the French CNIL will seek to improve their collaboration with other sectoral regulators (such as competition authorities), something that DPAs will be expected to do in the context of emerging regulations, such as the EU’s Digital Markets Act (DMA).
Topics and sectors included in several DPAs’ agendas, from both guidance and enforcement angles, include international data transfers, the appointment and position of the Data Protection Officer (DPO), the processing of minors’ personal data (including in online schooling environments), the use of personal data-driven AI systems, advertising technology, and direct marketing activities.
Utah Considers Proposals to Require Web Services to Verify Users’ Ages, Obtain Parental Consent to Process Teens’ Data
Update: On March 23, Governor Spencer Cox signed SB 152 and HB 311. While amendments were made to both bills, the concerns raised in FPF’s analysis remain. SB 152 leaves critical provisions, such as methods to verify age or obtain parental consent, to be established in further rulemaking, but questions remain regarding whether these can be done in a privacy-preserving manner. SB 152’s requirement that social media companies provide parents and guardians access to their teenager’s accounts, including messages, has raised security concerns and questions about what sort of parental access mandates are appropriate for teens online.
Utah lawmakers are weighing legislation that would require social media companies to conduct age verification for all users and extend a parental consent requirement to teens.
The Utah legislature has introduced two similar, competing bills that seek to regulate online experiences for Utah users. SB 152 would require social media companies to verify the age of all Utah users and require parental consent for users under 18 to have an account. The bill would also require social media companies to provide a parent or guardian access to the content and interactions of an account held by a Utah resident under the age of 18. On February 13, SB 152 was amended to replace the prescriptive requirements for age verification (e.g. requirements that companies obtain and retain a government-issued ID) with verification methods established through rulemaking, but concerns remain that a rulemaking process could nonetheless require privacy-invasive age verification methods.
Utah HB 311, as originally introduced, would have required social media companies to verify the age of all Utah residents and obtain parental consent before users under 18 create an account, and would also have prohibited social media companies from using “addictive” designs or features. On February 9, the bill was amended to remove the age verification and parental consent provisions; the provisions regarding design features remain, as does a private right of action. The amended bill passed the Utah House and moved to the Senate, where it will be considered alongside Utah SB 152.
FPF shared our analysis of these bills last week, focusing on three areas:
Parental consent under COPPA has longstanding challenges:
FPF published an analysis and accompanying infographic regarding verifiable parental consent (VPC) under COPPA, which was informed by research and insights from parents, COPPA experts, industry leaders, and other stakeholders. The white paper and infographic highlight key friction points that emerge in the VPC process, including:
Efficacy: It can be difficult to distinguish between children and adults online, and it is harder still to establish whether a particular child is related to a particular adult. While the approved methods under VPC may confirm someone is an adult, they do not confirm whether that adult is a parent or guardian of a child.
Privacy and security: Parents often do not feel comfortable sharing sensitive information, such as their credit card or ID information, and having that information linked to their child’s presence online.
Age verification requires additional data collection:
As written, Utah’s proposed legislation would require companies to affirmatively verify the age of all Utah residents. A key pillar of privacy legislation and best practices is the principle of data minimization and not collecting information beyond what is necessary to provide a service. Requiring social media companies or their agents to collect this data would increase the risk of identity theft resulting from a data breach. We also note that since some social media companies are based outside of the United States (with some located in jurisdictions that have few effective privacy rules), there is an inherent security risk in the increased collection of sensitive data for age verification purposes.
Additionally, as written, Utah’s proposed legislation specifies that ages must be verified without defining what “verify” means. Companies would benefit from clarity on whether age verification or age estimation is required. An example of age estimation might include capturing a “selfie” of a user to estimate the user’s age range. Verifying someone’s exact age almost always requires increased data collection compared with estimating an age range or age bracket. Some current age estimation technology can accurately distinguish a 12-year-old from someone over 25, resulting in a much smaller number of users who would be required to provide sensitive identity documentation. Although methods of verification and forms or methods of identification will be established by further administrative rulemaking, compliance with the proposed legislation as written may still necessitate that companies require government-issued ID to access their services.
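To illustrate why estimation can reduce data collection relative to verification, here is a minimal sketch in Python (the threshold, margin, and routing labels are hypothetical illustrations, not drawn from the Utah bills or any vendor’s product): only estimates close to the age threshold are routed to a document check, so clearly younger and clearly older users never hand over identity documents.

```python
# Minimal sketch of tiered age assurance: hypothetical threshold and margin values.
def verification_route(estimated_age: float, threshold: int = 18, margin: float = 5.0) -> str:
    """Route a user based on a coarse age estimate; only borderline cases need ID."""
    if estimated_age >= threshold + margin:
        return "treat as adult (no ID collected)"
    if estimated_age <= threshold - margin:
        return "treat as minor (parental-consent flow, no ID collected)"
    return "borderline estimate: request further verification (e.g., identity document)"

# Usage: an estimated 12-year-old and an estimated 30-year-old are resolved without
# documents; only the estimate near the threshold triggers additional collection.
for estimate in (12.0, 20.0, 30.0):
    print(estimate, "->", verification_route(estimate))
```

Under this kind of design, the sensitive-document step applies only to the borderline band rather than to every resident, which is the data-minimization point made above.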
Protecting children and teens online should include increasing privacy protections:
FPF knows that children and teens deserve privacy protections and has highlighted Utah’s leadership in this space, notably in the education context. However, a one-size-fits-all approach may not be appropriate given developmental differences between young children and teens. Similar to how children under 13 can access services with safeguards under COPPA, teens stand to derive benefit from online services such as socializing with peers, distant family, and communities. Utah’s legislation proposes to restrict access to services rather than enhancing privacy protections on these services. Enhanced privacy protections could not only benefit children, but could benefit adults as well. Because many parents may ultimately choose to provide consent, it is important to consider how privacy protections could be implemented on online services.
These youth-focused proposals follow last year’s passage of the Utah Consumer Privacy Act – a comprehensive privacy law that created some new rights for Utah residents, but provides fewer protections than other state frameworks. Adding privacy protections for young people would not just help Utah align with other states but also would address several of the privacy risks the social media bills would create. Examples of privacy protective provisions include:
Classifying children’s and teens’ data as sensitive data and restricting the sale or use of children and teens’ data for targeted advertising by default;
Adding provisions requiring data minimization, restrictions on secondary uses, or a prohibition against processing personal data in violation of state and federal anti-discrimination laws; and
Providing all consumers with a right to opt out of profiling.
Why monitoring cultural diversity in your European workforce is not at odds with GDPR
Author: Prof. Lokke Moerel*
The following is a guest post to the FPF blog from Lokke Moerel, Professor of Global ICT Law at Tilburg University and a lawyer with Morrison & Foerster (Brussels).
The guest blog reflects the opinion of the author only. Guest blog posts do not necessarily reflect the views of FPF.
“It has been said that figures rule the world. Maybe. But I am sure that figures show us whether it is being ruled well or badly.” – Johann Wolfgang von Goethe
1. Introduction
Discrimination is known to persist in today’s labor markets,1 despite EU anti-discrimination and equality laws—such as the Racial Equality Directive—that specifically prohibit practices that put employees at a particular disadvantage based on racial or ethnic origin.2 In a market with an acute scarcity of talent,3 we see HR departments struggle with how to eliminate workplace discrimination and create an inclusive culture in order to recruit and support an increasingly diverse workforce. By now, many organizations have adopted policies to promote diversity, equity, and inclusion (DEI) in their organizations, and the need has arisen to monitor and evaluate their DEI efforts.
Without proper monitoring, DEI efforts may well be meaningless or even counterproductive.4 To take a simple example, informal mentoring is known to be an important factor for internal promotions, and informal mentoring is less available to women and minorities.5 Organizations setting up a formal internal mentoring program to address this imbalance would like to monitor whether the program is attracting minority participants and achieving its goal of promoting equity. If not, the program may unintentionally exacerbate existing inequalities. Monitoring is therefore required to evaluate whether the mentoring indeed results in more equal promotions across the workforce or whether changes to the program should be made.
Organizations are hesitant to monitor these policies in the EU based on a seemingly persistent myth that the EU General Data Protection Regulation 2016/679 (GDPR) would prohibit such practices. This article shows that it is actually the other way around. Where discrimination, lack of equal opportunity, or pay inequity at the workplace is pervasive, monitoring of DEI data is a prerequisite for employers to be able to comply with employee anti-discrimination and equality laws, and to defend themselves appropriately against any claims.6
For historic reasons,7 the collection of racial or ethnic data is considered particularly sensitive in many EU member states. EU privacy laws provide for a special regime to collect sensitive data categories such as data revealing racial or ethnic origin, disability, and religion, based on the underlying assumption that collecting and processing such data increases the risk of discrimination.
However, where racial or ethnic background is ‘visible’ as a matter of fact to recruiters and managers alike, individuals from minority groups may be discriminated against without any data being recorded. It is therefore only by recording the data that potential existing discrimination may be revealed and bias eliminated from existing practices.8
A similar issue has come to the fore where tools powered by artificial intelligence (AI) are used. We often see in the news that the deployment of algorithms leads to discriminatory outcomes.9 If self-learning algorithms discriminate, it is not because there is an error in the algorithm; it is because the data used to train the algorithm are “biased.” It is only when you know which individuals belong to vulnerable groups that bias in the data can be made transparent and algorithms trained properly.10 Here too, it is not the recording of the sensitive data that is wrong; it is humans who discriminate, and the recording of the data detects this bias. Organizations should be aware that the “fairness” principle under the GDPR cannot be achieved by unawareness. In other words, race blind is not race neutral, and unawareness does not equal fairness. That sensitive data may be legitimately collected for these purposes under European data protection law11 is explicitly provided for in the proposed AI Act.12
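To make this point concrete, the minimal sketch below uses entirely hypothetical outcome data and group labels: on the aggregate figures alone (an overall selection rate of 50%), nothing looks amiss, and the disparity only becomes visible once outcomes can be disaggregated by self-identified group.

```python
# Minimal sketch: hypothetical selection outcomes per self-identified group.
# Without the group label, only the unremarkable overall rate (50%) is visible.
from collections import defaultdict

outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, selected = defaultdict(int), defaultdict(int)
for group, was_selected in outcomes:
    totals[group] += 1
    selected[group] += was_selected

# Per-group selection rates only exist because the group label was recorded.
rates = {group: selected[group] / totals[group] for group in totals}
print("Selection rate per group:", rates)

# Disparate-impact-style check: ratio of the lowest to the highest selection rate.
# A ratio well below 1 signals that the process deserves closer review.
ratio = min(rates.values()) / max(rates.values())
print(f"Selection-rate ratio (min/max): {ratio:.2f}")
```

The same disaggregated view is what would allow an organization to check, over time, whether a measure such as the mentoring program described above is actually narrowing the gap.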
It is therefore not surprising that minority interest groups, which represent the groups whose privacy is actually at stake, actively advocate for such collection of data and monitoring. Equally, EU and international institutions unequivocally consider the collection of DEI data indispensable for monitoring and reporting purposes in order to fight discrimination. EU institutions further explicitly confirm that the GDPR should not be considered an obstacle preventing the collection of DEI data, but instead establishes the conditions under which such data may be collected and processed.
From 2024 onwards, large companies in the EU will be subject to mandatory disclosure requirements for compliance with environmental, social, and governance (ESG) standards under the upcoming EU Corporate Sustainability Reporting Directive (CSRD). The CSRD requires companies to report on actual or potential adverse impacts on their workforce with regard to equal treatment and opportunities, which are difficult to measure without collecting and monitoring DEI data.
Currently, the regulation of the collection and processing of DEI data is mainly left to the Member States. EU anti-discrimination and equality laws do not impose an obligation on organizations to collect DEI data for monitoring purposes, but neither do they prohibit collecting such data. In the absence of a specific requirement or prohibition, the processing of DEI data is regulated by the GDPR. The GDPR gives the Member States ample discretionary powers to provide legal bases in their national laws for processing DEI data for monitoring purposes. In practice, however, most Member States have not used the opportunity under the GDPR to provide a specific legal basis in their national laws for processing racial or ethnic data for monitoring purposes (with notable exceptions).13

As a consequence, the collection and processing of DEI data for monitoring purposes takes place on a voluntary basis, whereby employees are asked to fill out surveys based on self-identification. This is in line with the GDPR, which provides for a general exception allowing organizations to process DEI data based on the explicit consent of the individuals concerned, provided that the Member States have not excluded this option in their national laws. In practice, Member States have not used this discretionary power; they have not excluded the possibility of relying on explicit consent for the collection of DEI data. This leaves explicit consent as a valid, but also the only practically viable, option to collect DEI data for monitoring purposes.14

Both human rights frameworks and the GDPR itself facilitate such monitoring, provided there are safeguards against abuse of the relevant data, in accordance with data minimization and privacy-by-design requirements.15 We now see best practices developing as to how to monitor DEI data while limiting the impact on the privacy of employees, and rightfully so. In the literature, collecting racial or ethnic data for monitoring is rightfully described as “a problematic necessity, a process that itself needs constant monitoring.”16
2. Towards a positive duty to monitor for workplace discrimination
Where discrimination in the workplace is pervasive, monitoring of DEI data for quantifying discrimination in those workplaces is essential for employers to be able to comply with anti-discrimination and equality laws. As indicated above, there is no general requirement under the Racial Equality Directive to collect, analyze, and use DEI data. This Directive, however, does provide for a shift in the burden of proof.17 Where a complainant establishes facts from which a prima facie case of discrimination can be presumed, it will fall to the employer to prove that there has been no breach of the principle of equal treatment. Where workplace discrimination is pervasive, a prima facie case will be easy to make, and it will fall to the employer to disprove any such claim, which will be difficult without any data collection and monitoring. The argument that the GDPR does not allow for processing such data will not relieve the employer of its burden of proof. See, in a similar vein, European Committee of Social Rights, European Roma Rights Centre v. Greece18:
Data collection
27. The Committee notes that, in connection with its wish to assess the allegation of the discrimination against Roma made by the complainant organisation, the Government stated until recently that it was unable to provide any estimate whatsoever of the size of the groups concerned. To justify its position, it refers to legal and more specifically constitutional obstacles. The Committee considers that when the collection and storage of personal data [are] prevented for such reasons, but it is also generally acknowledged that a particular group is or could be discriminated against, the authorities have the responsibility for finding alternative means of assessing the extent of the problem and progress towards resolving it that are not subject to such constitutional restrictions.”
Since as early as 1989,19 all relevant EU and international institutions have, with increasing urgency, issued statements that the collection of DEI data for monitoring and reporting purposes is indispensable to the fight against discrimination.20 See, for example, the EU Anti-racism Action Plan 2020‒202521 (the “Action Plan”) in which the European Commission explicitly states:
Accurate and comparable data is essential in enabling policy-makers and the public to assess the scale and nature of discrimination suffered and for designing, adapting, monitoring and evaluating policies. This requires disaggregating data by ethnic or racial origin.22
In the Action Plan, the European Commission notes that equality data remains scarce, “with some member states collecting such data while others consciously avoid this approach.” The Action Plan subsequently provides significant steps to ensure collection of reliable and comparable equality data at the European and national level.23
On the international level, the United Nations (UN) takes an even stronger approach, considering collection of DEI data that allow for disaggregation for different population groups to be part of governments’ human rights obligations. See the UN 2018 report, “A Human Rights-based approach to data, leaving nobody behind in the 2030 agenda for sustainable development” (the “UN Report”):24
Data collection and disaggregation that allow for comparison of population groups are central to a HRBA [human rights-based approach] to data and form part of States’ human rights obligations.25 Disaggregated data can inform on the extent of possible inequality and discrimination.
The UN Report notes that this was implicit in earlier treaties, but that “more recently adopted treaties make specific reference to the need for data collection and disaggregated statistics. See, for example, Article 31 of the Convention on the Rights of Persons with Disabilities.”26
Many of the reports referred to above explicitly state that the GDPR should not be an obstacle for collecting this data. For example, in 2021 the EU High Level Group on Non-discrimination Equality and Diversity issued “Guidelines on improving the collection and use of equality data,”27 which explicitly state:
Sometimes data protection requirements are understood as prohibiting collection of personal data such as a person’s ethnic origin, religion or sexual orientation. However, as the explanation below shows the European General Data Protection Regulation (GDPR), which is directly applicable in all EU Member States since May 2018, establishes conditions under which collection and processing of such data [are] allowed.
The UN Special Rapporteur on Extreme Poverty and Human Rights even opined that the European Commission should start an infringement procedure where a Member State continues to misinterpret data protection laws as not permitting data collection on the basis of racial and ethnic origin.28
In light of the above, employers that invoke GDPR requirements to avoid collecting DEI data for monitoring purposes increasingly appear to be driven more by a wish to avoid workplace scrutiny than by genuine concern for the privacy of their employees.29 The employees whose privacy is at stake are exactly those who are potentially exposed to discrimination. At the risk of stating the obvious, invoking the GDPR as a prohibition on DEI data collection, with the outcome that organizations avoid or are constrained from detecting discrimination against these groups, runs contrary to the GDPR’s entire purpose. The GDPR is about preserving the privacy of employees while protecting them against discrimination.
Not surprisingly, the minority interest groups that represent those whose privacy is actually at stake actively advocate for such collection of data and monitoring.30 If anything, their concern about the collection of DEI data for DEI monitoring purposes is that these groups often do not feel represented in the categorization of the data collected.31 If the categories are too generic or do not allow intersecting inequalities to be split out (such as being female and from a minority), specific vulnerable groups may well fall outside the scope of DEI monitoring and therefore outside the scope of potential DEI policy measures. It is widely acknowledged that the setting of the categories may itself reflect bias. A core principle of the relevant human rights frameworks for collecting DEI data is therefore to involve relevant minority stakeholders in a bottom-up process of indicator selection (the human rights principle of participation),32 and to further ensure that data collection is based on the principle of self-identification, which requires that surveys always allow for free responses (including no response) as well as the indication of multiple identities (see section 4 on the human rights principle of self-identification).
3. ESG reporting
Under the upcoming CSRD,33 large companies34 will be subject to mandatory disclosure requirements on ESG matters from 2024 onwards (i.e., in their annual reports published in 2025).35 The European Commission is tasked with setting the reporting standards and has asked the European Financial Reporting Advisory Group (EFRAG) to provide recommendations for these standards. In November 2022, EFRAG published the first draft standards. The European Commission is now consulting relevant EU bodies and Member States, before adopting the standards as delegated acts in June 2023.
One of the draft reporting standards provides for a standard on reporting on a company’s own workforce (European Sustainability Reporting Standard S1 (ESRS S1)).36 This standard requires a general explanation of the company’s approach to identifying and managing any material, actual, and potential impacts on its own workforce in relation to equal treatment and opportunities for all, including “gender equality and equal pay for work of equal value” and “diversity.” From the definitions in ESRS S1, it is clear that “equal treatment” requires that there shall be “no direct or indirect discrimination based on criteria such as gender and racial or ethnic origin”; “equal opportunities” refers to equal and nondiscriminatory access to opportunities for education, training, employment, career development, and the exercise of power, without any individuals being disadvantaged on the basis of criteria such as gender and racial or ethnic origin.
ESRS S1 provides a specific chapter on metrics and targets, which requires mandatory public reporting metrics on a set of specific characteristics of a company’s workforce, which does include gender, but not racial or ethnic origin.37 Reading all standards, however, it is difficult to imagine how companies could report on the standards without collecting and monitoring DEI data internally.
For example, the general disclosure requirements of ESRS S1 require the company to disclose all of its policies relating to equal treatment and opportunity,38 including:
d) Whether and how these policies are implemented through specific procedures to ensure discrimination is prevented, mitigated, and acted upon once detected, as well as to advance diversity and inclusion in general.
It is difficult to see how companies could report the information in clause (d), which requires an explanation of how the policies are implemented to ensure discrimination is prevented, mitigated, and acted upon once detected, without collecting DEI data.
ESRS S1 further clarifies how disclosures under S1 relate to disclosures under ESRS S2, which includes disclosures where potential impacts on a company’s own workforce have an impact on the company’s strategy and business model(s).
Based on the reporting requirements above, collecting and monitoring DEI data will be required for the mandatory disclosures, which also provides a legal basis under the GDPR for collecting such data, provided the other provisions of the GDPR, as well as broader human rights principles, are complied with. Before setting out the GDPR requirements, a brief summary follows of the broader human rights principles that apply to the collection of DEI data for monitoring purposes.
4. Human rights principles
The three main human rights principles in relation to data collection processes are self-identification, participation, and data protection.39 The principle of self-identification requires that people should have the option of self-identifying when confronted with a question seeking sensitive personal information related to them. As early as 1990, the Committee on the Elimination of Racial Discrimination held that identification as a member of a particular ethnic group “shall, if no justification exists to the contrary, be based upon self-identification by the individual concerned.”40 A personal sense of identity and belonging cannot in principle be restricted or undermined by a government-imposed identity and should not be assigned through imputation or proxy. This entails that all questions on personal identity, whether in surveys or administrative data, should allow for free responses (including no response) as well as multiple identities.41 Also, owing to the sensitive nature of survey questions on population characteristics, special care is required by data collectors to demonstrate to respondents that appropriate data protection and disclosure control measures are in place.42
5. EU data protection law requirements
Collecting racial or ethnic data for monitoring is rightfully described in literature as “a problematic necessity, a process that itself needs constant monitoring.”43 The collection and use of racial and ethnic data to combat discrimination is not an “innocent” practice.44 Even if performed on an anonymized or aggregated basis, it can contribute to exclusion and discrimination. An example is when politicians argue, based on statistics, that there are “too many” people of a certain category in a country.45
Collection and processing of racial and ethnic data is not illegal in the EU. In general, no Member State imposes an absolute prohibition on collecting this data.46 There is also no general requirement under the Racial Equality Directive to collect, analyze, and use equality data. Obligations to collect racial or ethnic data also do not generally seem to be codified in law in the Member States, with notable exceptions in Finland, Ireland and (pre-Brexit) the UK.47
In the absence of specific prohibitions and specific requirements in EU and Member State law, processing of racial and ethnic data is governed by the GDPR, which provides a special regime for processing “special categories of data” such as data revealing racial or ethnic origin.48/49
Article 9 of the GDPR prohibits the processing of special categories of data, with notable exceptions. The prohibition does not apply (insofar as relevant here)50 when:
The individual has given explicit consent to the processing of personal data for one or more specified purposes, except where EU or Member State law provides that the prohibition may not be lifted by the individual (Article 9(2) sub. (a) GDPR).
The processing is necessary for reasons of substantial public interest on the basis of EU or Member State law (Article 9(2) sub. (g) GDPR).
The processing is necessary for statistical purposes in accordance with Article 89(1) GDPR based on EU or Member State law (Article 9(2) sub. (j) GDPR).
For the conditions to apply, provisions must be made in EU or Member State law which permit processing where necessary for substantial public interest or statistical purposes. In practice, most Member States have not used their discretionary power under the GDPR to provide a specific legal basis in their national law for processing racial or ethnic data for these purposes.51 Member States have, however, also not used the possibility under GDPR to provide in their national law that the prohibition under Article 9 may not be lifted by consent. This leaves explicit consent as a valid, but also the only practically viable, option to collect DEI data for monitoring purposes.52 This is in line with human rights principles, provided reporting is based on self-identification.
Once the CSRD has been implemented in the national laws of the Member States, collecting DEI data will be required for mandatory ESG disclosures, which will be permitted under Article 9(2) sub. (g) GDPR (reasons of substantial public interest). Where organizations collect this data, the human rights principles set out above should be observed, in particular that reporting should be based on self-identification. In practice, the legal basis of substantial public interest will therefore very much mirror the legal basis of explicit consent, and the safeguards and mitigating measures set out below will equally apply.
5.1 Explicit consent
The requirements for valid consent are strict, especially in the workplace.53 For instance, consent must be ‘freely given’, which is considered problematic in view of the imbalance of power between the employer and the individual.54 The term ‘explicit’ refers to the way consent is expressed by the individual. It means that the data subject must give an express statement of consent. An obvious way to make sure consent is explicit would be to expressly confirm consent in a written statement, an electronic form or in an email.55
For employee consent to be valid, employees need to have a genuine free choice as to whether to provide the information, without any detrimental effects. There must be no downside whatsoever for an employee who refuses to provide consent, which would be the case if, for example, refusal of consent excluded the employee from any positive action initiatives.56 To ensure that consent is indeed freely given, the voluntary nature of the reporting for employees should be twofold: (1) the act of completing a survey or questionnaire related to one’s racial or ethnic background should be voluntary and (2) the survey or questionnaire should include options for the employee to respond with (an equivalent of) “I choose not to say.” The individual status of a survey or questionnaire (i.e., completed or not completed), as well as the answers provided, should not be visible to the employer on an individual basis. This is in practice realized by privacy-by-design measures (see further below).
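By way of illustration only, the following minimal sketch (in Python, with hypothetical field names and placeholder categories that are not drawn from the GDPR or any reporting standard) shows how the two voluntary-by-design elements described above can be encoded in a questionnaire definition: every question always offers an equivalent of “I choose not to say,” multiple identities and free responses remain possible, and not submitting the questionnaire at all is a valid outcome.

from dataclasses import dataclass

PREFER_NOT_TO_SAY = "I choose not to say"

@dataclass
class SelfIdQuestion:
    """One self-identification question; categories should be set with employee participation."""
    prompt: str
    categories: list[str]
    allow_multiple: bool = True    # respondents may indicate several identities
    allow_free_text: bool = True   # free responses must always remain possible

    def options(self) -> list[str]:
        # An equivalent of "I choose not to say" is always offered, so answering is never forced.
        return self.categories + [PREFER_NOT_TO_SAY]

# Hypothetical example; real categories should come from a bottom-up process
# involving the works council and minority employee groups (participation).
question = SelfIdQuestion(
    prompt="Which background(s) do you identify with?",
    categories=["Category A", "Category B", "Category C"],
)

def record_response(answers: list[str] | None) -> list[str] | None:
    # Completing the survey is itself voluntary: None (no submission) is a valid outcome,
    # and choosing not to say is treated exactly like not disclosing at all.
    if answers is None or PREFER_NOT_TO_SAY in answers:
        return None
    return answers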
Note that for consent to be valid, it needs to be accompanied with clear information as to why it is being collected and how it will be used (consent needs to be “specific and informed”).57 In addition, employees should be informed that consent can be withdrawn at any time and that any withdrawal of consent will not affect the lawfulness of processing prior to the withdrawal.58
When consent is withdrawn, any processing of personal data (to the extent it is identifiable) will need to stop from the moment that the consent is withdrawn. However, where data are collected and processed in the aggregate (see section 5.2 below on privacy-by-design requirements), employees will no longer be identifiable or traceable, and, therefore, any withdrawal of consent will not be effective in relation to data already collected and included in such reports.
5.2 General data protection principles
Obtaining consent does not negate or in any way diminish the data controller’s obligations to observe the principles of processing enshrined in the GDPR, especially Article 5 of the GDPR with regard to fairness, necessity, and proportionality, as well as data quality.59 Employers will further have to comply with principles of privacy-by-design.60 In practice, this means that employers should only process the personal data that they strictly need for their pursued purposes and in the most privacy-friendly manner. For example, employers can collect DEI data without reference to individual employees (i.e., without employees providing their name or another unique identifier, such as a personnel number or email address). In this manner, employers will comply with data minimization and privacy-by-design requirements, limiting any impact on the privacy of their employees. In practice, we also see that employers involve a third-party service provider and ask employees to send the information directly to that provider. The third-party service provider subsequently shares only aggregate information with the employer.
From a technical perspective, it is possible to achieve a similar segregation of duties within the company’s internal HR system (like Workday or SuccessFactors), whereby data are collected on a de-identified basis and only one or two employees within the diversity function have access to de-identified DEI data for statistical analysis and subsequently report to management on an aggregate basis only (ensuring individual employees cannot be singled out or re-identified).61 This requires customization of HR systems, which is currently underway. Where employers have a works council, the works council will need to provide its prior approval for any company policy related to the processing of employee data. As part of the works council approval process, the privacy-by-design measures can be verified.
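The segregation of duties described above can be sketched in a few lines. The example below is illustrative only; the minimum cell size, group labels, and function names are assumptions, not figures or terms taken from the GDPR or ESRS. It shows how de-identified responses can be turned into an aggregate-only report in which small cells are suppressed, so that management receives statistics from which no individual employee can be singled out or re-identified.

from collections import Counter

# Illustrative minimum cell size; the appropriate threshold is a risk-based choice
# to be agreed with the works council, not a figure fixed by law.
MIN_CELL_SIZE = 5

def aggregate_report(responses: list[str | None]) -> dict[str, int | str]:
    # `responses` holds one self-identified category per respondent, or None where
    # the respondent chose not to disclose; no names or personnel numbers are processed.
    counts = Counter(r for r in responses if r is not None)
    report: dict[str, int | str] = {"not disclosed": responses.count(None)}
    for category, n in counts.items():
        # Suppress small cells so that management cannot single out individuals.
        report[category] = n if n >= MIN_CELL_SIZE else f"< {MIN_CELL_SIZE}"
    return report

# Only the diversity function sees the de-identified responses; management receives
# the suppressed aggregate, e.g. {'not disclosed': 4, 'Category A': 12, 'Category B': '< 5'}.
print(aggregate_report(["Category A"] * 12 + ["Category B"] * 3 + [None] * 4))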
For the sake of completeness, note that where data collection and processing are on a truly anonymous basis, the provisions of the GDPR would not apply.62 The threshold under the GDPR for data to be truly anonymous is, however, very high and is unlikely to be met where employers collect such data from their employees. Applying anonymization techniques such as pseudonymization (e.g., removal or replacement of unique identifiers and names) therefore does not take the data processing outside the scope of the GDPR; rather, such techniques are necessary measures to meet data minimization and privacy-by-design requirements.
6. The way forward
It is no longer possible to hide behind the GDPR to avoid collecting DEI data for monitoring purposes. The direction of travel is towards meaningful DEI policies, monitoring, and reporting (such as under the CSRD). Collecting data relating to racial and ethnic origin has been labeled “a problematic necessity, a process that itself needs constant monitoring.” This is the negative way of qualifying DEI data collection and monitoring. A positive, human rights-based approach is that data-collection processes should be based on self-identification, participation, and data protection. Where all three principles are safeguarded, the process will be controlled and can be trusted without being inherently problematic or in need of constant monitoring. The path forward revolves around building trust with the workforce (and their works councils and trade unions). If trust is not already a given, the recommendation is to start small (in less sensitive jurisdictions), engage with works councils or the workforce at large, and, in light of the upcoming CSRD, start now.63
Self-identification: A company requires the full trust of its employees to be able to collect representative DEI data from them based on self-identification. If introduction of DEI data collection is perceived by employees as abrupt and countercultural, or a box-ticking exercise unlikely to result in meaningful change, surveys will not be successful. For employees to fill out surveys disclosing sensitive data, trust is required that their employer is serious about its DEI efforts and that data collection and monitoring complements these efforts based on the aphorism “We measure what we treasure.” Practice shows that when a certain tipping point is reached, employees are proud to self-identify and contribute to the DEI statistics of their company.
Trust will be undermined if employees do not recognize themselves in any pre-defined categories. Proper self-identification entails that any pre-defined categories are relevant to a country’s workforce, allow for free responses (including no response), and allow for identifying with multiple identities. Employees’ trust will be enhanced if the company has put careful thought into the reporting metrics, ensuring that reporting can actually inform where the company can focus interventions to bring about meaningful change. For example, it is important to ensure reporting metrics are not just outcome-based (tracking demographics without knowing where a problem exists) but also process-based. Process-based metrics can pinpoint problems in employee-management processes such as hiring, evaluation, promotion, and executive sponsorship. If outcome metrics inform a company that it has low percentages of specific minorities, process metrics may show in which of its processes (or in which part of a process, e.g., which part of the hiring process) the company needs to focus to bring about meaningful change. Examples of these metrics include the speed at which minorities move up the corporate ladder and salary differentials between different categories in comparable jobs.
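As a purely illustrative example of the difference (the groups, headcounts, and promotion figures below are invented), a process-based metric such as the promotion rate per aggregated group can be computed from the same de-identified aggregates and compared across groups, something an outcome metric such as overall representation cannot reveal on its own.

# Hypothetical aggregate figures for one review cycle (invented for illustration).
promotions = {"Group A": 30, "Group B": 4}     # promotions granted during the cycle
eligible   = {"Group A": 200, "Group B": 50}   # employees eligible for promotion

for group in promotions:
    rate = promotions[group] / eligible[group]
    print(f"{group}: promotion rate {rate:.1%}")
# A persistent gap (here 15.0% vs 8.0%) points to the promotion process itself as a
# place to focus interventions, which outcome-based representation figures alone cannot show.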
Participation: Trust requires an inclusive, bottom-up process whereby employees (and their works councils) have a say in the data collection and monitoring procedure: for example, in setting the categories in a survey to ensure minority employees can ‘recognize’ themselves in those categories, in setting the reporting metrics to ensure these can bring about meaningful change, and in setting the data protection safeguards (see below).
Data protection: To gain employees’ trust, data protection principles, such as data security, data minimization and privacy-by-design, must be fully implemented. A company will need to submit a data collection and processing protocol to its works council and receive its approval, specifying all organizational, contractual and technical measures ensuring that data are collected on a de-identified basis, and access controls are in place to ensure access to the data is limited to one or two employees of the diversity team in order to generate statistics only.
Country Reports
Below we provide a summary of the legal bases available under the laws of Belgium, France, Germany, Ireland, Italy, Spain, the Netherlands, and the United Kingdom for employers to collect racial and ethnic background data of their employees for purposes of monitoring their DEI policies (DEI Monitoring Purposes). Note that in all cases the general data processing principles (such as privacy-by-design requirements) set out in section 5.2 also apply, but they are not repeated here.
Belgium
Olivia Vansteelant & Damien Stas de Richelle, Laurius
Summary requirements for processing racial and ethnic background data
Under Belgian law, there is neither a specific legal requirement for employers to collect data revealing racial or ethnic origin of their employees, nor is there a general prohibition for employers to collect such data.
Companies with their registered office or at least one operating office in the Brussels-Capital Region can lawfully process personal data on foreign nationality or origin for DEI Monitoring Purposes on the basis of necessity to exercise their labour law rights and obligations. All companies can lawfully process racial or ethnic background data for DEI Monitoring Purposes based on explicit consent of their employees. Employers with a works council should consult their works council before implementing any policy related to processing data revealing racial or ethnic origin of their employees, but no approval from the works council is required by law.
Necessity to exercise labour law rights and obligations (art. 9(2) b) GDPR). The basis for this exception can be found in the Decision of the Government of the Brussels-Capital Region of 7 May 2009 regarding diversity plans and the diversity label and the Ordonnance of the Government of the Brussels-Capital Region of 4 September 2008 on combatting discrimination and promoting equal treatment in the field of employment.
According to this Decision, companies with their registered office or at least one operating office in the Brussels-Capital Region are entitled to draft a diversity plan to address the issue of discrimination in recruitment and develop diversity at the workplace. No similar regulations currently exist in the Flemish or Walloon regions. Many Flemish NGOs are urging the Flemish Government to work towards a sustainable and inclusive labour market with monitoring and reporting as an important basis for evaluation of diversity. They are asking the Flemish Government to put its full weight behind this before the 2024 elections.
Under this Decision, employers are permitted to analyze the level of diversity amongst their personnel by classifying their workforce into categories, including that of foreign nationality or origin, and hence to collect data on foreign nationality or origin. It is possible that employers may indirectly collect data revealing racial or ethnic origin due to a possible link with, or inference drawn from, information on nationality or origin. However, the Decision does not cover data revealing racial or ethnic origin, and there would be no condition permitting such collection under Article 9(2)(b) GDPR.
Explicit consent. The Belgian Implementation Act does not expressly exclude the possibility of processing racial or ethnic data based on employees’ consent. For all other purposes of processing racial and ethnic background data, employers can therefore rely on explicit consent and voluntary reporting by employees. We refer to the conditions for explicit consent set out above in section 5.1, as they apply in the same manner to Belgium. To ensure that consent is indeed freely given, the voluntary nature of the reporting for employees should be twofold: (1) the act of completing a survey or questionnaire related to one’s racial or ethnic background should be voluntary and (2) the survey or questionnaire should include options for the employee to respond with (an equivalent of) “I choose not to say.”
France
Héléna Delabarre & Sylvain Naillat, Nomos, Société D’Avocats
Summary requirements for processing racial and ethnic background data
Under French law, the processing of race and ethnicity data is prohibited in principle under (i) a general provision of the French Constitution, and (ii) specific provisions of French data protection laws; this is also the public position of the French Data Protection Authority (CNIL). French law does not recognize any categorization of people based on their (alleged) race or ethnicity, and the prohibition on processing race and ethnicity data has been reaffirmed by the French Constitutional Court in a decision related to public studies whose purpose was to measure diversity/minority groups. However, while race and ethnicity data may not be collected or processed, objective criteria relating to geographical and/or cultural origins, such as name, nationality, birthplace, mother tongue, etc., can be considered by employers in order to measure diversity and to fight against discrimination.
In a public paper from 201264 (that has not been contradicted since) the CNIL confirmed that employers may collect and process data about objective criteria relating to “origin,” such as the birthplace of the respondent and his/her parents, his/her mother tongue, his/her nationality and that of his/her parents, etc., if such processing is necessary for the purpose of conducting statistical studies aiming at measuring and fighting discrimination. The CNIL also considers that questions about self-identification and how the respondent feels perceived by others can be asked if necessary, in view of the purpose of the data collection and any other questions asked. See the CNIL’s paper:
In accordance with the decision of the Constitutional Court of 15 November 2007 and the insights of the Cahiers du Conseil, studies on the measurement of diversity cannot, without violating Article 1 of the Constitution, be based on the ethnic or racial origins of the persons. Any nomenclature that could be interpreted as ethno-racial reference must therefore be avoided. It is nevertheless possible to approach the criterion of “origin” on the basis of objective data such as the place of birth and the nationality at birth of the respondent and his or her parents, but also, if necessary, on the basis of subjective data relating to how the respondent self-identifies or how the person feels perceived by others.65
Based on the guidance from the CNIL, several public studies have been conducted relying on the collection of information considered permissible by the CNIL, i.e., (i) whether or not respondents felt discriminated against based on their origins or skin colour; (ii) how the respondent self-identifies; and (iii) statistics about the geographical and/or cultural origins of the respondents.66 The provision of any information should be entirely voluntary, and the rules regarding explicit consent in section 5.1 above apply in the same manner to France. Any questions relating to the collection of data regarding geographical and/or cultural origins should be objective, and, in the absence of any need to identify (directly or indirectly) the individuals, the collection process should be entirely anonymous.
Germany
Hanno Timner, Morrison & Foerster
Legal basis for processing racial and ethnic background data
Under German law, there is neither a specific legal requirement for employers to collect racial and ethnic background data of their employees, nor is there a general prohibition for employers to collect such data.
Employers in Germany can lawfully process racial and ethnic background data for DEI Monitoring Purposes on the basis of (i) necessity to exercise their labour law rights and obligations or (ii) based on explicit consent of their employees. If the employer has a works council, the works council has a co-determination right for the implementation of diversity surveys and questionnaires in accordance with Section 94 of the Works Council Act (Betriebsverfassungsgesetz – “BetrVG”) if the information is not collected anonymously and on a voluntary basis. If the information is collected electronically, the works council may have a co-determination right in accordance with Section 87(1), no. 6 BetrVG.
Necessity to exercise labour law rights or obligations. According to Section 26(3) of the German Federal Data Protection Act (Bundesdatenschutzgesetz – “BDSG”), the processing of racial and ethnic background data in the employment context is only permitted if the processing is necessary for the employer to exercise rights or comply with legal obligations derived from labour law, social security, and social protection law, and there is no reason to believe that the data subject has an overriding legitimate interest in not processing the data. One of the rights of the employer derives from Section 5 of the German General Equal Treatment Act (Allgemeines Gleichbehandlungsgesetz – “AGG”), according to which employers have the right to adopt positive measures to prevent and stop discrimination on the grounds of race or ethnicity. As a precondition to the adoption of such measures, employers may collect data to identify their DEI needs.
Explicit consent. For all other purposes of processing racial and ethnic background data, employers will have to rely on explicit consent and voluntary reporting by employees. We refer to the conditions for explicit consent set out above in section 5.1, as they apply in the same manner to Germany. Further, Section 26(2) BDSG specifies that the employee’s level of dependence in the employment relationship and the circumstances under which consent was given have to be taken into account when assessing whether an employee’s consent was freely given. According to Section 26(2) BDSG, consent may be freely given, in particular, if it is associated with a legal or economic advantage for the employee, or if the employer and the employee are pursuing the same interests. This can be the case if the collection of data also benefits employees, e.g., if it leads to the establishment of comprehensive DEI management within the employer’s company.
Ireland
Colin Rooney & Alison Peate, Arthur Cox
Summary requirements for processing racial and ethnic background data
Under Irish law, there is neither a specific legal requirement for employers to collect racial and ethnic background data of their employees, nor is there a general prohibition for employers to collect such data.
Explicit consent: Employers in Ireland can lawfully process race and ethnicity data for their own specified purpose based on the explicit consent of employees. It should be noted that the Irish Data Protection Commission has said that in the context of the employment relationship, where there is a clear imbalance of power between the employer and employee, it is unlikely that consent will be given freely. While this does not mean that employers can never rely on consent in relation to the processing of employee data, it does mean that the burden is on employers to prove that consent is truly voluntary, as explained in section 5.1 above. In the context of collecting data relating to an employee’s racial or ethnic background, employers should ensure that employees are given the option to select “prefer not to say”.
Statistical purposes: If the employer intends to process race and ethnicity data solely for statistical purposes, it could rely on Article 9(2)(j) of the GDPR and section 54(c) of the Irish Data Protection Act 2018 (the “2018 Act”), provided that the criteria set out in section 42 of the 2018 Act are met. This allows for race and ethnicity data to be processed where it is necessary and proportionate for statistical purposes and where the employer has complied with section 42 of the 2018 Act. Section 42 requires that: (i) suitable and specific measures are implemented to safeguard the fundamental rights and freedoms of the data subjects in accordance with Article 89 GDPR; (ii) the principle of data minimisation is respected; and (iii) the information is processed in a manner which does not permit identification of the data subjects, where the statistical purposes can be fulfilled in this manner.
Italy
Marco Tesoro, Tesoro and Partners
Summary requirements for processing racial and ethnic background data
The Italian Data Protection Code (IDPC) regulates the processing of the categories of personal data covered by Article 9 of the GDPR, providing that the legal basis for the processing of such data must be compliance with a law or regulation (art. 2-ter(1), IDPC) or reasons of relevant public interest. Under Italian law, no specific legal basis has been implemented to process racial or ethnic data for reasons of public interest.
Explicit consent. We refer to the conditions for explicit consent set out above in section 5.1, as they apply in the same manner to Italy. The IDPC does not expressly exclude the possibility of processing racial and ethnicity data based on employees’ consent. Employers wanting to collect and process racial and ethnicity data on the basis of employees’ consent under Art. 9 of the GDPR, however, should ensure that the consent is granted on a free basis and, where possible, involve the trade unions they are associated with (as well as their Works Council, where relevant). The trade unions should be able to i) ascertain and certify that the employees’ consent has been freely given; and ii) ensure that employees are fully aware of their rights and of the consequences of providing such data. In the absence of associated trade unions, employers may inform the local representative of the union associations who signed the collective bargaining agreement (CBA) that applies (if any). Furthermore, employers should ensure that employees are given the option to “prefer not to say.”
Statute of Workers. It is also worth noting that under Italian law, there is a general prohibition on the collection of information not strictly related or needed to assess the employee’s professional capability. Per Article 8, Law 23 May 1970, no. 300 “Statute of Workers,” race and ethnicity data should not be collected or used by employers to impact in any way the decision to hire a candidate or to manage any of the terms of the employment relationship.
Spain
Laura Castillo, Gómez-Acebo & Pombo
Summary requirements for processing racial and ethnic background data
Under the Organic Law 3/2018 of 5th December on the Protection of Personal Data and Guarantee of Digital Rights (SDPL), there is a general prohibition on collecting racial and ethnic background information unless: (i) there is a legal requirement to do so (per Article 9 of the SDPL); or (ii) the employees have provided their explicit consent (although the latter is not without risk).
Fulfilment of a legal requirement. The Comprehensive Law 15/2022 of 12th July, for Equal Treatment and Non-Discrimination (the “Equal Treatment Law”) guarantees and promotes the right to equal treatment and non-discrimination. This Law expressly states that no limitations, segregations, or exclusions may be made based on ethnicity or racial backgrounds, i.e., nobody can be discriminated against on grounds of race or ethnicity. In this context, any positive discrimination measures that have been implemented as a result of the Equal Treatment Law have been included in collective bargaining agreements (CBA) or collective agreements as agreed with the unions or the relevant employee representatives. Where there is a requirement in the CBA to collect race and ethnicity data from employees, employers can do so, as this would constitute a legal requirement. In circumstances where the CBA does not specifically require the collection of this type of information, employers can either seek to include such a provision in the terms of the CBA or a collective agreement and work with the unions or legal representatives to do so, or take an alternative approach and rely on explicit consent, as set out immediately below.
Explicit consent. In principle, an employee’s consent on its own is not sufficient to lift the general prohibition on the processing of sensitive data under the SDPL. However, one of the main aims of the prohibition pursuant to the SDPL is to avoid discrimination. Therefore, if the purpose of collection is to promote diversity, it is arguable (although this has not yet been tested in Spain) that employers can rely on explicit consent, and we refer to the conditions for explicit consent set out above in section 5.1, as they apply in the same manner to Spain. In addition to the conditions in section 5.1, Spanish case law has determined that the employee’s level of dependence within the employment relationship and the circumstances under which consent is given should be considered when assessing whether an employee’s consent is freely given. It is therefore not recommended that employers obtain or process race and ethnicity data of their employees during the recruitment or hiring process, or before the end of the probationary period, unless a CBA regulates this issue in a different manner. Employers should also ensure that employees are given the option to “prefer not to say” and ensure that they are able to prove that consent is genuinely voluntary, as explained in section 5.1 above.
The Netherlands
Marta Hovanesian, Morrison & Foerster
Summary requirements for processing racial and ethnic background data
Under Dutch law, there is neither a specific legal requirement for employers to collect racial and ethnic background data of their employees, nor is there a general prohibition for employers to collect such data.
Employers in the Netherlands can lawfully process racial and ethnic background data of their employees for DEI Monitoring Purposes on the basis of (i) the substantial public interest on the basis of Dutch law or (ii) the explicit consent of the employees. Employers with a works council need to ensure their works council approves any policy related to processing Equality Data.
Substantial public interest. The Netherlands has implemented the conditions of Article 9(2) GDPR for the processing of racial and ethnic background data in the Dutch GDPR Implementation Act (the “Dutch Implementation Act”). More specifically, Article 25 of the Dutch Implementation Act provides that racial and ethnicity background data (limited to country of birth and parents’ or grandparents’ countries of birth) may be processed (on the basis of substantial public interest) if processing is necessary for the purpose of restoring a disadvantaged position of a minority group, and only if the individual has not objected to the processing. Reliance on this condition requires the employer to, among other things, (i) demonstrate that certain groups of people have a disadvantaged position; (ii) implement a wider company policy aimed at restoring this disadvantage; and (iii) demonstrate that the processing of race and ethnicity data is necessary for the implementation and execution of said policy.
Explicit consent. Employers can collect racial and ethnicity background data of their employees for DEI Monitoring Purposes based on explicit consent and voluntary reporting by employees. The conditions for consent set out above in section 5.1 apply in the same manner to the Netherlands.
Cultural Diversity Barometer. Note that Dutch employers with more than 250 employees have the option to request DEI information from Statistics Netherlands about their own company. Statistics Netherlands, upon the Ministry of Social Affairs and Employment’s request, created the “Cultural Diversity Barometer”. The Barometer allows employers to disclose certain non-sensitive personal data to Statistics Netherlands, which, in turn, will report back to the relevant employers with a statistical and anonymous overview of the company’s cultural diversity (e.g., percentage of employees with a (i) Dutch background, (ii) western migration background, and (iii) non-western migration background). Statistics Netherlands can either provide information about the cultural diversity within the entire organization or within specific departments of the organization (provided that the individual departments have more than 250 employees).
United Kingdom
Annabel Gillham, Morrison & Foerster (UK) LLP
Summary requirements for processing racial and ethnic background data
Under UK law, there is no general prohibition on the collection of employees’ racial or ethnic background data by employers, provided that the specific conditions pursuant to Article 9 of the retained version of the GDPR (UK GDPR) are met. It is fair to say that the collection of such data is increasingly common in the UK workplace, with several organizations electing to publish their ethnicity pay gap.[1] In some cases, collection of racial or ethnic background data is a legal requirement. For example, with respect to accounting periods beginning on or after April 1, 2022, certain large listed companies are required to include in their published annual reports a “comply or explain” statement on the achievement of targets for ethnic minority representation on their board [2] and a numerical disclosure on the ethnic background of the board.[3]
Employers in the UK can lawfully process racial and ethnic background data of their employees for DEI Monitoring Purposes where the processing is (i) necessary for reasons of substantial public interest on the basis of UK law [4]; or (ii) carried out with the explicit consent of the employees [5]. Employers should ensure that they check and comply with the provisions of any agreement or arrangement with a works council, trade union or other employee representative body (e.g., relating to approval or consultation rights) when collecting and using such data.
Substantial public interest. Article 9 of the retained version of the UK GDPR prohibits the processing of special categories of data, with notable exceptions similar to those set out in section 5 above. Schedule 1 to the UK Data Protection Act 2018 (DP Act 2018) sets out specific conditions for meeting the “substantial public interest” ground under Article 9(2)(g) of the UK GDPR. Two conditions are noteworthy in the context of the collection of racial and ethnic background data.
The first is an “equality of opportunity or treatment” condition. This is available where processing of personal data revealing racial or ethnic origin is necessary for the purposes of identifying or keeping under review the existence or absence of equality of opportunity or treatment between groups of people of different racial or ethnic origins with a view to enabling such equality to be promoted or maintained.[6] There are exceptions – the data must not be used for measures or decisions with respect to a particular individual, nor where there is a likelihood of substantial damage or substantial distress to an individual. Individuals have a specific right to object to the collection of their information.
The second condition covers “racial and ethnic diversity at senior levels of organisations”.[7] Organisations may collect personal data revealing racial or ethnic origin where the collection is part of a process of identifying suitable individuals to hold senior positions (e.g., director, partner or senior manager), is necessary for the purposes of promoting or maintaining diversity in the racial and ethnic origins of individuals holding such positions, and can reasonably be carried out without the consent of the individual. When relying on this condition, organisations should factor in any risk that collecting such data may cause substantial damage or substantial distress to the individual.
In order to rely on either condition set out above, organisations must prepare an “appropriate policy document” outlining the principles set out in the Article 9 UK GDPR conditions and the measures taken to comply with those principles, along with applicable retention and deletion policies.
Explicit consent.[8] The conditions for consent set out in section 5.1 above apply in the same manner to the UK. Consent must be a freely given, specific, informed and unambiguous indication of an employee’s wishes. Therefore, any request for employees to provide racial or ethnic background data should be accompanied with clear information as to why it is being collected and how it will be used for DEI Monitoring Purposes.
[1] Ethnicity pay gap reporting – Women and Equalities Committee (parliament.uk)
[2] At least one board member must be from a minority ethnic background (as defined in the Financial Conduct Authority, Listing Rules and Disclosure Guidance and Transparency Rules (Diversity and Inclusion) Instrument 2022, https://www.handbook.fca.org.uk/instrument/2022/FCA_2022_6.pdf).
[6] Paragraph 8, Part 2 of Schedule 1 to the DP Act 2018.
[7] Paragraph 9, Part 2 of Schedule 1 to the DP Act 2018.
[8] Article 9(2)(a) UK GDPR.
* Ms. Moerel thanks Annabel Gillham, a partner at Morrison & Foerster in London, for her valuable input on a previous version of this article.
1 For recent statistics, see “A Union of equality: EU anti-racism action plan 2020-2025,” https://ec.europa.eu/info/sites/default/files/a_union_of_equality_eu_action_plan_against_racism_2020_-2025_en.pdf, p. 2, referring to wide range of surveys conducted by the EU Agency for Fundamental Rights (FRA) pointing to high levels of discrimination in the EU, with the highest level in the labor market (29%), both in respect of looking for work but also at work.
2 See, specifically, Council Directive 2000/43/EC of June 29, 2000, implementing the principle of equal treatment between persons irrespective of racial or ethnic origin (“Racial Equality Directive”), Official Journal L 180, 19/07/2000 P. 0022–0026. Action to combat discrimination and other types of intolerance at the European level rests on an established EU legal framework, based on a number of provisions of the European Treaties (Articles 2 and 9 of the Treaty on European Union (TEU), Articles 19 and 67(3) of the Treaty on the Functioning of the European Union (TFEU), and the general principles of non-discrimination and equality, also reaffirmed in the EU Charter of Fundamental Rights (in particular, Articles 20 and 21).
4 For a list of examples why diversity policies may fail: https://hbr.org/2016/07/why-diversity-programs-fail. See also Data-Driven Diversity (hbr.org): “According to Harvard Kennedy School’s Iris Bohnet, U.S. companies spend roughly $8 billion a year on DEI training—but accomplish remarkably little. This isn’t a new phenomenon: An influential study conducted back in 2006 by Alexandra Kalev, Frank Dobbin, and Erin Kelly found that many diversity-education programs led to little or no increase in the representation of women and minorities in management.”
5 Research shows that lack of social networks and mentoring and sponsoring is a limiting factor for the promotion of women, but this is even stronger for cultural diversity, due to the lack of a “social bridging network,” a network that allows for connections with other social groups, see Putnam, R.D. (2007) “E Pluribus Unum: Diversity and Community in the Twenty-first Century,” Scandinavian Political Studies, 30 (2), pp. 137‒174. While white men tend to find mentors on their own, women and minorities more often need help from formal programs. Introduction of formal mentoring shows real results: https://hbr.org/2016/07/why-diversity-programs-fail.
6 Quote is from https://hbr.org/2022/03/data-driven-diversity.
7 Historically, there have been cases of misuse of data collected by National Statistical Offices (and others), with extremely detrimental human rights impacts, see Luebke, D. & Milton, S. 1994, “Locating the Victim: An Overview of Census-Taking, Tabulation Technology, and Persecution in Nazi Germany.” IEEE Annals of the History of Computing, Vol. 16 (3). See also W. Seltzer and M. Anderson, “The dark side of numbers: the role of population data systems in human rights abuses,” Social Research, Vol. 68, No. 2 (summer 2001), the authors report that during the Second World War, several European countries, including France, Germany, the Netherlands, Norway, Poland, and Romania, abused population registration systems to aid Nazi persecution of Jews, Gypsies, and other population groups. The Jewish population suffered a death rate of 73 percent in the Netherlands. In the United States, misuse of population data on Native Americans and Japanese Americans in the Second World War is well documented. In the Soviet Union, micro data (including specific names and addresses) were used to target minority populations for forced migration and other human rights abuses. In Rwanda, categories of Hutu and Tutsi tribes introduced in the registration system by the Belgian colonial administration in the 1930s were used to plan and assist in mass killings in 1994.
8 The quote is from the Commission for Racial Equality (2000), Why Keep Ethnic Records? Questions and answers for employers and employees (London, Commission for Racial Equality).
9 In the U.S., for example, “crime prediction tools” proved to discriminate against ethnic minorities. The police stopped and searched more ethnic minorities, and as a result this group also showed more convictions. If you use this data to train an algorithm, the algorithm will allocate a higher risk score to this group. Discrimination by algorithms is therefore a reflection of discrimination already taking place “on the ground”. https://www.cnsnews.com/news/article/barbara-hollingsworth/coalition-predictive-policing-supercharges-discrimination.
11 See also the guidance of the UK Information Commissioner (ICO) on AI and data protection, Guidance on AI and data protection | ICO. In earlier publications, I have argued that the specific regime for processing sensitive data under the GDPR is no longer meaningful. Increasingly, it is becoming more and more unclear whether specific data elements are sensitive. Rather, the focus should be on whether the use of such data is sensitive. Processing of racial and ethnic data to eliminate discrimination in the workplace is an example of non-sensitive use, provided that strict technical and organizational measures are implemented to ensure that the data are not used for other purposes. See https://iapp.org/news/a/gdpr-conundrums-processing-special-categories-of-data/ and https://iapp.org/news/a/11-drafting-flaws-for-the-ec-to-address-in-its-upcoming-gdpr-review/.
12 Article 10(2) sub. 5 of the draft AI Act allows for the collection of special categories of data for purposes of bias monitoring, provided that appropriate safeguards are in place, such as pseudonymization.
13 A notable exception is the UK, which long before its exit from the EU, legislated for the collection of racial and ethnic data to meet the requirements of the substantial public interest condition for purposes of both “Equality of opportunity or treatment” and “Racial and ethnic diversity at senior levels” and further provided for criteria for processing ethnic data for statistical purposes, see Schedule 1 to the UK Data Protection Act 2018 (inherited from its predecessor, the Data Protection Act 1998), and the Information Commissioner’s Office Guidance on special category data, 2018. Schedule 1 also provides for specific criteria to meet the requirements of the statistical purposes condition. See here. Another exception is the Netherlands, which allows for limited processing of racial and ethnic data (limited to country of birth and parents’ or grandparents’ countries of birth) for reason of substantial public interest.
18 Complaint No. 15/2003, decision on the merits, Dec. 8, 2004, § 27.
19 See, e.g., ECRI General Policy Recommendation No. 4 on national surveys on the experience and perception of discrimination and racism from the point of view of potential victims, adopted on Mar. 6, 1998.
20 See the Report prepared for the European Commission, “Analysis and comparative review of equality data collection practices in the European Union: Data collection in the field of ethnicity,” 2017, data_collection_in_the_field_of_ethnicity.pdf (europa.eu), https://op.europa.eu/en/publication-detail/-/publication/cd5d60a3-094d-11e7-8a35-01aa75ed71a1#:~:text=The%20European%20Handbook%20on%20Equality,to%20achieve%20progress%20towards%20equality. For example, the European Handbook on Equality Data, initially dating from 2007, already stated that “Monitoring is perhaps the most effective measure an organisation can take to ensure it is in compliance with the equality laws.” The handbook was updated in 2016 and provides a comprehensive overview of how equality data can be collected, https://op.europa.eu/en/publication-detail/-/publication/cd5d60a3-094d-11e7-8a35-01aa75ed71a1/language-en.
23 EU Anti-racism Action Plan 2020-2025, p. 21, under reference to: Niall Crowley, Making Europe more Equal: A Legal Duty? https://www.archive.equineteurope.org/IMG/pdf/positiveequality_duties-finalweb.pdf, which reports on the Member States that already provide for such positive statutory duty. See p. 16 for an overview of explicit preventive duties requiring organizations to take unspecified measures to prevent discrimination, shifting responsibility to act from those experiencing discrimination to employers. They can stimulate the introduction of new organizational policies, procedures and practices on such issues.
25 For instance, target 17.18 in the 2030 Agenda requests that Social Development Goals indicators are disaggregated by income, gender, age, race, ethnicity, migratory status, disability, geographic location, and other characteristics relevant in national contexts.
30 See the policy statement on the website of the European Network Against Racism (ENAR), as well as the statement dated 28 September 2022 issued by Equal@Work Partners calling on the EU to implement conducive legal frameworks that will bring operational and legal certainty to organizations willing to implement equality data collection measures on the grounds of race, ethnicity and other related categories, https://www.enar-eu.org/about/equality-data. See also here.
31 For an excellent article on all issues related to ethnic categorization, see Bonnett, A., & Carrington, B. (2000), “Fitting into categories or falling between them? Rethinking ethnic classification,” British Journal of Sociology of Education, 21(4), pp. 487‒500.
33 See Corporate Sustainability Reporting Directive (Nov. 10, 2022), at point (4) of Article 1, available here.
34 The new EU sustainability reporting requirements will apply to all large companies that fulfill two of the following three criteria: more than 250 employees, €40 million net revenue, and more than €20 million on the balance sheet, whether listed or not. Lighter reporting standards will apply to small and medium enterprises listed on public markets.
35 See Article 1 (Amendments to Directive 2013/34/EU) sub. (8), which introduces new Chapter 6a Sustainability Reporting Standards, pursuant to which the European Commission will adopt a delegated act, specifying the information that undertakings are to disclose about social and human factors, which include equal treatment and opportunity for all, including diversity provisions.
36 See European Financial Reporting Advisory Group, Draft European Sustainability Reporting Standard S1 Own Workforce, Nov. 2022, here.
37 The initial working draft of ESRS S1 published by EFRAG for public consultation did include a public reporting requirement also of the total number of ‘employees belonging to vulnerable groups, where relevant and legally permissible to report’ (see disclosure requirement 11), Download (efrag.org). This requirement was deleted from the draft standards presented by EFRAG to the European Commission.
45 See older examples in Bonnett and Carrington 2000.
46 European Commission, Analysis and comparative review of equality data collection practices in the European Union Data collection in the field of ethnicity, p. 14, https://commission.europa.eu/system/files/2021-09/data_collection_in_the_field_of_ethnicity.pdf. Even in France, often seen as a case of absolute prohibition, ethnic data collection is possible under certain exceptions. The same applies to Italy. Italian Workers’ Statute (Italian Law No. 300/1970), Article 8, expressly forbids employers from collecting and using ethnic data to decide whether to hire a candidate and to decide any other aspect of the employment relationship already in place (like promotions). Collecting such data for monitoring workplace discrimination and equal opportunity falls outside the prohibition (provided it is ensured such data cannot be used for other purposes).
48 Data revealing racial or ethnic origin qualifies as a “special category” of personal data under Article 9 of the GDPR. Data on nationality or place of birth of a person or their parents do not qualify as special categories of data and can as a rule be collected without consent of the surveyed respondent. However, if they are used to predict ethnic or racial origin, they become subject to the regime of Article 9 GDPR for processing special categories of data.
50 Note that Article 9(2)(b) of the GDPR provides a condition for collecting racial and ethnic data where it “is necessary for the purposes of carrying out the obligations and exercising specific rights of the controller or of the data subject in the field of employment … in so far as it is authorised by Union or Member State law….” There is currently no Union or Member State law that provides for an employer obligation to collect racial or ethnic data for monitoring purposes. However, EU legislators considered this to be a valid exception for Member States to implement in their national laws. Art. 88 of the GDPR states that Member States may, by law or collective agreements, provide for more specific rules to ensure the protection of the rights and freedoms of employees with respect to the processing of employees’ personal data in the employment context. In particular, these rules may be provided for the purposes of, inter alia, equality and diversity in the workplace.
53 Article 4(11) of the GDPR defines consent as “any freely given, specific, informed, and unambiguous indication of the data subject’s wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her.”
54 The European Data Protection Board has made explicit in a series of guidance documents that, for the majority of data processing at work, consent is not a suitable legal basis due to the nature of the relationship between employer and employee. See also Opinion 2/2017 on data processing at work (WP249), paragraph 3.3.1.6.2, at https://edpb.europa.eu/sites/default/files/files/file1/edpb_guidelines_202005_consent_en.pdf.
61 See Article 29 Working Party Opinion 05/2014 on Anonymization Techniques, adopted on 10 April 2014.
62 Recital 26 of the GDPR states the principles of data protection should not apply to anonymous information which does not relate to an identified or identifiable natural person, or which relates to personal data rendered anonymous in such a manner that the data subject is not or no longer identifiable. Recital 26 further states that the GDPR does not concern the processing of such anonymous information, including for statistical or research purposes.
63 For an informative article on the practicalities of implementing data-driven diversity proposals, see Data-Driven Diversity (hbr.org). The distinction between outcome-based and process-based metrics is based on this article.