Report Outlines Key Privacy Considerations for Video-Based Safety Systems in Vehicles
Despite fewer vehicle miles traveled as a result of the COVID-19 pandemic, an estimated 38,680 individuals died in motor vehicle accidents in 2020 — the largest projected number of fatalities in such accidents in over a decade. Washington, D.C.-based non-profit Future of Privacy Forum (FPF) released a report detailing the data usage and privacy implications of video-based safety systems in vehicles. The report, co-authored with Samsara Inc. (NYSE: IOT), the pioneer of the Connected Operations Cloud, describes how Advanced Driver Assistance Systems (ADAS) work in commercial fleets, identifies the data used by these systems and urges the adoption of privacy best practices that go beyond compliance with existing privacy and data processing laws.
As vehicle safety technologies grow more sophisticated and affordable to deploy, vehicle manufacturers are increasingly adopting ADAS in vehicles. ADAS technologies utilize cameras and sensors to enable adaptive cruise control, emergency braking systems, and other measures — all with the aim of increasing driver safety.
Although these technologies are increasingly commonplace, the report describes how ADAS may create privacy risks for drivers, passengers, and other road users. Privacy risks involving location data, in-cabin video, and audio recordings can be particularly acute when drivers routinely eat, sleep, or talk in their vehicles.
Recent actions by the Department of Transportation, including initiatives such as FMCSA’s Tech-Celerate Now program, anticipate that ADAS will become increasingly common in the commercial transportation industry. The report identifies key data flows and privacy risks while emphasizing that privacy safeguards must be implemented alongside ADAS technologies.
Policymakers, commercial fleet operators, and their technology partners must recognize these risks and weigh data protection considerations when assessing the broader use of ADAS and related technologies.
“Policymakers, technology vendors, and commercial fleet managers must recognize and mitigate privacy risks to individuals when assessing the broader use of ADAS and related technologies,” said John Verdi, Senior Vice President of Policy at FPF. “Just as the technology will continue to develop, privacy and data processing laws must evolve as well.”
The Future of Privacy Forum and Samsara urge the adoption of privacy best practices that go beyond compliance with existing privacy and data processing laws, including:
Implementation of privacy by design principles, privacy impact assessments, data minimization strategies, and privacy-enhancing technologies;
Provision of enhanced transparency mechanisms to individuals;
Implementation of practical security safeguards appropriate for the sensitivity of the relevant data; and
Use of robust written policies and contracts to ensure that privacy protections remain attached to the data and that all parties with access to data understand their obligations.
“Technology needs to be designed and used with privacy and security in mind – it is no longer good enough to provide lip service to it,” said Lawrence Schoeb, Legal Director and Data Protection Officer at Samsara. “This is one of many reasons we strongly encourage the operation of any video-based safety systems to be consistent with and reflective of privacy best practices.”
As the annual Computers, Privacy and Data Protection (CPDP) conference took place in Brussels between May 23 and 25, several Future of Privacy Forum (FPF) staff took part in different panels and events organized by FPF or other organizations before and during the conference. In this blogpost, we provide an overview of such events, with a particular focus on the panel which FPF hosted on May 24 at CPDP, on the topic of Mobility Data Sharing under the upcoming EU Data Act: what are the data protection implications and how should the risks be mitigated?
All the below sessions were recorded by the CPDP organizers, and we will include a link to the recordings as soon as they are made available.
May 20: ADM Report Launch Event – A Discussion with Experts
On May 17, FPF launched a comprehensive Report analyzing case-law under the General Data Protection Regulation (GDPR) applied to real-life cases involving Automated Decision-Making (ADM). The Report, authored by FPF’s Policy Counsel, Sebastião Barros Vale, and FPF’s Vice President for Global Privacy, Gabriela Zanfir-Fortuna, is informed by extensive research covering more than 70 Court judgments, decisions from Data Protection Authorities (DPAs), specific Guidance and other policy documents issued by regulators.
On May 20, the authors discussed with prominent European data protection experts some of the most impactful decisions analyzed in the Report during an FPF roundtable. Speakers included Gianclaudio Malgieri, Co-director of the Brussels Privacy Hub and Associate Professor of Law at EDHEC Business School (Lille), Mireille Hildebrandt, Research Professor on ‘Interfacing Law and Technology’ at Vrije Universiteit Brussels, and Brendan van Alsenoy, Deputy Head of Unit “Policy and Consultation” at the European Data Protection Supervisor (EDPS). The expert roundtable discussion was enriched by representatives from UK’s Department for Digital, Culture, Media and Sport (DCMS), the European Consumer Organization (BEUC), and the Brussels Privacy Hub. Watch a recording of the conversation here, and download the slides here.
May 22: CPDP Opening Night – Vulnerable Data Subjects
The day before the conference program started, Gabriela Zanfir-Fortuna was part of a stellar panel organized by the Brussels Privacy Hub for the Opening Night, on the topic of “Vulnerable Individuals in the Age of Artificial Intelligence (AI) Regulation”. The panel, which was moderated by Gianclaudio Malgieri, also featured Mireille Hildebrandt, Louisa Klingvall (European Commission), Ivana Bartoletti (University of Oxford), and Brando Benifei (co-rapporteur of the AI Act at the European Parliament). It explored how the AI Act draft proposed to protect vulnerable individuals by prohibiting the exploitation of some forms of vulnerability (based on age, disability, economic and social conditions): could the definition of vulnerability under the text be broadened?
The occasion also served as an opportunity to announce that FPF and the Brussels Privacy Hub will set up an International Observatory on Vulnerable People in Data Protection (the ‘VULNERA’ project) and offered a preview of its website. More details will follow in the coming months.
May 23: Global panel on post-COVID data protection; AI Act in the employment context
The first day of the CPDP conference was a busy one for FPF’s Gabriela Zanfir-Fortuna. She started early in a speaking role on a panel about ‘Data Protection Regulation Post-COVID: the Current Landscape of Discussions in Europe, the US, India and Brazil’, organized by Data Privacy Brasil (DPB) and moderated by DPB’s Bruno Bioni. The session, during which Gabriela offered the US perspective on the matter, also featured valuable input from FPF’s Senior Fellow for India, Malavika Raghavan, the European Data Protection Board (EDPB)’s Head of Secretariat, Isabelle Vereecken, and the Executive Director of the Africa Digital Rights Hub LBG, Teki Akuetteh Falconer. Panelists reflected on new questions for data sharing and protection that had arisen in their regions in areas such as public health (including the design of contact tracing and health passport apps), education, and welfare/social security. View a recording of the session here.
During the last slot of the day, Gabriela moderated a panel on ‘The AI Act and the Context of Employment’, which saw a lively debate on the extent to which the draft Regulation protects workers against AI-powered workplace monitoring and decisions. The panelists in this instance were Aida Ponce del Castillo (European Trade Union Institute), Diego Naranjo (Head of Policy at European Digital Rights), Paul Nemitz (Principal Advisor at the European Commission’s DG JUST), and Simon Hania (Data Protection Officer at Uber). You can read about the main points raised by the speakers in this short thread.
May 24: Cross-Continental privacy compliance and FPF’s panel on Mobility Data
The second day was packed with interesting discussions on topics such as GDPR enforcement conundrums, privacy class actions, and how data protection law can tackle manipulative web design (or ‘dark patterns’). FPF staff were involved in some of the most exciting panels and events of the day.
FPF’s Policy Analyst, Mercy King’ori, was a speaker at a panel at the CPDP Global track, on ‘Corporate Compliance with a Cross Continental Framework: the State of Global Privacy in 2022’. While Mercy elaborated on African regulatory developments, the remaining speakers focused on different jurisdictions, such as the EU, Brazil, China, and Israel. The debate, which was moderated by FPF’s Senior Fellow, Omer Tene, also featured contributions from Renato Leite Monteiro (DPB), Barbara Li (PwC), and Anna Zeiter (Chief Privacy Officer at eBay).
Later that evening, the conference hosted the session organized by FPF, on the topic of ‘Mobility Data for the Common Good? On the EU Mobility Data Space and the Data Act’. The panel was moderated by FPF’s Managing Director for Europe, Rob van Eijk, and aimed to answer several questions, including whether the draft Data Act and the upcoming EU Mobility Data Space could address cities’ innovation and sustainability goals, while still safeguarding citizens’ privacy. The expert speakers around the table were Maria Rosaria Coduti (Policy Officer at the European Commission’s DG CNECT), David Wagner (German Research Institute for Public Administration, or “FÖV”), Laura Cerrato (DPO at the Centre d’Informatique pour la Région de Bruxelles), and Arjan Kapteijn (Senior Inspector for the department of Systemic Oversight within the Dutch DPA). View a recording of the session here.
Maria Rosaria Coduti explained that the combination of different pieces of the EU’s Data Strategy – notably, the Data Governance Act (DGA), the Data Act and the Common European Data Spaces – seeks to remove barriers to the access and sharing of data. This can be achieved by incentivizing private and public sector players, as well as data subjects, to share data on a voluntary basis (e.g., through data intermediation services and data altruism organizations), and by compelling entities to share data where there is an imbalance of power between data holders and users, or where public interest grounds exist. An example of the latter case is the use of mobility data held by telco providers to help map the spread of COVID-19. However, the Data Act defines strict rules for data access requests made by public bodies to private players, and a limitation on the use of such data to public emergency situations. With regard to Business-to-Business data sharing under the Data Act, Coduti underlined that the text’s provisions on cloud switching and interoperability may force designers of connected products (such as cars, planes, and trains) to design them in a way that makes the data they generate easily accessible to users and to the data recipients those users choose.
Laura Cerrato explained that, in her role as DPO for the IT services provider of the Region of Brussels’ public authorities, she invests effort in explaining the legal intricacies of data sharing to such authorities. According to Cerrato, the Data Act will open new possibilities for government bodies to access privately-held data, but this requires transparency and accountability toward citizens. Moreover, as her office is piloting a Mobility-as-a-Service project in the city, there was a need to discuss the appropriate legal basis for personal data processing in that context, as public authorities cannot rely on the legitimate interest ground under Article 6 GDPR. In that respect, Cerrato underlined that the public interest legal basis can only be used if it is provided under national or EU law, which was lacking for Smart City development in Belgium.
Arjan Kapteijn followed up on Cerrato’s remarks, pointing toward the recent Dutch DPA guidance on Smart Cities. In the lead-up to such recommendations, the DPA investigated the records of processing activities (ROPAs) and data protection impact assessments (DPIAs) of 12 Dutch cities carrying out Smart City-related projects and asked why they did not consult the DPA prior to the data processing, as per Article 36 GDPR. Among the DPA’s findings, there were some misconceptions among municipalities regarding the concept of “personal data” when applied to mobility datasets, and a belief that the GDPR did not apply to pilot projects, which may have led to a lack of transparency toward citizens. Kapteijn stressed that data collected through sensors – such as data from connected vehicles, smart traffic lights, and wi-fi tracking in public spaces – is often covered by the GDPR. Lastly, the speaker warned that it is difficult to make location data truly anonymous according to GDPR standards, and noted that certain hashing techniques, privacy by design, and data minimization may play a valuable role in retaining data utility while protecting the data.
Lastly, David Wagner focused on the concept of anonymization under Recital (26) GDPR, and how it applies to location and mobility data. He explained that anonymizing this data is hard because individual movement patterns can identify persons. Working towards anonymization requires suppressing or distorting some data points (e.g., by adding noise), which reduces data utility. The anonymization test in Recital (26) GDPR, which considers the “means reasonably likely to be used” to identify a natural person, arguably invites controllers to evaluate potential attackers’ cost-benefit calculations, making it hard to determine what counts as a reasonable re-identification attempt. Thus, Wagner argued that the GDPR defines a threshold for anonymity, but that controllers and regulators need an effective and reliable scale to assess it. The upcoming update from the EDPB to the 2014 Article 29 Working Party guidelines on anonymization may provide such evaluation criteria.
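To make the trade-off described by Kapteijn and Wagner concrete, the minimal Python sketch below applies two of the techniques mentioned during the panel to a set of location records: adding noise to coordinates and suppressing rarely visited locations. The noise scale, grid size, and suppression threshold are illustrative assumptions chosen for the example, not values drawn from the panel or from any regulatory guidance.

```python
# Illustrative sketch only: noise addition and suppression for location records.
# The parameters (noise scale ~100 m, ~1 km grid cells, threshold k) are
# assumptions for the example, not regulatory requirements.
import numpy as np

def perturb_point(lat: float, lon: float, scale_m: float = 100.0):
    """Add Laplace-distributed noise of roughly `scale_m` meters to a GPS fix."""
    deg = scale_m / 111_000.0  # ~111 km per degree of latitude
    return lat + np.random.laplace(0.0, deg), lon + np.random.laplace(0.0, deg)

def suppress_rare_cells(records: list, k: int = 5) -> list:
    """Drop records whose coarse grid cell is visited by fewer than k distinct users."""
    cell = lambda r: (round(r["lat"], 2), round(r["lon"], 2))  # ~1 km grid
    users_per_cell = {}
    for r in records:
        users_per_cell.setdefault(cell(r), set()).add(r["user_id"])
    return [r for r in records if len(users_per_cell[cell(r)]) >= k]

# Example usage with fabricated records
records = [{"user_id": i % 3, "lat": 50.85 + i * 1e-4, "lon": 4.35} for i in range(30)]
noisy = []
for r in records:
    lat, lon = perturb_point(r["lat"], r["lon"])
    noisy.append({**r, "lat": lat, "lon": lon})
kept = suppress_rare_cells(noisy, k=3)
```

Even with such measures, repeated location points from the same individual can form a distinctive movement pattern, which is precisely the re-identification risk Wagner describes; this is why the utility-privacy trade-off remains difficult to assess against the Recital (26) test.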
May 25: FPF’s De-Identification Masterclass and Data Protection in China
On the morning of the conference’s closing day, FPF hosted an engaging and well-attended Masterclass on the ‘State-of-Play of De-Identification Techniques’ as an official side event. The session’s moderator, FPF’s Rob van Eijk, kicked off the discussion by presenting the 2016 FPF Infographic on data de-identification, and how it fares against the GDPR’s concept of anonymization. Then, expert speakers Sophie Stalla-Bourdillon (Immuta), Naoise Holohan (IBM), and Lucy Mosquera (Replica Analytics) presented on cutting-edge techniques, notably – and respectively – on Homomorphic Encryption, Differential Privacy, and Synthetic Data. View a recording of the session here.
New Report on Limits of “Consent” in China’s Data Protection Law – First in a Series for Joint Project with Asian Business Law Institute
The Future of Privacy Forum (FPF) and Asian Business Law Institute (ABLI) are publishing today the first in a series of 14 detailed jurisdiction reports that will explore the role and limits of consent in the data protection laws and regulations of 14 jurisdictions in Asia Pacific (Australia, China, Hong Kong SAR, India, Indonesia, Japan, Macau SAR, Malaysia, New Zealand, the Philippines, South Korea, Singapore, Thailand, and Vietnam), as part of FPF and ABLI’s ongoing joint research project: “From Consent-Centric Data Protection Frameworks to Responsible Data Practices and Privacy Accountability in Asia Pacific.”
The first report focuses on the status of “consent” and alternatives to consent as lawful bases for processing personal data in the People’s Republic of China. Over the coming weeks, FPF and ABLI will continue publishing these reports, which will inform a forthcoming comparative review paper with detailed recommendations to promote legal convergence around requirements for processing personal data in the Asia Pacific region.
Background on the ABLI/FPF Project
In August 2021, ABLI and FPF concluded a cooperation agreement to understand, analyze, and support the convergence of data protection regulations and best data protection practices in the Asia Pacific region through joint research, publications, and events. This collaboration builds on the substantial work done by ABLI and FPF on data protection and privacy laws and frameworks in the Asia Pacific (APAC) region.
The starting point for FPF’s collaboration with ABLI is the understanding that as personal data protection frameworks in Asia are at a critical stage in their development – whether they are in the process of adoption or reform, or are at the early stages of their implementation – there is an urgent need for understanding where they differ and for identifying opportunities for convergence of key data protection rules and principles at the regional level.
Previous work by ABLI has demonstrated the collective benefits of legal certainty and convergence in the area of cross-border flows of personal data in APAC. As this work has proven useful for policymakers as they address these issues, ABLI and FPF launched a joint project with the same philosophy and methodology, entitled “From Consent-Centric Data Protection Frameworks to Responsible Data Practices and Privacy Accountability in Asia Pacific,” to promote legal convergence around principled, accountability-based requirements for processing personal data in Asia Pacific.
In APAC as elsewhere, there is a growing conversation around the limitations of “notice and consent” and how to address them. Notice and consent requirements have long been used to justify the collection and processing of personal data. However, in recent years, this justification has increasingly been called into question:
Over-reliance on consent has led to the development of a “tick-the-box” approach to data protection for organizations and “consent fatigue” for individuals, which contradict the original purpose of data protection laws.
The requirement to obtain consent (especially where consent must be given explicitly) is increasingly proving inadequate in the era of ambient computing, the Internet of Things (IoT), and multi-stakeholder digital ecosystems and platforms.
Consent requirements are also increasingly complex for organizations to apply, and legal fragmentation has made operations across jurisdictions even more challenging, leading to unnecessary compliance costs.
Many APAC jurisdictions have already come to recognize the limitations of consent, especially in the digital space. To highlight a few examples:
In 2018, a report of a Committee of Experts on a Free and Fair Digital Economy in India described the operation of notice and consent on the internet as “broken” and questioned whether consent alone could be an effective method for protecting personal data and preventing individual harm.
In 2019, New Zealand’s then-Privacy Commissioner, John Edwards, declared in a much-cited blog post that click-to-consent was “not good enough anymore” and called for consumers and businesses alike to rethink consent and move towards Privacy by Design.
In 2020, Singapore restructured its Personal Data Protection Act from a primarily consent-based framework to permitting collection, use, and disclosure of personal data without consent in a wide range of situations, including “vital interests of individuals,” “matters affecting the public,” “legitimate interests [of organizations],” “business asset transactions,” “business improvement purposes,” and “research.”
However, this trend is not shared by all jurisdictions. Many data protection laws in APAC (and elsewhere) still require consent by default for the collection and processing of personal data. “Tick-the-box” compliance habits or reluctance to change user experience often lead organizations to fall back on consent. In APAC, these problems are reinforced by the fragmentation of regional laws: for all its limitations, consent is still often perceived as a common denominator and the “easiest” or “safest” way to comply across borders, even where consent is not necessary or justifiable, or where accountability-focused options like legitimate interests could apply and would be better suited to the needs of both organizations and individuals.
The ABLI and FPF project aims to guide the development of data protection frameworks in APAC away from consent-centric, “tick-the-box” compliance requirements and towards responsible data practices and accountability for privacy when processing personal data. At the same time, the project recognizes that effective policies need to balance the interests of individuals in protecting their personal data and organizations in using personal data, while also promoting the interests of broader society, such as developing a vibrant digital economy and preventing crimes and fraud.
This requires frameworks to realign the role of consent, returning it to the position it occupied in the very first data protection frameworks: one of several, equal legal bases for processing personal data, rather than the default or sole basis.
First Report: Consent in China’s Data Protection Law
In the first stage of this collaboration, FPF and ABLI have undertaken a comprehensive review of the role and position of “notice and consent” in 14 APAC jurisdictions: Australia, China, Hong Kong SAR, India, Indonesia, Japan, Macau SAR, Malaysia, New Zealand, the Philippines, South Korea, Singapore, Thailand, and Vietnam.
These reports draw on insights provided by thought leaders, regulators, and practitioners during the first event co-organized by FPF and ABLI: a virtual panel entitled “Exploring Trends: From ‘Consent-Centric’ Frameworks to Responsible Data Practices and Privacy Accountability in Asia Pacific” which was co-hosted by Singapore’s Personal Data Protection Commission in September 2021.
To that end, FPF and ABLI are delighted to announce the first publication in this joint project: a detailed jurisdiction report on the status of consent in China’s data protection framework.
China’s data protection law has been evolving in recent years. Though China’s personal data protection framework has traditionally prioritized consent, the adoption of the Personal Information Protection Law last year was a paradigm shift which repositioned consent as one of seven equal legal bases for processing personal data in a model likely inspired by the GDPR.
This report provides a detailed overview of relevant laws and regulations in China on:
notice and consent requirements for processing personal data;
alternative legal bases for processing personal data which permit processing of personal data without consent if the data controller undertakes a risk impact assessment (e.g., legitimate interests); and
statutory bases for processing personal data without consent and exceptions or derogations from consent requirements in general and sector-specific laws and regulations.
The reports draw from the professional knowledge, experience, and opinions of a wide range of expert contributors from across the APAC region. ABLI and FPF are grateful for the invaluable contributions of these contributors, who have kindly shared detailed information, comments, and clarifications on the legal frameworks in their respective jurisdictions.
Upcoming for the ABLI/FPF Project
Over the coming weeks, FPF and ABLI will publish these reports as part of an ABLI-FPF Series on Convergence of Data Protection and Privacy Laws in APAC.
The findings presented in the reports will also inform the second stage of ABLI and FPF’s collaboration: a comparative review paper which sets out proposals as to how policymakers can not only promote legal convergence in the APAC region but also help organizations to move away from overreliance on lengthy privacy policies and often artificial consent and towards responsible data practices that strike a balance between the needs of organizations that collect and process data, the rights of individuals in protecting their data, and the interests of society at large.
FPF hopes that these publications will prove useful to lawmakers, governments, and regulators in APAC (and beyond) who are currently drafting, reviewing, or implementing data protection laws in their respective jurisdictions.
In October 2021, the White House Office of Science and Technology Policy (OSTP) published a Request for Information (RFI) regarding uses, harms, and recommendations for biometric technologies. Over 130 entities responded to the RFI, including advocacy organizations, scientists, healthcare experts, lawyers, and technology companies. While most commenters agreed on core concepts of biometric technologies used to identify or verify identity (though they differed on how to address them in policy), there was clear division over the extent to which the law should apply to emerging technologies used for physical detection and characterization (such as skin cancer detection or diagnostic tools). These comments reveal that there is no general consensus on what “biometrics” should entail and thus what the applicable scope of the law should be.
Using the OSTP comments as a reference point, this briefing explores four main points regarding the scope of “biometrics” as it relates to emerging technologies that rely on human body-based data but are not used to identify or track:
Biometrics technologies range widely in purpose and risk profile (including 1:1, 1:Many, tracking, characterization, and detection).
Current U.S. biometric privacy laws and regulatory guidance largely limit the scope of “biometrics” to identification and verification (with the exception of Texas and a few caveats surrounding ongoing litigation in Illinois).
Many academics and civil society members argue the existing framework should be expanded to other emerging technologies such as detection and characterization tools because these uses can still pose risks to individuals, particularly in exacerbating discrimination against marginalized communities.
Many in industry disagree with expanding the definitional scope, and posit that identification/verification technologies pose very different types and levels of risks from detection and characterization, and thus there should be a regulatory distinction.
As policymakers consider how to regulate biometric data, they should understand the different technologies, the risks associated with each, and the existing laws and frameworks, and take into account the policy arguments for how the law should interact with emerging technologies (such as “characterization” and “detection”) that rely on an individual’s physical data but are not used to identify or track.
1. Types of Biometric and Body-Based Technologies
In Future of Privacy Forum’s 2018 Privacy Principles for Facial Recognition Technology in Commercial Applications, FPF distinguished between five types of facial recognition technologies: detection, characterization, unique persistent tracking, 1:1 verification, and 1:many identification. The same distinctions apply readily to the broader world of biometric data. Most notably, unlike identification and verification, detection and characterization technologies are developed to detect or infer bodily characteristics or behavior, but the subject is not identifiable (meaning PII is typically not retained), unless the user actively links the data to a known identity or unique profile. A brief explanation of terms and distinctions is provided in Table 1.
In considering these distinctions, the OSTP responses broadly showcased two contrasting frameworks for which technologies should fall under the scope of “biometrics”:
Biometric data should be defined by the source of the data itself (i.e., it is biometric because it is data derived from an individual’s body); therefore, any processing activity dependent on data from an individual’s body, including detection and characterization, should be regulated under biometric privacy laws.
Biometric data should be defined by the processing activity (i.e., it is biometric because it is unique physical data used to identify or verify individuals); therefore, only those uses should be regulated under biometric privacy laws. Since detection and characterization are not used for identification, they should not fall within the scope of the law. The illustrative sketch following this list expresses the contrast between the two approaches.
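As a purely illustrative aid (not a reading of any statute), the short Python sketch below expresses the two contrasting frameworks as simple scope checks; the enum values and function names are hypothetical.

```python
# Hypothetical sketch of the two contrasting scoping approaches described above.
# Names are illustrative and do not track any particular statute.
from enum import Enum, auto

class Purpose(Enum):
    DETECTION = auto()        # e.g., face detection for camera auto-focus
    CHARACTERIZATION = auto() # e.g., inferring an emotional state or age range
    VERIFICATION = auto()     # 1:1 matching against a claimed identity
    IDENTIFICATION = auto()   # 1:many matching against a gallery
    TRACKING = auto()         # persistent unique tracking over time

def in_scope_source_based(derived_from_body: bool, purpose: Purpose) -> bool:
    # Framework 1: any processing of data derived from the body is "biometric".
    return derived_from_body

def in_scope_use_based(derived_from_body: bool, purpose: Purpose) -> bool:
    # Framework 2: only identity-linked uses fall within scope.
    return derived_from_body and purpose in {Purpose.VERIFICATION, Purpose.IDENTIFICATION}

# A detection-only use is in scope under the first framework but not the second.
print(in_scope_source_based(True, Purpose.DETECTION))  # True
print(in_scope_use_based(True, Purpose.DETECTION))     # False
```

How a given law treats persistent tracking varies, which is one reason the definitional debate described in the next section matters.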
2. “Biometrics” Under Existing Legal Frameworks in the U.S.
Definitions in U.S. state biometric privacy laws and comprehensive data privacy laws largely limit the scope of “biometric information” or “biometric data” to data collected for purposes related to identification, with some exceptions (including Texas, and emerging case law in Illinois) (see Table 2). Biometric data privacy laws in the U.S. were mainly passed to mitigate privacy and security risks associated with individuals’ biometric data since the data is inherently unique and cannot be altered. For example, Section 5 of the Illinois Biometric Information Privacy Act (740 ILCS 14/5) states:
Biometrics are unlike other unique identifiers that are used to access finances or other sensitive information. For example, social security numbers, when compromised, can be changed. Biometrics, however, are biologically unique to the individual; therefore, once compromised, the individual has no recourse, is at heightened risk for identity theft, and is likely to withdraw from biometric-facilitated transactions.
How U.S. policymakers and the courts have thought about the scope of these laws is heavily dependent on the Illinois Biometric Information Privacy Act (BIPA), the seminal biometric privacy law in the U.S. Importantly, BIPA contains a private right of action that has allowed courts to decide the boundaries of what technologies should and should not be within the scope of the law. The technologies most targeted under the law include social media photo tagging features, employee timekeeping verification systems, and facial recognition. Thus far, no state or federal court appears to have conclusively held that BIPA applies to purely detection-based or characterization technology that is not used for identification or verification purposes. Ongoing litigation, however, appears to be raising important questions on whether and when detection and characterization technologies overlap with terms used to define biometrics in the law. For example, in Gamboa v. The Procter & Gamble Company, the Northern District of Illinois must decide to what extent “facial geometry” can apply to uses such as detecting the position of a toothbrush in a user’s mouth.
Illinois* (740 ILCS 14)
“Biometric information” means any information, regardless of how it is captured, converted, stored, or shared, based on an individual’s biometric identifier used to identify an individual. Biometric information does not include information derived from items or procedures excluded under the definition of biometric identifiers.
“Biometric identifier” means a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry.*
“Biometric identifier” does not include:
– writing samples, written signatures, photographs, human biological samples used for valid scientific testing or screening, demographic data, tattoo descriptions, or physical descriptions such as height, weight, hair color, or eye color.
– donated organs, tissues, or parts as defined in the Illinois Anatomical Gift Act or blood or serum stored on behalf of recipients or potential recipients of living or cadaveric transplants and obtained or stored by a federally designated organ procurement agency.
– biological materials regulated under the Genetic Information Privacy Act.
– information captured from a patient in a health care setting or information collected, used, or stored for health care treatment, payment, or operations under the federal Health Insurance Portability and Accountability Act of 1996.
– X-ray, roentgen process, computed tomography, MRI, PET scan, mammography, or other image or film of the human anatomy used to diagnose, prognose, or treat an illness or other medical condition or to further validate scientific testing or screening.
*The Illinois Biometric Information Privacy Act defines both “biometric information” and “biometric identifier,” with the substantive requirements of the law applying to both. Some emerging case law is finding that BIPA applies to the processing of biometric identifiers even when a specific individual is not being identified but the data is nonetheless used in facial recognition software. See, e.g., Monroy v. Shutterfly, Inc., Case No. 16 C 10984 (N.D. Ill. Sep. 15, 2017); In re Facebook Biometric Info. Privacy Litig., 185 F. Supp. 3d 1155 (N.D. Cal. 2016).
Washington (Wash. Rev. Code Ann. §19.375.020)
“Biometric identifier” means data generated by automatic measurements of an individual’s biological characteristics, such as a fingerprint, voiceprint, eye retinas, irises, or other unique biological patterns or characteristics that is used to identify a specific individual. “Biometric identifier” does not include a physical or digital photograph, video or audio recording or data generated therefrom, or information collected, used, or stored for health care treatment, payment, or operations under the federal health insurance portability and accountability act of 1996.
Texas (Tex. Bus. & Com. Code §503.001)
“Biometric identifier” means a retina or iris scan, fingerprint, voiceprint, or record of hand or face geometry.
California (CPRA §1798.140(c))
“Biometric information” means an individual’s physiological, biological or behavioral characteristics, including information pertaining to an individual’s deoxyribonucleic acid (DNA), that is used or is intended to be used singly or in combination with each other or with other identifying data, to establish individual identity. Biometric information includes, but is not limited to, imagery of the iris, retina, fingerprint, face, hand, palm, vein patterns, and voice recordings, from which an identifier template, such as a faceprint, a minutiae template, or a voiceprint, can be extracted, and keystroke patterns or rhythms, gait patterns or rhythms, and sleep, health, or exercise data that contain identifying information.
Virginia (Va. Code §59.1-571)
“Biometric data” means data generated by automatic measurements of an individual’s biological characteristics, such as a fingerprint, voiceprint, eye retinas, irises, or other unique biological patterns or characteristics that is used to identify a specific individual.
“Biometric data” does not include a physical or digital photograph, a video or audio recording or data generated therefrom, or information collected, used, or stored for health care treatment, payment, or operations under HIPAA.
Utah (S.B. 227)
“Biometric data” includes data described in Subsection (6)(a) that are generated by automatic measurements of an individual’s fingerprint, voiceprint, eye retinas, irises, or any other unique biological pattern or characteristic that is used to identify a specific individual.
“Biometric data” does not include: (i) a physical or digital photograph; (ii) a video or audio recording; (iii) data generated from an item described in Subsection (6)(c)(i) or (ii); (iv) information captured from a patient in a health care setting; or (v) information collected, used, or stored for treatment, payment, or health care operations as those terms are defined in 45 C.F.R. Parts 160, 162, and 164.
Connecticut (S.B. 6)
“Biometric data” means data generated by automatic measurements of an individual’s biological characteristics, such as a fingerprint, a voiceprint, eye retinas, irises or other unique biological patterns or characteristics that are used to identify a specific individual.
“Biometric data” does not include (A) a digital or physical photograph, (B) an audio or video recording, or (C) any data generated from a digital or physical photograph, or an audio or video recording, unless such data is generated to identify a specific individual
Table 2. Definitions of Biometric Data in U.S. Laws
3. Arguments for Expanding Biometric Regulations to Include Detection and Characterization Technologies
The OSTP RFI demonstrates how the scope of biometric privacy laws could be expanded beyond identification-based technologies. The RFI used “biometric information” to refer to “any measurements or derived data of an individual’s physical (e.g., DNA, fingerprints, face or retina scans) and behavioral (e.g., gestures, gait, voice) characteristics.” The OSTP further noted that “we are especially interested in the use of biometric information for: Recognition…and Inference of cognitive and/or emotion state.”
Many respondents, largely from civil society and academia, discussed the risks of technologies that collect and track an individual’s body-based data for detection, characterization, and other inferences. Specific use cases identified in the responses included: counting the number of customers in a store (by detecting and counting faces in video footage), diagnosing skin conditions, tools used to infer human emotion, disposition, character, or intent (EDCI), eye and head movement tracking, and vocal biomarkers (medical inferences based on inflections in a person’s voice).
In all of these examples, respondents emphasized that bodily detection and characterization technologies carry significant risks of inaccuracy and discrimination. Even if not used to identify or track, respondents argued that detection and characterization technologies are still harmful and unreliable, largely because they are built upon unverified assumptions and pseudoscience. For instance, respondents noted that:
EDCI tooling (using facial characterization to infer emotional or mental states) is not reliable because there is no reliable or universal relationship between emotional states and observable biological activity.
Video analytics that claim to detect lies or deception through eye tracking are unreliable because the link between high-level mental states such as “truthfulness” and low-level, involuntary external behavior is too ambiguous and unreliable to be of use.
The real-world performance of models used to diagnose patients based on speech and language (vocal biomarkers) is not properly validated.
As a result, many experts argued that these systems exacerbate discrimination and existing inequalities against protected classes, most notably people of color, women, and the disabled. For example, Dr. Joy Buolamwini, a leading AI ethicist, points to her peer-reviewed MIT study demonstrating how commercial facial analysis systems used to detect skin cancer exhibit lower rates of accuracy for darker-skinned females. Consequently, women of color have a higher rate of misdiagnosis. In another example, the Center for Democracy and Technology notes, in examining the use of facial analysis for diagnoses:
“. . . facial analysis has been used to diagnose autism by analyzing facial expressions and repetitive behaviors, but these attributes tend to be evaluated relative to how they present in a white autistic person assigned male at birth and identifying as masculine. Attributes related to neurodivergence vary considerably because racial and gender norms cause other forms of marginalization to affect how the same disabilities present, are perceived, and are masked. Therefore, people of color, transgender and gender nonconforming people, and girls and women are less likely to receive accurate diagnoses particularly for cognitive and mental health disabilities…” (citations omitted).
Accordingly, many respondents recommended expanding the existing biometrics framework to cover a broader set of technologies that collect and track any data derived from the body, including detection and characterization, because they similarly carry risks that could be mitigated by federal guidelines and regulation. Specifically, some of the policy proposals set forth in RFI comments included:
Banning or severely limiting the government’s use of biometric technologies, including detection and characterization using an individual’s physical features;
Prohibiting all collection of biometric data without express consent; and
Requiring private entities collecting any form of biometric data to demonstrate that the system does not disparately impact marginalized communities through rigorous testing, auditing, and oversight.
4. Arguments Against Expanding Biometric Regulations to Equally Apply to Detection and Characterization Technologies
Many respondents from the technology and business communities argued that the OSTP’s broad scope of “biometrics,” encompassing all forms of bodily measurement, is inconsistent with existing laws and scientific standards. Accenture, SIIA, and MITRE cited definitions set forth by the National Institute of Standards and Technology (NIST), the National Center for Biotechnology Information (NCBI), the Federal Bureau of Investigation, and the Department of Homeland Security, as well as U.S. state biometric and comprehensive privacy laws, which all limit “biometrics” to recognition or identification of an individual. As a result, most businesses have relied on this framework, and the guidance set forth by these entities, in developing their internal practices and procedures for processing such data.
Respondents also argued that systems used for identification and verification differ from those used for detection and characterization in their uses, processes, and risk profiles. Whereas identification, verification, and tracking technologies are directly tied to an individual’s identity or unique profile, and thus carry specific privacy concerns, detection and characterization technologies do not necessarily carry such risks when not employed against known individuals. Therefore, many respondents argued, a horizontal standard for biometrics that conflates all of these technologies, such as a blanket ban, may cause unintended consequences, largely by hindering the progress of low-risk and valuable applications that society relies upon. Some examples presented of lower-risk use cases for bodily detection and characterization include:
Face detection for a camera to provide auto-focus features;
Skin characterization to diagnose skin conditions from images;
Video analytics to determine the number of people in a store in order to comply with COVID-19 occupancy restrictions; and
Assistive technology, such as speech transcription tools or auto-captioning.
With these technologies, industry experts emphasize that applications do not necessarily identify individuals, but process physical characteristics to deliver beneficial products or services. Though many companies acknowledged risks related to accuracy, bias, and discrimination against marginalized populations, they argued that such risks should be addressed outside the framework of “biometrics” – instead through a risk-based approach that distinguishes the types, uses, and levels of risk of different technologies. Because identifying technologies often pose a higher risk of harm, respondents noted that it is appropriate for them to incorporate more rigorous safeguards; however, those safeguards may not be equally necessary or valuable for other technologies.
Some examples of policy proposals provided by respondents that tailor the regulation to the specific use case or technology include:
Requiring that biometric systems operate within certain allowable limits of demographic differentials for the specified use case(s);
Requiring that automated decisions be adjudicated by a human for certain intended use case(s); and
Establishing heightened requirements for law enforcement use of biometric information for identification purposes.
What’s Next?
At the end of the day, all technologies relying on data derived from our bodies carry some form of risk. Bodily characterization and detection technologies may not always carry privacy risks, but may nonetheless lead to invasive forms of profiling or discrimination that could be addressed through general AI regulations – such as requirements to conduct impact assessments or independent audits. Meanwhile, disagreements over definitions of “biometrics” may be overshadowing the key policy questions to be addressed, such as how to differentiate and mitigate current harms caused by unfair or inaccurate profiling.
FPF Report: Automated Decision-Making Under the GDPR – A Comprehensive Case-Law Analysis
On May 17, the Future of Privacy Forum launched a comprehensive Report analyzing case-law under the General Data Protection Regulation (GDPR) applied to real-life cases involving Automated Decision Making (ADM). The Report is informed by extensive research covering more than 70 Court judgments, decisions from Data Protection Authorities (DPAs), specific Guidance and other policy documents issued by regulators.
The GDPR has a particular provision applicable to decisions based solely on automated processing of personal data, including profiling, which produces legal effects concerning an individual or similarly affects that individual: Article 22. This provision enshrines one of the “rights of the data subject”, particularly the right not to be subject to decisions of that nature (i.e., ‘qualifying ADM’), which has been interpreted by DPAs as a prohibition rather than a prerogative that individuals can exercise.
However, the GDPR’s protections for individuals against forms of automated decision-making (ADM) and profiling go significantly beyond Article 22. In this respect, there are several safeguards that apply to such data processing activities, notably the ones stemming from the general data processing principles in Article 5, the legal grounds for processing in Article 6, the rules on processing special categories of data (such as biometric data) under Article 9, specific transparency and access requirements regarding ADM under Articles 13 to 15, and the duty to carry out data protection impact assessments in certain cases under Article 35.
This new FPF Report outlines how national courts and DPAs in the European Union (EU)/European Economic Area (EEA) and the UK have interpreted and applied the relevant EU data protection law provisions on ADM so far – both before and after the GDPR became applicable – as well as the notable trends and outliers in this respect. To compile the Report, we have looked into publicly available judicial and administrative decisions and regulatory guidelines across EU/EEA jurisdictions and the UK. It draws from more than 70 cases – 19 court rulings and more than 50 enforcement decisions, individual opinions, or general guidance issued by DPAs – spanning 18 EEA Member States, the UK, and the European Data Protection Supervisor (EDPS). To complement the facts of the cases discussed, we have also looked into press releases, DPAs’ annual reports, and media stories.
Some examples of ADM and profiling activities assessed by EU courts and DPAs and analyzed in the Report include:
School access and attendance control through Facial Recognition technologies
Online proctoring in universities and automated grading of students
Automated screening of job applications
Algorithmic management of platform workers
Distribution of social benefits and tax fraud detection
Automated credit scoring
Content moderation decisions in social networks
Our analysis shows that the GDPR as a whole is relevant for ADM cases and has been effectively applied to protect the rights of individuals in such cases, even in situations where the ADM at issue did not meet the high threshold established by Article 22 GDPR. Among those, we found detailed transparency obligations about the parameters that led to an individual automated decision, a broad reading of the fairness principle to avoid situations of discrimination, and strict conditions for valid consent in cases of profiling and ADM.
Moreover, we found that when enforcers are assessing the threshold of applicability for Article 22 (“solely” automated, and “legal or similarly significant effects”), the criteria they use are increasingly sophisticated. This means that:
Courts and DPAs are looking at the entire organizational environment where ADM is taking place, from the controller’s organizational structure, to reporting lines and the effective training of staff, in order to decide whether a decision was “solely” automated or had meaningful human involvement; and
Similarly, when assessing the second criterion for the applicability of Article 22, enforcers are looking at whether the input data for an automated decision includes inferences about the behavior of individuals, and whether the decision affects the conduct and choices of the persons targeted, among other multi-layered criteria.
A recent preliminary ruling request sent by an Austrian court in February 2022 to the Court of Justice of the European Union (CJEU) may soon help clarify these concepts, as well as others related to the information which controllers need to give data subjects about ADM’s underlying logic, significance, and envisaged consequences for the individual.
The findings of this Report may also serve to inform the discussions about pending legislative initiatives in the EU that regulate technologies or business practices that foster, rely on, or relate to ADM and profiling, such as the AI Act, the Consumer Credits Directive, and the Platform Workers Directive.
On May 20, the authors of the report discussed some of the most impactful decisions analyzed in the Report with prominent European data protection experts during an FPF roundtable. These include cases related to the algorithmic management of platform workers in Italy and the Netherlands, the use of automated recruitment and social assistance tools, and creditworthiness assessment algorithms. The discussion also covered pending questions sent by national courts to the CJEU on matters of algorithmic transparency under the GDPR. View a recording of the conversation here, and download the slides here.
Diverging fining policies of European DPAs: is there room for coherent enforcement of the GDPR?
The European Union’s (EU) General Data Protection Regulation (GDPR) puts forward a non-exhaustive list of criteria in Article 83 that Data Protection Authorities (DPAs) need to consider when deciding whether to impose administrative fines and when determining their amount in specific cases. Notably, the ceiling for administrative fines set by the GDPR is high – up to 20M EUR or 4% of a company’s worldwide annual turnover, whichever is higher, for breaching specific rules (e.g. the rights of the data subject), and up to 10M EUR or 2% of the same turnover for breaching the rest of the provisions (e.g. data security requirements) – leaving ample room to calibrate fines to the facts of a case.
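For concreteness, the two ceilings translate into simple arithmetic for an undertaking: the cap is the higher of the fixed amount and the percentage of worldwide annual turnover. The short sketch below (with an invented turnover figure) illustrates that calculation; it computes only the statutory maximum, not an actual fining methodology.

```python
# Worked example of the GDPR fine ceilings described above (Article 83(4)-(5)).
# For an undertaking, the cap is the higher of the fixed amount and the
# percentage of worldwide annual turnover. The turnover figure is hypothetical.
def gdpr_fine_cap(worldwide_annual_turnover_eur: float, higher_tier: bool) -> float:
    """Return the maximum administrative fine (EUR) for an undertaking."""
    if higher_tier:  # e.g., breaches of data subjects' rights (Art. 83(5))
        return max(20_000_000, 0.04 * worldwide_annual_turnover_eur)
    return max(10_000_000, 0.02 * worldwide_annual_turnover_eur)  # Art. 83(4)

turnover = 2_000_000_000  # hypothetical undertaking with EUR 2 billion turnover
print(gdpr_fine_cap(turnover, higher_tier=True))   # 80000000.0 (4% exceeds 20M EUR)
print(gdpr_fine_cap(turnover, higher_tier=False))  # 40000000.0 (2% exceeds 10M EUR)
```

Within those caps, the Article 83(2) criteria discussed below determine where a specific fine actually lands.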
While it was expected that independent DPAs would give the criteria different weight in their enforcement proceedings, depending on their own legal and cultural context, the past four years of enforcement experience have shown that fining policies and practices vary considerably among EU DPAs.
Some DPAs decided to formulate fining policies and publish them, while others merely built their own body of case-law and created practice around how these criteria are applied without formalizing such policies. The DPA of the German State of Bavaria was one of the first to publish non-binding guidance on the matter: in September 2016, it revealed it would devote particular attention to previous data protection infringements and the degree of collaboration the investigated parties offer during the proceedings.
To avoid DPAs taking diverging approaches to setting fines under the new framework, the former Article 29 Working Party published its guidelines on administrative fines early on, in October 2017, before the GDPR became applicable; these were later endorsed by the European Data Protection Board (EDPB) in 2018. The guidelines quote Recital 11 of the GDPR directly, stating that “it should be avoided that different corrective measures are chosen by the [DPAs] in similar cases.” Indeed, administrative fines are only one among several corrective measures in DPAs’ toolbox, which also includes the issuance of reprimands, compliance orders, suspension of data flows to recipients in third countries, and even temporary or definitive limitations or bans on data processing (Article 58(2) GDPR). Thus, the EDPB further clarified that fines should not be seen as the last resort available to DPAs and that it is not always necessary to supplement fines with other corrective measures.
While the EDPB guidelines provided inspiration for a few fining policies adopted by national DPAs, the authorities do not shy away from taking innovative approaches to standardize their fining procedures, as will be shown below. Other regulators (such as the Irish and the Belgian DPAs) have announced they plan to provide clarity and predictability to organizations about their sanctioning standards by publishing their own methodologies in this regard. Nonetheless, it is possible that some DPAs are waiting for the approval of upcoming EDPB guidance on the calculation of administrative fines to adopt their stance. As the publication is bound to happen in the coming days, FPF’s new piece outlines the similarities and differences between the few national fining policies published by European DPAs since 2018. This analysis and the differences it outlines show why it was necessary for the EDPB to adopt new guidance on this matter.
This blog post provides an overview of the only comprehensive fining methodologies that were published so far by EU DPAs (specifically, by the Dutch, Danish, and Latvian DPAs), as well as the relevant draft Statutory guidance issued by the UK DPA (ICO) in 2020. Therefore, this analysis will also show how the approach of the ICO in this matter will likely continue to differ from that of the EDPB and EU DPAs. It is divided into two sections that take a deep dive into (i) how those DPAs propose to apply the criteria set out in Article 83(1) to (3) GDPR in practice – highlighting where they diverge from the 2018 EDPB guidelines – and (ii) how they propose to standardize the amounts of the fines imposed against controllers or processors in their jurisdictions.
1. Balancing the same criteria with different scales
According to Article 83(2) GDPR, all DPAs need to consider the same non-exhaustive list of criteria when deciding whether to sanction controllers or processors with administrative fines for breaches of the GDPR, instead of or in addition to other corrective measures. These criteria also guide DPAs’ decisions on the determination of the amounts of the fines they impose in individual cases.
However, the analysis of the published DPA fining methodologies shows that different regulators attribute varying degrees of importance to these factors in both those exercises, sometimes deviating from the EDPB guidelines.
The Dutch DPA guidance generally does not indicate how the regulator proposes to weigh such factors, or the financial circumstances of the infringer, in specific cases.
Article 83(2)(a) GDPR: Nature, gravity and duration of the infringement, including the nature, scope or purpose of the processing, the number of data subjects affected and the damage suffered by them
On this criterion, the EDPB guidelines from 2018 state that, in the case of “minor infringements” or infringements carried out by natural person controllers, DPAs may generally opt for reprimands instead of fines as suitable corrective measures. The guidelines add that damages suffered by data subjects and the long duration of infringements should count as aggravating circumstances in DPAs’ assessments regarding the need to impose fines and to increase their amount.
The recent (September 2021) Danish DPA (Datatilsynet) guidance on the determination of fines for natural persons seems to contradict the EDPB approach that fines may be forgone by DPAs in these cases. The Datatilsynet guidance complements its January 2021 guidelines on the determination of fines for legal persons and proposes a table of standardized fines for natural person controllers who commit certain GDPR breaches (such as publishing others’ sensitive personal data on social media). Regarding this criterion, the guidelines applicable to the sanctioning of legal persons establish that the DPA must have due regard to several factors, including whether the processing purpose is purely profit-seeking (e.g. marketing) or benevolent (e.g. calculating an early retirement pension), and whether data subjects’ rights have been breached (i.e., the concept of “damage” should be interpreted broadly).
Concerning the latter criterion mentioned by the Danish DPA, the ICO’s Regulatory Action Policy illustrates that the UK regulator takes a different approach. The Policy states that, for “damages” to count as an aggravating circumstance, a degree of damage or harm (which may include distress and/or embarrassment) must have been suffered by data subjects.
Lastly, the Latvian DPA’s guidance stresses that the criteria listed under Article 83(2)(a) GDPR may carry more weight than others when it comes to determining fines. As an example, the Latvian watchdog states that the duration of the breach and the number of data subjects affected are generally more important than the financial benefits obtained by the controller. This is also reflected in the table of points that the DPA uses to determine the amounts of fines in individual cases, which is explored below.
Article 83(2)(b) GDPR: the intentional or negligent character of the infringement
Again, DPAs consider this factor in different ways and at different stages of the fine determination process. Nonetheless, they seem to agree that the higher the degree of imputation (negligence, gross negligence, intent), the higher the fine should be. The EDPB guidance also makes clear that controllers and processors cannot justify breaches of data protection law by claiming a shortage of resources.
When assessing the infringer’s degree of culpability, the ICO takes into account the technical and organizational measures implemented by the controller or processor, notably whether a lack of appropriate measures reveals gross negligence. Additionally, the UK regulator will treat more severely wilful action or inaction by the infringer aimed at obtaining personal or financial gain.
The EDPB and the Danish DPA, on the other hand, are quite aligned when it comes to giving examples of negligent and intentional infringements, including:
Negligent breaches: non-compliance with existing policies, human error, lack of control of published information, lack of timely technical updates; and
Intentional breaches: decisions taken by the company’s Board against a DPO’s correct advice or despite existing internal policies, purposely amending personal data to make it inaccurate, and selling personal data without consent.
Of note, since the UK left the EU in 2020, the ICO is no longer bound by EDPB guidance. This may therefore be one area where approaches to implementing the GDPR and the UK GDPR continue to diverge.
Article 83(2)(c) GDPR: any action taken by the controller or processor to mitigate the damage suffered by data subjects
The EDPB stresses that DPAs should look into whether the infringer did everything it could to reduce the consequences of a breach for data subjects. If it did – and also where the infringer admits the infringement and commits to limiting its impact – this should count as a mitigating factor when determining the fine. As for national DPAs:
For the Datatilsynet, collecting evidence that unauthorized recipients of personal data have deleted the information is an example of relevant mitigating action.
The Latvian DPA highlights that it shall only consider damage control actions taken by the controller as a mitigating factor where such actions have been taken in due time (i.e. if they were actually effective).
Failure to adopt any such measures could also be considered as an aggravating circumstance by the ICO.
Article 83(2)(d) GDPR: the degree of responsibility of the controller or processor taking into account technical and organizational measures implemented by them
Once again, the EDPB and the Danish DPA seem to be in sync regarding the interpretation and the application of this criterion. In essence, DPAs must ask themselves whether the infringer has implemented the protective measures that it was expected to, considering the nature, purposes, and extent of the processing, but also current best practices (industry standards and codes of conduct). If so, this should be taken as a mitigating circumstance in the fine’s calculation.
Article 83(2)(e) GDPR: any relevant previous infringements by the controller or processor
On this criterion, there is some divergence between the DPAs’ approaches. The EDPB has tried to set the baseline by recommending that DPAs focus on whether the entity committed the same infringement earlier, or different infringements in the same manner, while noting that prior breaches of a different nature may still be included in the assessment.
The Danish DPA commits to a deeper analysis under this criterion, stating that it shall weigh breach findings made by other DPAs against the infringer, as well as the latter’s breaches of the data protection framework in place before the GDPR. However, it also stresses that:
breaches of the GDPR should be considered more relevant than breaches of the previous law;
the longer the time that has elapsed between a previous infringement and the current one, the less weight it must have in determining the fine; and that
infringements that occurred more than 10 years prior to the infringement at stake become irrelevant.
In a rare indication of the degree of importance it attributes to specific factors listed under Article 83(2) GDPR, the Dutch DPA reveals that, where the infringer breaches the same provision it previously breached, the DPA should increase the standard fine by 50% (see the DPA’s methodology below).
The ICO mentions that it is more likely to impose a higher fine where the infringer has failed to rectify a problem previously identified by the regulator, or to follow previous ICO recommendations. Lastly, for the Latvian DPA, the existence of past breaches counts as an aggravating circumstance when determining the fine, whereas the absence of past offenses does not put the infringer in a more favorable position.
Article 83(2)(f) GDPR: the degree of cooperation with the supervisory authority, in order to remedy the infringement and mitigate the possible adverse effects of the infringement
Under this criterion, the EDPB guidelines invite DPAs to consider whether the entity responded to their requests during the investigation phase in a way that went beyond what was strictly required by law and significantly limited the impact of the infringement on individuals’ rights. The Danish DPA’s take on the matter is again closely aligned with the EDPB’s, adding that an admission or confession of the infringement by the infringer should count as a mitigating circumstance.
It should be noted that a refusal to cooperate can, in itself, constitute a breach of the GDPR, as cooperation is an obligation applicable to controllers and processors alike under Article 31. In this regard, the Danish DPA considers such a failure to cooperate one of the less serious infringements falling under Article 83(4) GDPR, while the Dutch DPA frames it as one of the gravest. In Section 2 below, we analyze how this framing translates into standardized basic amounts for each DPA’s fines.
Article 83(2)(g) GDPR: the categories of personal data affected by the infringement
According to the EDPB, DPAs should carefully look into whether the GDPR infringement at stake affected special categories of data or other particularly sensitive data that could cause damage or distress to individuals. Such sensitive data could include data subjects’ social conditions and personal identification numbers, as stated by the Danish DPA. For the Datatilsynet and the Dutch DPA, unlawful processing of special categories of data counts as one of the gravest infringements under Article 83(5) GDPR, which may lead the former to maximize the basic amount of the fine in a given case.
For the EDPB, it is also important that DPAs understand the format in which the data was compromised: was it identified, identifiable, or subject to technical protections (such as encryption or pseudonymisation)? The ICO intends to issue higher fines in cases involving a high degree of privacy intrusion. With a different focus, the Latvian DPA highlights that a significant number of affected data categories can justify imposing a higher fine, in particular when it comes to children’s data.
Article 83(2)(h) GDPR: the manner in which the infringement became known to the supervisory authority, in particular whether, and if so to what extent, the controller or processor notified the infringement
In this context, DPAs should in principle more critically assess infringements of which they become aware through means other than a notification from the infringer. According to the EDPB, the fact that a breach is uncovered via an investigation, a complaint, an article in the press or an anonymous tip should not aggravate the fine. However, the fact that an infringer actively tries to conceal a breach can increase the amount of the fine set by the DPA in Denmark, according to the latter’s policy.
On the other hand, a notification delivered by the infringer to the DPA to make it aware of an infringement may count as a mitigating circumstance, as stressed by the EDPB. The Latvian DPA’s list of criteria mentions that the more timely and comprehensive the infringer’s notification is, the more it will help decrease the amount of the fine.
Article 83(2)(i) GDPR: compliance with previously-ordered measures against the controller or processor concerned with regard to the same subject-matter
The Danish DPA has stressed that it shall assess infringements more severely where it has previously warned the perpetrator that its conduct constituted a violation of data protection law or ordered it to align its practices with legal standards. The Latvian DPA may issue an aggravated fine in case the infringer refused to correct its data processing pursuant to a DPA order.
Article 83(2)(j) GDPR: adherence to approved codes of conduct pursuant to Article 40 or approved certification mechanisms pursuant to Article 42
Both the Danish and the Latvian DPAs mention that adherence to those frameworks can demonstrate the infringer’s willingness to comply with data protection law. In some cases, adherence to codes of conduct or certification mechanisms may even remove the need to impose an administrative fine altogether: the EDPB stresses that DPAs may find that enforcement action taken by monitoring or certification bodies is, in certain cases, effective, proportionate, and dissuasive enough.
Lastly, it should be noted that DPAs have the power to sanction codes of conduct’s monitoring bodies for a failure to properly monitor and enforce compliance with such codes, under Article 83(4) GDPR. In this respect, the analyzed fining policies show that the Danish DPA views such failure as one of the less serious breaches listed under the provision, while the Dutch DPA frames it as one of the gravest.
Article 83(2)(k) GDPR: any other aggravating or mitigating factor applicable to the circumstances of the case, such as financial benefits gained, or losses avoided, directly or indirectly, from the infringement
The wording of this criterion opens the door for DPAs to consider any other factors in their fine determination exercise, also hinting that the criteria set out under Article 83(2) are not exhaustive. As an example of how to consider the criterion in a specific case, the EDPB guidelines state that the fact that the infringer profited from the conduct “may constitute a strong indication that a fine should be imposed.” The ICO also commits to focus on removing any financial gains obtained from data protection infringements.
In Denmark, this may entail confiscating the profits which were illegally obtained as a result of the data protection breach, or the inclusion of such profits in the amount of the imposed fine. However, the Danish DPA’s policy states that it may be challenging and quite resource-intensive to determine such profits, regardless of the chosen avenue.
Fines must be effective, proportionate and dissuasive: How do DPAs ensure that they are?
Article 83(1) GDPR requires DPAs to ensure that administrative fines are effective, proportionate, and dissuasive in each individual case. The EDPB specifies that, although this exercise requires a case-by-case assessment, a consistent approach should eventually emerge from DPA enforcement practice and case law.
Although this criterion appears first in the text of the GDPR, the UK, Latvian, and Danish DPAs all prefer to check the fine amounts they have determined against these factors only at the end of the process. For example, the Datatilsynet states that a fine that would jeopardize the finances of the infringer and leave it close to bankruptcy could be considered effective and dissuasive, but likely not proportionate. In such circumstances, the DPA may consider imposing more moderate payment terms (e.g., deferral of payment) or even reducing the amount of the fine.
The ICO also considers the infringer’s financial means when deciding on the fine: if the determined fine would cause financial hardship for the infringer, the regulator may reduce the fine. Additionally, the ICO is bound by national law to assess the fine’s broader economic impact, as it must consider the desirability of promoting economic growth. Thus, before issuing a fine and when deciding on its amount, it will consider its economic impact on the wider sector where the infringer is positioned.
With regards to the fine’s effectiveness, proportionality, and dissuasiveness, the Latvian DPA’s list of criteria mentions that the watchdog will have due regard to elements such as the infringer’s profits, number of employees, special status (e.g. as a social enterprise), and its wider role in society.
Article 83(3) GDPR as the fine’s ceiling: Towards a common interpretation?
The provision reads that “if a controller or processor intentionally or negligently, for the same or linked processing operations, infringes several provisions [of the GDPR], the total amount of the administrative fine shall not exceed the amount specified for the gravest infringement”. Seeking to resolve any possible interpretation issues, the EDPB’s guidelines stress that “The occurrence of several different infringements committed together in any particular single case means that the DPA is able to apply the administrative fines at a level which is effective, proportionate and dissuasive within the limit of the gravest infringement.”
Providing further clarification, the Danish DPA’s policy states that Article 83(3)’s ceiling covers both situations where controllers breach different GDPR provisions with a single act and situations where controllers breach a single provision multiple times with a single action (e.g., a single mailing of unsolicited marketing emails to multiple recipients). The Latvian DPA adds that, in those cases, all breaches must be considered together and the corresponding administrative fine should be calculated by reference to the gravest infringement.
This seems different from limiting DPAs to sanctioning infringers only for the gravest among several infringements they have committed, as the EDPB recently clarified in its binding decision 1/2021 under Article 65(1) GDPR on the Irish DPC’s draft decision against WhatsApp Ireland. In that context, the EDPB stated that “Although the fine itself may not exceed the legal maximum of the highest fining tier, the infringer shall still be explicitly found guilty of having infringed several provisions and these infringements have to be taken into account when assessing the amount of the final fine that is to be imposed” (para. 326).
2. Making fines predictable: A road to harmonizing enforcement standards?
All the European DPAs that have published fining policies in the last two years have tried to pave the way towards a standardization of the assessments that lead to the determination of an administrative fine. This is well demonstrated by the formulas and tables that DPAs have created to guide such a determination process. But are the DPAs’ approaches consistent, in a way that may lead to harmonized enforcement of data protection rules in Europe, so as to avoid forum shopping?
Dutch DPA (AP)
As the first of its kind among EU DPAs’ guidelines, the Dutch DPA’s fining policy is groundbreaking in the way it proposes that the AP should determine the fines it decides to impose for GDPR infringements. It starts by splitting infringements into 4 different categories in accordance with their seriousness, as illustrated below with some examples:
Then, it uses the below table to determine the standard (basic) fine that corresponds to the infringement(s) at stake:
Category 1 – fine range: 0€ to 200.000€; basic fine: 100.000€
Category 2 – fine range: 120.000€ to 500.000€; basic fine: 310.000€
Category 3 – fine range: 300.000€ to 750.000€; basic fine: 525.000€
Category 4 – fine range: 450.000€ to 1.000.000€; basic fine: 725.000€
To determine the fine’s final amount, the AP uses the basic fine as a starting point and may move it upwards or downwards until the top or bottom of the fine bandwidth for the respective category of infringement. In doing this, the DPA must assess the factors listed under Article 83(2) GDPR, as well as the financial circumstances of the infringer.
However, the DPA may decide to go above or below the bandwidth if it finds that a fine within the default bandwidth would not be appropriate in the specific case. It can then go up to the legal maximum for the respective infringement (10/20M EUR or 2/4% of annual turnover). In case of reduced financial capacity of the infringer, the DPA can choose to go below the immediately lower fine bandwidth when determining the fine.
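To make the bandwidth mechanics concrete, here is a minimal Python sketch of how a basic fine might be moved within its category’s range and capped. The category ranges and basic fines are those of the AP’s policy as reproduced above; the adjustment parameter, the way the 50% repeat-infringement uplift is applied, and the clamping logic are simplified assumptions standing in for the AP’s actual Article 83(2) assessment, not the regulator’s own formula.

```python
# Illustrative sketch only: category ranges and basic fines are taken from the
# Dutch AP's fining policy as reproduced above; the adjustment mechanism is a
# simplified assumption, not the AP's actual weighing of Article 83(2) factors.

DUTCH_CATEGORIES = {
    1: {"min": 0,       "max": 200_000,   "basic": 100_000},
    2: {"min": 120_000, "max": 500_000,   "basic": 310_000},
    3: {"min": 300_000, "max": 750_000,   "basic": 525_000},
    4: {"min": 450_000, "max": 1_000_000, "basic": 725_000},
}

def dutch_fine(category: int, adjustment: float = 0.0,
               repeat_same_provision: bool = False) -> float:
    """Indicative fine for an infringement in a given category.

    adjustment: fraction (-1.0 to 1.0) of the distance between the basic fine
    and the bottom/top of the bandwidth, standing in for the Article 83(2)
    factors and the infringer's financial circumstances.
    repeat_same_provision: the policy indicates a 50% increase of the standard
    fine where the infringer breaches the same provision it breached before.
    """
    band = DUTCH_CATEGORIES[category]
    fine = float(band["basic"])
    if repeat_same_provision:
        fine *= 1.5
    if adjustment >= 0:
        fine += adjustment * (band["max"] - fine)
    else:
        fine += adjustment * (fine - band["min"])
    # Leaving the bandwidth altogether (up to the 10/20M EUR or 2/4% legal
    # ceiling, or down towards a lower bandwidth) is not modelled here.
    return min(max(fine, band["min"]), band["max"])

# Example: a Category 3 infringement with predominantly aggravating factors.
print(dutch_fine(3, adjustment=0.5))  # -> 637500.0 EUR
```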
Danish DPA (Datatilsynet)
First of all, it is important to note that the Danish DPA is one of only two EU DPAs (together with the Estonian DPA) that does not have the power under its national law to issue administrative fines. It must instead report an infringement of data protection law to the police, along with a recommended fine, so that the infringer can be investigated and prosecuted and a court may ultimately order it to pay the fine.
To guide prosecutors and courts in this regard – who must also consider the Danish Public Prosecutor’s guide on criminal liability for legal persons – the Danish DPA favors a “standardization” of the levels of fines for specified breaches of data protection law. This should be complemented by a case-by-case assessment, considering the criteria under Article 83(1) to (3) GDPR and the infringer’s ability to pay.
With regards to fines issued against legal persons, the Datatilsynet advises prosecutors to start with determining the fine’s baseline amount, considering the provision of the GDPR which was infringed: it thus separates between infringements leading to 10M EUR/2% of annual turnover fines (first three categories in the table below) and others leading to 20M EUR/4% of annual turnover fines (last three), which we illustrate with some examples:
This division into 6 categories was made by the Danish DPA according to its own assessment of the GDPR provisions at stake, notably their importance, their place in the Regulation, and their underlying protection objectives. Categories 1 and 4 cover the least serious infringements within their respective fining tiers, Categories 2 and 5 cover more serious infringements, and Categories 3 and 6 the most serious ones.
The “dynamic ceiling” of the fine (2% or 4% of annual turnover) only applies to companies with an annual global (net) turnover – as defined in Article 2(5) of Directive 2013/34/EU – exceeding 3.75 billion DKK (around 504M EUR). Once the maximum fine in the individual case has been determined, the standard basic amount of the fine may be set as follows:
5% of the maximum amount for infringements falling under Categories 1 and 4
10% of the maximum amount for infringements falling under Categories 2 and 5
20% of the maximum amount for infringements falling under Categories 3 and 6
The basic amount must also reflect the size of the breaching company: for SMEs (according to the EU definition), the standard basic amount of the fine should be adjusted as follows:
Micro-enterprises: down to 0.4% of the standard basic amount
Small enterprises: down to 2% of the standard basic amount
Medium-size enterprises: down to 10% of the standard basic amount
The infringer’s market share should also be taken into account (e.g., an infringement by a company with a low revenue but a significant market share may affect a large number of data subjects).
Once the basic amount has been determined, the Danish DPA recommends that the prosecutor adjust the fine according to the criteria set out in Article 83(1) to (3) GDPR – in the manner outlined above – and to the infringer’s ability to pay (should the latter so request).
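The arithmetic just described can be sketched as follows for fines against legal persons. The category percentages, the (roughly) 504M EUR / 3.75 billion DKK turnover threshold for the dynamic ceiling, and the SME reductions are as summarised above; the function structure and names are our own illustration, and the subsequent Article 83(1) to (3) adjustment and ability-to-pay assessment are deliberately left out.

```python
# Illustrative sketch of the Danish DPA's standardised basic amount for legal
# persons; figures are as summarised above, structure and names are assumptions.

TURNOVER_THRESHOLD_EUR = 504_000_000  # approx. 3.75 billion DKK

CEILING_EUR = {1: 10_000_000, 2: 10_000_000, 3: 10_000_000,   # Article 83(4) tier
               4: 20_000_000, 5: 20_000_000, 6: 20_000_000}   # Article 83(5) tier
DYNAMIC_PCT = {1: 0.02, 2: 0.02, 3: 0.02, 4: 0.04, 5: 0.04, 6: 0.04}
BASIC_PCT   = {1: 0.05, 2: 0.10, 3: 0.20, 4: 0.05, 5: 0.10, 6: 0.20}

# SME adjustments (EU definition), expressed as a share of the standard basic amount
SME_FACTOR = {"micro": 0.004, "small": 0.02, "medium": 0.10, "large": 1.0}

def danish_basic_amount(category: int, annual_turnover_eur: float,
                        company_size: str = "large") -> float:
    """Standard basic amount, before the Article 83(1)-(3) and ability-to-pay
    adjustments that the DPA recommends prosecutors apply afterwards."""
    max_fine = float(CEILING_EUR[category])
    if annual_turnover_eur > TURNOVER_THRESHOLD_EUR:
        # Dynamic ceiling (2% / 4% of annual global turnover) kicks in
        max_fine = max(max_fine, DYNAMIC_PCT[category] * annual_turnover_eur)
    return BASIC_PCT[category] * max_fine * SME_FACTOR[company_size]

# Example: a Category 6 (most serious) infringement by a large company with a
# 400M EUR turnover -> 20% of the 20M EUR static ceiling = 4,000,000 EUR.
print(danish_basic_amount(6, 400_000_000))
```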
UK DPA (ICO)
The UK’s DPA also proposes to standardize the sums of administrative fines with its own formula and table. Those serve as the basis for calculating fines included in the ICO’s Penalty Notices, but also for the regulator’s preliminary notices of intent (NOI). Through the NOI, the ICO warns the infringer that it intends to issue an administrative fine, laying out the circumstances of the established breaches, the ICO’s investigation findings and the proposed level of penalty, along with its respective rationale. The infringer is allowed to make representations within 21 calendar days of receipt of the NOI, following which the ICO decides whether or not to issue a Penalty Notice.
In its draft Statutory Guidance on Regulatory Action, the ICO explains that determining the amount of a fine is a multi-step process. It starts with assessing some of the criteria set out in Article 83(2) GDPR – including the infringer’s degree of culpability and the seriousness of the breach – as well as the infringer’s turnover, after review of its accounts. The ICO then determines the fine’s starting point as follows:
After that, the ICO considers the aggravating and mitigating factors listed under Article 83(2) GDPR to adjust the amount of the fine upwards or downwards within the previously-defined fine band. It then assesses the amount of the fine against the infringer’s financial means, the economic impact of the sanction, and the criteria of effectiveness, proportionality, and dissuasiveness. Lastly, the ICO commits to reducing the amount of the fine by 20% if it is paid within 28 days, unless the infringer decides to appeal the fine in court.
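As a rough illustration of those final steps only (the starting-point table itself is not reproduced here), the sketch below assumes a starting amount has already been set and applies the in-band adjustment and the 20% early-payment reduction; the parameter names are illustrative assumptions, not the ICO’s own terminology.

```python
# Rough sketch of the final steps of the ICO's process as described above.
# The starting-point determination is not modelled; parameter names are ours.

def ico_final_penalty(starting_point: float, band_min: float, band_max: float,
                      adjustment: float = 0.0, paid_within_28_days: bool = False,
                      appealed: bool = False) -> float:
    """adjustment: signed amount reflecting aggravating/mitigating factors,
    applied within the previously-defined fine band."""
    fine = min(max(starting_point + adjustment, band_min), band_max)
    # The checks against financial means, wider economic impact and the
    # effectiveness/proportionality/dissuasiveness criteria would sit here.
    if paid_within_28_days and not appealed:
        fine *= 0.80  # 20% early-payment reduction
    return fine

# Example: a 1M GBP starting point, slightly mitigated, paid early -> 720,000 GBP.
print(ico_final_penalty(1_000_000, 500_000, 2_000_000,
                        adjustment=-100_000, paid_within_28_days=True))
```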
Latvian DPA (DVI)
The Latvian DPA’s process for determining the amounts of administrative fines seems to be the most complex, namely because it outlines in a very detailed fashion how the DPA should weigh each of the factors listed under Article 83(2) GDPR in specific cases.
According to the list of criteria published by the DVI in 2021, the DPA starts by determining the infringer’s relevant turnover or income: for individuals, this is the average salary in the country, multiplied by 12; for companies, this is the annual turnover, divided by 365.
Then, the DPA selects the appropriate multiplier, which it will later apply to such turnover or income to obtain the basic amount of the fine. This serves to reflect the gravity of the infringement (low, average, high, very high). To determine the multiplier, the DPA will consider the criteria listed under Article 83(2)(a) to (j), as well as aggravating and mitigating circumstances under Latvian law. This is done in a standardized fashion, by resorting to a table, of which we provide some excerpts below:
With regards to some criteria, the DVI prefers to detail with added precision how it will apply its points attribution system, as the excerpt below demonstrates:
These tables illustrate that, for the Latvian DPA, not all criteria carry equal weight when assessing data protection breaches: each criterion must be given an appropriate weight. The tables clarify how the DPA weighs the criteria in cases of GDPR infringements.
Then, the DVI multiplies the relevant infringing company’s daily turnover or the individual’s annual income by a multiplier to obtain the basic amount of the administrative fine. To determine such a multiplier, the DPA considers whether there was a procedural or material breach of the GDPR, i.e., one that is covered by Article 83(4) or (5), respectively. In this regard, it uses different tables to determine the multipliers for procedural and material infringements. In case more than one procedural or material breach occurred, the DVI limits the amount of the fine in line with Article 83(3) GDPR. The sum obtained after this calculation is then checked by the DPA against the criteria laid out in Article 83(1) GDPR and the fine ceilings set out in Article 83(4) or (5) – depending on the nature of the infringement – to reach a final amount for the fine.
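The overall flow of the DVI’s calculation can be sketched as below. The daily-turnover and annual-income bases come from the published list of criteria; the points tables that produce the multiplier are not reproduced here, so the multiplier is simply passed in as an assumption, and the ceiling check uses the Article 83(4)/(5) maxima.

```python
# Illustrative sketch of the Latvian DVI's approach as described above. The
# multiplier normally results from the DVI's points tables (not reproduced
# here), so it is supplied directly as an assumption.

GDPR_CEILING_EUR = {"procedural": 10_000_000, "material": 20_000_000}
GDPR_CEILING_PCT = {"procedural": 0.02, "material": 0.04}

def dvi_base(annual_turnover_eur: float | None = None,
             average_monthly_salary_eur: float | None = None) -> float:
    """Relevant base: daily turnover for companies, 12x average salary for individuals."""
    if annual_turnover_eur is not None:
        return annual_turnover_eur / 365            # companies: daily turnover
    return average_monthly_salary_eur * 12          # individuals: annual income

def dvi_fine(base: float, multiplier: float, breach_type: str,
             annual_turnover_eur: float = 0.0) -> float:
    basic_amount = base * multiplier
    ceiling = max(GDPR_CEILING_EUR[breach_type],
                  GDPR_CEILING_PCT[breach_type] * annual_turnover_eur)
    # The result is then checked against the Article 83(1) criteria
    # (effectiveness, proportionality, dissuasiveness), not modelled here.
    return min(basic_amount, ceiling)

# Example: a company with a 36.5M EUR annual turnover (daily turnover of
# 100,000 EUR) and a material breach with an assumed multiplier of 8.
print(dvi_fine(dvi_base(annual_turnover_eur=36_500_000), 8, "material",
               annual_turnover_eur=36_500_000))  # -> 800000.0 EUR
```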
A Comparative Analysis of Methodologies for Fine Calculation
As we have seen, the DPAs that have published their policies on administrative fines under the GDPR diverge substantially on a number of matters, ranging from the importance they attribute to given GDPR infringements to the weight they give to certain criteria that Article 83 GDPR prescribes for the determination of fines.
Crucially, the standard fine amounts that DPAs have published in their policies, considering the nature of the infringement at stake and the contribution of the elements listed under Article 83(2) GDPR, also have noteworthy differences:
The Dutch DPA’s standard fine for the most serious infringements (e.g., unlawful automated decision-making) is set at 725.000 EUR;
The Danish DPA establishes a standard fine ceiling for the most serious infringements of 20% of the maximum fine. For companies with an annual global turnover below 504M EUR, this amounts to 4M EUR;
For intentional infringements falling under Article 83(5) GDPR and having a very high degree of seriousness, the ICO establishes that the basic amount of the fine should correspond to 3% of the maximum value defined by law. For companies with an annual global turnover below 504M EUR, this amounts to 600.000 EUR;
Under the Latvian DPA fining framework, a company with a 504M EUR annual turnover could be bound to pay a maximum standard fine of 17.9M EUR for a “material” GDPR infringement (i.e., one that falls under Article 83(5) GDPR).
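To show where these headline figures come from, the back-of-the-envelope check below reproduces the Danish and ICO standard amounts for a company just under the 504M EUR turnover threshold (so the static 20M EUR ceiling applies); the Dutch and Latvian figures are quoted from the respective policies rather than derived.

```python
# Back-of-the-envelope check of the comparative figures above, for a company
# with an annual global turnover just below 504M EUR (static 20M EUR ceiling).

MAX_FINE_ART_83_5 = 20_000_000  # EUR, static ceiling for Article 83(5) infringements

standard_fines = {
    "Dutch AP (Category 4 basic fine)":         725_000,                   # fixed amount
    "Danish Datatilsynet (20% of maximum)":     0.20 * MAX_FINE_ART_83_5,  # 4,000,000
    "UK ICO (3% of maximum)":                   0.03 * MAX_FINE_ART_83_5,  # 600,000
    "Latvian DVI (as stated in the framework)": 17_900_000,
}

for authority, amount in standard_fines.items():
    print(f"{authority}: {amount:,.0f} EUR")
```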
However, this apparent gap between the DPAs’ standard fines can be narrowed on a case-by-case basis through consideration of additional factors when determining the final amount of administrative fines. Such elements include the infringer’s ability to pay, financial situation, annual turnover, status, societal role, and any detected recidivism.
There may be questions about the extent to which DPAs, in practice, substantially deviate from the fine bandwidths established by their fining policies in order to make their fines effective, proportionate, and dissuasive. Those questions could only be answered by benchmarking each DPA’s sanctioning history under the GDPR. That is not the goal of this blogpost, which focuses instead on comparing how DPAs plan to structure their approach to fining in individual cases.
While we could not detect significant alignment in such approaches – despite the common criteria laid down in Article 83 GDPR -, it is possible that the upcoming EDPB guidance on the calculation of administrative fines could lay the ground for more harmonized sanctioning practices in the EU.
Further reading:
FPF Report: “Insights into the future of data protection enforcement: Regulatory strategies of European Data Protection Authorities for 2021-2022”
EDPS upcoming conference: “The Future of Data Protection: Effective Enforcement in the Digital World”
Access Now’s 2021 Report: “Three Years Under the GDPR: An implementation progress report”
FPF Report: A Look into DPA Strategies in the African Continent
Today, the Future of Privacy Forum released a Report looking into the Strategic Plans for the coming years of seven African Data Protection Authorities (DPAs). The Report gives insight into the activity and plans of DPAs from Kenya, Nigeria, South Africa, Benin, Mauritius, Côte d’Ivoire, and Burkina Faso. It also relies on research conducted across several other African jurisdictions that have adopted data protection laws in recent years but have not yet established a DPA, or whose DPAs have not published strategic documents in the past two to three years.
Since the 2001 enactment of Africa’s first data protection law by Cape Verde, many other African countries have followed suit. Two decades later, 33 African countries boast comprehensive data protection laws. This growth in legislation has received well-deserved attention as the continent continues to articulate its position on privacy and data protection matters.
Until now, most publications on the state of data protection in Africa have focused on the processes of creating and enacting comprehensive laws. As a result, other important aspects of the data protection machinery, including implementation and enforcement, have received little attention. This has hampered efforts to obtain a comprehensive picture of the state of data protection in Africa. In particular, despite their important role in shaping data protection discourse on the continent, the activities of the Data Protection Authorities (DPAs) entrusted with implementing the laws are not well known or documented. Even with comprehensive data protection laws in place, not all countries have operational DPAs, due to factors such as lack of political will, competing priorities, and financial constraints.
This report seeks to address this gap and shed light on notable activities of established DPAs in select African countries. It analyzes various DPA strategy documents including the annual reports and national data protection plans from seven key jurisdictions and provides a brief overview of the key developments and trends in administrative enforcement. These documents provide important insights into the priority areas of DPAs as well as their current status. While there is significant variation between the seven countries’ plans, key findings indicate common themes.
FPF Weighs in on Automated Decisionmaking, Purpose Limitation, and Global Opt-Outs for California Stakeholder Sessions
This week, Future of Privacy Forum policy experts testified in California public Stakeholder Sessions, offering independent policy recommendations to the California Privacy Protection Agency (CPPA). The Agency heard from a variety of speakers and members of the public on a broad range of issues relevant to forthcoming rulemaking under the California Privacy Rights Act (CPRA).
Specifically, FPF weighed in on automated decisionmaking (ADM), purpose limitation, and global opt-out preference signals. As a non-profit dedicated to advancing privacy leadership and scholarship, FPF typically weighs in with regulators when we identify opportunities to support meaningful privacy protections and principled business practices with respect to emerging and socially beneficial technologies. In California, the 5th largest economy in the world, the newly established California Privacy Protection Agency is tasked with setting standards that will impact data flows across the United States and globally for years to come.
Automated Decision-making (ADM). The subject of “automated decision-making” (ADM) was discussed on Wednesday, May 4th. Although the California Privacy Rights Act does not provide specific statutory rights around ADM technologies, the Agency is tasked with rulemaking to elaborate on how the law’s individual access and opt-out rights should be interpreted with respect to profiling and ADM.
FPF’s Policy Counsel Tatiana Rice raised the following issues for the Agency on automated decision-making:
Consumers’ rights of access for ADM should center on systems that directly and meaningfully impact individuals’ lives, such as those that affect financial opportunities, housing, or employment. The “legal or similarly significant effects” standard has the benefit of capturing high-risk use cases while encouraging interoperability with global frameworks, such as existing guidance and case law under Article 22 of the General Data Protection Regulation (GDPR).
Explainability is a crucial principle for developing trustworthy automated systems, and information about ADM should be meaningful and understandable to the average consumer. As a starting point, the Agency should draw from the National Institute of Standards and Technology’s Principles for Explainable Artificial Intelligence, which describe ways in which explainable systems should (1) provide an explanation; (2) be understandable to their intended end-users; (3) be accurate; and (4) operate within their knowledge limits, or the conditions for which they were designed.
All consumer rights of access should be inclusive and reflective of California’s diverse population, including people who are non-English speaking, differently abled, or lack consistent access to broadband.
Purpose Limitation. The California Privacy Rights Act requires businesses to disclose the purposes for which the personal information they collect will be used, and prohibits them from collecting additional categories of personal information, or using the personal information collected, for additional purposes that are “incompatible with the disclosed purpose for which the personal information was collected,” without giving additional notice. 1798.100(a)(1). As a general business obligation, this provision reflects the principle of “purpose limitation” in the Fair Information Practices (FIPs), and was discussed on Thursday, May 5th.
FPF’s Director of Legislative Research & Analysis Stacey Gray raised the following issues for the Agency on purpose limitation:
Purpose limitation is a fundamental principle of the Fair Information Practices (FIPs) that serves to protect individual and societal privacy interests without relying solely on individual consent management – as such, we encourage the Agency to ensure that it is respected and provide robust guidance on its provisions.
“Incompatible” secondary uses of information should be interpreted strictly and include those not reasonably expected by the average person – for example, invasive profiling unrelated to providing the product or service requested by the consumer; training high-risk algorithmic systems such as facial recognition; or voluntary sharing with law enforcement.
“Compatible” secondary uses of information should include scientific, historical, or archival research in the public interest, when subjected to appropriate privacy and security safeguards.
Opt-out preference signals. Finally, the California Privacy Rights Act envisions a new class of “opt-out preference signals,” sent by browser plug-ins and similar tools to convey an individual’s request to opt-out of certain data processing. As an emerging feature of several U.S. state privacy laws, there are open technical and policy questions for how to ensure that such ‘global’ signals succeed in lowering the burdens of individual privacy self-management.
FPF’s Senior Counsel Keir Lamont provided the following comments to the Agency on global opt-out preference signals on Thursday, May 5th:
Rulemaking should address the primary practical consideration for opt-out preference signals, which is how to address conflicts between different signals or separate, business-specific privacy settings.
The Agency should clarify the extent to which opt-out preference signals can be expected to, and should, apply to separate sets of personal data collected from different sources and in different contexts; and
The Agency should engage with regulators in other states, including Colorado and Connecticut, to establish a multistakeholder process to approve qualifying preference signals as they are developed and refined over time.
Following the public Stakeholder Sessions this week, the Agency is expected to publish draft regulations as soon as Summer or Fall 2022, which will then be available for public comments. Although the timeline could be delayed, the Agency’s goal is to finalize regulations prior to the CPRA’s effective date of January 1, 2023.
FPF Statement on Draft Roe v. Wade Decision
May 3, 2022— Privacy is a fundamental, deeply entrenched right in the United States and around the world. As technology evolves, individuals need more privacy protections, not fewer. This is particularly true when data and decisions about health and autonomy are at stake. Moreover, traditionally underserved communities need courts and lawmakers to elevate their voices, not drown them out. The draft decision overturning Roe v. Wade would reduce privacy protections at a time when individuals and lawmakers are demanding more.
Party of Five: Connecticut Poised to Pass Fifth U.S. State Privacy Law, Improving Upon Virginia, Colorado
This week, the Connecticut legislature passed Senate Bill 6, an ‘Act Concerning Personal Data Privacy and Online Monitoring.’ If SB 6 is enacted by Governor Lamont, Connecticut will follow California, Virginia, Colorado, and Utah as the fifth U.S. state to adopt a baseline regime for the governance of personal data. The law would come into effect on July 1, 2023.
Connecticut’s privacy bill goes beyond existing state privacy laws by directly limiting the use of facial recognition technology, establishing default protections for adolescent data, and strengthening consumer choice, including through requiring recognition of many global opt-out signals. Nevertheless, a federal privacy law remains necessary to ensure that all Americans are guaranteed strong, baseline protections for the processing of their personal information.
-Keir Lamont, Senior Counsel, Future of Privacy Forum
While SB 6 is similar to laws recently passed in Colorado and Virginia, it contains several significant expansions of consumer privacy rights. In addition to core requirements to obtain affirmative consent to process sensitive personal information; consumer rights to opt out of targeted advertising, data sales, and certain profiling decisions; and obligations for businesses to conduct risk assessments and meet purpose specification and data minimization standards, the bill includes:
Clear limits on facial recognition technology: SB 6 would designate biometric data generated from photographs or videos for the unique purpose of identifying a specific individual as a category of sensitive information subject to affirmative consent requirements. In contrast, other recently adopted comprehensive state privacy laws either do not require consent for facial recognition (California), do not define the term “biometric data” (Colorado), or contain ambiguous language (Virginia).
Default protections for adolescent data: Connecticut would join California as the only states to require consent for the monetization of the data of children aged 13 to 15.
Global opt-out signals and stronger consumer opt-out rights: SB 6 would strengthen individual controls by limiting the circumstances in which businesses may reject consumer requests to opt out of data sales, targeted advertising, and profiling. Connecticut would also join Colorado as the only states whose laws clearly and explicitly require the recognition of ‘global’ signals exercising these opt-out rights.
Explicit right to revoke consent: SB 6 goes beyond other state privacy laws by explicitly requiring companies to provide an easy-to-use mechanism allowing consumers to revoke consent for certain high-risk processing of personal data.
Like other state privacy laws, enforcement of SB 6 would be left to the exclusive discretion of the state Attorney General. However, the bill does not provide for future rulemaking, which may limit the ability of SB 6 to adapt to emerging technologies and business practices, and could prevent harmonization with other state approaches on complicated multi-jurisdictional compliance topics, such as global opt-out preference signals. Finally, along with the much weaker Utah Consumer Privacy Act enacted earlier this year, Connecticut’s SB 6 appears to solidify a trend of emerging state privacy laws iterating on the Virginia-Colorado legislative framework, rather than following the narrower regulatory model under development in California.