Red Lines under EU AI Act: Unpacking the prohibition of emotion recognition in the workplace and education institutions

Blog 6 | Red Lines under the EU AI Act Series

This blog is the sixth of a series that explores prohibited AI practices under the EU AI Act and their interplay with existing EU law. You can find the whole series here.

The sixth blog in the “Red lines under the EU AI Act” series focuses on unpacking the prohibition on emotion recognition in the workplace and educational institutions, as contained in Article 5(1)(f) AI Act and explored in the Commission’s Guidelines on the topic. This analysis revealed a number of key takeaways:

Article 5(1)(f) AI Act prohibits AI systems that infer the emotions of a natural person in the areas of the workplace and education institutions on the basis of biometric data, subject to specific exceptions for medical and safety purposes. Recital 44 AI Act invokes the lack of a scientific basis for the functioning of such systems and their key shortcomings, such as limited reliability, lack of specificity, and limited generalisability, which may lead to discriminatory outcomes and intrude upon the rights and freedoms of the persons concerned.

Acknowledging the power imbalances in these environments which, combined with the intrusive nature of these systems, could lead to detrimental or unfavorable treatment of certain natural persons or whole groups, this prohibition aims to protect individuals from potentially invasive emotional surveillance. It is important to note here that AI systems for emotion recognition that are not put into use in the areas of the workplace or educational institutions do not fall within the scope of this prohibition and instead qualify as ‘high-risk’ under Annex III, point 1(c) AI Act.

It is unclear whether AI systems that do not have the identification or inference of emotions as a primary aim, but offer it as a secondary functionality, are covered by the prohibition. Consider, for example, an AI system primarily intended for transcribing meetings that can also infer emotions or intentions, or an AI system that monitors students during a test while at the same time also identifying emotions.

1. Limited scope: The provision does not prohibit ‘emotion recognition systems’, but only ‘AI systems to infer emotions’

The Guidelines highlight that the prohibition in Article 5(1)(f) AI Act does not refer to emotion recognition systems more generally, but only to “AI systems to infer emotions of a natural person”. 

Article 3(39) AI Act defines an ‘emotion recognition system’ as “an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data”. Three elements can be identified in this definition:

  1. The AI system must be used for identifying or inferring;
  2. Emotions or intentions of natural persons;
  3. Based on the biometric data of natural persons.

Hence, the targets of these AI systems are the emotions and intentions of individuals, which may be either identified or inferred (the latter being a lower threshold). The identification or inference must be based on the biometric data of those individuals. This definition seems to cover the use of emotion recognition on individuals, leaving out groups.

The prohibition in Article 5(1)(f) AI Act does not refer to ‘emotion recognition systems’, but only to ‘AI systems to infer emotions of a natural person’. Recital 44 further clarifies that the prohibition covers AI systems ‘to identify or infer emotions.’ ‘Intentions’ are mentioned neither in Article 5(1)(f) nor in Recital 44. Hence, it appears that while the definition of ‘emotion recognition systems’ provided in Article 3(39) can serve as a reference point for this prohibition, it does not equate to what the prohibition covers, the prohibition being narrower in scope.

Certain cumulative conditions must be fulfilled for the prohibition to apply:

  1. The practice must constitute the “placing on the market”, the “putting into service for this specific purpose,” or the “use” of an AI system;
  2. The AI system is used specifically to infer emotions;
  3. The AI system is used in the area of the workplace or education and training institutions; and
  4. The AI system is not intended for medical or safety reasons, which are excluded from the prohibition.

All the cumulative conditions listed above must be met simultaneously to trigger the prohibition, an approach that is consistent across the AI Act’s full set of prohibited practices and that narrows the prohibition down to very specific use cases.

Recital 18 AI Act provides a non-exhaustive list of emotions referred to in this definition, including happiness, sadness, anger, surprise, disgust, embarrassment, excitement, shame, contempt, satisfaction, and amusement. The Recital clarifies that physical states, such as pain or fatigue, are not included in this definition. For example, systems used to detect fatigue in professional pilots or drivers for the purpose of preventing accidents are explicitly excluded from this definition. Nor is the mere detection of “readily apparent expressions, gestures, or movements” included, such as basic facial expressions (a frown or a smile), gestures (movements of the hands, arms, or head), or characteristics of a person’s voice (a raised voice or whispering), unless they are used for identifying or inferring emotions. The Guidelines further clarify that inferring emotions from a written text does not fall within the scope of the prohibition.

What remains unclear about the prohibition’s scope is the thin line between emotions, other readily apparent expressions, and physical states such as pain or fatigue, which also result in expressions that can be mistaken for emotions. The distinction between mood and emotion (both of which can manifest in different ways) is similarly not made, leaving it unclear whether mood detection falls within this prohibition. Drawing these distinctions would require a detailed analysis of many factors and circumstances on a case-by-case basis, rather than of biometric data alone, making the application of this prohibition in practice challenging and complex.

2. It is unclear whether the inference of intentions is also prohibited

The AI Act does not provide clarification on what it means by ‘intentions’ and how to differentiate them from ‘emotions’ in cases when the same system can identify both. This distinction is important because the prohibition applies only to emotions and does not refer to intentions, whereas the definition of ‘emotion recognition systems’ applies to both. 

None of the examples provided in Recital 18 seems to fall within the notion of intentions. While the emotions listed in this Recital represent reactions to situations or environments, the notion of ‘intention’ would have a predictive quality about the future. Additionally, the Commission Guidelines seem to focus solely on emotions, without providing any clarification or definition of ‘intentions’. However, in a non-exhaustive list of examples of emotion recognition, they include the example of “Systems inferring from voice or body gestures, that a student is furious and about to become violent”. While ‘furious’ seems to fall within the notion of ‘emotion’, ‘about to become violent’ makes a prediction about a future action based on the (automatically) detected emotion. This prediction might fall within the notion of an ‘intention’ to commit an action in the future, but it might also be considered an identification of the transition from a passive state (emotions) to an active one (a combination of emotions and intentions), making it difficult to understand whether such a prediction falls within the prohibition. The Guidelines, however, do not seem to make such a distinction.

An example of ‘intentions’ in the workplace could be the detection of an employee’s intention to resign from the job based on their facial expressions during meetings or video calls. A wide range of intentions, such as intentions to commit a crime, intentions to drop out of school, or even suicidal intentions, could fall within this prohibition when detected in the area of the workplace or education. The Guidelines also state that the concept of emotions or intentions should be understood in a broad sense, noting that attitude and emotion are equivalent for the purposes of this prohibition, thus preventing circumvention through changes in terminology.

Another distinction of the prohibition from the definition of ‘emotion recognition systems’ is that the prohibition refers only to the ‘inference of emotions’ as a prohibited practice, whereas the definition of ‘emotion recognition systems’ includes both ‘identification’ and ‘inference’ of emotions.

The Dutch Data Protection Authority (AP) interprets the prohibition as covering both the inference and the identification of emotions and intentions. The Guidelines distinguish between “identification” and “inferring”, clarifying that identification occurs where the processing of a natural person’s biometric data (for example, of the voice or a facial expression) makes it possible to directly compare an emotion with, and identify it against, one that has been pre-programmed in the emotion recognition system. “Inferring” involves deduction through analytical processes, including machine learning approaches that learn from data how to detect emotions.

3. The prohibition refers only to emotions deriving strictly from biometric data

According to the Guidelines, this prohibition has a scope similar to that of the rules applicable to other emotion recognition systems,1 but is limited to inferences based on a person’s biometric data, as referred to in the definition in Article 3(39) AI Act. Hence, the prohibition in Article 5(1)(f) refers only to emotions deriving strictly from biometric data.

Biometric data is defined in Article 3(34) AI Act as “personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, such as facial images or dactyloscopic data”. The relationship between the AI Act definition of biometric data and that provided in the GDPR is explored below in section 6 of this blog.

The limited scope of this ban on inference based on biometric data excludes AI systems that perform emotion recognition through inferences not based on biometric data, such as AI systems for crowd control, as well as AI systems inferring physical states such as pain and fatigue.

4. A broad interpretation of the ‘workplace’ and ‘education institutions’ contexts

The prohibition in Article 5(1)(f) AI Act is limited to emotion recognition systems specifically in the “areas of workplace and educational institutions”. The Guidelines submit that the workplace context should be interpreted broadly, covering any physical or virtual space where work is performed. The Guidelines also specifically mention training institutions as covered by this prohibition, even though training institutions are not mentioned in the AI Act’s provision or any of the related Recitals. Beyond simply listing training institutions alongside educational institutions, the Guidelines do not provide any further details as to what constitutes a training institution.

Interestingly, the Guidelines clarify that hiring processes also fall within the workplace context for the purpose of this prohibition. Similarly, the Guidelines clarify that educational institutions should encompass all types and levels of education, including admissions procedures, and should also be interpreted broadly, without any limit as to the types or ages of pupils or students or to a specific environment.

Based on the text of Recital 44, this prohibition also covers AI systems for emotion recognition in situations related to the workplace and education. This broadens the prohibition’s area of applicability while leaving space for interpretation as to what “related to the workplace” might consist of, an interpretation that may require a case-by-case analysis. The Dutch AP has interpreted this broad notion as including, for example, home working environments, online or distance learning, and the application of emotion recognition to recruitment and selection or to applications for education. Further clarification, with clearer guidelines, might be necessary in this regard to ensure legal certainty, while keeping in mind the evolving nature of the ‘workplace’.

It is important to note that emotion recognition systems installed in a work environment can also be used for emotion detection of customers rather than employees, for example, to detect suspicious customers. The AI Act prohibition does not apply to such cases. However, the same system might detect the emotions of employees simultaneously with those of customers. The Guidelines only superficially touch upon this scenario in an example stating that in such cases, it should be ensured that “no employees are being tracked and there are sufficient safeguards”.

5. Exception(s) to the prohibition: medical and safety reasons

In a Joint Opinion on the AI Act, the European Data Protection Board (EDPB) and European Data Protection Supervisor (EDPS) state that the “use of AI to infer emotions of a natural person is highly undesirable and should be prohibited.” In this statement, and in a later EDPS Opinion, they further note that exceptions should be made for “certain well-specified use-cases, namely for health or research purposes”.

The Guidelines note that the exception granted in this prohibition should be narrowly interpreted, limited to what is strictly necessary and proportionate, including limits in time, personal application, and scale, and should be accompanied by sufficient safeguards in order to ensure a high level of fundamental rights protection. The recitals of the AI Act stress that the exception applies to AI systems used strictly for medical or safety reasons, for example, a system intended for therapeutic use. As such, the therapeutic uses mentioned in Recital 44 AI Act as an exception should only apply to CE-marked medical devices. The Guidelines note that this exception does not extend to general well-being monitoring, such as stress or burnout detection.

Additionally, the Guidelines note that safety exceptions should be limited to protecting life and health, excluding other interests such as property protection or fraud prevention. “Explicit need” is also mentioned as a requirement for the exception to apply. The Guidelines also highlight that data collected and processed in this context cannot be used for any other purpose, hence linking to the GDPR’s purpose limitation principle.

6. Interplay with the GDPR

The Guidelines mention, in a footnote, that there is a distinction between the definition of biometric data within the AI Act and the definition within the GDPR. The definition provided in the AI Act “does not include the wording ‘which allow or confirm the unique identification’” (the functional use of biometric data), contrary to the definition of biometric data in the GDPR, which includes this requirement. As such, the Guidelines conclude that “the GDPR definition of biometric data will apply under data protection rules with regard to the processing of personal data (and when for example Article 9(1) and 9(2) GDPR would be applicable)”. This would mean that the AI Act definition applies in AI contexts, whereas the GDPR definition applies in data protection contexts.

When reaching this conclusion, the Commission appears to have disregarded Recital 14 of the AI Act, according to which the concept of biometric data in the AI Act must be interpreted “in the light of” the concept of biometric data in the GDPR. Clarifying this gap is crucial given the high bar that needs to be met for unique identification.

If we stick to the AI Act definition, the notion of ‘biometric data’ becomes broader, covering most emotion recognition systems that use biometric data without necessarily having the ability to identify the individual. These systems would be left out of the prohibition if the GDPR definition of biometric data were applied. This raises the question of whether a categorization of biometric data is needed for the purposes of Article 5(1)(f), to clarify the notion sufficiently to determine without doubt which practices count as emotion recognition under this prohibition.

Data protection regulators had already been treating emotion recognition systems, including those in the workplace, as high-risk under data protection law, even before the AI Act became applicable. In the Budapest Bank case, the Hungarian DPA found that AI emotion recognition in the workplace poses fundamental rights risks and ordered the Bank to modify its data processing practices to comply with the GDPR, specifically by refraining from analyzing emotions during voice analysis. With regard to the emotion recognition of employees based on voice analysis, the DPA stressed the need for a separate balancing of interests, taking into account employees’ vulnerable position resulting from their subordinate status. It can be noticed that the reasoning behind the ban imposed by Article 5(1)(f) of the AI Act is reminiscent of the reasoning in this case.

However, it is interesting to note that in the Budapest Bank case, the DPA did not explicitly classify the voice-derived data as biometric data under Article 9 GDPR. The DPA treated the data as ordinary personal data, attracting heightened scrutiny due to the AI risk, rather than as special category biometric data triggering the application of Article 9 outright. Nevertheless, the Hungarian DPA specifically referenced the EDPB-EDPS Joint Opinion 5/2021 on the AI Act proposal, highlighting that AI-based emotion recognition systems pose a high risk to the fundamental rights of data subjects.

7. Concluding Reflections

The AI Act’s prohibition is consistent with previous case law of DPAs, on the basis of the GDPR

The prohibition’s premise regarding the power imbalance and the vulnerable position of employees and students is reminiscent of a DPA’s fine in a similar case. As such, in the Budapest Bank case of 2022, the Hungarian DPA found that the use of AI for emotion recognition of employees and consumers breached several of the GDPR’s key principles and obligations.

The definition of ‘biometric data’ in the AI Act does not seem to be aligned with the definition of the same concept in the GDPR, creating confusion as to which definition applies in this prohibition.

The Guidelines give precedence to the definition of biometric data in the AI Act for the purpose of Article 5(1)(f) prohibition, whereas the AI Act itself, in its Recital 14, seems to give precedence to the concept of biometric data in the GDPR.

The prohibition expressly differentiates between emotions derived from biometric data and those derived from other types of analysis not involving biometric data, the latter practice being excluded from the prohibition.

Biometric data is defined as “personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, such as facial images or dactyloscopic data”. The narrow scope of this ban on inference based on biometric data excludes AI systems that perform emotion recognition through other inferences not on the basis of biometric data.

  1.  See Annex III, point 1(c), and Article 50 AI Act. ↩︎

Privacy Protections Coming Sooner Rather Than Later to the Sooner State

Oklahoma has become the latest U.S. state to enact a comprehensive consumer privacy law after Governor Stitt signed SB 546 into law on March 20. This ends two long legislative droughts: First, this is the long-awaited 20th state comprehensive privacy law and the first since the Rhode Island Data Transparency and Privacy Protection Act was enacted in June 2024. Second, as the bill’s sponsor Rep. West (R) identified in House floor debate, this concluded Oklahoma’s multi-year journey to enacting a comprehensive consumer privacy law. 

SB 546 is a Virginia-style law with few deviations from that model, and it will go into effect on January 1, 2027. This resource provides an overview of the law’s scope, consumer rights, business obligations, and enforcement provisions.

Definitions and Scope

Covered Entities: This law applies to controllers and processors who conduct business in Oklahoma (or produce a product or service targeted to Oklahoma residents) and annually either (1) control or process the personal data of at least 100K consumers, or (2) control or process the personal data of at least 25K consumers and derive over 50% of gross revenue from selling personal data. (Section 15.) These numbers are consistent with the thresholds established in other states.

Definitions: The law’s definitions are generally consistent with the Virginia model. For example, this law includes the narrower definition of “sale,” which is limited to exchanges of personal data only for monetary consideration (not other valuable consideration). One divergence from the Virginia model is that the definition of “biometric data” includes data generated from a physical or digital photograph or a video or audio recording if such data is generated to identify a specific individual. (Section 1.)

Entity and Data-Level Exemptions: The law includes entity-level exemptions for state agencies and political subdivisions (and service providers acting on their behalf), financial institutions subject to Title V of GLBA, covered entities and business associates governed by HIPAA, nonprofits, and institutions of higher education. The law also includes data-level exemptions for data subject to GLBA, HIPAA, FCRA, and FERPA, personal data processed in the course of a purely personal or household activity, personal data collected and used for purposes of the federal policy under the Controlled Substances Act, and more. (Sections 15–16.)

Exceptions for Common Business Activities: The law includes many exceptions which are consistent with existing state comprehensive privacy laws, including: legal compliance (local, state, or federal laws, rules, or regulations, and government subpoenas, summons, inquiries, or investigations); providing a specifically requested product or service; preventing, detecting, protecting against, or responding to security incidents, deceptive activities, or any illegal activity; engaging in public or peer-reviewed scientific or statistical research in the public interest; conducting “internal operations” that are reasonably aligned with the consumer’s expectations or existing relationship with the controller, or that are otherwise compatible with processing data in furtherance of the provision of a specifically requested product or service; and more. (Section 19.)

Consumer Rights

Consumers have the standard rights to confirm whether a controller is processing their personal data and access that data, correct inaccuracies in their personal data, delete their personal data, obtain a copy of their personal data in a portable format (if technically feasible), and opt out of the processing of their personal data for targeted advertising, the sale of personal data, or profiling in furtherance of a decision that produces a legal or similarly significant effect concerning the consumer. Controllers must notify consumers within 45 days if they are declining to take action on a rights request and provide instructions on how to appeal that decision. The law does not include any provisions regarding authorized agents or opt-out preference signals. (Sections 2-3.)

Consistent with most other state laws, the rights of access, correction, deletion, and portability do not apply to pseudonymous data in cases where the controller can demonstrate that any information necessary to identify the consumer is kept separately and is subject to effective technical and organizational controls preventing the controller from accessing the information. (Section 11.)

Business Obligations

Controllers and processors have enumerated responsibilities under the law, including transparency, data minimization, data security, oversight of processors, and data protection assessments. 

Transparency: Controllers must provide consumers with a “reasonably accessible and clear” privacy notice including information such as categories of data processed, processing purposes, how to exercise rights and appeal decisions, and categories of personal data shared with third parties. (Section 8.)

Data Minimization: The law includes procedural data minimization and purpose limitation requirements: A controller must limit the collection of personal data to what is “adequate, relevant, and reasonably necessary” for the purposes disclosed to the individual, obtain opt-in consent to process personal data for purposes that are “neither reasonably necessary to nor compatible with the disclosed purposes for which the personal data is processed,” and must obtain opt-in consent to process a consumer’s sensitive data. (Section 7.)

Data Security: Controllers must maintain “reasonable administrative, technical, and physical measures to protect the confidentiality, integrity, and accessibility” of personal data. (Section 7.)

Processors: Controllers must engage in oversight of processors by entering into a contract that meets statutory criteria, such as setting forth instructions for processing data, the nature and purpose of the processing, and confidentiality requirements, and obligating the processor to cooperate with “reasonable assessments” by the controller or the controller’s designated assessor. (Section 9.)

DPIAs: Controllers must conduct and document a data protection assessment for certain processing activities: processing personal data for targeted advertising; selling personal data; processing personal data for profiling that presents a reasonably foreseeable risk of substantial injury to consumers (e.g., unfair or deceptive treatment; financial, physical, or reputational injury; or intrusion upon solitude, seclusion, or private affairs); processing sensitive data; or other processing activities involving personal data that present a heightened risk of harm to consumers. (Section 10.)

Enforcement

The law will go into effect on January 1, 2027 and will be enforced exclusively by the attorney general. (Section 22.) The law includes a mandatory cure period, requiring the AG to notify controllers or processors of alleged violations and allowing 30 days for them to resolve violations. (Sections 13–14.) The civil penalty for each violation is $7,500. (Section 14.)

* * *

At long last, Oklahoma takes its place on the privacy patchwork. Looking to get up to speed on the existing state comprehensive consumer privacy laws? Check out FPF’s 2025 report, Anatomy of a State Comprehensive Privacy Law: Charting the Legislative Landscape.


Pictured: Oklahoma receiving its star on the FPF “Privacy Patchwork” quilt.

Navigating Autonomy and Privacy in Emerging AgeTech: Insights from the FPF Roundtable

As AgeTech expands into homes across the country—seeking to enable older adults to live independently longer—fundamental questions about autonomy, privacy, and trust are coming into sharper focus. How do we balance caregiver support with individual privacy? Should data pertaining to older adults be treated as “sensitive”? And in a fragmented privacy and consumer protection landscape, how do we build the trustworthiness necessary for adoption?

The Future of Privacy Forum recently convened a pivotal AgeTech Roundtable, supported by the Alfred P. Sloan Foundation, to delve into the ethical, legal, and policy challenges presented by the rapidly expanding field of AgeTech. The core discussions centered on balancing the growth of these tools without compromising the fundamental autonomy and privacy of the older adults they are intended to serve. This post highlights key themes from the roundtable and outlines next steps in FPF’s ongoing AgeTech initiative.

The Core Tension: Privacy vs. Autonomy

AgeTech devices, ranging from smart home sensors to AI-enabled companion devices, collect highly sensitive personal data, including health, location, voice, and behavioral patterns. A critical nuance explored during the roundtable was the unique privacy calculation older adults face: the willingness to accept continuous monitoring in exchange for the ability to live independently at home for longer.

However, this trade-off is complicated by several factors:

The Crisis of Trust: Fraud and AI

Scams targeting older adults are one of the top consumer protection concerns, and the rising phenomenon of fraud and financial theft severely undermines trust in AgeTech products.

Conclusion and Next Steps

The insights gathered at the roundtable—which centered on the ethical, legal, and policy challenges in the emerging AgeTech landscape—provide a robust foundation for FPF’s continued work to ensure that AgeTech serves to enhance, not diminish, the dignity and independence of older adults.

Recommendations and next steps from roundtable attendees included:

Incentives or Obligations? The U.S. Regulatory Approach to Voluntary AI Governance Standards

By FPF Legal Intern Rafal Fryc

As artificial intelligence is increasingly deployed across every sector of the economy, regulators find themselves grappling with a fundamental challenge: how to govern a technology that defies traditional regulatory frameworks and changes faster than legislation can keep pace. One increasingly common approach can be found outside the text of statutes, where state legislatures are pointing developers and deployers toward established voluntary governance frameworks like NIST’s AI Risk Management Framework or ISO 42001. This shift toward incorporating non-binding technical standards into legal requirements represents more than regulatory convenience: it is creating a new legal regime in which voluntary industry guidelines influence everything from negligence determinations and punitive damage calculations to affirmative defenses against regulatory actions. Understanding how these soft law approaches are shaping legal expectations has become essential for anyone building, deploying, or governing AI systems.

This blog post highlights: 

The Growth of AI Laws Utilizing Voluntary Standards

Colorado’s AI Act (SB 205), prior to the revised policy framework, was the first in the U.S. to require deployers to implement a “risk management policy and program” that aligns with NIST’s AI RMF, ISO 42001, or another “nationally or internationally recognized risk management framework for artificial intelligence systems.”1 In addition to requiring implementation, Colorado offered deployers and developers an affirmative defense for compliance with these frameworks. Although Colorado was the first to introduce such provisions in the AI space, mentions of external standards in the Act were removed by the Colorado AI Policy Working Group in its latest proposed revisions. Texas followed with the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), likewise creating an affirmative defense for developers or deployers that comply with NIST’s AI RMF or another nationally or internationally recognized risk management framework for AI systems.2 Most recently, California’s Transparency in Frontier Artificial Intelligence Act (TFAIA) requires developers to disclose whether and to what extent they incorporate “national standards, international standards, and industry-consensus best practices.”3 New York’s RAISE Act (A 9449) takes a similar approach to California, requiring developers to disclose how they “handle” incorporating external standards. Montana (SB 212) also passed a narrow law, requiring deployers to implement a risk management framework that considers external standards when AI is deployed in critical infrastructure.

While Texas takes an incentive-based approach, offering compliance with a non-binding framework as an affirmative defense, California requires that deployers or developers consider and publish their approach to risk management frameworks that align with, or are substantially similar to, nationally or internationally recognized standards. Colorado originally occupied a complicated middle ground, mandating adherence for deployers’ risk management while simultaneously offering an affirmative defense from state AG actions for developers, deployers, and “other persons.” As with many laws surrounding emerging technologies, the approach is fragmented: entities utilizing voluntary AI governance frameworks are subject to varying degrees of liability and protection.


The Proposed Laws and Their Application

Nonetheless, the approach appears to be gaining momentum, particularly in proposed legislation regarding frontier models, liability, and automated decisionmaking. These proposals take language from existing bills to require or encourage developers and deployers to have a written policy that takes into account NIST’s AI RMF or a similar framework.4

Frontier Model Bills

Bills focusing on frontier models either copy or expand California’s approach. While California only required developers to publish their approach to external standards, some bills require developers and deployers to implement a framework that incorporates the same standards. However, bills under this category notably do not refer to specific standards like NIST or ISO, instead opting for the more general terms “industry-consensus best practices” and “national/international standards.” Within this category, the bills fall under two approaches: either mandating a single framework that incorporates or considers external standards, or mandating both a public safety plan and a separate child protection plan that do the same. Bills in the first category include Illinois SB 3312 and Illinois HB 4799. Interestingly, both bills require any amendments or rulemakings made pursuant to the statute to also consider the same external standards. Bills in the second category include Illinois SB 3261, Utah HB 286, Tennessee SB 2171, and Nebraska LB 1083. These bills not only focus on frontier model developers but also target chatbot providers.

Liability Bills

Bills focusing on liability typically follow Texas’ approach, giving developers and deployers a safe harbor from product liability litigation if they implement external standards at various points in the AI lifecycle. Bills in this category include Illinois SB 3502/SB 3590, Maryland HB 712, and Vermont H 792. The requirements to achieve safe harbor differ for developers and deployers: developers must conduct “testing, evaluation, verification, validation, and auditing of that system consistent with industry best practices” and also submit a data sheet to the state Attorney General that includes:

  1. “Information on the intended contexts and uses of the artificial intelligence system in accordance with industry best practices;
  2. Information regarding the datasets upon which the artificial intelligence system was trained, including sources, volume, whether the dataset is proprietary, and how the datasets further the intended purpose of the product; 
  3. Accounting of foreseeable risks identified and steps taken to manage them consistent with industry best practices; and 
  4. Results of red-teaming testing and steps taken to mitigate identified risks, consistent with industry best practices.”

Deployer requirements for safe harbor are comparatively relaxed: the bills mandate a risk management framework that incorporates external standards.

ADMT Bills

Bills focusing on ADMT follow Colorado’s original approach, adopted prior to the revised policy framework, combining requirements to implement these standards with safe harbors from litigation to encourage doing so. Washington’s HB 2157 presumes conformity with the statute if the developer or deployer follows NIST’s AI RMF or ISO 42001. The bill also offers deployers a rebuttable presumption if they implement a risk management framework that conforms to NIST, ISO, or another standard of similar rigor. New York’s S 1169, on the other hand, requires developers and deployers to implement a risk management policy that conforms with NIST’s AI RMF or another standard designated by the state’s Attorney General. This bill diverges from the others by granting the state Attorney General power to name which standards qualify under the statute, whereas the majority of other bills leave that determination open.

The Real Effects on Litigation

Industry standards like NIST’s AI RMF can also be used by courts to determine whether companies exercised reasonable care, even when no statute requires their adoption. This judicial reliance on voluntary standards follows established patterns from product liability and negligence cases, where courts have long looked to industry practices to define standards of care. In AI litigation, these frameworks can emerge as critical evidence in three areas: establishing the duty of care in negligence claims, determining defects in strict liability cases, and assessing good faith conduct when calculating punitive damages. The result is that compliance with non-binding standards can determine liability regardless of whether a jurisdiction has AI-specific legislation.

Product liability in AI litigation can be split into two categories: negligence cases and strict liability cases. Negligence cases depend on whether the defendant owed a duty of care to the plaintiff and whether that duty was breached. A duty of care’s existence depends on many factors, including industry standards. Many courts have recognized that “[e]vidence of industry standards, customs, and practices is ‘often highly probative when defining a standard of care.’”5 Other courts have also concurred that “advisory guidelines and recommendations, while not conclusive, are admissible as bearing on the standard of care in determining negligence.”6 Openly complying with industry standards can provide an objective input into an otherwise subjective determination.

Strict liability takes a different approach, automatically imposing liability in cases of dangerous animals, ultrahazardous activities, and product defects. Although not yet established, harm caused by AI would most likely qualify under the product defect theory, which would make an AI developer liable if they failed to provide adequate warning or if the product is “defective.”7 The current wave of chatbot litigation points to this approach, with various plaintiffs pursuing this avenue.8 In determining what qualifies as “adequate warning” or whether a product is “defective,” courts often look to industry standards. Although a minority of states do not consider industry standards for strict liability,9 the majority of states and federal courts do.10 Following widely adopted, established standards, like those published by NIST and ISO, can prove dispositive in most strict liability cases relating to AI.

In terms of punitive damages, both state and federal courts have looked favorably on adherence to industry standards when determining whether the defendant acted in good faith; however, there have been exceptions:

  1. A manufacturer followed industry standards but actively resisted safer designs based on economic considerations.11
  2. A company followed industry standards but knew about a remaining risk and failed to warn and remedy the risk.12
  3. A manufacturer followed industry standards but knowingly engaged in conduct that endangered people.13

While following industry standards has been beneficial for establishing “good faith” when punitive damages are assessed, it mostly serves as the baseline for expected behavior. Being able to readily demonstrate adherence to external standards is necessary to avoid hefty punitive awards.

Whether through statutory requirements in Colorado and California, affirmative defenses in Texas, or judicial interpretation of “reasonable care” and “good faith” in liability cases, frameworks like NIST’s AI RMF and ISO 42001 are transitioning from voluntary best practices to de facto legal requirements. For AI developers and deployers, the message is clear: applying these standards is becoming less optional and more essential for managing legal risk. Companies that wait for explicit regulatory mandates may find themselves already behind the curve. Organizations should begin implementing recognized risk management frameworks now, not because the law explicitly requires it everywhere, but because the legal system is already treating these standards as the baseline for reasonable conduct.

Beyond the defensive calculus of avoiding penalties and litigation exposure, there is an equally compelling case for adopting external standards: they are simply good governance. These frameworks are road-tested guides for building AI systems that produce accurate, ethical, and trustworthy outcomes. Organizations that internalize them are better positioned to gain customer trust, regardless of what any particular state legislature has done. Given the near impossibility of creating compliance programs that anticipate every nuance of every emerging AI law across fifty states, demonstrating a genuine, documented commitment to robust governance gives regulators reason to extend good faith when questions arise. An organization that can point to systematic, principled governance processes is better positioned in a regulatory conversation than one scrambling to reverse engineer compliance after the fact. The growing statutory references to NIST and ISO standards are a development worth watching closely, but the stronger argument for adoption may be the proactive one: these frameworks represent a genuine commitment to getting AI governance right, not merely a hedge against enforcement risk.


  1. Colo. Rev. Stat. Ann. § 6-1-1703. ↩︎
  2. 2025 Tex. Sess. Law Serv. Ch. 1174, Sec. 552.105(e)(2)(D) (H.B. 149). ↩︎
  3. Cal. Bus. & Prof. Code § 22757.12. ↩︎
  4. H.B. 286, 13-72b-102(1)(a) (Utah 2026); H.B. 4705, Sec. 15(a)(3)(A), 104th Gen. Assem. (Ill. 2026); S.B. 2171, 68-107-103(a)(1)(G), 114th Gen. Assem. (Tenn. 2026); N.Y. S8828 § 1421(a) (2026). ↩︎
  5. Elledge v. Richland/Lexington Sch. Dist. Five, 341 S.C. 473 (Ct. App. 2000). ↩︎
  6. Cook v. Royal Caribbean Cruises, Ltd., No. 11-20723-CIV, 2012 WL 1792628, at *3 (S.D. Fla. May 15, 2012). ↩︎
  7. Restatement (Second) of Torts § 402A (Am. L. Inst. 1965). ↩︎
  8. Garcia v. Character Techs., Inc., No. 6:24-cv-1903-ACC-DCI, 2025 U.S. Dist. LEXIS 215157 (M.D. Fla. Oct. 31, 2025). ↩︎
  9. Sullivan v. Werner Co., 253 A.3d 730 (2021). ↩︎
  10. Joshua D. Kalanic et al., AI Soft Law and the Mitigation of Product Liability Risk, CTR. FOR L., SCIENCE, & INNOVATION, ARIZONA STATE UNIVERSITY (Jul. 2021), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3909089 (“Yet, many states still allow industry standards to be presented as evidence because such standards can help show the reasonableness and adequacy of the design.”) ↩︎
  11. Gen. Motors Corp. v. Moseley, 447 S.E.2d 302 (1994). ↩︎
  12. Flax v. DaimlerChrysler Corp., 272 S.W.3d 521 (Tenn. 2008). ↩︎
  13. Uniroyal Goodrich Tire Co. v. Ford, 461 S.E.2d 877, 880 (1995). ↩︎

Red Lines under the EU AI Act: Understanding the ban on the untargeted scraping of facial images for facial recognition databases

Blog 5 | Red Lines under the EU AI Act Series

This blog is the fifth of a series that explores prohibited AI practices under the EU AI Act and their interplay with existing EU law. You can find the whole series here.

1. Introduction

The fifth blog in the “Red lines under the EU AI Act” series focuses on unpacking the Article 5(1)(e) prohibition on placing on the market, putting into service, or using AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the Internet or CCTV footage. Notably, this provision specifically targets the acts that precede facial recognition itself, which is tackled separately under a different provision of the AI Act, Article 5(1)(h). A number of key takeaways emerge from our analysis:

Following this brief introduction, Section 2 outlines the rationale behind the prohibition, while Section 3 notes its specific scope as defined in the differentiation between “targeted” and “untargeted” scraping. Section 4 outlines what falls outside the scope of the prohibition, potentially including use-cases of AI-driven deepfakes, while Section 5 explores the AI Act’s interplay with other relevant areas of EU law, including the GDPR and Law Enforcement Directive (LED). After noting significant cases on facial recognition by DPAs, Section 6 includes concluding reflections and key takeaways. 

2. Context and rationale: untargeted scraping of facial images as a particularly intrusive practice posing “unacceptable risk”, consistent with past case law under the GDPR 

Article 5(1)(e) AI Act prohibits the creation or expansion of facial recognition databases through the untargeted scraping of internet or CCTV footage. The European Commission’s Guidelines on Prohibited Artificial Intelligence Practices under the AI Act recognize that the untargeted scraping of facial images “seriously interferes with individuals’ right to privacy and data protection and deny those individuals the right to remain anonymous”. This is further supported by Recital 43 AI Act, which recognizes that the untargeted scraping of facial images can add to the feeling of mass surveillance and lead to gross violations of fundamental rights, including the right to privacy. 

The context and rationale of the AI Act’s prohibition is consistent with past case law by DPAs across the EU on the basis of the GDPR. Indeed, the expansion and creation of facial recognition databases on the basis of the untargeted scraping of data, including biometric data such as facial images, has been a continuous area of serious concern for DPAs. From 2022 to 2024, several DPAs imposed large fines on Clearview AI for GDPR violations due to practices related to facial recognition, as highlighted in Section 5 of this blog. 

3. Defining facial recognition databases and (targeted vs.) untargeted scraping

    Article 5(1)(e) AI Act states that the following practice shall be prohibited: “the placing on the market, the putting into service for this specific purpose, or the use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage” (emphasis added).


Four cumulative conditions must be met for the prohibition to apply:

  1. The practice must constitute market placement, putting into service for this specific purpose, or usage of the AI system;
  2. Aim to create or expand facial recognition databases;
  3. Employ AI tools for untargeted scraping methods; and
  4. Source images from either the internet or CCTV footage.

The Guidelines clarify that, similarly to the other Article 5 prohibitions, all four cumulative conditions above must be met simultaneously to trigger the prohibition. This approach, a consistent element of the AI Act’s full set of prohibited practices, seems to ensure a targeted approach to banning very specific uses of AI technologies. The prohibition applies to both providers and deployers who, in accordance with their responsibilities and placement in the value chain, have a responsibility not to place on the market, put into service, or use AI systems for this specific purpose.

The Guidelines stress that Article 5(1)(e) AI Act does not require that the sole purpose of the database is to be used for facial recognition; it is sufficient that the database can be used for facial recognition. The Guidelines define a “database” in this context as any collection of data or information that is specially organized for rapid search and retrieval by a computer, and which may be temporary, centralized, or decentralized.

An important distinction in the application of this provision is between targeted and untargeted scraping: the prohibition does not apply to any scraping tool with which a database for face recognition may be constructed or expanded, but only to tools for untargeted scraping. In this context, untargeted scraping is defined as a technique absorbing as much data and information as possible from different sources, without a specific focus on a given individual or group of individuals. This may be done using a variety of scraping tools and techniques, including web crawlers, bots, or other means that allow for the automatic extraction of data from a variety of sources, including CCTV footage, social media, and other websites.

It is crucial to determine the precise scope of the scraping, since the Guidelines further note that the prohibition does not cover “targeted” scraping, such as the collection of images or videos of specific individuals or pre-defined groups of persons for law enforcement purposes. Furthermore, in more complex systems combining targeted with untargeted searches, only the untargeted scraping is prohibited.

Notably, the Guidelines highlight that the publication of images on social media by natural persons does not constitute consent for inclusion in facial recognition databases, aligning with the GDPR notion of (valid) consent as a legal basis for processing personal data.

4. What falls outside the scope of the prohibition?

While specifically targeted scraping is in some cases allowed, several other practices fall outside the prohibition’s scope, including the untargeted scraping of biometric data other than facial images (such as voice samples) and, importantly, non-AI scraping methods. The Guidelines also note that AI systems which harvest large amounts of facial images from the internet to build AI models that generate new images of fictitious persons similarly fall outside the scope of the prohibition.

The logic behind this last use-case is seemingly to permit the effective training of AI models, and it explicitly falls outside the scope of the prohibition; nevertheless, attention should be paid to the compliance of this practice with both copyright and data protection laws. Indeed, AI systems that scrape large amounts of facial images to build AI models may trigger the dual application of EU copyright rules, which protect the images themselves to the extent they are under copyright protection, and of the GDPR, which protects facial images as personal data, or even as special category biometric data where they are processed for the purpose of uniquely identifying a person. While the scope of this prohibition was agreed upon by co-legislators during final negotiations for the AI Act, this particular use-case may not account for the increasing sophistication of AI-driven deepfakes.

In fact, at the time of writing, the European Parliament reportedly reached a political agreement on the AI Act Omnibus, whose latest compromise text includes a new addition to the list of prohibited practices. Namely, once adopted, non-consensual sexual deepfakes will be banned under the revised AI Act.

It is also worth noting that while this new ban will allow for further protection, it will not cover all use-cases of AI-driven deepfakes, potentially marking an area of continuous, ongoing review by regulators and legislators alike. For this purpose, outside of the Omnibus procedure, Article 112 AI Act empowers the Commission to assess and review the list of prohibited practices on a yearly basis, with the resulting assessment report to be submitted to the EU Parliament and Council.

5. Interplay with other EU laws: From the GDPR to the LED

5.1. Facial recognition as a long-standing regulatory priority for DPAs across the EU

The creation and expansion of facial recognition databases on the basis of untargeted scraping of facial images has been a prominent area of regulatory intervention on the basis of the GDPR. In February 2022, the Italian DPA (Garante) fined Clearview AI €20 million, imposed a ban on the company’s further collection and processing of data, including biometric data, and ordered the erasure of such data relating to citizens on Italian territory.

In October 2022, the French DPA (CNIL) similarly imposed a fine of €20 million on Clearview AI, recognizing the very serious risk to individuals’ fundamental rights posed by its facial recognition software. In September 2024, in an ex officio investigation, the Dutch DPA (AP) fined Clearview AI €30 million for “illegal data collection for facial recognition.”

In their investigations, DPAs found breaches of the GDPR’s Article 6 (lawfulness of processing) and Article 9 (processing of special categories of personal data), and a failure to fulfil data subject rights, particularly those found in Article 15 (right of access) and Article 17 (right to erasure). The Garante also found breaches of key principles of data protection, in particular of lawfulness, fairness, and transparency (Article 5(1)(a) GDPR), the purpose limitation principle (Article 5(1)(b) GDPR), and the storage limitation principle (Article 5(1)(e) GDPR). As such, in addition to constituting a prohibited practice under the AI Act, the untargeted scraping of facial images for the purposes of creating or expanding a facial recognition database also contravenes several obligations found in the GDPR.

The Guidelines themselves similarly note that, in general, the processing of personal data via the untargeted scraping of the Internet or CCTV material to build up or expand face recognition databases is unlawful, and there is no legal basis under the GDPR for such activity.

    5.2. Law Enforcement use of facial recognition databases

    Law Enforcement Authorities (LEAs) use facial recognition databases for identification purposes, allowing for the automated identification of individuals that may in some way be related to criminal events, such as suspects, wanted persons, victims, or witnesses. Among the different types of databases used for face matching by LEAs are also databases consisting of surveillance footage or private data sources and open-source data from the internet. 

    While the AI Act prohibits the creation or expansion of facial recognition databases through the untargeted image scraping from the internet or CCTV footage, the provision does not seem to prohibit the use of already existing databases that were previously created from untargeted scraping of internet or CCTV footage that are used by LEAs for face matching and identification purposes. Hence, there might be a legal gap between the prohibition of the creation of new databases and the expansion of existing databases from image scraping, and the use of such databases that were created prior to the entry into force of the AI Act prohibition. 

    The AI Act’s Article 5(1)(e) prohibition admits no exceptions for law enforcement use, unlike Article 5(1)(h) on real-time remote biometric identification (to be explored in the final instalment of this blog series), which has a carve-out for competent authorities in public spaces under strict conditions. The AI Act’s blanket ban seems intentional to prevent circumvention through law enforcement justifications. 

    The LED, the specific legal framework for data protection in law enforcement, takes a more balanced approach: it may permit particularly intrusive practices if proportionate, necessary, and legally grounded. Hence, if a biometric database is strictly necessary, sufficiently targeted (i.e., footage related to a specific investigation), and proportionate for law enforcement purposes, it passes the LED test. 

    Article 10 LED governs the processing of special categories of data, including biometric data processed for the purpose of uniquely identifying a natural person, and permits such processing only where it is strictly necessary, subject to appropriate safeguards, and authorized by Union or Member State law. Untargeted scraping does not seem to satisfy Article 10 conditions. 

    Hence, even though the LED does not explicitly prohibit the use of databases built from untargeted scraping, its strict requirements implicitly lead to the same normative position as the AI Act. The primary difference is that the AI Act’s prohibition does not engage with that balancing at all: untargeted scraping is simply prohibited. The two legal instruments thus create overlapping and mutually reinforcing layers of prohibition. One question that remains is whether a database that was created outside of the EU can be used by LEAs in the EU in accordance with the LED or the AI Act.

    6. Concluding Reflections and Key Takeaways

    The AI Act’s prohibition is consistent with DPAs’ previous decisions under the GDPR, which remains the most comprehensive source of protection in facial recognition use-cases

    The prohibition’s differentiation between the targeted and untargeted scraping of facial images, and the subsequent ban of untargeted scraping, is reminiscent of several DPAs’ fines, particularly in the line of Clearview AI cases between 2022 and 2024. DPAs, including the Italian Garante, the Dutch AP, and the CNIL, found that Clearview AI’s facial recognition software breached several of the GDPR’s key principles and obligations.

    The prohibition expressly differentiates between “targeted” and “untargeted” scraping, thereby limiting its application and excluding qualifying “targeted” scraping from its scope 

    The differentiation between targeted and untargeted scraping is also significant because the AI Act does not include a blanket ban on all scraping of facial images. Indeed, it acknowledges that in some cases, such as in law enforcement contexts, targeted scraping may be lawful when strictly necessary and proportionate. The LED sets out specific conditions for such use-cases, which are tightly regulated across the EU. An analysis of the interplay between the LED and the AI Act shows an alignment between the two instruments, creating mutually reinforcing layers of prohibition. 

    Some use-cases, such as the harvesting of facial images for training AI models that generate new images of fictitious persons, may lead to increasingly complex compliance scenarios 

    When analyzing the practices or use cases that fall outside the scope of the prohibition, we also found that specific AI-driven deepfakes have so far not been captured by Article 5 AI Act. Legislators appear to have recognized this as well: on 11 March 2026, the European Parliament was reported to have reached a political agreement on the AI Act Omnibus, which aims to introduce a new ban on non-consensual sexual deepfakes. It is worth noting that while this development would allow for further protection, the new ban would not cover all AI-driven deepfakes. 

    FPF Privacy Papers for Policymakers: Impactful Privacy and AI Scholarship for a Digital Future

    FPF recently concluded its 16th Annual Privacy Papers for Policymakers (PPPM) event series, hosting two dynamic virtual ceremonies on March 4 and March 11, 2026. This year’s program centered on the most pressing areas in privacy and AI governance, bringing together global awardees to discuss their research with leading discussants from industry, academia, and civil society.

    The awards highlighted important work that analyzes current and emerging privacy and AI issues and proposes achievable short-term solutions or analytical approaches that could lead to real-world policy outcomes. Seven winning papers, two honorable mentions, and one student submission were selected by a group of FPF staff members and advisors based on originality, applicability to policymaking, and overall quality of writing.

    Papers were presented in two separate virtual sessions. 

    The first session featured awardees:

    With discussants Ed Britan (Salesforce), Christa Laser (Cleveland State University), Gabriel Nicholas (Anthropic), and Jevan Hutson (Grindr). 

    The second session featured awardees:

    With discussants Janis Kestenbaum (Perkins Coie), Theodore Christakis (Université Grenoble Alpes), and Hilary Wandall (Dun & Bradstreet).

    Africa’s Data Protection Reforms: A Continental Perspective on the Drivers of Change in Legal Frameworks

    1. Introduction

    Within an evolving digital landscape, several African jurisdictions have proposed a variety of reforms, both to existing legal frameworks and through novel ones, that regulate the processing of personal data and the development and deployment of new technologies. Across the continent, there is a growing consensus among legislators on the need to create a regulatory environment that is responsive and adaptable to a changing technological landscape and a growing digital economy.

    This blog traces data protection legal and policy reforms across seven African countries, namely Nigeria, Kenya, Angola, Ghana, Mauritius, Botswana, and Seychelles, to identify their scope, rationale, and common and diverging themes. The blog also briefly looks at regional and sub-regional legal reforms to note the potential implications for other countries that might consider similar reforms and eventual harmonization. 

    While these developments are unfolding across Africa, they are occurring alongside broader global efforts to rethink data protection frameworks. Discussions around data protection policy reforms are intensifying in jurisdictions such as the European Union, which introduced a simplification package to reduce regulatory burdens and boost competitiveness; the UK, which finalized reforms to its data protection framework through the Data Use and Access Act (2025); and South Korea, which continues to explore legal reforms to its data protection law to facilitate the development of AI. Against this backdrop, data protection reforms across the African continent bring a different flavor, tailored to the continent’s own needs. Indeed, legislators across African jurisdictions agree that any reforms or amendments must first and foremost be reflective of local realities. 

    In the closing section, the blog considers the future of legal reforms on the continent by drawing from ongoing discussions and lessons learned in other key jurisdictions. In doing so, the following takeaways emerge: 

    Overall, most reforms are, for the time being, confined to national borders. However, legal reforms have also been proposed both continentally and at the sub-regional level, for example, through the Economic Community of West African States’ (ECOWAS) reform of its Supplementary Act on Personal Data Protection. While these regional reforms have not gained much traction compared to national efforts, they are nonetheless crucial, as they can continue to inform ongoing debates on legal reforms within their respective Member States.

    2. A new task for data protection law? New obligations for digital platforms and developer accountability

    Although the Nigeria Data Protection Act 2023 (NDPA) is a relatively new law, proposals to amend it have already emerged through two separate legislative initiatives. The first, SB.650: Nigeria Data Protection Act (Amendment) Bill, 2024, seeks to amend the NDPA by introducing requirements for social media companies to establish physical offices in the country. At present, no other substantive changes to the NDPA have been outlined, making this the central focus of the reform proposal. The Bill, which is in its second reading, notes that while major social media platforms have significant Nigerian user engagement, they have yet to establish a physical presence in Nigeria as they have done in other countries. 

    According to the sponsor of the Bill, Senator Ned Nwoko, the establishment of a company’s physical presence will contribute to the economy as well as ensure their compliance with the country’s legal framework. The Bill was referred to the Senate Committee on ICT & Cybersecurity and a report was expected within two months. 

    The second legislative proposal, HB.2436: Nigeria Data Protection (Amendment) Bill, 2025, focuses on strengthening accountability in the digital ecosystem by introducing obligations for application developers, regulating third-party data sharing, and expanding the enforcement powers of the Nigeria Data Protection Commission. Among other provisions, the bill proposes requirements for application developers to register with the Commission, maintain data processing registers, implement consent interfaces, and conduct annual data protection impact assessments, while also introducing stricter rules governing third-party data sharing and related enforcement measures.

    Updates regarding the progress of both Bills have been limited. Meanwhile, the decision to amend the NDPA to cover social media companies has been criticized by civil society groups on the grounds that requiring social media companies to establish physical offices in the country may extend beyond the initial objectives of the country’s data protection framework. Indeed, the NDPA is a principles-based data protection law that focuses on regulating the processing of data across all sectors, rather than regulating specific entities such as social media companies. 

    A brief look at the history of social media regulation in Nigeria shows that it is intricately connected with state regulation of the freedom of expression. While past attempts to regulate the use of social media platforms have largely been led by ad-hoc bans on the basis of national security concerns, the proposed Bill to amend the NDPA signals a new approach: one that aims to progressively embed social media oversight within broader data governance frameworks, starting with data protection law. In this case, Nigeria’s approach to amending its NDPA uniquely highlights how national-level priorities and new technological realities converge under the umbrella of data protection.

    3. Processing of sensitive personal data by third parties in Kenya 

    Calls for amendments to Kenya’s Data Protection Act of 2019 (KDPA) began informally, largely owing to implementation challenges and gaps observed by controllers and processors. They were further solidified by the Parliamentary Report on the inquiry into the activities and operations of WorldCoin in Kenya, completed in 2023. The inquiry, conducted by an ad hoc Parliamentary Committee, was set up to establish the legality of WorldCoin’s processing of sensitive personal data. The resulting Report included considerations on legal and regulatory gaps to provide safeguards for this type of data processing activity. While not legally binding, the Parliamentary Committee’s findings have nevertheless informed the push to amend the KDPA on grounds including:

    Negotiations on the amendment of the KDPA are ongoing, and public consultations are expected to happen soon. Early contributions have been made by organizations such as the Data Protection and Governance Society of Kenya, which has proposed amendments such as the creation of a data protection appeals tribunal to hear appeals from the Office of the Data Protection Commissioner (ODPC). This would reduce the burden of the numerous appeals currently brought before the High Court. The organization also suggests repealing Section 54 of the KDPA, which provides the Data Protection Commissioner with powers to exempt compliance with certain provisions of the Act, unless such exemptions are provided for under other regulations. This approach would provide more certainty on the conditions for exemption. Overall, Kenya’s approach to amending its data protection framework is driven by a growing interest in addressing specific procedural challenges related to enforcement. 

    4. Angola leads the way in amending its data protection law to address the need for regulating AI

    Unlike in Kenya and Nigeria, the discourse on data protection reform in Angola is driven by the need to regulate emerging technologies, including AI. As African countries continue to carve out policy and legislative proposals aimed at regulating the development and deployment of AI, mostly in the form of national AI strategies, some countries are considering more specific legislation. In this respect, South Africa has proposed standalone AI legislation under its National AI Framework, while countries such as Angola have opted to revise existing data protection laws to address privacy challenges posed by AI systems already in use. 

    Angola’s move to regulate AI began with the recognition of the privacy risks posed by AI. In March 2025, Angola’s data protection agency released a public consultation on the revision of its 2011 data protection law. Besides introducing numerous new sections, the draft revised law notably contains a section dedicated to AI. Its robust provisions on AI differentiate the law from other data protection laws in Africa, whose automated decision-making provisions mostly mimic Article 22 of the GDPR. Noteworthy aspects of the regulation of AI in the revised law include:

    What stands out about Angola’s approach to reforming its data protection law is the explicit specification of rules with regard to the use of AI for credit scoring. Article 23 provides nuance to the proposed legal reforms by identifying country-specific challenges introduced by the use of AI, and specifically the use of AI-enabled systems for credit scoring, thus moving away from the more general automated decision-making provisions seen continentally.

    The use of AI in credit scoring is among the earliest applications of AI on the continent and has generated considerable data protection concerns, leading to several landmark enforcement decisions in some countries and necessitating specific guidelines on the use of personal data by digital lenders. For example, Kenya’s body of enforcement decisions includes numerous such cases, some involving repeat offenders. The decision to specifically regulate the use of AI within the credit scoring industry points to the need to address subject-specific issues relating to the processing of personal data in Angola. Notably, Angola’s proposed reforms parallel the EU AI Act’s approach by specifically regulating AI-enabled credit scoring as a high-risk application, recognizing its widespread use and potential for harm. Like the EU AI Act’s Annex III(5)(b), which classifies credit scoring as high-risk, Angola moves beyond general provisions on automated decision-making to address country-specific risks to data subjects.

    5. Mauritius seeks to boost its growing business process outsourcing industry

    National economic considerations such as Mauritius’ vision of becoming a preferred destination for business process outsourcing (BPO) and knowledge-based services have been central to its data protection reforms. The recently released National ICT Blueprint views legal and regulatory reforms, including to the data protection framework, as enablers of Mauritius’ goals for economic growth. According to the Blueprint, Mauritius intends to align its national frameworks with the AU Data Policy Framework as well as create regulatory conditions for pursuing an EU adequacy decision. These ongoing reforms aim to position Mauritius as a leader for outsourced services.


    Such economic considerations have been a major factor behind the repeated amendments to Mauritius’ data protection laws to date. Its first data protection law, enacted in 2004, was heavily influenced by the EU Data Protection Directive of 1995. The 2004 law was amended twice to bring its text into closer alignment with the Directive, improving Mauritius’ chances of being recognized by the European Commission as an adequate country and thereby facilitating personal data transfers at a time when the country sought investments in its BPO sector, with the EU as the primary source of those investments. In 2017, the current data protection law of Mauritius was enacted, repealing the 2004 law while maintaining the initial aspiration of making Mauritius a leader in outsourced BPO services. Mauritius subsequently ratified the Council of Europe’s Convention 108 and, in 2020, Convention 108+.

    6. Botswana’s path to filling in practical implementation gaps

    Botswana enacted its new data protection law in 2024, repealing the 2018 law and introducing new provisions to address implementation gaps in its predecessor. The 2018 framework, which had been in transition since 2021, did not provide sufficient clarity on certain matters, including the institutional independence of the Information and Data Protection Commission, the scope of its enforcement powers, and the practical obligations of data controllers and processors. 

    For example, when compared to most data protection laws on the continent, Botswana’s 2018 data protection law did not provide modalities for responding to data subject rights, and its limited focus on data controllers, with processors treated merely as agents, created ambiguity around shared compliance responsibilities. It also lacked provisions on accountability, joint controllership, and clear rules governing relationships between controllers and processors, including the use of sub-processors. Similarly, there were no requirements for data protection impact assessments (DPIAs) or structured procedures for breach notification beyond informing the Commission, and sanctions were limited to fixed fines and criminal penalties rather than risk-based administrative measures.

    The 2024 Act responds to such uncertainty by clearly defining the Commission’s authority, strengthening accountability mechanisms, and introducing risk-based tools such as DPIAs. It distinguishes between controllers and processors as separate entities with direct statutory obligations, introduces concepts of joint controllership and data protection by design and default, and requires formalised contractual arrangements for processor relationships, including restrictions on the use of sub-processors. The Act further mandates breach notifications to both the Commission and affected data subjects, introduces proportionate administrative fines, and establishes structured compliance roles such as Data Protection Officers (DPOs). These reforms, alongside an expanded territorial scope and refined definitions of sensitive data, collectively close the significant regulatory and operational gaps left by the 2018 framework.

    7. Seychelles’ reforms reflect clearer provisions and expanded transfer mechanisms while retaining limited extraterritorial application

    Continuing the shift from theoretical legal frameworks to practical and clearer provisions, Seychelles repealed its 2002 Data Protection Act, which had never been implemented, and replaced it with the 2023 Data Protection Act. The overhaul of Seychelles’ data protection regime marked a move from a largely symbolic framework to one grounded in enforcement, accountability, and operational clarity. Unlike the earlier law, which relied on formal registration of “data users” and “computer bureaux” but imposed few operational duties, the 2023 Act abandons registration in favour of an accountability-based model requiring data controllers and processors to maintain internal records, demonstrate compliance, and cooperate with regulatory audits. Security obligations have also evolved from a general duty to prevent unauthorized disclosure to a detailed mandate for technical, organisational, and physical safeguards, including breach notification duties.

    Equally, the 2023 Act introduced explicit obligations for data processors, including acting only on a controller’s instructions, maintaining security measures, and being jointly liable for breaches, supported by mandatory written contracts between controllers and processors that define purpose, scope, and safeguards. The law also embeds governance mechanisms through the requirement for DPOs and for DPIAs for high-risk processing, neither of which existed in the 2002 text. 

    With regard to cross-border data transfers, the 2023 regime replaces the earlier “transfer prohibition notice” system with a more flexible approach permitting international data flows where adequate protection or recognised safeguards exist. Notably, the 2023 Act expressly recognises participation in frameworks such as the Global Cross-Border Privacy Rules (CBPR) System, signalling Seychelles’ intention to align its transfer mechanisms with interoperable international privacy standards and expanding the mechanisms available for transfers. 

    Finally, enforcement capacity has been strengthened with the 2023 Act empowering the Information Commission to conduct audits and inspections independently, issue enforcement notices, and impose administrative fines, enhancing oversight compared to the limited, warrant-based powers of the 2002 law. 

    While its territorial scope remains modest compared to broader extraterritorial models, these reforms collectively transform Seychelles’ data protection law into a more operational, risk-based, and globally interoperable framework.

    8. Ghana seeks to introduce a new Bill to strengthen enforcement and oversight, including broader data subject protections

    Ghana first enacted its data protection law in 2012, which also established the Data Protection Commission. However, implementation challenges soon emerged, including the absence of a clear framework for cross-border data transfers, limited protection for vulnerable groups such as children, and a scope of application narrower than that of new-generation data protection laws, as it did not extend to foreign entities offering goods or services in Ghana. These gaps created practical and regulatory difficulties. On 17 October 2025, the new Data Protection Bill, 2025, spearheaded by the Ministry of Communication, Digital Technology, and Innovations, was therefore introduced with the aim of addressing these shortcomings and modernizing the country’s data governance framework.

    Overall, the Bill aims to strengthen oversight by introducing clearer obligations, enhanced data subject rights, and a more robust regulatory structure. Particularly, it introduces key reforms by addressing emerging privacy challenges associated with new technologies, introducing data ownership rights, and refining exemptions for the processing of personal data.

    In contrast to Angola’s targeted approach to addressing privacy concerns in AI systems, Ghana seeks to adopt a broader stance by regulating all emerging technologies, including AI systems, insofar as they process personal data. For automated decision-making (ADM) systems, the Bill would require outcomes to be explainable, contestable, and subject to human oversight, obligations that were absent from the 2012 Act, which required only notification when decisions involved ADM. The Bill also aims to introduce explicit requirements for the use of privacy-enhancing technologies in ADM systems, a novel provision not contained in the earlier law.

    On data ownership, the Bill would introduce a framework that recognises personal data as the property of the data subject and establishes a fiduciary-style relationship between data subjects and controllers. Under this model, controllers and processors are deemed custodians of personal data with a duty of care, and no form of processing confers ownership rights, including on public authorities. If passed, Ghana would become one of the few jurisdictions globally to recognise the proprietary nature of personal data, with significant implications for secondary data use, AI development, and the application of rights such as the right to object to processing. The Bill was open for public consultation until 28 November 2025, and could be adopted as early as 2026.

    Regarding exemptions, the Bill aims to retain the broad exemption themes found in the 2012 Act, but significantly expand and refine them. While both instruments include exemptions for national security, the 2012 Act required a ministerial certificate to validate the exemption. The 2025 Bill removes this safeguard, a notable development given the increasing reliance on public-interest grounds to limit privacy protections across the continent.

    Crucially, the Bill would introduce a comprehensive regime for cross-border data transfers, which was absent from the 2012 Act. The new framework emphasizes data localization, unless such localization would impair business operations. Where transfers are necessary, the Bill would require data subject consent, approval from the Data Protection Authority, and compliance with additional conditions designed to safeguard personal data before it leaves Ghana.

    9. The patchwork challenge: emerging regional frameworks

    Even as countries unilaterally consider legal reforms, there are regional plans, led by the AU and the respective Regional Economic Communities (RECs), to amend or create new data protection frameworks for their Member States. Regional initiatives must navigate a complex landscape where many States already have distinct data protection regimes. At the continental level, the AU announced plans to revise the Malabo Convention. At the sub-regional level, ECOWAS is expected to revise the Supplementary Act on Data Protection, the East African Community (EAC) is developing its data governance framework, and the Southern African Development Community (SADC) has plans to revise its Model Law. Despite their minimal influence on national laws to date, legal reforms at the REC level could spur similar action by Member States, especially in the ECOWAS region, where the Supplementary Act on Personal Data Protection is legally binding on member states.

    As legal reforms continue, the bigger question of what will drive such reforms remains, especially considering that some African countries still maintain legal frameworks influenced by the now-repealed 1995 EU Data Protection Directive. 

    9.1. Development and deployment of AI in Africa

    Strongly tied to responsible data use is the development of local AI systems, as well as the general adoption of AI across the continent. Discussions of the former largely revolve around the lack of local datasets for training AI models, hence the emergence of targeted initiatives seeking to address this issue. The theory that effective data protection regimes can enable responsible local data collection and use has gained ground, as seen in continental data governance frameworks such as the AU Data Policy Framework.

    Additionally, the risks posed by the general adoption of AI have been highlighted on the continent as drivers of legal reforms in countries such as Angola, as explored above. Data protection frameworks have been put forward as useful instruments for ensuring the responsible development and deployment of AI, as seen in the text of numerous national AI strategies, some of which note, however, that ADM provisions alone may not be sufficient for addressing AI harms. For example, the AI Policy Framework of South Africa contemplates a standalone AI Act to complement its national data protection law.

    While there is growing regulatory momentum toward comprehensive AI-specific laws, there are currently no AI-specific laws on the continent that provide guidance on the development and deployment of high-risk AI systems. Nonetheless, some DPAs are grappling with the foundational question of what privacy risks are unfolding in the use of AI systems. DPA activities related to regulating AI have included Senegal’s CDP rejecting an application for the use of facial recognition systems in the workplace, requiring the controller to use less intrusive means of registering work attendance, and the Mauritian data protection authority’s Guide on Data Protection for Health Data and Artificial Intelligence. Such approaches signal that even though considerations of stand-alone AI regulation on the continent are in their nascent stages, DPAs are nevertheless addressing new AI technologies on the basis of national data protection law, whether through guidance or enforcement.

    10. Concluding reflections: The future of data protection legal reforms in Africa

    The EU, whose data protection legal framework has been relied on by many African countries, is currently considering amendments to its existing data protection framework through an Omnibus initiative. Amendments to laws that have largely informed legal frameworks across Africa could provide a moment of reflection for the “recipient” countries, some of which have already registered the challenges of implementing current data protection frameworks, especially for SMEs, and questioned the impact of the “Brussels effect” for their own national data protection laws.

    In addition to the shifts noted in the EU, legal reforms in Africa are also increasingly influenced by the growing recognition of data as a national asset and the subsequent need for autonomy on its protection and governance. There are already new sector-specific regulations that place emphasis on balancing data use and protection, as well as explicitly designating governments as custodians of such data. Implementation of these sector-specific laws has revealed gaps in foundational data protection frameworks, prompting legal reforms towards frameworks that not only safeguard rights but also enable responsible data access and re-use.

    As data protection reforms take shape across the continent, the question is not whether change will come but, rather, what form it will take. 

    The Chatbot Moment: Mapping the Emerging 2026 U.S. Chatbot Legislative Landscape

    Special thanks to Rafal Fryc, U.S. Legislation Intern, for his research and development of the resources referenced.

    If there is one area of AI policy that lawmakers seem particularly eager to regulate in 2026, it’s chatbots. As state legislative sessions ramp up across the country, policymakers at both the state and federal levels have introduced dozens of bills aimed at chatbots, from so-called “AI companions” to “mental health chatbots.” The Future of Privacy Forum (FPF) is currently tracking 98 chatbot-specific bills across 34 states, as well as three federal proposals.

    Yet despite the shared concern driving these proposals (often tied to safety risks, youth protections, and several high-profile incidents involving chatbots and self-harm) the bills themselves look very different from one another. Definitions of “chatbot” vary widely across legislation. The result is the early contours of a potential regulatory patchwork, where different tools may fall within the scope of different state laws and where compliance obligations, like disclosures or safety protocols, could vary broadly across jurisdictions. As states including Oregon and Washington prepare to imminently enact new chatbot legislation, it remains to be seen how closely 2026 frameworks will ultimately align.

    To help make sense of this rapidly evolving landscape, FPF developed two one-pager resources summarizing key trends in chatbot legislation. The first highlights some of the definitional patterns beginning to appear, identifying eleven legislative frameworks used to define chatbots. The second maps the six most common regulatory provisions appearing across chatbot bills.

    With these resources, we explore two central questions shaping the emerging chatbot policy debate: how lawmakers are defining chatbots and what regulatory approaches are beginning to emerge across states.

    Chatbot Legislation in 2025

    Chatbots began attracting legislative attention in 2025. Last year, five states enacted chatbot-specific legislation: California (SB 243), New York (S-3008C), New Hampshire (HB 143), Utah (HB 452), and Maine (LD 1727). Other laws enacted in 2025, such as Illinois (HB 1806) and Nevada (AB 406), did not define chatbots directly but regulate the use of AI systems, including chatbots, in the delivery of licensed mental or behavioral health services.1

    Even with this activity in 2025, the scale of legislative interest in 2026 represents a significant expansion. In both volume and policy focus, chatbot legislation is emerging as one of the most active areas of AI policymaking. Interest is also broadly bipartisan, with 53 percent of the chatbot bills tracked by FPF introduced by Democrats and 46 percent introduced by Republicans.

    Defining Chatbots: Why It’s Harder Than It Looks

    One of the central challenges lawmakers face is definitional: what exactly counts as a chatbot?

    In practice, chatbots appear in a wide range of contexts, from customer service assistants and tutoring tools to wellness apps, voice assistants, and AI companions designed for social interaction. Small shifts in how legislation defines these systems can dramatically impact which technologies fall inside or outside a bill’s scope.

    Three Terms Are Defined in Chatbot Legislation

    As detailed in the FPF one-pager resources, three primary terms are emerging to scope chatbot legislation in 2026: “chatbots,” “companion chatbots,” and “mental health chatbots.” Each term attempts to capture a different category and type of risk. For example, mental health chatbot bills typically focus on preventing AI systems from providing therapy without licensed professional oversight. Meanwhile, roughly half of the chatbot-related bills introduced this year focus specifically on companion chatbots, reflecting concern about systems designed to simulate interpersonal relationships with users, especially minors.

    At the same time, states are experimenting with a wide range of definitional approaches. Some definitions focus on technology, such as limiting scope to systems that use generative AI (NE LB 939). Others define chatbots based on capabilities or interaction behaviors, like whether the tool sustains dialogue about personal matters (MI SB 760, TN HB 1455). Still others define chatbots based on deployment context, such as whether a system is publicly accessible or marketed as a companion (U.S. S 3062, HI SB 2788). Legislators are not converging on a single definition but rather exploring multiple models simultaneously.

    Three Models for Companion Chatbot Definitions

    In the case of companion chatbots, three definitional approaches are beginning to emerge as particularly influential:

    Why Definitions Matter for Regulatory Scope

    These definitional differences matter because they determine who must comply with a law and who does not. For example, a definition that focuses on conversational capability may capture general-purpose assistants, tutoring tools, or wellness applications even when companionship is not their primary function.

    To narrow scope, many bills, similar to last year’s California SB 243 (enacted), include carveouts for tools such as customer service systems, research assistants, or workplace productivity tools. While these exclusions may reduce the likelihood that certain tools fall in scope, they also introduce interpretive questions. Chatbots often serve multiple purposes simultaneously: a chatbot might act as a customer support tool but also answer general informational questions or engage in broader dialogue. In these cases, it may be unclear when a system is “used for customer service” and when it becomes a more general conversational chatbot. Many proposals leave the issue unresolved, for example, by not specifying whether exclusions apply only when a system’s primary purpose falls within those categories.

    Themes Within Chatbot Legislation

    Chatbot bills vary widely not only in how they define chatbots, but also in the substantive regulatory requirements they propose. Still, several common policy themes are beginning to emerge. Across proposals introduced in 2026, six broad regulatory themes appear: transparency; age assurance and minors’ access controls; content safety and harm prevention; professional licensure and regulated services; data protection; and liability and enforcement.

    These provisions reflect a notable shift from the first generation of chatbot legislation. Early laws such as California SB 243 and New York S-3008C focused primarily on disclosure that users were interacting with AI and basic safety protocols, such as providing crisis hotline resources when users express suicidal ideation. (FPF previously analyzed SB 243 in an earlier blog.)

    In 2026, however, lawmakers appear to be treating chatbot legislation as a broader regulatory vehicle. Bills now frequently incorporate issues beyond disclosure and companion AI safety, including restrictions on engagement optimization, data governance provisions, and even regulatory sandbox programs (CT SB 86). In some cases, this expansion has prompted debate about whether chatbot bills may implicate speech concerns and raise potential First Amendment questions. For instance, many chatbot proposals would require recurring disclosures during conversations or mandate reporting about specific categories of user speech (e.g. statements of suicidal ideation), raising questions about compelled speech and editorial discretion. As detailed in the chatbot provisions chart, the six most common regulatory provisions appearing across proposed chatbot legislation include:

    1. Transparency: Nearly every chatbot bill includes a non-human disclosure requirement, mandating that operators inform users they are interacting with an AI system rather than a human. Most proposals require “clear and conspicuous” disclosure, though timing and format vary. Some require disclosure only at the start of an interaction or every three hours (PA SB 1090), while others require persistent reminders during conversations every hour or thirty minutes (SC HB 5138). A smaller subset goes further by requiring transparency reporting, such as public disclosures about safety protocols or incidents (WA HB 2225, UT HB 438).
    2. Age Assurance and Minors’ Access Controls: Youth safety has become a central focus of chatbot legislation in 2026. Several proposals require operators to determine whether a user is a minor and impose additional safeguards. Approaches vary widely: some bills require age verification (GA SB 540, SD SB 168), others restrict or prohibit minors’ access to certain content (IA SF 2417), and some require parental consent or monitoring tools (AZ HB 2311). Notably, none of the chatbot laws enacted in 2025 (and few bills advancing in 2026) establish robust, standalone age verification systems.
    3. Content Safety and Harm Prevention: Almost every chatbot bill advancing in 2026 incorporates harm detection and response protocols. Similar to California and New York’s laws, many require operators to provide crisis resources, such as suicide hotline referrals, when detecting indicators of self-harm. A growing number also address anthropomorphic or manipulative interactions, including restrictions on emotional deception or features designed to foster dependency, like rewarding prolonged interaction (HI HB 2502, OR SB 1546). 
    4. Professional Licensure and Regulated Services: Another category of provisions addresses the use of chatbots to deliver services traditionally regulated through professional licensure. Several laws prohibit AI systems from diagnosing, treating, or representing themselves as licensed professionals, particularly in mental or behavioral healthcare (VA HB 669, TN HB 1470). Others allow AI tools to assist licensed professionals but require human oversight or transparency of recommendations or treatment plans (VA HB 668, RI HB 7538).
    5. Data Protection: Chatbot legislation is also beginning to incorporate data governance requirements, particularly around conversational logs and sensitive user information. These proposals include restrictions on collecting, sharing, or selling chatbot input data, along with requirements for data minimization or deletion (MD SB 827). Some bills also restrict the use of minors’ data for AI training or advertising (UT HB 438).
    6. Liability and Enforcement: Most proposals grant enforcement authority to state attorneys general and establish civil penalties for violations. Some also introduce private rights of action (OR SB 1546), allowing individuals to bring lawsuits directly, as seen in California SB 243. A smaller number go further, establishing non-disclaimable liability for certain harms involving minors or creating criminal penalties for specific chatbot behaviors (TN SB 1493).

    Litigation is Shaping The Legislative Agenda

    One notable feature of the 2026 chatbot landscape is how closely these proposed provisions mirror themes emerging from recent chatbot litigation and enforcement, such as Commonwealth of Kentucky v. Character Technologies Inc. and Garcia v. Character Technologies Inc. et al.

    Across lawsuits and investigations,2 several recurring concerns appear:

    Many of these concerns are now directly reflected in legislative proposals, particularly those targeting engagement optimization and emotional manipulation (WA HB 2225, OR SB 1546, IA SF 2417, GA SB 540, among others). At the same time, most proposed safety interventions remain reactive rather than preventative. Many bills require chatbots to provide crisis resources once a user expresses distress, but none mandate features that would automatically terminate risky conversations or set session limits.

    Where Chatbot Legislation May Go Next

    As more states move from proposal to enactment in 2026, the coming months will provide an early signal of which legislative approaches for chatbot governance will ultimately prevail.

    Much of this experimentation is happening at the state level, where lawmakers are advancing a wide range of chatbot definitions and regulatory approaches. But the conversation is increasingly moving to the federal stage as well. Recent activity in the U.S. House to amend the KIDS Act—including the addition of the SAFE Bots Act establishing requirements for AI chatbots interacting with minors—demonstrates that chatbots are now firmly on the national policy agenda, even as the federal administration has expressed opposition to certain state regulatory efforts in this space.

    Still, the regulatory picture remains unsettled. Several proposals gaining traction this year introduce provisions that were largely absent from the first generation of chatbot laws, including restrictions on engagement optimization practices, parental control tools for minors’ chatbot interactions and data, and limits on the use of conversational data for advertising or training purposes.  As these bills move through legislatures, the next few months will help clarify which of these emerging approaches are most likely to shape the next phase of chatbot governance.

    1. For more information on these laws and other enacted AI legislation, see FPF’s blog: From Proposal to Passage: Enacted U.S. AI Laws 2023-2025
    2. There have been 13 cases filed to date, most brought by parents of minors on behalf of children who either committed or attempted self-harm following interactions with AI chatbots.

    Design elements of these resources were generated with the assistance of AI and reviewed by FPF.

    Red Lines under the EU AI Act: Unpacking the Prohibition of Individual Risk Assessment for the Prediction of Criminal Offences

    Blog 4 | Red Lines under the EU AI Act Series

    This blog is the fourth of a series that explores prohibited AI practices under the EU AI Act and their interplay with existing EU law. You can find the whole series here.

    The fourth blog in the “Red lines under the EU AI Act” series focuses on unpacking the prohibition on individual risk assessment and the prediction of criminal offences, as contained in Article 5(1)(d) AI Act and explored in the European Commission’s Guidelines on the topic. Our analysis led to three key takeaways:

    With this context in mind, this blog post begins with an overview of the logic and scope of the prohibition on individual risk assessment in the EU AI Act, and continues in Section 3 with an analysis of understandings of “risk” elaborated in the Commission’s Guidelines. Section 4 expands on the notion of “profiling”, including the prohibition of assessing a natural person’s personality traits and characteristics, and Section 5 outlines the exceptions to the Article 5(1)(d) prohibition. Section 6 explores cases in which this provision is applicable to private sector actors, and Section 7 notes concluding reflections and key takeaways. 

    2. The ban is limited in its scope, applying only to AI systems used to assess or predict criminal offences based solely on profiling or personality assessments

    Article 5(1)(d) AI Act establishes a crucial prohibition on AI systems that assess or predict the likelihood of natural persons committing criminal offences based solely on profiling or personality assessment. This prohibition focuses on risk assessments relating specifically and exclusively to the commission of criminal offences, reflecting the fundamental principle that individuals should be judged on their actual behaviour rather than predicted conduct, and reinforcing the principle of legal certainty in EU criminal law.

    Importantly, the prohibition does not apply when AI systems support a human assessment of a person’s involvement in criminal activity (offending or re-offending), where that assessment is already based on objective and verifiable facts directly linked to criminal activity. In such cases, the AI system serves as a supportive tool rather than the primary decision-maker. These systems are instead classified as high-risk AI systems (Annex III, point 6, letter (d) AI Act).

    The provision does not entirely outlaw crime prediction and risk assessment practices but, rather, imposes specific conditions under which the use of certain AI systems in specific contexts is prohibited. The Guidelines clarify that three cumulative conditions must all be met, creating a high threshold for the prohibition to apply: 

    1. The practice must involve the placing on the market, the putting into service for this specific purpose, or the use of an AI system.
    2. The AI system must be used to assess or predict the risk of a natural person committing a criminal offence.
    3. The risk assessment or the prediction must be based solely on either, or both, of the following:

    a. The profiling of a natural person,

    b. Assessing a natural person’s personality traits and characteristics.

    The prohibition applies to law enforcement authorities or any entity using such systems on their behalf, as well as to Union institutions, bodies, offices, or agencies that support law enforcement authorities. Both providers and deployers therefore have the responsibility not to place on the market, put into service or use AI systems that meet the above conditions. The rationale behind this prohibition is that natural persons should be judged on the basis of their actual behaviour rather than on (AI-)predicted behaviour. While the Guidelines do not directly refer to the principle of legal certainty when analyzing the rationale for this prohibition, it should play a role in the implementation of this prohibition, as it is a primary principle of the rule of law in the EU, alongside equality before the law, the prohibition of the arbitrary exercise of executive power, and effective judicial protection. 

    It is also worth highlighting that the Article 5(1)(d) prohibition applies to criminal offences only, with administrative offences falling outside of the scope of the prohibition. Under EU criminal law, the determination of the criminal nature of an offence most often depends on national law and, as such, may include offences that are not covered by Union law. Given possible differences at national level across the EU, the use of AI systems for the risk assessment and prediction of criminal offences might require further clarification, particularly with regard to which actions amount to “criminal offences” under national law. Indeed, the Guidelines highlight that, for offences that are not directly regulated under EU law, the national qualification of the offence is nevertheless subject to scrutiny by the CJEU on a case-by-case basis, since the concept of “criminal offence” has autonomous meaning within EU law and should be interpreted consistently across EU Member States. 

    3. Notions of “risk” in the AI Act’s prohibitions, while uncertain, are closely related to harm and to ensuring individuals are only assessed on the basis of actual (not predicted) behaviour

    According to the Commission’s Guidelines, risk assessments are understood broadly and can be conducted at any stage of law enforcement activities, such as during crime prevention, detection, investigation, prosecution, execution of criminal penalties, and during the process of an individual’s reintegration into society. Such risk assessments are often referred to as individual “crime prediction” or “crime forecasting” which, according to the Guidelines, refer to “advanced AI technologies and analytical methods applied to large amounts of often historical data… which, in combination with criminology theories, are used to forecast crime as a basis to inform police and law enforcement strategies and action to combat, control, and prevent crime.” 

    In practice, there are two major areas where law enforcement applies AI risk assessments: predictive policing and recidivism risk assessment. Predictive policing involves law enforcement using predictive analytics and other algorithmic techniques to identify patterns related to the occurrence of crime and unsafe situations, and to proactively prevent crime based on these insights. This approach has been adopted by several Member States. On the other hand, a recidivism risk assessment is used to predict the risk of individuals reoffending. 

    Crime prediction or crime forecasting AI systems identify patterns within historical data, associating indicators with the likelihood of a crime occurring, and then generate risk scores as predictive outputs. The Guidelines seem to expand on the notion of “risk” contained in Article 5(1)(d), noting the inherently “forward-looking” nature of risk assessments used for crime prediction or forecasting. 

    In this context, they note that using historical data on crimes committed to predict other persons’ future behaviour may perpetuate or reinforce biases, and undermine public trust in law enforcement and the justice system. Indeed, risk is by definition uncertain: it may or may not materialise into harm. Any decision based solely on a risk score has the potential to make a wrong assumption regarding the actual commission of a criminal offence. In a recent case, the Dutch Ministry of Justice and Security instructed the probation service in the Netherlands to either adjust or stop using the OxRec algorithm, which, following an investigation, was found to have misjudged the risk of recidivism in a quarter of cases. Used around 44,000 times per year, OxRec was found to rely on outdated data, to breach privacy legislation, and to pose a risk of discrimination.

    As Recital 42 of the AI Act explains, natural persons in the EU should always be assessed on the basis of their actual behaviour, and risk assessments carried out solely on the basis of profiling or an assessment of personality traits or characteristics should be prohibited. This aligns with the presumption of innocence until proven guilty under the law (Article 48 EU Charter of Fundamental Rights) and the principle of legal certainty as enshrined in EU law. Indeed, in their final section analyzing the interplay of this prohibition with other Union law, the Guidelines acknowledge the indirect link between the prohibition and Directive (EU) 2016/343 on the presumption of innocence. 

    4. The prohibition relies on the GDPR’s definition of ‘profiling’, and takes a broad understanding of ‘personality traits’ and ‘characteristics’

    The Guidelines clarify that the prohibition applies regardless of whether the AI system profiles or assesses the personality traits and characteristics of only one natural person or a group of natural persons simultaneously. In this context, group profiling can consist of, for example, an AI system assessing and predicting the risk of other persons committing similar offences, based on constructed or historic data about previously committed crimes by others.

    Similarly to the prohibition in Article 5(1)(c) AI Act, explored in Blog 3 of the “Red Lines” series, profiling is understood by reference to its definition in Article 4(4) GDPR. Further, the Guidelines highlight that the predictive policing prohibition is without prejudice to Article 11(3) of the Law Enforcement Directive (LED), which prohibits profiling on the basis of special categories of personal data which results in direct or indirect discrimination.  

    The risk assessments covered by the analyzed provision are only prohibited when they are based solely on the profiling of a person or the assessment of their personality traits and characteristics. This means that when there is a human assessment, which will normally be based on relevant objective and verifiable facts, and the AI assessment is used to support the human assessment, the prohibition does not apply. The Guidelines clarify that “personality traits” and “characteristics” are to be broadly understood, and that the examples contained in Recital 42 are not exhaustive. 

    However, according to the Guidelines, the use of the term “solely” leaves open the possibility of various other elements being taken into account in the risk assessment, beyond personality traits and characteristics, which will need to be assessed on a case-by-case basis. The Guidelines submit that in order to avoid circumvention of the prohibition and ensure its effectiveness, any such other elements will have to be real, substantial, and meaningful for them to be able to justify the conclusion that the prohibition does not apply. In this context, both providers and deployers of such systems will have to document their decision-making processes to be able to justify choosing a certain course of action over another, particularly in highly sensitive contexts such as crime prediction, in which the risks of producing legal effects can be imminent and significant. 

    5. Exception(s) to the prohibition: When a ‘predictive policing’ AI system is not prohibited, but may nonetheless be classified as ‘high-risk’ 

    The last phrase of Article 5(1)(d) AI Act clarifies that the prohibition does not apply to AI systems that are used to support the human assessment of the involvement of a person in a criminal activity. This exception applies only insofar as the human assessment is based on objective and verifiable facts directly linked to the criminal activity at hand. While neither the AI Act nor the Guidelines directly defines what may constitute “objective and verifiable facts”, the Guidelines provide some examples in which these conditions for the exception to the prohibition may be fulfilled. 

    For example, this is the case for an AI system used for the profiling and categorization of actual behaviour, such as “reasonably suspicious dangerous behaviour in a crowd that someone is preparing and likely to commit a crime, and there is a meaningful human assessment of the AI classification” (emphasis added). This latter requirement for ensuring that any AI system used in this context is only acting in support of human assessment echoes the GDPR’s right to obtain human intervention in automated decision-making contexts. 

    In the highly sensitive context of crime prediction, the requirement for the “human assessment” to be based on objective and verifiable facts linked to a specific criminal activity is an important precursor to the exercise of the right to an effective remedy (Article 47 EU Charter of Fundamental Rights). While the Guidelines do not expressly refer to the EU Charter, they refer to case law of the Court of Justice of the EU (CJEU) in their understanding and interpretation of the concept of “human assessment.” In the Ligue des droits humains judgement, published in June 2022, the CJEU noted that any human assessment “must rely on objective criteria … and to ensure the non-discriminatory nature of automated processing.” 

    Additionally, according to the Dutch DPA (AP), human intervention ensures that a decision is made carefully and prevents people from being (unintentionally) excluded or discriminated against by the outcome of an algorithm. Hence, human intervention must contribute meaningfully to the decision-making process, rather than serve only as a symbolic function. 

    It is worth noting that while the Guidelines are specific in their interpretation of the exception contained in Article 5(1)(d), they also mention that this express exclusion from the prohibition may not be the only one. However, the Guidelines do not further elaborate on what other exceptions may apply and in which contexts. It is likely that such exceptions may have to be assessed on a case-by-case basis and, in any case, be real, substantial, and meaningful. Nevertheless, what the Guidelines do clarify is that when the system falls within the scope of the exclusion from the prohibition, it will be classified as a high-risk AI system and be subject to specific requirements and safeguards, including with regard to human oversight as referred to in Articles 14 and 26 AI Act. 

    Finally, it is worth noting that AI systems used in the context of national security are excluded from the scope of the AI Act as referred to in Article 2(3) and further explained in Recital 24. This means that an AI system that falls under the ‘predictive policing’ prohibition may nevertheless be permitted exclusively for national security purposes. In this context, the Guidelines do not clarify the distinction between national security and law enforcement activities, which could be crucial for delineating the boundaries of the prohibition of individual risk assessment. 

    This is particularly relevant with regard to ‘dual-use systems’ – AI systems that can be used both for law enforcement purposes and for the prevention of national security threats. Recital 24 provides a clarification for such cases, stating that ‘AI systems placed on the market or put into service for an excluded purpose, namely military, defence or national security, and one or more non-excluded purposes, such as civilian purposes or law enforcement, fall within the scope of this Regulation and providers of those systems should ensure compliance with this Regulation.’ Hence, if an AI system is placed on the market or put into service for both national security and law enforcement purposes, it must nevertheless comply with the AI Act. 

    6. The prohibition can apply to private actors when they are entrusted by law to exercise public authority and public powers

    Notably, the ‘predictive policing’ prohibition does not apply exclusively to law enforcement authorities. The prohibition may be assumed to apply, in particular, when private actors are entrusted by law to exercise public authority and public powers for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties. Private actors may also be explicitly requested, on a case-by-case basis, to act on behalf of law enforcement authorities and carry out individual crime risk predictions. In those cases, the activities of those private actors could also fall within the scope of the Article 5(1)(d) prohibition.

    The prohibition may apply to private entities assessing or predicting the risk of a person committing a crime where this is objectively necessary for compliance with a legal obligation to which that private operator is subject (for example, a banking institution obliged by Union anti-money laundering legislation to screen and profile customers for money-laundering offences). 

    The Guidelines also outline what is explicitly excluded from this prohibition or out of its scope, namely: 

    While the Guidelines do not expressly address the issue, it is worth noting that, while certain exemptions may exist for the use of AI technologies in the law enforcement context, the mere fact that such uses occur in the context of determining criminal activity does not absolve a private entity from complying with legal obligations beyond the AI Act, including under the GDPR. In a case that led to a fine of more than €30 million imposed by the Dutch AP on Clearview AI in September 2024 under the GDPR, the company argued that it was acting in the interest of potential third-party users of its facial recognition database, in this case overwhelmed law enforcement authorities (paragraph 88 of the Dutch AP’s decision). The company also pointed to “responsible organizations charged with protecting society” (paragraph 88), which may include private actors, as justifying the interest of third parties in using its service. 

    In assessing whether the interests of third parties in combating crime, tracing victims, and carrying out other public duties qualify as legitimate interests, the Dutch AP notes that “such interests do not qualify as a legitimate interest of a third party” within the meaning of Article 6(1)(f) GDPR. The Dutch AP adds that, similarly, Dutch and European regulators cannot rely on legitimate interests under Article 6(1) GDPR for the purposes of exercising their duties of preserving and protecting society-wide interests (paragraph 92). 

    With this in mind, caution must be exercised in ensuring a reading of the AI Act’s prohibitions that is contextualized within the broader set of EU rules regulating technology development and deployment. In this sense, the Guidelines could have expanded on Section 5.4 (Interplay with other Union law) by making reference to at least one specific instance in which regulatory authorities, on the basis of already applicable and relevant laws, have interpreted technology uses that directly relate to the prohibition at hand. This may have helped reinforce legal certainty with regard to the applicability and scope of the prohibition by noting instances in which uses not expressly covered by the AI Act are otherwise covered by other EU laws. 

    7. Concluding Reflections and Key Takeaways

    As Article 5(1)(d) is limited in its scope, it does not entirely prohibit crime prediction or forecasting AI technologies

    As explored in the fourth blog post in the series, given that the Article 5(1)(d) prohibition is limited and targeted in its scope, it does not entirely prohibit crime prediction or forecasting AI technologies. Rather, it focuses on prohibiting (individual) risk assessments for the prediction of criminal offences based solely on profiling or personality assessments. The prohibition draws on the logic and legal foundations of general and fundamental rights law in the EU and, in particular, on Article 47 (right to an effective remedy and fair trial) and Article 48 (presumption of innocence and right of defence) of the EU Charter of Fundamental Rights. 

    When an AI system does not meet all of the conditions for the prohibition to apply, it will nevertheless be classified as a high-risk AI system

    Similar to the analysis in previous blog posts on the AI Act’s prohibitions, we find that when an AI system does not meet all of the conditions for the prohibition to apply, it will be classified as a high-risk AI system. This reflects the AI Act’s scaled approach to delineating and classifying risk and the close interplay between Articles 5 and 6 of the AI Act. 

    The Guidelines note that engaging in crime prediction activities may perpetuate or reinforce biases and erode public trust in law enforcement

    Finally, given the particularly sensitive context and nature of applying AI technologies in the area of crime prediction and forecasting, wherein risk assessments can lead to significant legal effects and consequences for individuals, the Guidelines acknowledge that such activities may perpetuate or reinforce biases and erode public trust in law enforcement. 

    Red Lines under the EU AI Act: Unpacking Social Scoring as a Prohibited AI Practice 

    Blog 3 | Red Lines under the EU AI Act Series 

    This blog is the third of a series that explores prohibited AI practices under the EU AI Act and their interplay with existing EU law. You can find the whole series here.

    The prohibition of AI-enabled social scoring is among the red lines established by the EU AI Act under its Article 5. It targets practices that assess or classify individuals or groups based on their social behavior or personal traits and that lead to unfair treatment, particularly when the information is drawn from multiple unrelated social contexts or when the resulting treatment is disproportionate to the behavior assessed. Notably, the prohibition has a broad scope of application across public and private contexts and is not limited to a specific sector or field. 

    The practice of “social scoring” is not uniquely regulated by the AI Act, as it engages well-established notions under the General Data Protection Regulation (GDPR): profiling, purpose limitation and automated decision-making. Therefore, those practices in the same realm that do not meet the high threshold of the social scoring prohibition under the AI Act must in any case comply with the detailed GDPR provisions relevant to them.

    As this analysis will show, the “social scoring” prohibition under the AI Act also engages notions of “personalization” in AI, which may be particularly relevant to the current state of AI development, as prior FPF analysis has shown. 

    This blog examines the definition and contextual scope of the prohibition of social scoring under Article 5(1)(c) AI Act (Section 1), including its conditions and detailed scenarios (Section 2), as well as the practices that fall outside the scope of the prohibition (Section 3). It then takes a look at how this provision interacts with other areas of EU law, in particular data protection, non-discrimination, and sector-specific frameworks (Section 4). The main takeaways (Section 5) highlight that:

    1. Social scoring as a “contextual” prohibited AI practice

    EU legislators made the policy choice to expressly ban practices of AI systems that enable social scoring because they considered them incompatible with fundamental rights and European Union values. This follows from Recital 31 of the AI Act, which states that such practices “may lead to discriminatory outcomes and the exclusion of certain groups” and can violate individuals’ dignity, privacy, and right to non-discrimination. The European Commission characterized AI systems that allow “social scoring” by governments or companies as a “clear threat to people’s fundamental rights”, noting that these are banned outright. The Guidelines the Commission issued on prohibited practices under the AI Act reiterate this framing and clarify the cumulative elements of the prohibition with practical illustrations.

    This rationale was backed by EU data protection authorities (DPAs). In June 2021, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) welcomed the intention to ban social scoring in their Joint Opinion 5/2021 on the AI Act proposal, warning that the use of AI for ‘social scoring’ … can lead to discrimination and is against EU fundamental values. Since then, the EDPB and national DPAs have continued to develop guidelines around profiling and automated decision-making (ADM), including guidance on legitimate interests (2024) and national tools such as the Dutch DPA’s (AP) guidance on “meaningful human intervention”, which can be relevant when assessing whether an AI-enabled score falls under the AI Act provisions or under Article 22 GDPR, which provides for the right not to be subject to solely automated decision-making. 

    According to the Commission Guidelines, the AI Act prohibits social scoring practices if the following cumulative conditions are met:

    1. The AI system is placed on the market, put into service, or used.
    2. The AI system is intended to evaluate or classify individuals or groups over a certain period of time based on their social behavior or inferred personal or personality characteristics.
    3. The social score results in (i) detrimental or unfavorable treatment in social contexts unrelated to those in which the data was originally collected and/or (ii) treatment that is unjustified or disproportionate to the social behavior or its gravity.

    All three conditions must be met simultaneously for Article 5(1)(c) to apply. The prohibition applies to both providers and deployers of AI systems. Of note, the prohibition has been applicable since 2 February 2025, while the supervisory and enforcement provisions related to it have been in force since 2 August 2025. However, no enforcement or regulatory action has been announced so far regarding the social scoring prohibition. 
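    To make the cumulative structure of this test easier to follow, the sketch below encodes the conditions as boolean flags in Python. It is purely a reading aid under our own simplifying assumptions, not a compliance tool: every flag and name is our own illustration, and each one stands in for the case-by-case legal assessment described in the remainder of this blog.

```python
from dataclasses import dataclass

@dataclass
class ScoringPractice:
    """Hypothetical description of an AI-enabled scoring practice."""
    placed_on_market_or_used: bool              # condition 1
    evaluates_over_period_of_time: bool         # condition 2 (temporal element)
    based_on_behavior_or_characteristics: bool  # condition 2 (data element)
    unfavorable_in_unrelated_context: bool      # condition 3, limb (i)
    unjustified_or_disproportionate: bool       # condition 3, limb (ii)

def falls_under_article_5_1_c(p: ScoringPractice) -> bool:
    # All three conditions must hold cumulatively; condition 3 is
    # satisfied by either of its two alternative limbs ("and/or").
    condition_1 = p.placed_on_market_or_used
    condition_2 = p.evaluates_over_period_of_time and p.based_on_behavior_or_characteristics
    condition_3 = p.unfavorable_in_unrelated_context or p.unjustified_or_disproportionate
    return condition_1 and condition_2 and condition_3
```

    On this reading, a practice that meets conditions 1 and 2 but neither limb of condition 3 falls outside the prohibition, although, as discussed below, it may still qualify as a high-risk AI system under Article 6 and Annex III AI Act.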

    The prohibition does not extend to all AI-enabled scoring practices. The Guidelines clarify that it targets only unacceptable practices that result in unfair treatment, social control or surveillance. At the same time, the Guidelines note that the prohibition is not meant to affect the “lawful practices that evaluate people for specific purposes that are legitimate and in compliance” with EU and national law, particularly where the legislation provides for the types of data that are relevant for the specific evaluation purposes and ensures that any unfavorable or detrimental treatment resulting from the practice is justified and proportionate. 

    In this context, the Guidelines clarify that sector-specific scoring systems, such as creditworthiness assessments, insurance risk scoring or fraud detection systems, are not prohibited in cases where they are carried out for clearly defined purposes and in accordance with EU or national legislation. 

    For example, the credit scoring systems used by financial institutions to assess a borrower’s creditworthiness based on relevant financial data do not fall under the provision of Article 5(1)(c) of the AI Act, provided that they do not result in unjustified or disproportionate treatment or rely on unrelated social context data. Instead, such systems are typically classified as high-risk AI systems under Article 6 and Annex III of the AI Act and must comply with the applicable requirements, including risk management, transparency, human oversight and data governance obligations. 

    2. Unpacking how the social scoring prohibition is triggered under the AI Act

    2.1 The AI system is intended to evaluate or classify individuals or groups over a certain period of time based on their social behavior or inferred personal or personality characteristics

    Article 5(1)(c) AI Act explicitly prohibits the placing on the market, putting into service or use of an AI system for the evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behavior or known, inferred or predicted personal or personality characteristics. The Guidelines clarify that this condition is fulfilled where an AI system assigns individuals or groups scores based on their social behavior or personal or personality characteristics. These scores could take different forms, such as numerical values, rankings or labels. The prohibition applies broadly across public and private sectors and concerns only natural persons or groups of natural persons, thus excluding legal entities. 

    The Guidelines differentiate between “evaluation” and “classification” as two distinct but related concepts within the scope of Article 5(1)(c) AI Act. “Evaluation” refers to an assessment or judgment about a person or group of persons, and “classification” has a broader scope and includes categorizing individuals or groups based on certain characteristics or behavioral patterns. “Classification” does not necessarily involve an explicit judgement or assessment but may still fall within the scope of the prohibition in cases where individuals are assigned scores, rankings or labels based on their behavior or personal or personality characteristics. 

    In addition, the Guidelines note that the term “evaluation” is closely linked to “profiling” as defined by EU data protection law, namely in Article 4(4) GDPR, and as referred to in Article 22 GDPR and Article 11 Law Enforcement Directive. Profiling refers to the processing of personal data to evaluate personal aspects of an individual, in particular to analyse or predict aspects such as their ability to perform tasks, interests, likely behavior, or future actions. 

    It is interesting to note that the Guidelines opted for the wording of the Article 29 Working Party Guidelines on Automated Decision-Making and Profiling, adopted in 2017, when referring to profiling. This reflects a broader, functional understanding of profiling that encompasses AI systems assigning behavioral scores or predictive assessments, and it clarifies that the scope of the prohibition is not narrowly limited to specific technical forms of automated processing but extends to AI-enabled evaluation and categorization of persons based on their characteristics or behavior.  

    The Guidelines note that although Article 5(1)(c) AI Act does not explicitly reference profiling under the GDPR as defined in Article 4(4), the act of profiling may still fall under the prohibition when AI systems process personal data to assess individuals.

    To illustrate the link between profiling and social scoring, the Guidelines refer to the SCHUFA I judgment (Case C-634/21), in which the CJEU examined a creditworthiness scoring system used in Germany. In that case, the score generated by the computer programme consisted of a probability value estimating an individual’s ability to meet payment commitments. The CJEU found that this score was based on certain personal characteristics and involved establishing a prognosis concerning the likelihood of future behavior, such as the repayment of a loan. The scoring process relied on assigning individuals to groups of persons with comparable characteristics and using the behavior of those groups to predict the individuals’ future conduct. 

    The CJEU held that this activity constitutes “profiling” within the meaning of Article 4(4) GDPR and that the automated establishment of that probability value can constitute ADM under Article 22(1) GDPR where a third party draws strongly on it to decide whether to enter into, implement, or terminate a contractual relationship. The Guidelines clarify that such scoring may also constitute an “evaluation” of individuals based on their personal characteristics within the meaning of Article 5(1)(c) AI Act and will be prohibited if carried out through AI systems, provided that all the other conditions are fulfilled. 

    Additionally, even if not referenced in the Guidelines, the CJEU judgment in CK v Dun & Bradstreet Austria (Case C-203/22) further clarified the legal framework governing profiling and scoring systems. In that case, the CJEU held that the right of access under Article 15(1)(h) GDPR requires controllers to provide data subjects with meaningful information about the logic involved in automated decision-making, including the procedures and principles used to generate a score.

    2.1.1 The prohibition requires evaluations to rely on data gathered over a period of time, ensuring that one-off assessments cannot circumvent it

    The prohibition in Article 5(1)(c) AI Act applies only where the evaluation or classification is based on data collected over “a certain period of time”. The Guidelines clarify that this temporal requirement indicates that the assessment should not be limited to a one-time rating or grading based solely on data from a single, isolated context. This condition must be assessed in light of all the circumstances of the case to avoid the circumvention of the scope of the prohibition. 

    To illustrate this, the Guidelines refer to a scenario involving a migration or asylum authority that deploys a partly automated surveillance system in refugee camps using cameras and motion sensors. If such a system analyzes behavioral data collected over a period of time and evaluates individuals to determine, for example, if they may attempt to abscond, the temporal condition is met and the system may fall within the scope of the prohibition, provided that all the other conditions are also met. 

    2.1.2 The provision prohibits AI evaluations based on social behavior or known, inferred, or predicted personal or personality characteristics

    The evaluation or classification of individuals based on AI-enabled processing in relation to either (i) their social behavior or (ii) their known, inferred or predicted personal and personality characteristics, or both, is prohibited under the AI Act provision. This data may be directly provided by the individuals, indirectly collected through surveillance, obtained from third parties, or inferred from other information. 

    The Guidelines explain that “social behavior” is a broad concept that encompasses a wide range of actions, habits, and interactions within society. This may include behavior in private and social contexts, such as participation in cultural or voluntary activities, as well as behavior in business or institutional contexts, including payment of debts, use of services and interactions with public authorities or private entities. This type of data is often collected from multiple sources and combined, sometimes involving extensive monitoring or tracking of individuals. 

    The prohibition also applies in cases where “personal or personality characteristics” may involve specific social behavioral aspects. The Guidelines note that personal characteristics may include a wide range of information relating to an individual, such as race, ethnicity, income, profession, other legal status, location, level of debt, and so on. Personality characteristics should, in principle, be interpreted as personal characteristics, but may also involve the creation of specific profiles of individuals as “personalities”. These characteristics may indicate a judgment, made by the individuals themselves, observed by others, or generated by AI systems.

    The Guidelines distinguish between three types of characteristics used in scoring systems: (i) “known characteristics” (verifiable inputs provided to the AI systems), (ii) “inferred characteristics” (conclusions drawn from existing data, usually by AI systems), and (iii) “predicted characteristics” (estimates based on patterns, often with some degree of inaccuracy). These distinctions are relevant because inferred and predicted characteristics may be less accurate and more opaque, raising concerns about fairness and transparency in AI-driven scoring systems.
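    As a purely illustrative aside, this three-way taxonomy can be captured in a small Python enumeration. The names and the “higher scrutiny” grouping below are our own shorthand for the Guidelines’ observations, not terms defined in the AI Act.

```python
from enum import Enum

class CharacteristicType(Enum):
    KNOWN = "known"          # verifiable inputs provided to the AI system
    INFERRED = "inferred"    # conclusions drawn from existing data, usually by the AI system
    PREDICTED = "predicted"  # estimates based on patterns, often with some degree of inaccuracy

# Inferred and predicted characteristics tend to be less accurate and more
# opaque than known ones, raising fairness and transparency concerns.
HIGHER_SCRUTINY = {CharacteristicType.INFERRED, CharacteristicType.PREDICTED}
```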

    2.2. The social score must lead to detrimental or unfavorable treatment in unrelated social contexts and/or treatment that is unjustified or disproportionate to the gravity of the social behavior 

    2.2.1. Causal link between the social score and the treatment

    For the prohibition to apply, the social scoring created by or with the assistance of an AI system must lead to detrimental or unfavorable treatment of the evaluated person or group of persons. There must be a causal link between the score and the resulting treatment, such that the treatment is the consequence of the score. This causal link may also exist where harmful consequences have not yet materialised, provided that the AI system is capable of producing, or intended to produce, such outcomes. 

    The Guidelines further note that the AI-enabled score does not need to be the sole cause of the detrimental or unfavorable treatment. The prohibition also covers situations where AI-enabled scoring is combined with human assessment, as long as the AI output plays a sufficiently significant role in the decision. The prohibition is still applicable if the score is obtained by one organization and produced by another (e.g., a public authority using a creditworthiness score from a private company).

    2.2.2. Detrimental or unfavorable treatment in unrelated social contexts and/or unjustified or disproportionate treatment

    For the prohibition to apply, the social score must result, or be capable of resulting, in detrimental or unfavorable treatment of the evaluated person or group. This treatment could occur either (i) in a social context different from the one in which the data was originally generated or collected, and/or (ii) in a manner that is unjustified or disproportionate to the social behavior or its gravity. 

    The Guidelines emphasize that a case-by-case analysis is required to determine if at least one of these conditions is fulfilled, as many AI-enabled scoring practices may fall outside the scope of the prohibition. 

    The Guidelines further clarify that “unfavorable treatment” refers to situations where, as a result of the scoring, a person or a group is treated less favorably compared to others, even where no specific harm or damage is demonstrated. By contrast, “detrimental treatment” requires that the individual or group suffer harm or disadvantage as a result of the scoring. Such treatment may also be considered discriminatory under EU non-discrimination law and may include the exclusion of certain persons or groups, although this is not a necessary condition for the prohibition to apply. As the Guidelines highlight, the treatment covered by Article 5(1)(c) may go beyond EU non-discrimination law. 

    The Guidelines further detail the scenarios described under Article 5(1)(c) AI Act: 

    a. Detrimental or unfavorable treatment in unrelated social contexts, such as when authorities use information like nationality, internet activity, or health status from one area to evaluate people in another

    The first scenario concerns situations where the detrimental or unfavorable treatment resulting from a social score occurs in social contexts unrelated to the one in which the data were originally generated or collected. The Guidelines clarify that this condition requires both that the data used for scoring originates from unrelated social contexts and that the resulting score leads to detrimental or unfavorable treatment in a different context. 

    This scenario typically involves AI systems processing data on individuals’ social behavior or personal characteristics that were generated or collected in contexts unrelated to the purpose of the scoring. The AI system then uses this data to score the individual(s) without an apparent connection to the purpose of the evaluation or classification, or in a way that leads to the generalised surveillance of individuals or groups. 

    As the Guidelines note, in most situations, these kinds of practices occur against the reasonable expectations of the individuals concerned and may also violate EU law and other applicable rules. To determine if this condition is met, a case-by-case assessment is required, evaluating the purpose of the evaluation and the context in which the data was collected and generated.

    There is a clear link between this scenario and the purpose limitation principle under Article 5(1)(b) GDPR, which provides that personal data must be collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes. When personal data collected in one context is used to generate social scores in an unrelated context, such a practice may violate this principle, particularly where the new use of the data was not foreseeable to the individual or where the new processing lacks a sufficient legal basis or connection to the original purpose. 

    The Guidelines provide several examples of prohibited practices under this first scenario, highlighting the following situations where:

    On another note, national developments illustrate the risks associated with AI-enabled social scoring and classification systems that rely on data from unrelated contexts. In the Netherlands, the Dutch Tax and Customs Administration used the “Fraude Signaleringsvoorziening” (FSV – Fraud Signaling Provision), a system that recorded and assessed fraud signals based on personal data collected from multiple sources, including internal systems, other public authorities, and third parties. 

    The Dutch AP found that the processing of personal data in the FSV was unlawful, as it had no legal basis and its purpose was not sufficiently defined. These findings were further explored in Case 202401528/1/A3, in which the Council of State held that the letter of the Ministry of Finance informing an individual that they were not eligible for financial compensation following their registration in the FSV was a decision subject to judicial review. It is relevant to note that this case was decided under administrative and data protection law and did not concern the application of the AI Act; nevertheless, it highlights the risks associated with systems that record and use personal data to evaluate and classify individuals in ways that may influence their treatment by public authorities. 

    b. Situations where the detrimental or unfavorable treatment is disproportionate to the actual behavior

    To this end, the Guidelines provide a list of examples of unjustified or disproportionate treatment that falls under both Article 5(1)(c)(i) and Article 5(1)(c)(ii):

    The Guidelines note that the prohibition may also cover cases where preferential treatment is granted to certain individuals or groups of people (e.g., in cases of employment support programs or (de-)prioritization for housing or resettlement).

    2.3. AI-enabled social scoring is prohibited regardless of whether the system or the score is provided or used by public or private persons

    Article 5(1)(c) prohibits AI-enabled social scoring practices regardless of whether the AI systems or the resulting score are provided or used by public or private persons. While scoring practices in the public sector may have particularly significant consequences due to individuals’ dependence on public services and the imbalance of power between the authorities and the individuals, similarly harmful consequences may also arise in the private sector. 

    For instance, as the Guidelines exemplify, an insurance company may use an AI system to analyze spending patterns and financial data obtained from a bank, which are unrelated to the assessment of eligibility for life insurance, in order to determine whether it should refuse the insurance or impose higher premiums on individuals or groups of individuals. In another example, a private credit agency may use an AI system to determine an individual’s creditworthiness for obtaining a housing loan based on unrelated personal characteristics. 

    In the case of verifications conducted by the competent market surveillance authorities, the responsibility lies with the providers and deployers of the AI systems, within their respective obligations, to demonstrate that their AI systems are legitimate, transparent and only process context-related data. They must also ensure that the systems operate as intended and that any resulting detrimental or unfavorable treatment is justified and proportionate to the social behavior assessed. 

    The Guidelines also note that compliance with the applicable requirements, including those concerning high-risk AI, may help ensure that the evaluation and classification practices remain lawful and do not constitute prohibited social scoring. 

    3. What falls outside the scope of the prohibition?

    The AI Act makes room for carefully tailored exceptions to the social scoring prohibition. It acknowledges several scenarios where assessing individuals via algorithms is lawful and even necessary, provided that such an assessment is conducted in a targeted and proportionate manner.

    First to note is that the prohibition applies only to the scoring of natural persons or groups of natural persons. Scoring of legal entities is, in principle, excluded in situations where the evaluation is not based on the social behavior or personal or personality characteristics of individuals. However, as the Guidelines highlight, in the situations where a score attributed to a legal entity aggregates the evaluation of natural persons and directly affects those individuals, the practice may fall within the scope of this prohibition. 

    Secondly, the Guidelines distinguish AI-enabled social scoring as a “probabilistic value” and prognosis from individual ratings provided by users (for example, the ratings of drivers or service providers on online platforms). These fall outside the prohibition unless they are combined with other data and analyzed by an AI system to evaluate or classify individuals in ways that fulfill the conditions of Article 5(1)(c).   

    Finally, Recital 31 AI Act and the Guidelines clarify that lawful evaluation practices conducted for a specific purpose in compliance with EU and national law remain outside the scope of the prohibition. Recital 31 reiterates that this prohibition “should not affect lawful evaluation practices of natural persons carried out for a specific purpose in accordance with Union or national law.” 

    The Guidelines provide additional examples of legitimate scoring practices that are out of scope, including: 

    4. Interplay with other EU laws, including consumer protection, data protection, non-discrimination, and sector-specific provisions such as credit, banking, and anti-money laundering

    Providers and deployers must assess whether other EU or national laws apply to any particular AI scoring system used in their activities, particularly if more specific legislation strictly defines the type of data considered relevant and necessary for specific evaluation purposes and ensures fair and justified treatment.

    AI-enabled social scoring in business-to-consumer relations may also require the application of EU consumer protection laws, such as Directive 2005/29/EC on unfair business-to-consumer commercial practices (the “UCPD”), if it misleads consumers or distorts their economic behavior. The practices which may amount to misleading consumers or distorting their behavior through AI uses or in AI contexts are further explored in Blog 2 of this series, accessible here. 

    Social scoring may also engage specific data protection rules as encoded in the GDPR, particularly those regarding the legal ground for processing, data protection principles, and other obligations, including the rules on solely automated individual decision-making. AI-enabled social scoring that results in discrimination based on protected characteristics (e.g., age, race, and religion) would also fall under EU non-discrimination law.

    Finally, certain sector-specific rules may be applicable. For example, the Consumer Credit Directive (CCD) prohibits the use of special categories of personal data in creditworthiness assessments, as well as the obtaining of data from social networks. Additionally, guidelines from the European Banking Authority provide further specifications on the information relevant for the purpose of creditworthiness assessments, which is relevant to determine whether a practice falls under the scope of Article 5(1)(c). AI systems used for anti-money laundering and counter-terrorism financing purposes must also comply with the applicable EU legislation.

    5. Closing reflections and key takeaways

    The AI Act prohibits specific practices of AI-enabled social scoring, not scoring in general

    Article 5(1)(c) of the AI Act does not prohibit scoring as such, but rather the placing on the market, putting into service or use of AI systems for social scoring practices that meet the conditions set out in the provision. The Guidelines repeatedly focus on the concrete use of the AI system and the effects of the resulting score, rather than on the existence of the scoring mechanisms alone. In particular, the prohibition applies only when all conditions are cumulatively met, including the evaluation or classification over a certain period of time and the link to detrimental or unfavorable treatment in unrelated social contexts and/or unjustified or disproportionate treatment. 

    Public and private uses are equally in scope, with shared accountability across the value chain

    The Guidelines clarify that unacceptable AI-enabled social scoring is prohibited regardless of whether the system or score is provided or used by public or private persons. They also place practical weight on accountability: in the case of verifications conducted by the competent market surveillance authorities, both providers and deployers, each within their responsibilities, must be able to demonstrate legitimacy and justification. This includes transparency about system functioning, data types and sources, the use of only data related to the social context in which the score is used, and the proportionality of any resulting detrimental or unfavorable treatment.

    Out of scope does not mean exempt from scrutiny

    Recital 31 of the AI Act and the Guidelines clarify that the prohibition is not intended to affect lawful evaluation practices carried out for a specific purpose in accordance with the existing legislation in place. Whether a scoring practice falls outside the scope of the prohibition depends on several criteria, as examined throughout this blog, including whether the evaluation serves a legitimate and clearly defined purpose, whether the data used is relevant and necessary for that purpose, whether the scoring occurs within the same social context in which the data was collected, and whether any resulting detrimental or unfavorable treatment is justified and proportionate to the behaviour assessed.

    As the Guidelines emphasise, this assessment is contextual. The same scoring practice may fall outside the scope of the prohibition in one situation, for example, where it is used for a lawful and proportionate creditworthiness assessment based on relevant financial data, but may fall within the scope of Article 5(1)(c) where it relies on unrelated data, produces disproportionate consequences, or is used in a different social context. This reinforces that compliance depends not only on the existence of scoring systems, but on how they are designed, the types of data they process, and the purposes for which they are used.