Red Lines under the EU AI Act: Unpacking the Prohibition of Individual Risk Assessment for the Prediction of Criminal Offences

Blog 4 | Red Lines under the EU AI Act Series

This blog is the fourth of a series that explores prohibited AI practices under the EU AI Act and their interplay with existing EU law. You can find the whole series here.

The fourth blog in the “Red lines under the EU AI Act” series focuses on unpacking the prohibition on individual risk assessment and the prediction of criminal offences, as contained in Article 5(1)(d) AI Act and explored in the European Commission’s Guidelines on the topic. Our analysis led to three key takeaways:

  1. As Article 5(1)(d) is limited in its scope, it does not entirely prohibit crime prediction or forecasting AI technologies.
  2. When an AI system does not meet all of the conditions for the prohibition to apply, it will nevertheless be classified as a high-risk AI system.
  3. The Guidelines note that engaging in crime prediction activities may perpetuate or reinforce biases and erode public trust in law enforcement.

With this context in mind, this blog post begins with an overview of the logic and scope of the prohibition on individual risk assessment in the EU AI Act, and continues in Section 3 with an analysis of understandings of “risk” elaborated in the Commission’s Guidelines. Section 4 expands on the notion of “profiling”, including the prohibition of assessing a natural person’s personality traits and characteristics, and Section 5 outlines the exceptions to the Article 5(1)(d) prohibition. Section 6 explores cases in which this provision is applicable to private sector actors, and Section 7 notes concluding reflections and key takeaways. 

2. The ban is limited in its scope, applying only to AI systems used to assess or predict criminal offences based solely on profiling or personality assessments

Article 5(1)(d) AI Act establishes a crucial prohibition on AI systems that assess or predict the likelihood of natural persons committing criminal offences based solely on profiling or personality assessments. This prohibition focuses on risk assessments relating specifically and exclusively to committing criminal offences, reflecting the fundamental principle that individuals should be judged on their actual behaviour rather than on predicted conduct, and thereby reinforcing the principle of legal certainty in EU criminal law.

Importantly, the prohibition does not apply when AI systems support human assessment regarding a person’s involvement in a criminal activity (offending or re-offending), such as when the assessment is already based on objective and verifiable facts directly linked to criminal activity. In such cases, the AI system serves as a supportive tool rather than the primary decision-maker. These systems are instead classified as high-risk AI systems (Annex III, point 6, letter (d) AI Act).

The provision does not entirely outlaw crime prediction and risk assessment practices; rather, it prohibits the use of certain AI systems in specific contexts under defined conditions. The Guidelines clarify that three cumulative conditions must all be met, creating a high threshold for the prohibition to apply: 

  1. The practice must involve placing an AI system on the market, putting it into service, or using it for the specific purpose of assessing or predicting the likelihood of natural persons committing criminal offences.
  2. The AI system must assess or predict the risk of a natural person committing a criminal offence.
  3. The risk assessment or the prediction must be based solely on either, or both, of the following:
     a. the profiling of a natural person;
     b. assessing a natural person’s personality traits and characteristics.

The prohibition applies to law enforcement authorities or any entity using such systems on their behalf, as well as to Union institutions, bodies, offices, or agencies that support law enforcement authorities. Both providers and deployers therefore have the responsibility not to place on the market, put into service or use AI systems that meet the above conditions. The rationale behind this prohibition is that natural persons should be judged on the basis of their actual behaviour rather than on (AI-)predicted behaviour. While the Guidelines do not directly refer to the principle of legal certainty when analyzing the rationale for this prohibition, it should play a role in the implementation of this prohibition, as it is a primary principle of the rule of law in the EU, alongside equality before the law, the prohibition of the arbitrary exercise of executive power, and effective judicial protection. 

It is also worth highlighting that the Article 5(1)(d) prohibition applies to criminal offences only, with administrative offences falling outside of the scope of the prohibition. Under EU criminal law, the determination of the criminal nature of an offence most often depends on national law and, as such, may include offences that are not covered by Union law. Given possible differences at national level across the EU, the use of AI systems for the risk assessment and prediction of criminal offences might require further clarification, particularly with regard to which actions amount to “criminal offences” under national law. Indeed, the Guidelines highlight that, for offences that are not directly regulated under EU law, the national qualification of the offence is nevertheless subject to scrutiny by the CJEU on a case-by-case basis, since the concept of “criminal offence” has autonomous meaning within EU law and should be interpreted consistently across EU Member States. 

3. Notions of “risk” in the AI Act’s prohibitions, while uncertain, are closely related to harm and ensuring individuals are only assessed on the basis of actual (not predicted) behaviour

According to the Commission’s Guidelines, risk assessments are understood broadly and can be conducted at any stage of law enforcement activities, such as during crime prevention, detection, investigation, prosecution, execution of criminal penalties, and during the process of an individual’s reintegration into society. Such risk assessments are often referred to as individual “crime prediction” or “crime forecasting”, which, according to the Guidelines, refer to “advanced AI technologies and analytical methods applied to large amounts of often historical data… which, in combination with criminology theories, are used to forecast crime as a basis to inform police and law enforcement strategies and action to combat, control, and prevent crime.” 

In practice, there are two major areas where law enforcement applies AI risk assessments: predictive policing and recidivism risk assessment. Predictive policing involves law enforcement using predictive analytics and other algorithmic techniques to identify patterns related to the occurrence of crime and unsafe situations, and to proactively prevent crime based on these insights. This approach has been adopted by several Member States. On the other hand, a recidivism risk assessment is used to predict the risk of individuals reoffending. 

Crime prediction or crime forecasting AI systems identify patterns within historical data, associating indicators with the likelihood of a crime occurring, and then generate risk scores as predictive outputs. The Guidelines seem to expand on the notion of “risk” contained in Article 5(1)(d), noting the inherently “forward-looking” nature of risk assessments used for crime prediction or forecasting. 
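The mechanism described above can be made concrete with a deliberately simplified sketch. The feature names, weights, and logistic form below are entirely invented for illustration and do not reflect any real deployed system; the point is only to show how a score resting solely on profile-derived indicators, with no objective and verifiable facts about a concrete act, produces the kind of output Article 5(1)(d) targets.

```python
import math

# Hypothetical, illustrative weights over profile-only features.
# None of these values come from any real crime-prediction system.
WEIGHTS = {"age_under_25": 0.8, "high_crime_area": 1.1, "prior_police_contacts": 0.6}
BIAS = -2.0

def risk_score(profile: dict) -> float:
    """Map profile features to a 0-1 'likelihood of offending' score."""
    z = BIAS + sum(WEIGHTS[k] * profile.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))  # logistic squashing to a probability-like score

# Nothing about the person's actual conduct enters the calculation:
# the score is driven entirely by who the person is, not what they did.
score = risk_score({"age_under_25": 1, "high_crime_area": 1, "prior_police_contacts": 2})
print(round(score, 3))  # → 0.75
```

A score like this is precisely what the prohibition addresses when it is the sole basis for an assessment; the same output used only to support a human assessment grounded in verifiable facts would instead fall under the high-risk regime.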

In this context, they note that using historical data on crimes committed to predict other persons’ future behaviour may perpetuate or reinforce biases, and undermine public trust in law enforcement and the justice system. Indeed, risk is by definition uncertain: it may or may not materialise into harm. Any decision based solely on a risk score has the potential to make a wrong assumption regarding the actual commission of a criminal offence. In a recent case, the Dutch Ministry of Justice and Security instructed the probation service in the Netherlands to either adjust or stop using the OxRec algorithm, which, following an investigation, was found to have misjudged the risk of recidivism in a quarter of cases. Used around 44,000 times per year, OxRec was found to rely on outdated data, to breach privacy legislation, and to pose a risk of discrimination.

As Recital 42 of the AI Act explains, natural persons in the EU should always be assessed on the basis of their actual behaviour, and risk assessments carried out solely on the basis of profiling or an assessment of personality traits or characteristics should be prohibited. This aligns with the presumption of innocence until proven guilty under the law (Article 48 EU Charter of Fundamental Rights) and the principle of legal certainty as enshrined in EU law. Indeed, in their final section analyzing the interplay of this prohibition with other Union law, the Guidelines acknowledge the indirect link between the prohibition and Directive (EU) 2016/343 on the presumption of innocence. 

4. The prohibition relies on the GDPR’s definition of ‘profiling’, and takes a broad understanding of ‘personality traits’ and ‘characteristics’

The Guidelines clarify that the prohibition applies regardless of whether the AI system profiles or assesses the personality traits and characteristics of only one natural person or a group of natural persons simultaneously. In this context, group profiling can consist of, for example, an AI system assessing and predicting the risk of other persons committing similar offences, based on constructed or historical data about crimes previously committed by others.

Similarly to the prohibition in Article 5(1)(c) AI Act, explored in Blog 3 of the “Red Lines” series, profiling is understood by reference to its definition in Article 4(4) GDPR. Further, the Guidelines highlight that the predictive policing prohibition is without prejudice to Article 11(3) of the Law Enforcement Directive (LED), which prohibits profiling on the basis of special categories of personal data which results in direct or indirect discrimination.  

The risk assessments covered by the analyzed provision are only prohibited when they are based solely on the profiling of a person or the assessment of their personality traits and characteristics. This means that when there is a human assessment, which will normally be based on relevant objective and verifiable facts, and the AI assessment is used to support the human assessment, the prohibition does not apply. The Guidelines clarify that “personality traits” and “characteristics” are to be broadly understood, and that the examples contained in Recital 42 are not exhaustive. 

However, according to the Guidelines, the use of the term “solely” leaves open the possibility of various other elements being taken into account in the risk assessment, beyond personality traits and characteristics, which will need to be assessed on a case-by-case basis. The Guidelines submit that in order to avoid circumvention of the prohibition and ensure its effectiveness, any such other elements will have to be real, substantial, and meaningful for them to be able to justify the conclusion that the prohibition does not apply. In this context, both providers and deployers of such systems will have to document their decision-making processes to be able to justify choosing a certain course of action over another, particularly in highly sensitive contexts such as crime prediction, in which the risks of producing legal effects can be imminent and significant. 

5. Exception(s) to the prohibition: When a ‘predictive policing’ AI system is not prohibited, but may nonetheless be classified as ‘high-risk’ 

The last phrase of Article 5(1)(d) AI Act clarifies that the prohibition does not apply to AI systems that are used to support the human assessment of the involvement of a person in a criminal activity. This exception applies only insofar as the human assessment is based on objective and verifiable facts directly linked to the criminal activity at hand. While neither the AI Act nor the Guidelines directly defines what may constitute “objective and verifiable facts”, the Guidelines provide some examples in which these conditions for the exception to the prohibition may be fulfilled. 

For example, this is the case for an AI system used for the profiling and categorization of actual behaviour, such as “reasonably suspicious dangerous behaviour in a crowd that someone is preparing and likely to commit a crime, and there is a meaningful human assessment of the AI classification” (emphasis added). This latter requirement for ensuring that any AI system used in this context is only acting in support of human assessment echoes the GDPR’s right to obtain human intervention in automated decision-making contexts. 

In the highly sensitive context of crime prediction, the requirement for the “human assessment” to be based on objective and verifiable facts linked to a specific criminal activity is an important precursor to the exercise of the right to an effective remedy (Article 47 EU Charter of Fundamental Rights). While the Guidelines do not expressly refer to the EU Charter, they refer to case law of the Court of Justice of the EU (CJEU) in their understanding and interpretation of the concept of “human assessment.” In the Ligue des droits humains judgement, published in June 2022, the CJEU noted that any human assessment “must rely on objective criteria … and to ensure the non-discriminatory nature of automated processing.” 

Additionally, according to the Dutch DPA (AP), human intervention ensures that a decision is made carefully and prevents people from being (unintentionally) excluded or discriminated against by the outcome of an algorithm. Hence, human intervention must contribute meaningfully to the decision-making process, rather than serve a merely symbolic function. 

It is worth noting that while the Guidelines are specific in their interpretation of the exception contained in Article 5(1)(d), they also mention that this express exclusion from the prohibition may not be the only one. However, the Guidelines do not further elaborate on what other exceptions may apply and in which contexts. It is likely that such exceptions may have to be assessed on a case-by-case basis and, in any case, be real, substantial, and meaningful. Nevertheless, what the Guidelines do clarify is that when the system falls within the scope of the exclusion from the prohibition, it will be classified as a high-risk AI system and be subject to specific requirements and safeguards, including with regard to human oversight as referred to in Articles 14 and 26 AI Act. 

Finally, it is worth noting that AI systems used in the context of national security are excluded from the scope of the AI Act as referred to in Article 2(3) and further explained in Recital 24. This means that an AI system that falls under the ‘predictive policing’ prohibition may nevertheless be permitted exclusively for national security purposes. In this context, the Guidelines do not clarify the distinction between national security and law enforcement activities, which could be crucial for delineating the boundaries of the prohibition of individual risk assessment. 

This is particularly relevant with regard to ‘dual-use systems’ – AI systems that can be used both for law enforcement purposes and for the prevention of national security threats. Recital 24 provides a clarification for such cases, stating that ‘AI systems placed on the market or put into service for an excluded purpose, namely military, defence or national security, and one or more non-excluded purposes, such as civilian purposes or law enforcement, fall within the scope of this Regulation and providers of those systems should ensure compliance with this Regulation.’ Hence, if an AI system is placed on the market or put into service for both national security and law enforcement purposes, it must nevertheless comply with the AI Act. 

6. The prohibition can apply to private actors when they are entrusted by law to exercise public authority and public powers

Notably, the ‘predictive policing’ prohibition does not apply exclusively to law enforcement authorities. The prohibition may be assumed to apply, in particular, when private actors are entrusted by law to exercise public authority and public powers for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties. Private actors may also be explicitly requested, on a case-by-case basis, to act on behalf of law enforcement authorities and carry out individual crime risk predictions. In those cases, the activities of those private actors could also fall within the scope of the Article 5(1)(d) prohibition.

The prohibition may apply to private entities assessing or predicting the risk of a person committing a crime where this is objectively necessary for compliance with a legal obligation to which that private operator is subject (for example, a banking institution obliged by Union anti-money laundering legislation to screen and profile customers for money-laundering offences). 

The Guidelines also outline practices that are explicitly excluded from this prohibition or that otherwise fall outside its scope. 

While the Guidelines do not expressly address the issue, it is worth noting that, while certain exemptions may exist for the use of AI technologies in the law enforcement context, the mere fact that such uses occur in the context of determining criminal activity does not absolve a private entity from complying with legal obligations beyond the AI Act, including under the GDPR. In a case that led to a fine of more than €30 million imposed by the Dutch AP on Clearview AI in September 2024 under the GDPR, the company argued that it was acting in the interest of potential third-party users of its facial recognition database, in this case overwhelmed law enforcement authorities (paragraph 88 of the Dutch AP’s decision). The company also identified “responsible organizations charged with protecting society” (paragraph 88), which may include private actors, as justifying the interest of third parties in using its service. 

In assessing whether the interests of third parties in combating crime, tracing victims, and other public duties qualify as legitimate interests, the Dutch AP notes that “such interests do not qualify as a legitimate interest of a third party” within the meaning of Article 6(1)(f) GDPR. The Dutch AP adds that, similarly, Dutch and European regulators cannot rely on legitimate interests under Article 6(1) GDPR for the purposes of exercising their duties of preserving and protecting society-wide interests (paragraph 92). 

With this in mind, caution must be exercised in ensuring a reading of the AI Act’s prohibitions that is contextualized within the broader set of EU rules regulating technology development and deployment. In this sense, the Guidelines could have expanded on Section 5.4 (Interplay with other Union law) by making reference to at least one specific instance in which regulatory authorities, on the basis of already applicable and relevant laws, have interpreted technology uses that directly relate to the prohibition at hand. This may have helped reinforce legal certainty with regard to the applicability and scope of the prohibition by noting instances in which uses not expressly covered by the AI Act are otherwise covered by other EU laws. 

7. Concluding Reflections and Key Takeaways

As Article 5(1)(d) is limited in its scope, it does not entirely prohibit crime prediction or forecasting AI technologies

As explored in the fourth blog post in the series, given that the Article 5(1)(d) prohibition is limited and targeted in its scope, it does not entirely prohibit crime prediction or forecasting AI technologies. Rather, it focuses on prohibiting (individual) risk assessments for the prediction of criminal offences based solely on profiling or personality assessments. The prohibition draws on the logic and legal foundations of general and fundamental rights law in the EU and, in particular, on Article 47 (right to an effective remedy and fair trial) and Article 48 (presumption of innocence and right of defence) of the EU Charter of Fundamental Rights. 

When an AI system does not meet all of the conditions for the prohibition to apply, it will nevertheless be classified as a high-risk AI system

Similar to the analysis in previous blog posts on the AI Act’s prohibitions, we find that when an AI system does not meet all of the conditions for the prohibition to apply, it will be classified as a high-risk AI system. This reflects the AI Act’s scaled approach to delineating and classifying risk, and the close interplay between Articles 5 and 6 of the AI Act. 

The Guidelines note that engaging in crime prediction activities may perpetuate or reinforce biases and erode public trust in law enforcement

Finally, given the particularly sensitive context and nature of applying AI technologies in the area of crime prediction and forecasting, wherein risk assessments can lead to significant legal effects and consequences for individuals, the Guidelines acknowledge that such activities may perpetuate or reinforce biases and erode public trust in law enforcement. 

Red Lines under the EU AI Act: Unpacking Social Scoring as a Prohibited AI Practice 

Blog 3 | Red Lines under the EU AI Act Series 

This blog is the third of a series that explores prohibited AI practices under the EU AI Act and their interplay with existing EU law. You can find the whole series here.

The prohibition of AI-enabled social scoring is among the red lines established by the EU AI Act under Article 5. It targets practices that assess or classify individuals or groups based on their social behavior or personal traits and that lead to unfair treatment, particularly when the information is drawn from multiple unrelated social contexts or when the resulting treatment is disproportionate to the behavior assessed. Notably, the prohibition has a broad scope of application across public and private contexts and is not limited to a specific sector or field. 

The practice of “social scoring” is not uniquely regulated by the AI Act, as it engages well-established notions under the General Data Protection Regulation (GDPR): profiling, purpose limitation and automated decision-making. Therefore, those practices in the same realm that do not meet the high threshold of the social scoring prohibition under the AI Act must in any case comply with the detailed GDPR provisions relevant to them.

As this analysis will show, the “social scoring” prohibition under the AI Act also engages notions of “personalization” in AI, which may be particularly relevant to the current state of AI development, as prior FPF analysis has shown. 

This blog examines the definition and contextual scope of the prohibition of social scoring under Article 5(1)(c) AI Act (Section 1), including its conditions and detailed scenarios (Section 2), as well as the practices that fall outside the scope of the prohibition (Section 3). It then takes a look at how this provision interacts with other areas of EU law, in particular data protection, non-discrimination, and sector-specific frameworks (Section 4), before closing with the main takeaways (Section 5).

1. Social scoring as a “contextual” prohibited AI practice

EU legislators made the policy choice to expressly ban practices of AI systems that enable social scoring because they considered them incompatible with fundamental rights and European Union values. This results from Recital 31 of the AI Act, which states that such practices “may lead to discriminatory outcomes and the exclusion of certain groups” and can violate individuals’ dignity, privacy, and right to non-discrimination. The European Commission characterized AI systems that allow “social scoring” by governments or companies as a “clear threat to people’s fundamental rights”, noting that these are banned outright. The Guidelines the Commission issued on prohibited practices under the AI Act reiterate this framing and clarify the cumulative elements for the prohibition with practical illustrations.

This rationale was backed by EU data protection authorities (DPAs). In June 2021, the European Data Protection Board (EDPB) and European Data Protection Supervisor (EDPS) welcomed the intention to ban social scoring in their Joint Opinion 5/2021 on the AI Act proposal, warning that the use of AI for social scoring can lead to discrimination and is against EU fundamental values. Since then, the EDPB and national DPAs have continued to develop guidelines around profiling and automated decision-making (ADM), including guidance on legitimate interests (2024) and national tools such as the Dutch DPA’s (AP) guidance on “meaningful human intervention”, which could be relevant when assessing whether an AI-enabled score could fall under the AI Act provisions or Article 22 GDPR, which provides for the right not to be subject to solely automated decision-making. 

According to the Commission Guidelines, the AI Act prohibits social scoring practices if the following cumulative conditions are met:

  1. The AI system is placed on the market, put into service, or used.
  2. The AI system is intended to evaluate or classify individuals or groups over a certain period of time based on their social behavior or inferred personal or personality characteristics.
  3. The social score results in (i) unfavorable treatment in social contexts unrelated to where the data was originally collected and/or (ii) treatment that is unjustified or disproportionate to the social behavior or its gravity.

All three conditions must be met simultaneously for Article 5(1)(c) to apply. The prohibition applies to both providers and deployers of AI systems. Of note, the prohibition has been applicable since 2 February 2025, while the supervisory and enforcement provisions related to it have been in force since 2 August 2025. However, no enforcement or regulatory action has been announced so far regarding the social scoring prohibition. 

The prohibition does not extend to all AI-enabled scoring practices. The Guidelines clarify that it targets only unacceptable practices that result in unfair treatment, social control or surveillance. At the same time, the Guidelines note that the prohibition is not meant to affect the “lawful practices that evaluate people for specific purposes that are legitimate and in compliance” with the EU and national law, particularly in the cases where the legislation provides for the types of data that are relevant for the specific evaluation purposes and ensures that any unfavorable or detrimental treatment that results from the practice is justified and proportionate. 

In this context, the Guidelines clarify that sector-specific scoring systems, such as creditworthiness assessments, insurance risk scoring or fraud detection systems, are not prohibited in cases where they are carried out for clearly defined purposes and in accordance with EU or national legislation. 

For example, the credit scoring systems used by financial institutions to assess a borrower’s creditworthiness based on relevant financial data do not fall under the provision of Article 5(1)(c) of the AI Act, provided that they do not result in unjustified or disproportionate treatment or rely on unrelated social context data. Instead, such systems are typically classified as high-risk AI systems under Article 6 and Annex III of the AI Act and must comply with the applicable requirements, including risk management, transparency, human oversight and data governance obligations. 

2. Unpacking how the social scoring prohibition is triggered under the AI Act

2.1 The AI system is intended to evaluate or classify individuals or groups over a certain period of time based on their social behavior or inferred personal or personality characteristics

Article 5(1)(c) AI Act explicitly prohibits the placing on the market, putting into service or use of an AI system for the evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behavior or known, inferred or predicted personal or personality characteristics. The Guidelines clarify that this condition is fulfilled where an AI system assigns individuals or groups scores based on their social behavior or personal or personality characteristics. These scores could take different forms, such as numerical values, rankings or labels. This prohibition applies broadly across public and private sectors and concerns only natural persons or groups of natural persons, thus excluding legal entities. 

The Guidelines differentiate between “evaluation” and “classification” as two distinct but related concepts within the scope of Article 5(1)(c) AI Act. “Evaluation” refers to an assessment or judgment about a person or group of persons, and “classification” has a broader scope and includes categorizing individuals or groups based on certain characteristics or behavioral patterns. “Classification” does not necessarily involve an explicit judgement or assessment but may still fall within the scope of the prohibition in cases where individuals are assigned scores, rankings or labels based on their behavior or personal or personality characteristics. 

In addition, the Guidelines note that the term “evaluation” is closely linked to “profiling” as defined by EU data protection law, namely in Article 4(4) GDPR, and as referred to in Article 22 GDPR and Article 11 Law Enforcement Directive. Profiling refers to the processing of personal data to evaluate personal aspects of an individual, in particular to analyse or predict aspects such as their ability to perform tasks, their interests, or their likely future behavior and actions. 

It is interesting to note that, when referring to profiling, the Guidelines opted for the wording of the Article 29 Working Party Guidelines on Automated Decision-Making and Profiling, adopted in 2017. This reflects a broader, functional understanding of profiling that encompasses AI systems assigning behavioral scores or predictive assessments, and clarifies that the scope of the prohibition is not narrowly limited to specific technical forms of automated processing but extends to AI-enabled evaluation and categorization of persons based on their characteristics or behavior.  

The Guidelines note that although Article 5(1)(c) AI Act does not explicitly reference profiling under the GDPR as defined in Article 4(4), the act of profiling may still fall under the prohibition when AI systems process personal data to assess individuals.

To illustrate the link between profiling and social scoring, the Guidelines refer to the SCHUFA I judgment (Case C-634/21), in which the CJEU examined a creditworthiness scoring system used in Germany. In that case, the score generated by the computer programme consisted of a probability value estimating an individual’s ability to meet payment commitments. The CJEU found that this score was based on certain personal characteristics and involved establishing a prognosis concerning the likelihood of future behavior, such as the repayment of a loan. The scoring process relied on assigning individuals to groups of persons with comparable characteristics and using the behavior of those groups to predict the individuals’ future conduct. 

The CJEU held that this activity constitutes “profiling” within the meaning of Article 4(4) GDPR, and that the automated establishment of that probability value can constitute ADM under Article 22(1) GDPR where a third party draws strongly on it to decide whether to enter into, implement, or terminate a contractual relationship. The Guidelines clarify that such scoring may also constitute an “evaluation” of individuals based on their personal characteristics within the meaning of Article 5(1)(c) AI Act and will be prohibited if carried out through AI systems, provided that all the other conditions are fulfilled. 
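The group-based mechanism the CJEU examined, assigning an individual to a group of persons with comparable characteristics and using that group’s past behavior as the individual’s prognosis, can be sketched in a few lines. This is a purely illustrative toy, not SCHUFA’s actual method: the characteristics, records, and figures below are invented.

```python
# Illustrative sketch of group-based scoring (all data invented).
# Each record: (characteristics, outcome), where True means the person
# met their payment commitments.
history = [
    (("employed", "homeowner"), True),
    (("employed", "homeowner"), True),
    (("employed", "renter"), True),
    (("employed", "renter"), False),
    (("unemployed", "renter"), False),
]

def group_score(person_traits: tuple) -> float:
    """Predict an individual's repayment probability from the past
    behaviour of the group sharing the same characteristics."""
    outcomes = [ok for traits, ok in history if traits == person_traits]
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

# The individual's own conduct never enters the calculation: the prognosis
# is derived wholly from the behaviour of similar others, which is the
# mechanism the CJEU qualified as profiling under Article 4(4) GDPR.
print(group_score(("employed", "renter")))  # → 0.5
```

The sketch makes the legal point visible: the score is an evaluation of a person based on personal characteristics, not on anything that person has actually done.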

Additionally, even if not referenced in the Guidelines, the CJEU judgment in CK v Dun & Bradstreet Austria (Case C-203/22) further clarified the legal framework governing profiling and scoring systems. In that case, the CJEU held that the right of access under Article 15(1)(h) GDPR requires controllers to provide data subjects with meaningful information about the logic involved in automated decision-making, including the procedures and principles used to generate a score.

2.1.1 The prohibition requires evaluations to rely on data gathered over a period of time, ensuring that one-off assessments cannot circumvent it.

The prohibition in Article 5(1)(c) AI Act applies only where the evaluation or classification is based on data collected over “a certain period of time”. The Guidelines clarify that this temporal requirement indicates that the assessment should not be limited to a one-time rating or grading based solely on data from a single, isolated context. This condition must be assessed in light of all the circumstances of the case to avoid the circumvention of the scope of the prohibition. 

To illustrate this, the Guidelines refer to a scenario involving a migration or asylum authority that deploys a partly automated surveillance system in refugee camps using cameras and motion sensors. If such a system analyzes behavioral data collected over a period of time and evaluates individuals to determine, for example, whether they may attempt to abscond, the temporal condition is met and the system may fall within the scope of the prohibition, provided that all the other conditions are also met. 

2.1.2 The provision prohibits AI evaluations based on social behavior or known, inferred, or predicted personal or personality characteristics

The evaluation or classification of individuals based on AI-enabled processing in relation to either (i) their social behavior or (ii) their known, inferred or predicted personal and personality characteristics, or both, is prohibited under the AI Act provision. This data may be provided directly by the individuals, collected indirectly through surveillance, obtained from third parties, or inferred from other information. 

The Guidelines explain that “social behavior” is a broad concept that encompasses a wide range of actions, habits, and interactions within society. This may include behavior in private and social contexts, such as participation in cultural or voluntary activities, as well as behavior in business or institutional contexts, including payment of debts, use of services and interactions with public authorities or private entities. This type of data is often collected from multiple sources and combined, sometimes involving extensive monitoring or tracking of individuals. 

The prohibition also applies in cases where “personal or personality characteristics” involve specific social behavioral aspects. The Guidelines note that personal characteristics may include a wide range of information relating to an individual, such as race, ethnicity, income, profession, other legal status, location, level of debt, and so on. Personality characteristics should, in principle, be interpreted as personal characteristics, but may also involve the creation of specific profiles of individuals as “personalities”. These characteristics may reflect a judgment made by the individuals themselves, observed by others, or generated by AI systems.

The Guidelines distinguish between three types of characteristics used in scoring systems: (i) “known characteristics” (verifiable inputs provided to the AI systems), (ii) “inferred characteristics” (conclusions drawn from existing data, usually by AI systems), and (iii) “predicted characteristics” (estimates based on patterns, often with some degree of inaccuracy). These distinctions are relevant because inferred and predicted characteristics may be less accurate and more opaque, raising concerns about fairness and transparency in AI-driven scoring systems.

2.2. The social score must lead to detrimental or unfavorable treatment in unrelated social contexts and/or treatment that is unjustified or disproportionate to the gravity of the social behavior

2.2.1. Causal link between the social score and the treatment

For the prohibition to apply, the social scoring created by or with the assistance of an AI system must lead to detrimental or unfavorable treatment of the evaluated person or group of persons. There must be a causal link between the score and the resulting treatment, such that the treatment is the consequence of the score. This causal link may also exist where harmful consequences have not yet materialized, provided that the AI system is capable of producing, or intended to produce, such outcomes. 

The Guidelines further note that the AI-enabled score does not need to be the sole cause of the detrimental or unfavorable treatment. The prohibition also covers situations where AI-enabled scoring is combined with human assessment, as long as the AI output plays a sufficiently significant role in the decision. The prohibition still applies where the score is produced by one organization and used by another (e.g., a public authority using a creditworthiness score from a private company).

2.2.2. Detrimental or unfavorable treatment in unrelated social contexts and/or unjustified or disproportionate treatment

For the prohibition to apply, the social score must result, or be capable of resulting, in detrimental or unfavorable treatment of the evaluated person or group. This treatment may occur (i) in a social context different from the one in which the data was originally generated or collected, and/or (ii) in a manner that is unjustified or disproportionate to the social behavior or its gravity. 

The Guidelines emphasize that a case-by-case analysis is required to determine if at least one of these conditions is fulfilled, as many AI-enabled scoring practices may fall outside the scope of the prohibition. 

The Guidelines further clarify that “unfavorable treatment” refers to situations where, as a result of the scoring, a person or a group is treated less favorably compared to others, even where no specific harm or damage is demonstrated. By contrast, “detrimental treatment” requires that the individual or group suffer harm or disadvantage as a result of the scoring. Such treatment may also be considered discriminatory under EU non-discrimination law and may include the exclusion of certain persons or groups, although discrimination is not a necessary condition for the prohibition to apply. As the Guidelines highlight, the treatment covered by Article 5(1)(c) may extend beyond EU non-discrimination law. 

The Guidelines further detail the scenarios described under Article 5(1)(c) AI Act: 

a. Detrimental or unfavorable treatment in unrelated social contexts, such as when authorities use information like nationality, internet activity, or health status from one area to evaluate people in another

The first scenario concerns situations in which the detrimental or unfavorable treatment resulting from a social score occurs in social contexts unrelated to the one in which the data were originally generated or collected. The Guidelines clarify that this condition requires both that the data used for scoring originates from unrelated social contexts and that the resulting score leads to detrimental or unfavorable treatment in a different context. 

This scenario typically involves AI systems that process data on individuals’ social behavior or personal characteristics generated or collected in contexts unrelated to the purpose of the scoring, and that use this data to score the individual(s) without an apparent connection to the purpose of the evaluation or classification, or in a way that leads to the generalized surveillance of individuals or groups. 

As the Guidelines note, in most situations, these kinds of practices occur against the reasonable expectations of the individuals concerned and may also violate EU law and other applicable rules. To determine if this condition is met, a case-by-case assessment is required, evaluating the purpose of the evaluation and the context in which the data was collected and generated.

There is a clear link between this scenario and the purpose limitation principle under Article 5(1)(b) GDPR, which provides that personal data must be collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes. When personal data collected in one context is used to generate social scores in an unrelated context, such a practice may violate this principle, particularly where the new use of the data was not foreseeable to the individual or where the new processing lacks a sufficient legal basis or connection to the original purpose. 

The Guidelines provide several examples of prohibited practices under this first scenario, highlighting the following situations where:

On another note, national developments illustrate the risks associated with AI-enabled social scoring and classification systems that rely on data from unrelated contexts. In the Netherlands, the Dutch Tax and Customs Administration used the “Fraude Signaleringsvoorziening” (FSV – Fraud Signaling Provision), a system that recorded and assessed fraud signals based on personal data collected from multiple sources, including internal systems, other public authorities, and third parties. 

The Dutch AP found that the processing of personal data in the FSV was unlawful: the processing had no legal basis and its purpose was not sufficiently defined. These findings were further explored in Case 202401528/1/A3, in which the Council of State held that the letter of the Ministry of Finance informing an individual that they were not eligible for financial compensation following their registration in the FSV was a decision subject to judicial review. It is relevant to note that this case was decided under administrative and data protection law and did not concern the application of the AI Act; nevertheless, it highlights the risks associated with systems that record and use personal data to evaluate and classify individuals in ways that may influence their treatment by public authorities. 

b. Situations where the detrimental or unfavorable treatment is disproportionate to the actual behavior

To this end, the Guidelines provide a list of examples of unjustified or disproportionate treatment that fall under both Article 5(1)(c)(i) and Article 5(1)(c)(ii):

The Guidelines note that the prohibition may also cover cases where preferential treatment is granted to certain individuals or groups of people (e.g., in employment support programs, or the (de-)prioritization of individuals for housing or resettlement).

2.3. AI-enabled social scoring is prohibited regardless of whether the system or the score are provided or used by public or private persons

Article 5(1)(c) prohibits AI-enabled social scoring practices regardless of whether the AI systems or the resulting score are provided or used by public or private persons. While scoring practices in the public sector may have particularly significant consequences due to the individuals’ dependence on public services and the imbalance of power between the authorities and the individuals, similarly harmful consequences may also happen in the private sector. 

For instance, as the Guidelines exemplify, an insurance company may use an AI system to analyze spending patterns and financial data obtained from a bank, which are unrelated to the assessment of eligibility for life insurance, in order to determine whether it should refuse insurance or impose higher premiums on individuals or groups of individuals. In another example, a private credit agency may use an AI system to determine an individual’s creditworthiness for obtaining a housing loan based on unrelated personal characteristics. 

Where verifications are conducted by the competent market surveillance authorities, the responsibility lies with the providers and deployers of the AI systems, each within their respective obligations, to demonstrate that their AI systems are legitimate, transparent, and process only context-related data. They must also ensure that the systems operate as intended and that any resulting detrimental or unfavorable treatment is justified and proportionate to the social behavior assessed. 

The Guidelines also note that compliance with the applicable requirements, including those concerning high-risk AI, may help ensure that the evaluation and classification practices remain lawful and do not constitute prohibited social scoring. 

3. What falls outside the scope of the prohibition?

The AI Act makes room for carefully tailored exceptions to the social scoring prohibition. It acknowledges several scenarios where assessing individuals via algorithms is lawful and even necessary, provided that such an assessment is conducted in a targeted and proportionate manner.

First, the prohibition applies only to the scoring of natural persons or groups of natural persons. The scoring of legal entities is, in principle, excluded where the evaluation is not based on the social behavior or personal or personality characteristics of individuals. However, as the Guidelines highlight, where a score attributed to a legal entity aggregates the evaluation of natural persons and directly affects those individuals, the practice may fall within the scope of this prohibition. 

Secondly, the Guidelines distinguish AI-enabled social scoring, as a “probabilistic value” and prognosis, from individual ratings provided by users (for example, the ratings of drivers or service providers on online platforms). The latter fall outside the prohibition unless they are combined with other data and analyzed by an AI system to evaluate or classify individuals in a way that fulfills the conditions of Article 5(1)(c).

Finally, Recital 31 AI Act and the Guidelines clarify that lawful evaluation practices conducted for a specific purpose in compliance with EU and national law remain outside the scope of the prohibition. Recital 31 reiterates that this prohibition “should not affect lawful evaluation practices of natural persons carried out for a specific purpose in accordance with Union or national law.” 

The Guidelines provide additional examples of legitimate scoring practices that are out of scope, including: 

4. Interplay with other EU laws, including consumer protection, data protection, non-discrimination, and sector-specific provisions such as credit, banking, and anti-money laundering

Providers and deployers must assess whether other EU or national laws apply to any particular AI scoring system used in their activities, particularly if more specific legislation strictly defines the type of data considered relevant and necessary for specific evaluation purposes and ensures fair and justified treatment.

AI-enabled social scoring in business-to-consumer relations may also require the application of EU consumer protection laws, such as Directive 2005/29/EC on unfair business-to-consumer commercial practices (the “UCPD”), if it misleads consumers or distorts their economic behavior. The practices which may amount to misleading consumers or distorting their behavior through AI uses or in AI contexts are further explored in Blog 2 of this series, accessible here.

Social scoring may also engage specific data protection rules as encoded in the GDPR, particularly those regarding the legal ground for processing, data protection principles, and other obligations, including the rules on solely automated individual decision-making. AI-enabled social scoring that results in discrimination based on protected characteristics (e.g., age, race, and religion) would also fall under EU non-discrimination law.

Finally, certain sector-specific rules may be applicable. For example, the Consumer Credit Directive (CCD) prohibits the use of special categories of personal data in creditworthiness assessments, as well as the obtaining of data from social networks. Additionally, guidelines from the European Banking Authority provide further specifications on the information relevant for creditworthiness assessments, which help determine whether a practice falls under the scope of Article 5(1)(c). AI systems used for anti-money laundering and counter-terrorism financing purposes must also comply with the applicable EU legislation.

5. Closing reflections and key takeaways

The AI Act prohibits specific practices of AI-enabled social scoring, not scoring in general

Article 5(1)(c) of the AI Act does not prohibit scoring as such, but rather the placing on the market, putting into service or use of AI systems for social scoring practices that meet the conditions set out in the provision. The Guidelines repeatedly focus on the concrete use of the AI system and the effects of the resulting score, rather than on the existence of the scoring mechanisms alone. In particular, the prohibition applies only when all conditions are cumulatively met, including the evaluation or classification over a certain period of time and the link to detrimental or unfavorable treatment in unrelated social contexts and/or treatment that is unjustified or disproportionate. 

Public and private uses are equally in scope, with shared accountability across the value chain

The Guidelines clarify that unacceptable AI-enabled social scoring is prohibited regardless of whether the system or score is provided or used by public or private persons. They also place practical weight on accountability: where verifications are conducted by the competent market surveillance authorities, both providers and deployers, each within their responsibilities, must be able to demonstrate legitimacy and justification. This includes transparency about system functioning, data types and sources, the use of only data related to the social context in which the score is used, and the proportionality of any resulting detrimental or unfavorable treatment.

Out of scope does not mean exempt from scrutiny.

Recital 31 of the AI Act and the Guidelines clarify that the prohibition is not intended to affect lawful evaluation practices carried out for a specific purpose in accordance with existing legislation. Whether a scoring practice falls outside the scope of the prohibition depends on several criteria, as examined throughout this blog, including whether the evaluation serves a legitimate and clearly defined purpose, whether the data used is relevant and necessary for that purpose, whether the scoring occurs within the same social context in which the data was collected, and whether any resulting detrimental or unfavorable treatment is justified and proportionate to the behavior assessed.

As the Guidelines emphasise, this assessment is contextual. The same scoring practice may fall outside the scope of the prohibition in one situation, for example, where it is used for a lawful and proportionate creditworthiness assessment based on relevant financial data, but may fall within the scope of Article 5(1)(c) where it relies on unrelated data, produces disproportionate consequences, or is used in a different social context. This reinforces that compliance depends not only on the existence of scoring systems, but on how they are designed, the types of data they process, and the purposes for which they are used.

Red Lines under the EU AI Act: Understanding Manipulative Techniques and the Exploitation of Vulnerabilities

Blog 2 | Red Lines under the EU AI Act Series  

This blog is the second of a series that explores prohibited AI practices under the EU AI Act and their interplay with existing EU law. You can read the first episode here and find the whole series here.

Harmful manipulation and deception through AI systems and exploiting certain human vulnerabilities are the first on the list of prohibited practices under Article 5 of the EU AI Act. It is apparent that the underlying goal of these provisions is to ensure that individuals maintain their ability to make autonomous decisions. This is especially important when considering that one of the goals of the AI Act is “to promote the uptake of human-centric and trustworthy AI”, while ensuring respect for safety, health and fundamental rights (see Recital 1, AI Act).

These first two prohibited practices listed in Article 5(1) specifically concern AI systems that could undermine individual autonomy and well-being through:

It is notable, though, that manipulative and deceptive practices based on the processing of personal data, and those that specifically occur through online platforms, are already strictly regulated by the EU’s General Data Protection Regulation (GDPR) and Digital Services Act (DSA). Specifically, the GDPR intervenes through obligations like ensuring fairness (Article 6(1)(a)) and data protection by design (Article 25) for all processing of personal data, regardless of whether that processing occurs through AI, while the DSA prohibits providers of online platforms from designing, organising or operating their online interfaces in a way that deceives or manipulates their users (Article 25). While the relationship between the DSA obligations and those in the GDPR related to manipulative design is clear, with the DSA applying only where the GDPR does not, their relationship with the AI Act prohibitions on manipulative techniques and exploiting vulnerabilities requires further guidelines and clarification. 

The Guidelines published by the European Commission to support compliance with Article 5 AI Act highlight that the two prohibitions aim to protect individuals from being reduced to “mere tools for achieving certain ends”, and to protect those who are most vulnerable or susceptible to manipulation and exploitation. Significantly, the Guidelines analyze these two prohibitions together, underscoring the nexus between them. In this sense, according to the Guidelines, they are both designed to support and protect the right to human dignity, as enshrined in the EU Charter of Fundamental Rights.

This second blog in the “Red Lines” series provides an analysis of the scope and content of the Article 5(1)(a) prohibition in Section 2, focusing on the definitions of subliminal, manipulative, and deceptive techniques. Section 3 goes on to explore the notion of vulnerability contained in the Article 5(1)(b) prohibition and in the Guidelines, while Section 4 notes the possible interplay between the two prohibitions. Section 5 takes a broader view by highlighting the interplay between the prohibitions and other EU laws, including the GDPR and the DSA, before the conclusions in Section 6 note the following key takeaways:  

2. Understanding harmful manipulation and deception as a prohibited practice under the AI Act

Article 5(1)(a) AI Act targets those cases in which AI practices subtly manipulate human action without the individual noticing. The final text of the AI Act for this provision underwent several changes from the European Commission’s initial proposal, broadening its scope and clarifying some elements.

Following amendments submitted by the European Parliament, the final text added manipulative and deceptive techniques to the initial “subliminal techniques” and broadened the scope of the ban to cover harmful effects not only on individuals but also on groups, in order to prevent discriminatory effects. Another modification of the initial proposal clarified that the prohibition is not limited to cases where the systems are intended to modify behavior; it also covers cases where the modification of behavior that leads to significant harm is a mere “effect”, even when it was not the intended objective of the AI practice in question.

2.1. Defining subliminal, purposefully manipulative or deceptive techniques

The Guidelines list four cumulative conditions to be fulfilled in order for this prohibition to be applicable, even though, in their analysis, they also include a fifth one. 

  1. The practice must constitute the ‘placing on the market’, the ‘putting into service’, or the ‘use’ of an AI system. 
  2. The AI system must deploy subliminal (beyond a person’s consciousness), purposefully manipulative, or deceptive techniques. 
  3. The techniques deployed by the AI system should have the objective or the effect of materially distorting the behavior of a person or a group of persons. The distortion must appreciably impair their ability to make an informed decision, resulting in a decision that the person or the group of persons would not have otherwise made. 
  4. The distorted behavior must cause or be reasonably likely to cause significant harm to that person, another person, or a group of persons. 

The four conditions must be met cumulatively for the prohibition to be applicable. Additionally, according to the Guidelines, there must be a plausible causal link between the techniques used, the significant change in the person’s behavior, and the significant harm that resulted or is likely to result from that behavior. While the causal link is not listed among the four conditions, it is analyzed further down in the Guidelines as a self-standing, additional condition to be met, and it should be considered as the fifth point on this list.

The prohibition applies to both providers and deployers of AI systems who, each within their own responsibilities, have an obligation not to place on the market, put into service, or use AI systems that impair an individual’s ability to make an informed decision on the basis of subliminal, manipulative or deceptive techniques. 

The Guidelines note that while the AI Act does not directly define “subliminal techniques”, the text of Article 5(1)(a) and Recital 29 imply that such techniques are inherently covert: they operate beyond the threshold of conscious awareness and are capable of influencing decisions by bypassing a person’s rational defences. However, the Recital also explains that the prohibition covers even those cases where the person is aware that the techniques used are subliminal but cannot resist their effect. The Guidelines clarify that the prohibition on the use of subliminal techniques is not limited to practices that influence decision-making; it also covers techniques that influence a person’s value- and opinion-formation, a criterion that seems highly subjective and might prove difficult to apply in practice. A relevant example could be an AI system facilitating deepfakes on matters of public interest that are spread on platforms without appropriate labeling and in violation of the applicable transparency obligations (Article 50 AI Act); their use could be considered prohibited. 

Subliminal techniques can use audio, visual, or tactile stimuli that are too brief or subtle to be noticed. The following techniques are among several suggested in the Guidelines (p. 20) as potentially triggering a ban, if the other conditions are also met: 

The Guidelines, referring to Recital 29 AI Act, specify that the development of new AI technologies, like neurotechnology, brain-computer interfaces, virtual reality, or even “dream-hacking”, increases the potential for sophisticated subliminal manipulation and its ability to influence human behavior subconsciously. 

While “purposefully manipulative techniques” are similarly not defined by the AI Act, the Guidelines fill this gap by noting that such techniques exploit cognitive biases, psychological vulnerabilities, or situational factors that make individuals more susceptible to influence. This provision covers cases where individuals are aware of the presence of a manipulative technique but cannot resist its effect and, as a result, are pushed into decisions or behaviours that they would not have otherwise made (Recital 29). 

Recital 29 of the AI Act also refers to techniques that deceive or nudge individuals “in a way that subverts and impairs their autonomy, decision-making and free choices.” A direct comparison can be made with the DSA which, inter alia, prohibits providers of online platforms from deceiving or nudging recipients of their service and from distorting or impairing their autonomy, decision-making and free choice (Article 25 and Recital 67 DSA). 

The manipulative capability of the technique is a key factor in determining its effect. Indeed, the Guidelines clarify that an AI system could manipulate individuals without the provider or deployer intending to cause harm. However, the provision would still apply, unless the result is incidental and appropriate preventive and mitigating measures were taken. This is consistent with the overall logic and scope of the AI Act’s prohibitions, as explored in Blog 1 of this series, in which deployers have a responsibility to reasonably foresee harms that may arise from the misuse of an AI system. 

Deceptive techniques are techniques that subvert or impair a person’s autonomy, decision-making, or free choice in ways of which the person is not consciously aware or, where they are aware, can still be deceived or cannot control or resist them. In the case of deepfakes, for example, Article 50 of the AI Act requires that the deployer disclose their nature. If this transparency is absent and the deepfake is used to deceive individuals, it could fall under prohibited uses. Notably, according to the Guidelines, this provision applies even if the deception occurs without the intent of the provider or deployer. However, the Guidelines also clarify that a generative AI system that produces misleading information due to hallucinations—provided the provider has communicated this possibility—does not constitute a prohibited practice.

2.2 To fall under the AI Act’s prohibited practices, manipulative techniques must have the “objective or effect of materially distorting the behavior of a person or a group of persons” 

The subliminal, manipulative and deceptive techniques must have the objective or the effect of materially distorting the behavior of a person or a group of persons. Material distortion involves a degree of coercion, manipulation, or deception that goes beyond lawful persuasion. The Guidelines note that material distortion implies a substantial impact on a person’s behavior, such that their decision-making and free choice are undermined, rather than a minor influence. 

When interpreting “material distortion of behaviour” under Directive 2005/29/EC (the Unfair Commercial Practices Directive or ‘UCPD’), it is sufficient to demonstrate that a commercial practice is likely to influence (i.e., capable of influencing) an average consumer’s transactional decision; there is no need to prove that a consumer’s economic behavior has actually been distorted. However, this requires a case-by-case assessment, considering specific facts and circumstances. Additionally, the average consumer’s perspective may not be helpful in situations where an AI system delivers highly personalized messages designed to manipulate individual behavior.

The AI Act adopts a similar understanding of “material distortion” as the UCPD, in that the prohibition applies even if the material distortion of a person’s behavior occurs without the intent of the provider or deployer. The text specifies that the prohibition covers not only cases in which behavior modification is the objective of the system (as in the original text of the European Commission’s proposal) but also those in which it is the mere “effect”. This change, as introduced into the final text, amplifies protection against the possible distorting effects of manipulative AI systems. 

2.3 The subliminal, manipulative and deceptive techniques must be “reasonably likely to cause significant harm” 

The Guidelines define harm under three broad categories:

However, the harm must be significant for the prohibition to apply. The determination of ‘significant harm’ is fact-specific, requiring a case-by-case assessment of the circumstances of each situation; the effects on the individual should always be material and significant. According to the Guidelines, the assessment of the significance of the harm takes into consideration several factors:

When assessing harm, the Guidelines suggest taking a comprehensive approach that considers the possible immediate and direct harms associated with AI systems that deploy subliminal, deceptive, or manipulative techniques. 

The last requirement for identifying a prohibited practice is determining the likelihood of a causal link between the manipulative technique and the distortion of behavior. In that regard, to avoid falling within the category of prohibited practices, the Guidelines suggest that providers and deployers take appropriate measures such as:

It is worth recalling that although the concept of significant harm is very similar to that of “significant effect” in Article 22 GDPR on automated decision-making (ADM), the two do not overlap perfectly, with the latter allowing for a broader interpretation than the former (see FPF’s Report on ADM case law here). For example, profiling through ADM for political targeting could have a significant effect on citizens without resulting in significant harm.

Not all forms of manipulation fall within the AI Act’s scope. Many persuasive techniques commonly used in advertising are legitimate because they operate transparently and respect individual autonomy. The Guidelines suggest that if an AI system appeals to emotions but remains transparent and provides accurate information, it falls outside the law’s scope.

Additionally, compliance with regulations like the GDPR helps providers and deployers demonstrate that transparency, fairness, and respect for individual rights and autonomy are upheld.

Furthermore, manipulation may be acceptable in some cases if it does not result in significant harm. For instance, in one example the Guidelines provide, an online music platform might use an emotion recognition system to detect users’ moods and recommend songs that align with their emotions while avoiding excessive exposure to depressive content.

3. The exploitation of vulnerabilities, particularly those due to age, disability or socio-economic status, as prohibited AI practice

Cases in which an AI system exploits the vulnerabilities of a single person or a specific group with the objective of distorting their behavior are designated as prohibited AI practices under Article 5(1)(b) AI Act.

There are four cumulative conditions to be fulfilled for the application of Article 5(1)(b):

  1. The practice must constitute the ‘placing on the market’, the ‘putting into service’, or the ‘use’ of an AI system. 
  2. The AI system must exploit vulnerabilities due to age, disability, or socio-economic situation. 
  3. The exploitation enabled by the AI system must have the objective or the effect of materially distorting the behavior of a person or a group of persons. 
  4. The distorted behavior must cause or be reasonably likely to cause significant harm to that person, another person, or a group of persons. 

3.1. Exploitation of vulnerabilities due to age, disability, or a specific socio-economic situation

While vulnerability is not directly defined by the AI Act, according to the Guidelines, the concept covers a wide range of categories, including cognitive, emotional, physical, and other forms of susceptibility that may impact an individual’s or group’s ability to make informed decisions or influence their behavior. 

However, under the AI Act’s prohibited practices, the exploitation of vulnerabilities is only relevant if it involves individuals who are vulnerable due to their age, disability, or socio-economic circumstances. It is worth noting that a reference to an individual’s socio-economic situation was included in the final text of the AI Act after the amendments submitted by the European Parliament, which led to a wider scope of the Article 5(1)(b) prohibition in the final text, as compared to the initial European Commission proposal. 

Exploiting categories of vulnerabilities other than those expressly mentioned falls outside the scope of the Article 5(1)(b) prohibition. The Guidelines note that age, disability, or socio-economic vulnerabilities may, in principle, lead to a limited capacity to recognize or resist manipulative AI practices. The prohibition aims to prevent the exploitation of cognitive limitations stemming from age or health conditions. However, socio-economic status can also reduce an individual’s ability to recognize deceptive practices and may intersect with other discriminatory factors, such as belonging to an ethnic, racial, or religious minority group.

The Guidelines share a number of examples of exploitation of people who are vulnerable due to their age that fall under the prohibition, including: 

In the case of exploitation of vulnerable people based on disability, the Guidelines mention the example of a therapeutic chatbot intended to provide mental health support and coping strategies to persons with cognitive disabilities, which could exploit their limited intellectual capacities to influence them to buy expensive medical products. 

When the exploitation concerns people who are vulnerable due to their socio-economic situation, an example mentioned is a predictive AI algorithm used to target people living in low-income postcodes with advertisements for predatory financial products. 

3.2. For the Article 5(1)(b) prohibition to apply, AI practices have to materially distort behavior and be reasonably likely to cause significant harm 

As previously noted, a substantial impact is required for a practice to fall within the scope, even though intention is not a necessary element, as the provision also covers the mere effect (see Section 2.3). As with the conditions for Article 5(1)(a), explored above, the AI practice has to be reasonably likely to cause significant harm. It is worth mentioning that the harms in this case may be particularly severe and multifaceted due to the increased susceptibility of the vulnerable group in question. Risks of harm that might be deemed acceptable for adults are often considered unacceptable for children and other vulnerable groups.

4. Areas of interplay between the two prohibitions, and between the prohibitions and other EU laws, including the UCPD, GDPR, and DSA

4.1. Tiered approach to the interplay between Articles 5(1)(a) and (b) 

While the Article 5(1)(a) prohibition mainly covers the use of subliminal and manipulative techniques, Article 5(1)(b) focuses on the targets of AI exploitation, particularly individuals considered vulnerable due to age, disability, or socio-economic circumstances. 

However, there may be instances where both Articles seem applicable. In such cases, examining the predominant aspect of the exploitation is essential. If the exploitation does not explicitly relate to one of the vulnerable groups previously discussed, Article 5(1)(a) applies, taking into consideration that it also covers the exploitation of vulnerabilities in groups outside those listed in Article 5(1)(b). When the exploitation specifically targets the groups identified in Article 5(1)(b), then the practice falls under this latter prohibition.

4.2. Interplay with the GDPR obligations to ensure fairness and data protection by design

The protection of individuals from manipulative processes is also covered in various other European laws, including the GDPR. Under the GDPR, the principle of fairness—enshrined in Article 5(1)(a)—acts as an overarching safeguard ensuring that personal data is not processed in a manner that is unjustifiably detrimental, unlawfully discriminatory, unexpected, or misleading to the data subject. Information and choices about data processing must be presented in an objective and neutral way, strictly avoiding any deceptive, manipulative language or design choices. In fact, the European Data Protection Board (EDPB) explicitly identifies the use of “dark patterns” and “nudging” as violations of this fairness mandate, as these techniques subconsciously manipulate data subjects into making decisions that negatively impact the protection of their personal data. 

In its Guidelines 4/2019 on Data Protection by Design and by Default, the EDPB emphasizes that controllers must incorporate fairness into their system architectures from the outset, proactively recognizing power imbalances and granting users the highest degree of autonomy over their data. This means choices to consent to or abstain from data sharing must be equally visible, and platforms cannot use invasive default options or deceptive interfaces to lock users into unfair processing. 

The profound risks of such subliminal and deceptive techniques are illustrated in the EDPB’s Binding Decision 2/2023 and the Irish Data Protection Commission’s corresponding final decision regarding TikTok. In these rulings, the authorities found that TikTok infringed the principle of fairness by utilizing deceptive design patterns to nudge child users toward public-by-default settings. TikTok has challenged these findings in a case now pending at the CJEU.

Beyond social media interfaces, the EDPB has also stressed the dangers of subliminal manipulation in democratic processes. In its Statement 2/2019 on the use of personal data in political campaigns (the Cambridge Analytica case), the EDPB warns that predictive tools used to profile people’s personality traits, moods, and points of leverage pose severe societal risks. When these sophisticated profiling techniques are used to target voters with highly personalized messaging, they not only infringe upon the fundamental right to privacy but also threaten the integrity of elections, freedom of expression, and the fundamental right to think freely without being subjected to unseen psychological manipulation. 

Synthesizing EDPB decisions and guidelines: to counteract these deceptive techniques across all sectors, the fairness principle mandates that controllers respect data subject autonomy, avoid exploiting user vulnerabilities, and ensure that individuals are never coerced into abandoning their privacy through unfair technological architectures. 

Importantly, these GDPR rules apply without such high thresholds, making them particularly relevant even where the conditions for the AI Act prohibitions are not met. This is why clarity about the interplay of the two regulations is essential for practical implementation.

4.3. Interplay with other EU laws: UCPD, DSA

The AI Act serves to complement or expand the provisions of existing EU law. For instance, unlike EU consumer protection laws, Articles 5(1)(a) and 5(1)(b) of the AI Act extend protection beyond consumers to encompass any individual. As a result, it must be considered alongside other legal frameworks such as the UCPD, the GDPR, the DSA, the political advertising regulation, and EU product safety legislation. 

For example, the UCPD aims to protect individuals from misleading information that could lead them to purchase goods they would not otherwise have bought. It also offers greater protection to vulnerable individuals, such as the elderly and children. The UCPD overlaps partly with the Article 5(1)(a) and (b) prohibitions, though not entirely. Firstly, the UCPD is a Directive, not a Regulation, under EU law; secondly, it only protects consumers (those “acting outside their trade, business, craft or profession”). The prohibitions in Article 5 AI Act, by contrast, protect everyone, irrespective of their “consumer” or other status, such as “patient”, “student”, or “taxpayer”, to give some examples.

Furthermore, the scope of the UCPD is limited to transactional decisions, not all decisions. For example, a surgeon persuaded by an AI system’s manipulative or deceptive techniques to operate on a patient in one way rather than another would not be covered by the UCPD. By contrast, both frameworks will apply in all cases where AI systems are used to subliminally manipulate a consumer’s decision-making autonomy.

Similarly, the scope of the DSA is limited to what happens on online platforms. When it comes to deceptive design, the rules in Article 25 DSA are relevant only where the GDPR is not applicable, so the cases in which both the AI Act and the DSA apply are limited. 

But there are other provisions of the DSA that could be relevant at the intersection with prohibited AI practices. For example, the DSA pays special attention to the prohibition of profiling using special categories of personal data (as defined by Article 9 GDPR) on online platforms, given the possible manipulative effect of disinformation campaigns that can lead to a negative impact on public health, public security, civil discourse, political participation, and equality (Recitals 69 and 95 DSA). Therefore, if bots and deepfakes spread information online to convince vulnerable individuals (such as the elderly, children, and economically disadvantaged individuals) to purchase high-profit financial products, both the DSA and the AI Act would apply.

Compliance with these laws can help mitigate harm and reduce manipulative effects. For example, suppose that a very large online platform has conducted a risk assessment to assess systemic risk (as required by Article 34 DSA) and a data protection impact assessment (as required by Article 35 GDPR in certain circumstances). In this case, it will be easier for such a platform to identify whether any of its AI systems may fall under the prohibited uses listed in Article 5 AI Act, and adopt mitigating measures accordingly.

5. Concluding Reflections and Key Takeaways

There is a high threshold for falling under the Articles 5(1)(a) and (b) prohibitions.  

To fall under the prohibitions in Article 5(1)(a) or (b), several cumulative conditions must be fulfilled. On the Commission’s interpretation in the Guidelines, this high threshold is designed to ensure that only very specific AI use-cases and applications fall under the scope of the prohibitions. While a high threshold of application exists, it is worth noting that the final text of the AI Act ended up being broader in scope than the European Commission’s initial proposal.

It is important to note that even where this threshold is not met, EU law would still limit some manipulative and deceptive practices: through the GDPR’s provisions on fairness and data protection by design when personal data is processed, or through certain DSA rules when very large online platforms are involved. 

The prohibition applies even when there is no intention to manipulate. Even absent a deliberate intention to influence a person’s decision, Article 5(1) could still apply, since the provision also covers the harmful effect of manipulating and exploiting individuals or groups. To mitigate potential risks, providers may adopt transparency measures and implement appropriate safeguards to prevent harmful outcomes or consequences. While doing so, it is important to keep in mind that even where the use of a specific AI system does not meet the cumulative conditions of the Article 5(1) prohibitions, it is nevertheless highly likely to be considered a high-risk AI system under Article 6 AI Act.   

Compliance with other laws can help demonstrate compliance with the AI Act.

The Guidelines highlight that if the AI provider shows compliance with relevant EU legislation on transparency, fairness, risk assessment, and data protection, it may contribute to demonstrating compliance with the AI Act’s requirements.  

Q&A With FPF Vice President for U.S. Policy, Matthew Reisman

In a new Q&A, our Vice President for U.S. Policy, Matthew Reisman, takes a deeper look at the privacy landscape, including his interests in the space, what to look forward to in the U.S. privacy and AI sectors, and what stakeholders should pay attention to.

What brought you into the privacy and data policy space? What drew you into working in this field/subject matter in particular? 

I was drawn to working in public policy generally because I hoped to have opportunities to improve people’s lives and the communities and societies we live in–and it’s hard to think of a space where that’s more true than data and technology. In the early years of my career, I was struck by the breathtaking pace of change in technology and the ways it was transforming our lives–and yet so many of the principles to guide its development and use remained nascent. I think that remains true today. All of us who care about building responsible public policy and governance for technology have the opportunity to create the path forward together, and I find that terrifically exciting.

You have an extensive background in the data privacy landscape across a range of issues that continue to evolve. What particular sector is one to watch in the U.S.?

As a community, we have been wrestling with how to approach privacy in the context of AI systems: the challenge is to ensure that these tools benefit as broad a spectrum of people, organizations, and society as possible while protecting the rights, freedom, and dignity of individuals. Even as we continue to work through foundational concepts for privacy in the age of AI, it is important that we anticipate the new challenges we will face as the technology continues to evolve. 

To that end, it feels like we are on the cusp of major steps forward for spatial artificial intelligence – where AI systems are enabling richer interactions with the physical world. There are so many potentially beneficial applications for spatial intelligence, from autonomous vehicles, to logistics, to healthcare, just to name a few. 

What else are you thinking about in the AI sector? What is the most timely issue that lawmakers, practitioners, or policymakers should consider the most in relation to AI? 

AI agents have been on many folks’ minds over the past year, and I think rightly so. 2026 feels like a breakout moment for agents for both enterprise and consumer applications. I was recently experimenting with coding agents for some personal projects and experienced “wow” moments similar to those I felt when first trying text-generation LLM tools several years ago. Agents offer exciting potential benefits for individuals, organizations, and society–and to realize them, we will need to work together on principles and standards for responsible development and deployment.

You have worked within the business, government, and nonprofit sectors. Given the breadth of diverse experience that you are now bringing to FPF, what continues to surprise you about the U.S. data privacy landscape across the board? 

It has been fascinating to me to see how privacy and adjacent policy issues have become prominent in everyday discourse in nearly every sector of the economy and society, and nearly every facet of our lives, from the workplace to the family dinner table. I think the factor driving this is the central role of data in virtually every system we interact with–at home, at school, and in our interactions with businesses and government agencies. It’s hard to imagine a time soon when these issues will lessen in importance, so I anticipate we’ll be talking about them with co-workers, teachers, and family and friends alike for the foreseeable future.

What do you find unique about FPF and its approach to bringing together academics, business, and thought leaders in facilitating discussion in privacy matters in the U.S. and abroad? 

FPF fulfills a unique and critical role by bringing together the full range of stakeholders who are striving to ensure that technology and data are used in ways that are responsible and beneficial for individuals, organizations, and society. It is a place that embodies both timeless values and intellectual rigor: when you meet FPF’ers, you quickly realize that they carry an infectious passion for the subject matter, a commitment to excellence in analysis and research, a gift for facilitation of meaningful and productive conversations, and a deeply held belief in the potential for their work to make a difference. I admired and was inspired by FPF’s work as an external stakeholder, and now that I’m here, I only feel those sentiments more strongly. It’s a special place. 

From Proposal to Passage: Enacted U.S. AI Laws, 2023–2025

Over the past three years, lawmakers across the United States have increasingly enacted AI-related laws that shape the development and deployment of AI systems. Between 2023 and 2025, the Future of Privacy Forum tracked 27 pieces of enacted AI-related legislation across 14 states, along with one federal law (the TAKE IT DOWN Act), all carrying direct or indirect implications for private-sector AI developers and deployers. Notably, most enacted AI laws are already effective as of 2026, requiring entities to begin navigating compliance obligations. To support stakeholders, FPF has compiled a resource documenting key AI laws enacted from 2023 to 2025, which can be found below.

These enacted laws span a wide range of policy areas, reflecting experimentation in regulatory scope among lawmakers. In 2025 alone, states enacted laws addressing frontier model risk (such as California’s SB 53 and New York’s RAISE Act), generative AI transparency, AI use in health care settings, liability standards, data privacy, innovation, and synthetic content. Additionally, one of the clearest trends among enacted laws in 2025 included the growing focus on AI chatbots. Five states (California, Maine, New Hampshire, New York, and Utah) enacted chatbot-specific laws emphasizing transparency and safety protocols, particularly for sensitive use cases involving mental health and emotional companionship.

While the majority of these AI laws have already taken effect, a small number have delayed or phased-in effective dates that stakeholders should continue to track:

The broad diversity within 2025 AI bill categories contrasts with 2024, when laws such as the Colorado AI Act signaled a more uniform legislative emphasis on high-risk AI systems and automated decision-making technologies (ADMT) used in consequential decision-making. As analyzed in FPF’s State of State AI reports from 2024 and 2025, AI legislative efforts have shifted away from broad, framework-style laws and toward narrower measures tailored to specific use cases and technologies. This trend may also offer a preview of what is to come for enacted AI regulation in 2026: increased sector-specific regulation, heightened attention to sensitive populations such as minors, and a growing emphasis on substantive requirements.

Red Lines under the EU AI Act: Understanding ‘Prohibited AI Practices’ and their Interplay with the GDPR, DSA

Blog 1 | Red Lines under the EU AI Act Series  

This blog is the first of a series that explores prohibited AI practices under the EU AI Act and their interplay with existing EU law. You can find the whole series here.

The EU AI Act prohibits certain AI practices in the European Union (hereinafter also “the Union” or “the EU”), at the top of the pyramid of its layered approach: harmful manipulation and deception, social scoring, individual risk assessment, untargeted scraping of facial images, emotion recognition, biometric categorization, and real-time remote biometric identification for law enforcement purposes. These are the “red lines” that the EU has drawn through the AI Act. “Red lines” in AI governance have been generally described as “specific boundaries that AI systems must not cross”, and, in more detail, as “specific, non-negotiable prohibitions on certain AI behaviors or AI uses that are deemed too dangerous, high-risk, or unethical to permit”. Most “red lines” emerge from soft law or self-regulation, with the AI Act being the first law globally to draw such lines, exemplifying the strict AI regulatory approach that the EU is pursuing. 

Prohibited AI practices are regulated by Article 5 of the AI Act, which became applicable in February 2025 (see a full timeline of when chapters of the AI Act become applicable). Starting on 2 August 2025, this provision also became enforceable by the designated authorities at Member State level, or by the European Data Protection Supervisor (the supervisory authority for EU institutions), as the case may be. Non-compliance triggers administrative fines of up to 35 million euros or up to 7% of the total worldwide annual turnover for the preceding financial year, whichever is higher. However, the supervision and enforcement landscape is highly fragmented and decentralized. 
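The “whichever is higher” rule for the fine ceiling can be sketched as a simple computation (an illustrative sketch only; the function name is ours, and actual fines are set by the competent authorities and may be lower):

```python
def max_article5_fine(worldwide_annual_turnover_eur: int) -> int:
    """Upper bound of the administrative fine for non-compliance with
    Article 5 AI Act: up to EUR 35 million or up to 7% of total worldwide
    annual turnover for the preceding financial year, whichever is higher.
    Illustrative sketch only, not legal advice."""
    turnover_prong = worldwide_annual_turnover_eur * 7 // 100  # 7% of turnover
    return max(35_000_000, turnover_prong)

# For a company with EUR 1 billion turnover, the 7% prong (EUR 70 million) governs:
print(max_article5_fine(1_000_000_000))  # 70000000
# For a small provider with EUR 10 million turnover, the EUR 35 million ceiling governs:
print(max_article5_fine(10_000_000))     # 35000000
```

The point of the two-pronged cap is that the fixed EUR 35 million floor bites for smaller entities, while the turnover-based prong scales the ceiling for large undertakings.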

This blog is the first of a series which will explore each prohibited AI practice and its interplay with existing EU law, such as the General Data Protection Regulation (GDPR) and the Digital Services Act (DSA), starting from the Guidelines on Prohibited Artificial Intelligence Practices under the AI Act (hereinafter ‘the Guidelines’), published by the European Commission on 4 February 2025. The aim is to understand which AI systems and practices are within the scope of Article 5 of the AI Act, and to highlight potential areas of legislative overlap or lack of clarity. This is increasingly important at a time when the European Commission has prioritized addressing the interplay of the digital regulation acquis, with a view to amending parts of the AI Act and the GDPR through the Digital Omnibus initiative. While the initial proposal for the Digital Omnibus on AI does not seek to amend the AI Act’s prohibited practices requirements, multiple political groups of the European Parliament and several Member State governments are proposing amendments to expand the list of prohibited practices, particularly with regard to intimate deepfakes and Child Sexual Abuse Material. 

This blog continues with an introduction to the significance of the Guidelines and the place of the prohibited practices in the broader layered architecture of the AI Act, tailored to the severity of risks (1); details about the definitions and scope of the prohibited practices (2); and an analysis of the interplay of the prohibited AI practices with the GDPR and DSA (3), before the Conclusions (4) highlight key takeaways: 

  1. Entry into force of prohibited AI practices under the AI Act: A year on 

Prohibited practices under Article 5 of the AI Act became applicable on 2 February 2025 and enforceable on 2 August 2025. However, so far, no enforcement or other regulatory action in relation to prohibited AI practices has been announced. 

About a year ago, on 4 February 2025, the European Commission released Guidelines on Prohibited Artificial Intelligence Practices under the AI Act. The AI Act regulates the placing on the market, putting into service, and use of AI systems across the Union on the basis of harmonized rules and a tiered approach based on the severity of the risks posed by AI systems. While there are four risk categories in the AI Act, the Guidelines provide legal explanations and practical examples of AI practices that are deemed unacceptable due to their potential risks to fundamental rights and freedoms, and are therefore prohibited. 

While the Guidelines are non-binding, they offer the Commission’s first interpretation of the Article 5 prohibitions, as well as crucial insights into its own analysis of the interplay between core requirements of the AI Act and other EU law, including (but not limited to) the GDPR and the DSA. In publishing the Guidelines, the Commission explicitly acknowledged that any authoritative interpretation of the AI Act ultimately resides with the Court of Justice of the European Union (CJEU), and noted that the Guidelines may be reviewed or amended in light of relevant future case law or enforcement actions by market surveillance authorities. While enforcement actions under the AI Act are yet to emerge, the interplay between the Commission’s Guidelines and existing CJEU case law, as well as decisions by Data Protection Authorities (DPAs) under the GDPR, can already be analyzed. 

This first blog in our series on ‘Red Lines under the EU AI Act’ highlights how the Commission’s Guidelines take a scaled approach to delineating the practices which fall within and outside of the scope of prohibited practices. The Guidelines highlight the close interplay between Articles 5 (on prohibited AI practices) and 6 (on high-risk AI systems) of the AI Act, and note that where an AI system does not fulfil the requirements for prohibition under the AI Act, it may still be unlawful or prohibited under other laws such as the GDPR. 

  2. From emotion recognition to social scoring via AI systems: Overview of prohibitions under Article 5 of the AI Act

The tiered regulatory approach of the AI Act takes into account four risk categories of AI systems, on the basis of which scaled obligations are proposed: unacceptable risk, high risk, transparency risk, and minimal to no risk. This analysis zooms in especially on unacceptable risk, as found in Article 5 AI Act, which prohibits the placing on the EU market, putting into service, or use of AI systems for manipulative, exploitative, social control, or surveillance practices. Of note, Article 5 is framed such that technology or AI systems themselves are not prohibited, but “practices” involving specific AI systems that pose unacceptable risks are. This framing is different from the one in Chapter III of the AI Act, which classifies and regulates systems themselves as “high-risk AI systems.”   

The prohibited practices are, by their inherent nature, deemed to be especially harmful and abusive due to their contravention of fundamental rights as enshrined in the EU Charter of Fundamental Rights. The Guidelines issued by the European Commission highlight Recital 28 of the AI Act by reiterating that the impacts of prohibited AI practices are not limited to the right to personal data protection (Article 8 EU Charter) and the right to a private life (Article 7), but they also pose an unacceptable risk to the rights to non-discrimination (Article 21), equality (Article 20), and the rights of the child (Article 24). 

Prohibited AI practices under the AI Act include:

2.1. The Guidelines extend the scope of prohibited AI practices to include those related to general-purpose AI systems 

In defining the material scope of Article 5 AI Act, the Guidelines expand upon the definitions of “placing on the market, putting into service or use” of an AI system. This is important, because all prohibited practices under Article 5(1) AI Act, from letters (a) to (g), refer to “the placing on the market, the putting into service or the use of an AI system that (…)” engages in a specific practice defined under each of the letters of the provision. Therefore, understanding the definitions of these terms is essential for the application of the “prohibitions”.

“Placing on the market” is the first making available of an AI system on the Union market, for distribution or use in the course of a commercial activity, whether for a fee or free of charge (see Articles 3(9) and 3(10) AI Act for full definitions). An AI system is considered placed on the Union market regardless of the means of supply, whether through an API, direct download, the cloud, or physical copies. 

“Putting into service” refers to the supply of an AI system for first use to the deployer, or for own use, in the Union for its intended purpose (Article 3(11)), and covers both the “supply for first use” to third parties and “in-house development or deployment”. The inclusion of in-house development in the scope of Article 3(11) is a significant extension introduced by the Guidelines, considering that the definition of “putting into service” in the AI Act only refers to “the supply of an AI system for first use directly to the deployer or for own use in the Union.” This interpretation might need further clarification, especially as Article 2(8) AI Act excludes “any research, testing or development activity regarding AI systems or AI models prior to their being placed on the market or put into service” from its scope of application.

Regarding the “use” of an AI system, which is not directly defined by the AI Act, the Guidelines specify that it should be similarly broadly understood to cover the use and deployment of AI systems at any point in their lifecycle, after having been put into service or placed on the market. Importantly, the Guidelines specify that “use” also includes any “misuse” that may amount to a prohibited practice, making deployers responsible for reasonably foreseeable harms that may arise. 

Given the scope of the prohibited practices, the Guidelines focus on both providers and deployers of AI systems and highlight that continuous compliance with the AI Act is required during all phases of the AI lifecycle. For each of the prohibitions, the roles and responsibilities of providers and deployers should be construed in a proportionate manner, “taking into account who in the value chain is best placed” to adopt a mitigating or preventive measure.

The Guidelines acknowledge that while harms may often arise from the ways AI systems are used in practice by deployers, providers also have a responsibility not to place on the market or put into service AI and GPAI systems that are “reasonably likely” to behave or be used in a manner prohibited by Article 5 AI Act. It is important to highlight that the Guidelines extend the scope of Article 5 to general-purpose AI systems as well, even though they are not specifically called out by the provision (see para. 40 of the Guidelines). 

As highlighted above, the provision is drafted to target “practices” of AI, which opens the possibility that not only GPAI systems are covered, but also practices of agentic AI or any new shape or form of AI system that results in a practice described by Article 5 AI Act. Indeed, the Guidelines specifically mention that the “prohibitions apply to any AI system, whether with an ‘intended purpose’ or ‘general purpose.’” It is worth noting, however, that the Guidelines address prohibitions in relation to general-purpose AI systems rather than models, recalling that such systems are indeed based on general-purpose AI models but “have the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems” (Article 3(66) AI Act).

2.2. Purposes that do not fall within the scope of the AI Act, and practices that do

The Guidelines note that the AI Act expressly excludes from its scope AI systems used for national security, defence, and military purposes (Article 2(3)). For this exclusion to apply, the AI system must be placed on the market, put into service or used exclusively for such purposes. This means that so-called “dual use” AI systems, which also serve civilian or law enforcement purposes alongside such excluded purposes, do fall within the scope of the law. A direct example from the Guidelines notes that “if a company offers an RBI (remote biometric identification) system for various purposes, including law enforcement and national security, that company is the provider of that dual use system and must ensure its compliance” with the AI Act (emphasis added).

In addition to judicial and law enforcement cooperation with third countries, research and development activities also fall outside the scope of the AI Act. Indeed, as also recalled above, the AI Act does not apply to “any research, testing or development activity regarding AI systems or AI models prior to their being placed on the market or put into service” (Article 2(8)). The Guidelines view this exemption as a natural continuation of the AI Act’s market-based logic, which applies to AI systems once they are placed on the market. However, this raises consistency issues with how the same Guidelines include “in-house development or deployment” of AI systems in the scope of “putting into service” (see also Section 2.1. above).

It is worth noting that the Guidelines are explicit in their reminder of the fact that the research and development exclusion does not apply to testing in real-world conditions, and in cases where those experimental systems are eventually placed on the Union market. The testing of AI systems in real-world conditions may only be carried out in AI regulatory sandboxes, and in full compliance with other Union law, including the GDPR insofar as personal data processing is concerned. 

The Guidelines also note that purely personal, non-professional activities similarly fall outside of the AI Act’s scope (Article 2(10)). This includes, for example, an individual using a facial recognition system at home. However, the Guidelines are careful to note that the facial recognition system as such does remain within the scope of the AI Act as regards the compliance obligations of providers of such systems, even where the provider is fully aware that the system is intended to be used by natural persons for purely non-professional purposes or activities.

The Guidelines take an overall cautious approach in delineating the purposes and practices which fall outside the scope of the AI Act through consistent reference to Recitals 22 to 25. The Recitals recall and make clear that providers and deployers of AI systems which fall outside the scope of the AI Act may nevertheless have to comply with other Union laws that continue to apply. 

3. Interplay of the AI Act’s Prohibitions with the High-Risk Designation and other Union Law

3.1. A scaled approach to the interplay between high-risk AI systems and prohibited AI practices  

The Guidelines highlight key areas of interplay between the different risk categories, showing a scaled approach in the AI Act’s risk designation. Importantly, the Guidelines note the close relationship between Article 5 on prohibited practices, and Article 6 on high-risk AI systems. They note that “the use of AI systems classified as high-risk may in some cases qualify as prohibited practices in specific circumstances” and, conversely, most AI systems that fall under an exception from a prohibition listed in Article 5 will qualify as high-risk. This approach clarifies yet again that Article 5 is not meant to prohibit a specific technology, but practices or uses of technology.

An example where Articles 5 and 6 of the AI Act should be considered in relation to each other is the case of AI-based scoring systems, such as credit scoring, which will be considered high-risk if they do not fulfil the conditions for the credit scoring prohibition as outlined in Article 5(1)(c). While not specifically mentioned by the Guidelines in this context, it is worth noting that Courts and DPAs across the EU have been active in cases involving automated credit scoring practices under Article 22 GDPR on automated decision-making (ADM), as well as in cases that may amount to “profiling”. The notion of “profiling” under the GDPR is particularly relevant to understanding Article 5(1)(d) AI Act. As such, in addition to taking full account of the risk designations under Articles 5 and 6 AI Act, it is also crucial to note the ADM prohibition under Article 22 GDPR, as compliance with one law does not automatically amount to compliance with the other.

3.2. Interplay between the prohibited AI practices under the AI Act with the GDPR and DSA 

The Guidelines acknowledge the relationship between the AI Act and other Union law by recalling that, as a horizontal law applying across all sectors, the Act is without prejudice to legislation on the protection of fundamental rights, consumer protection, employment, the protection of workers and product safety. They also frame the goal of the AI Act and its preventive logic as providing additional protection by addressing potential harms arising from AI practices which may not be covered by other laws, including by addressing the earlier stages of an AI system’s lifecycle.

The Guidelines expressly highlight that where an AI system may not be prohibited under the AI Act, it may still be prohibited or unlawful under other laws because of, for example, “the failure to respect fundamental rights in a given case, such as the lack of a legal basis for the processing of personal data required under data protection law”, where, for instance, the GDPR is applicable, including extra-territorially. 

Crucially, the Guidelines acknowledge that in the context of prohibitions, the interplay between the AI Act and data protection law is particularly relevant, since AI systems often process personal data. They specify that laws including the GDPR, the Law Enforcement Directive, and the EU Data Protection Regulation applying to EU institutions (EUDPR) “remain unaffected and continue to apply alongside the AI Act”, noting the complementarity of the Act with the EU data protection acquis.

The Guidelines’ framing of this relationship seems weaker than the provision in the AI Act itself, which states that the Act “shall not affect” the GDPR, the EUDPR, the ePrivacy Directive or the Law Enforcement Directive (Article 2(7) AI Act). This means that the AI Act is without prejudice to the GDPR and the rest of the EU data protection acquis. That fact might create some complex compliance situations in practice, and will require a broad and comprehensive understanding of the EU digital rulebook as a whole, since its component parts cannot be read in isolation. For instance, which law prevails if a prohibited AI practice under the AI Act overlaps with a solely automated decision-making practice involving personal data and legally or significantly affecting an individual, but lawfully meets the exceptions under Article 22 GDPR? Based on Article 2(7), the AI Act is not designated as lex specialis.

In addition to data protection law, the Digital Services Act (DSA) is similarly deemed relevant in the context of the AI Act’s prohibitions. The Guidelines highlight that the prohibitions apply in conjunction with the relevant obligations on the providers of intermediary services (defined by Article 3(g) DSA) when AI systems or models are embedded in such services. Further, the AI Act and its prohibitions do not affect the application of the DSA’s provisions on the liability of such providers, as set out in Chapter II DSA, or existing or future liability legislation at Union or national levels. In the context of liability legislation, the Guidelines refer to Directive (EU) 2024/2853 on liability for defective products, and the now withdrawn AI Liability Directive.

3.3. Notes on Enforcement of the AI Act’s Prohibitions and Penalties: Fragmentation and Decentralization 

The Guidelines recall that market surveillance authorities (MSAs), as designated by EU Member States, are responsible for enforcing the AI Act and its prohibitions. Member States had until 2 August 2025 to designate one or multiple MSAs, with some countries having already assigned the role to their national DPA with regard to certain parts of the AI Act (e.g., high-risk AI systems). Competent authorities can take enforcement actions in relation to the prohibitions on their own initiative or following a complaint by any affected person or other natural or legal person. The staggered timeline between the date of applicability of the AI Act’s provisions on prohibited uses and the deadline for designating the responsible authorities to enforce them has been causing some legal uncertainty.   

A review of the Member States that have already appointed MSAs at the time of writing shows, for the most part, a decentralized approach to enforcing the AI Act’s prohibited practices. Such an approach, which assigns supervision and enforcement roles to a variety of authorities depending on the sector they regulate and their area of expertise, is typical for EU product safety legislation.

For example, on 4 February this year, Ireland published its Regulation of Artificial Intelligence Act 2026, the national law that, once adopted, will implement the AI Act’s provisions. On this basis, the enforcement approach proposed by the Act is to establish the AI Office of Ireland, either on or before 2 August 2026, which will act as the central coordinator and Single Point of Contact (Article 70 AI Act). Under this umbrella, the Act also proposes to assign monitoring and enforcement powers to different existing authorities for different prohibited practices: the Central Bank of Ireland will enforce prohibited practices in respect of financial services regulated by it; the Workplace Relations Commission will enforce prohibited practices used in employment (Article 5(1)(f) AI Act); the Coimisiún na Meán will be responsible for “certain” prohibited practices in respect of online platforms (as defined by the DSA); and the Irish Data Protection Commission (DPC) will also be responsible for “certain parts” of the prohibited practices. While the Act does not yet specify which “certain parts” the Irish DPC will be responsible for monitoring, the draft already gives an indication of the decentralized approach to enforcing the rules on prohibited practices at national level, with responsibility assigned to a variety of authorities. 

In France, the CNIL is responsible for monitoring compliance of the prohibited practices for predictive policing, the untargeted scraping to develop facial recognition databases, emotion recognition in the workplace and education institutions, biometric categorization, and real-time remote biometric identification (Articles 5(1)(d) – (h)). Responsibility for monitoring compliance with Articles 5(1)(a) and (b) lies with the Audiovisual and Digital Communication Regulatory Authority and the Directorate General for Competition, Consumer Affairs and Fraud Control. Here we can also see responsibility for monitoring prohibited practices being assigned to more than one regulator, depending on their existing area(s) of regulatory focus. 

Finally, the Guidelines state that non-compliance with the AI Act’s prohibitions constitutes the “most severe infringement” of the law, and is therefore subject to the highest fines. Providers and deployers engaging in prohibited AI practices can be fined up to EUR 35 000 000 or 7% of total worldwide annual turnover, whichever is higher.
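The “whichever is higher” ceiling can be illustrated with a short calculation sketch; the turnover figure below is purely hypothetical:

```python
def max_fine_for_prohibited_practice(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the fine for engaging in a prohibited AI practice:
    EUR 35 million or 7% of total worldwide annual turnover,
    whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# Hypothetical company with EUR 2 billion in worldwide annual turnover:
# 7% of turnover (EUR 140 million) exceeds the EUR 35 million floor.
print(max_fine_for_prohibited_practice(2_000_000_000))  # 140000000.0
```

For smaller undertakings the fixed EUR 35 million amount becomes the operative ceiling, since 7% of their turnover falls below it.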

4. Closing reflections and key takeaways

The AI Act doesn’t prohibit technology, but uses or practices of technology that pose unacceptable risk

Article 5 of the AI Act is framed broadly, such that technologies or AI systems themselves are not directly prohibited, but “practices” involving specific AI systems that pose unacceptable risk are. Such systems are, in turn, tied to certain actions, specifically to “placing on the market, putting into service or use” of an AI system. These actions are also interpreted broadly such that, for example, the “use” of an AI system also includes its intended use and potential misuse. The broad framing ensures that both providers and deployers of AI systems consider all phases of the AI lifecycle and approach compliance in a proportionate manner, taking into account “who in the value chain is best placed to adopt a mitigating or preventive measure.”

Practices of “General Purpose AI Systems” may also fall under the “prohibitions” of the EU AI Act

Equally of note is that the Guidelines extend the Article 5 prohibitions to practices related to any AI system, including general-purpose AI systems (rather than models themselves), even though such systems are not expressly mentioned in the AI Act provision. The Guidelines acknowledge that while harm often arises from the way specific AI systems are used in practice, both deployers and providers have a responsibility not to place on the market or put into service AI systems, including general-purpose AI systems, that are “reasonably likely” to behave in ways prohibited by Article 5 of the AI Act. 

“In-house development” is at the same time excluded from the application of the AI Act and included in the “putting into service” definition in the Guidelines, needing further clarification

As shown above, the Guidelines provide clarifications about what “placing on the market”, “putting into service” and “use” of an AI system mean, which reveal a broad interpretation of the legal definitions enshrined in the AI Act. Notably, “putting into service” is expanded to mean not only “supply for first use”, but also “in-house development or deployment” (see Section 2.1 above). At the same time, Article 2(8) of the AI Act excludes from the scope of application of the regulation any “testing or development activity” regarding AI systems and models “prior to their being placed on the market or put into service”. Further clarification from the European Commission about this part of the Guidelines is needed for legal certainty.

The interplay of the prohibitions under the AI Act and the GDPR needs legal certainty

The Commission’s Guidelines on the AI Act’s prohibitions adopt a scaled approach to delineating, based on the level of risk, which AI practices or uses may be outright prohibited and which may instead fall under the Article 6 high-risk designation. The logic of the scaled approach also extends beyond the AI Act, as the Guidelines caution that while an AI practice may not fall under the Article 5 prohibitions, it may still be unlawful under other Union laws, such as the GDPR and DSA. What is less clear, though, is what would happen if an AI practice potentially prohibited under the AI Act would otherwise be allowed by other legislation designated as prevailing over the AI Act, particularly the GDPR. For example, Data Protection Authorities have in the past allowed some facial recognition systems to be used, and have found remediable infringements related to the use of emotion recognition systems, showing that such systems could be lawful under the GDPR if all conditions highlighted in the relevant decisions were met. The European Data Protection Board could support consistency of interpretation and application of the two legal regimes with dedicated guidelines.

The enforcement architecture of prohibited AI practices exhibits significant decentralization and fragmentation, including at national level

There are two layers of decentralization of the enforcement architecture for the prohibited AI practices: first, they are primarily left to national competent authorities as opposed to a centralized authority at EU level; second, at national level, multiple authorities have often been designated within one jurisdiction, as the cases of Ireland and France described above show. This level of decentralization is expected to lead to fragmentation of how the relevant provisions of the AI Act are applied. This landscape is further complicated by the interplay of the prohibitions under the AI Act and the GDPR, through the role of supervisory authorities over processing of personal data and their independence as guaranteed by Article 16(2) of the Treaty on the Functioning of the European Union and Article 8(3) of the EU Charter of Fundamental Rights. 

Finally, besides the close interaction between the various provisions of the AI Act themselves, the Guidelines also highlight the significant interplay between the Act and other Union laws. The ways in which these interactions may play out in the context of the several prohibited practices, such as emotion recognition and real-time biometric surveillance, will be explored in more detail in future blog posts in this series. Meanwhile, a deep dive into the broad framing of the AI Act’s prohibited practices reveals that a similarly broad understanding of the data protection acquis and EU digital rulebook is required in order to fully make sense of, and comply with, key obligations for the development and deployment of AI systems across the Union. 

  1.  See para. 13 of the Guidelines, p. 4. ↩︎

Paradigm Shift in the Palmetto State: A New Approach to Online Protection-by-Design

South Carolina Governor McMaster signed HB 3431, an Age-Appropriate Design Code (AADC) -style law, on February 5, adding to the growing list of new, bipartisan state frameworks fortifying online protections for minors. Although HB 3431 is dubbed an AADC, its divergence from past models and unique blend of requirements that draw upon a variety of other state laws may signal that youth privacy- and safety-by-design frameworks are undergoing a paradigm shift away from “AADCs” and into a new model for online protections entirely. South Carolina’s novel approach evolves the online design code schema from approaches seen in other jurisdictions through its focus on both privacy and safety risks, the way covered services must address those risks, the kinds of safeguards online services should provide to users and minors, enforcement priorities, and navigating constitutional pitfalls.

For compliance teams, the need to unpack the law’s unique provisions is urgent: the law took effect immediately upon the Governor’s approval, meaning its requirements are already binding. Further complicating the timing of compliance considerations, NetChoice filed a lawsuit on February 9 challenging the constitutionality of the Act on First Amendment and Commerce Clause grounds. NetChoice has requested a preliminary injunction to block enforcement of the law as litigation progresses. However, with an unclear litigation timeline, several newly effective legal obligations, and significant enforcement provisions carrying personal liability for employees, compliance teams may be stuck between two high-stakes options: (1) a risk of insufficient action and consequent liability if they move slowly toward compliance while monitoring litigation outcomes; or (2) a risk of sunk compliance costs, which could have been invested in other important compliance and trust and safety operations, if they invest heavily in compliance now and the law is later overturned.

This blog post covers a few key takeaways, including:

Please see our comparison chart for a full side-by-side analysis of how South Carolina’s approach compares against other state law protections for minors online.

Scope

South Carolina’s Act applies to any legal entity that owns, operates, controls, or provides an online service reasonably likely to be accessed by minors. Whereas prior comparable state laws typically limited the scope to for-profit entities, South Carolina seemingly extends application to non-profit and other non-commercial entities. This approach mirrors the legal entity framing adopted in Vermont’s and Nebraska’s AADCs, though those laws include narrower applicability thresholds. With respect to applicability threshold criteria, South Carolina aligns with the model set out in Maryland’s AADC, applying to entities that meet any one of the following: (1) $25 million or more in gross annual revenue; (2) the buying, selling, receiving, or sharing of personal data of more than 50,000 individuals; or (3) deriving more than 50 percent of annual revenue from the sale or sharing of personal data. 
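The any-one-of applicability test described above can be modeled as a rough sketch; the field names below are illustrative, not statutory language:

```python
from dataclasses import dataclass

@dataclass
class Entity:
    gross_annual_revenue_usd: float
    individuals_whose_data_is_traded: int    # bought, sold, received, or shared
    share_of_revenue_from_data_sales: float  # 0.0 - 1.0

def meets_sc_threshold(e: Entity) -> bool:
    """An entity is in scope if it meets ANY ONE of the three criteria."""
    return (
        e.gross_annual_revenue_usd >= 25_000_000
        or e.individuals_whose_data_is_traded > 50_000
        or e.share_of_revenue_from_data_sales > 0.50
    )

# A small entity that trades the data of 60,000 individuals is in scope
# even though it misses the other two thresholds:
print(meets_sc_threshold(Entity(1_000_000, 60_000, 0.0)))  # True
```

Because the criteria are disjunctive, a low-revenue non-profit can still be covered solely on the basis of the volume of personal data it handles.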

An Evolving Approach to Design Protections & Enforcement

Duty of Care

Similar to Vermont’s AADC and state comprehensive privacy laws that incorporate heightened protections for minors, such as Connecticut’s and Colorado’s, South Carolina imposes a duty of care on covered online services. Significantly, South Carolina’s duty requires entities to exercise reasonable care to prevent heightened risks of harm to minors, including compulsive use, identity theft, discrimination, and severe psychological harm, among others. The obligation to “prevent” harms to minors diverges sharply from comparable duties of care, which only require entities to “mitigate” risks–seemingly placing a higher bar on entities’ compliance efforts compared to other online protection frameworks. Moreover, South Carolina includes two disclaimers regarding the application of the duty of care: (1) clarifying that “harm” is limited to circumstances not precluded by Section 230; and (2) clarifying that entities are not required to prevent minors from intentionally “searching for content related to the mitigation of the described harms.”

Mandatory Tools & Default Settings

South Carolina takes a Nebraska AADC-style approach to requiring comprehensive tools and protective default settings for minors–but with a twist. Notably, South Carolina requires covered services to provide extensive tools to all users of an online service, such as tools for disabling unnecessary design features, opting-out of personalized recommendation systems (except for tailoring based on explicit preferences), and limiting the amount of time spent on a service or platform. For minors, the Act requires covered services to enable all tools by default, functionally achieving the same goals as high default settings requirements in other frameworks, like Vermont’s and Maryland’s AADCs. Additionally, South Carolina includes prescriptive requirements for the kinds of parental tools businesses must build and provide for parents to monitor and further limit minors’ use of online services–seemingly inspired by the parental tools obligations proposed by the KOSA. Importantly, businesses in scope of several minor online protection frameworks should pay close attention to South Carolina’s expansive mandatory tools and default settings requirements–and the range of users for which these tools must be available–when assessing compliance impacts. 

Processing Restrictions

South Carolina’s new law includes a common component of other minor online protection frameworks: normative processing restrictions limiting the way covered online services can collect and use minors’ data, including restrictions on profiling and geolocation data tracking and a prohibition on targeted advertising. Notably, similar to Nebraska’s AADC, South Carolina also broadly prohibits covered entities’ use of dark patterns on a service. This goes far beyond many other privacy laws that instead prohibit dark patterns only insofar as they are used in obtaining consent or collecting personal data. Although the law as a whole is subject to Attorney General enforcement, South Carolina’s Act singles out the dark patterns prohibition as a violation of the South Carolina Unfair Trade Practices Act, which includes a private right of action.

Third Party Audits

One of the key issues hampering states’ implementation of AADC frameworks has been legal challenges to requirements for service providers to perform data protection impact assessments (DPIAs). The DPIA rules typically require covered online services to assess the likelihood of harm to children. For example, California’s AADC has been subject to litigation because, among other things, it included a requirement for businesses to assess and limit the exposure of children to “potentially” harmful content. The Ninth Circuit held that assessments that require a company to opine on content-based harms are constitutionally problematic, but it did not hold that DPIAs are entirely unconstitutional–yet the litigation caused some proponents of AADC-style laws to explore alternatives to DPIAs.

Within this dynamic constitutional landscape, South Carolina shifts away from requiring covered entities to internally assess harms through DPIAs and instead requires covered entities to undergo annual third-party audits and publicly disclose the reports. Those audits must include detailed information on various aspects of the online service as it pertains to minors, including the purpose of the online service, for what purpose the online service uses minors’ personal and sensitive data, whether the service uses “covered design features” (e.g., infinite scroll, autoplay, notifications/alerts, appearance-altering filters, etc.), and a description of algorithms (an undefined term) used by the covered online service. This shift towards public disclosure of service assessment information may cause notable compliance difficulties and raise trade secret questions for covered online services, although it is unclear whether this unique ‘third-party audits’ approach addresses the underlying constitutional concerns highlighted in state AADC litigation. 

Enforcement

South Carolina authorizes the Attorney General to enforce the Act, allowing for treble financial damages for violations. Most significantly, South Carolina also authorizes the Attorney General to hold officers and employees personally liable for “willful and wanton” violations–a novel and severe enforcement mechanism not employed in comparable frameworks. However, personal liability for employees and officers is not entirely unheard of in the broader consumer protection and digital services enforcement context. For example, in an aggressive enforcement approach advanced by the Federal Trade Commission (FTC) under Chair Lina Khan, the agency pursued personal liability against senior executives at a public company for violations of the FTC Act. In a more recent example, the Kentucky Attorney General filed a consumer protection lawsuit against Character.AI and its founders alleging the company knowingly harmed minors in the operation of its companion chatbot product, exposing minors to “sexual conduct, exploitation, and substance abuse.” 

Conclusion

By adopting its novel approach, South Carolina adds to a growing state-level experiment that seeks to establish obligations to address and disclose risks of harm in online services and afford greater protections for minors within constitutional constraints. South Carolina’s novel blend of different state-level models, unique take on service assessments, and unusual enforcement approach may signal a broader fragmentation of online youth protection frameworks into three increasingly defined models: (1) data management-oriented heightened protections for minors embedded in state privacy laws; (2) age-appropriate design codes that impose a fiduciary duty to act in children’s best interests, require age-appropriate design, and mandate DPIAs to assess foreseeable harms; and (3) a “protective design” model exemplified by South Carolina, which synthesizes elements observed in the first two while uniquely integrating privacy and safety obligations. It remains to be seen how the emerging protective design model may influence ongoing state legislative efforts, impact business compliance efforts, and measure up against potential constitutional scrutiny.

From Chatbot to Checkout: Who Pays When Transactional Agents Play?

Disclaimer: Please note that nothing below should be construed as legal advice. 

If 2025 was the year of agentic systems, 2026 may be the year these technologies reshape e-commerce. Agentic AI systems are defined by the ability to complete more complex, multi-step tasks, and exhibit greater autonomy over how to achieve user goals. As these systems have advanced, technology providers have been exploring the nexus between AI technologies and online commerce, with many launching purchase features and partnering with established retailers to offer shopping experiences within generative AI platforms. In doing so, these companies have also relied on developments in foundational protocols (e.g., Google’s Agent Payment Protocol) that seek to enable agentic systems to make purchases on a person’s behalf (“transactional agents”). But LLM-based systems like transactional agents can make mistakes, which raises questions about what laws apply to transactional agents and who is responsible when these systems make errors. 

This blog post examines the emerging ecosystem of transactional agents, including examples of companies that have introduced these technologies and the protocols underpinning them. Existing US laws governing online transactions, such as the Uniform Electronic Transactions Act (UETA), apply to agentic commerce, including in situations where these systems make errors. Transactional agent providers are complying with these laws and otherwise managing risks through various means, including contractual terms, error prevention features, and action logs. 

How is the Transactional Agent Ecosystem Evolving? 

Several AI and technology companies have unveiled transactional agents over the past year that enable consumers to purchase goods within their interfaces rather than having to visit individual merchants’ websites. For example, OpenAI added native checkout features into its LLM-based chatbot that hundreds of millions of consumers already use, and Perplexity introduced similar features for paid users that can find products and store payment information to enable purchases. Amazon has also released a “Buy For Me” feature, which involves an agentic system that sends payment and shipping address information to third party merchants so that Amazon’s users can buy these merchants’ goods on Amazon’s website. 

Many of these same companies are developing frameworks and protocols (e.g., A2A, AP2, UCP, ACP, and MCP) that can combine to facilitate transactional agents across e-commerce. At the same time, merchants are modifying their experiences to ensure their goods can reach transactional agent users.  
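The details of these protocols differ and continue to evolve, but the core idea many of them share, a scoped authorization that an agent can present to a merchant on a user’s behalf, can be sketched briefly. The Python below is purely illustrative: every field and function name is invented for this post and does not reflect the schema of AP2 or any other real protocol.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class PurchaseMandate:
    """Hypothetical authorization an agent could present to a merchant.
    Field names are illustrative only, not taken from any real protocol."""
    user_id: str
    max_amount_cents: int                # spending cap the user authorized
    expires_at: datetime                 # end of the mandate's validity window
    allowed_categories: tuple[str, ...]  # scope of goods the agent may buy

def mandate_permits(mandate: PurchaseMandate, category: str,
                    amount_cents: int, now: datetime) -> bool:
    """Merchant-side check: is this purchase within the mandate's scope?"""
    return (
        now < mandate.expires_at
        and amount_cents <= mandate.max_amount_cents
        and category in mandate.allowed_categories
    )
```

In practice, such a mandate would typically be cryptographically signed so that a merchant can verify the user actually granted the authorization; the sketch omits that layer for brevity.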

Application of Existing Laws (such as the Uniform Electronic Transactions Act)

As consumer-facing tools for agentic commerce develop, questions will arise about who is responsible when transactional agents inevitably make mistakes. Are users responsible for erroneous purchases that a transactional agent may make on their behalf? In these cases, long-standing statutes governing electronic transactions apply. The Uniform Electronic Transactions Act (UETA), a model law adopted by 49 out of 50 U.S. states, sets forth rules governing the validity of contracts undertaken by electronic means, and suggests that consumer transactions conducted by an agentic system can be considered valid transactions.

First, the UETA has provisions that apply to “electronic agents,” defined as “a computer program or an electronic or other automated means used independently to initiate an action or respond to electronic records or performances in whole or in part, without review or action by an individual.” This is a broad, technology-neutral definition that is not reserved for AI. It encompasses a range of machine-to-machine and human-to-machine technologies, such as automated supply chain procurement and signing up for subscriptions online. The latest transactional agents can take an increasing set of actions on a user’s behalf without oversight, such as finding and executing purchases, so these technologies could potentially qualify as electronic agents under the UETA.

This means that transactional agents can probably enter into binding transactions on a person’s behalf. Section 14 of the UETA indicates that this can occur even without human review when two entities use agentic systems to transact on their behalf (e.g., an individual’s agent that buys goods on their behalf interacting with an e-commerce platform’s agent that can negotiate order quantity and price). At a time when agentic systems that represent distinct parties and interact with each other are edging closer to reality, these systems could bind the user to contracts undertaken on their behalf despite the lack of human oversight. However, a significant caveat is that the UETA also says that individuals may avoid transactions entered into by transactional agents if they were not given “an opportunity for the prevention or correction of [an] error . . . .” This is true even if the user made the error.

Finally, even if an agentic transaction is deemed valid and no mistake is made, other legal protections may apply in the event of consumer harm. For example, a transactional agent provider that requires third parties to pay for their goods to be listed by the agent, or that gives preference to its own goods, may violate antitrust and consumer protection laws. There is also a growing debate over the application of other longstanding common law protections, such as fiduciary duties and “agency law.”

What Risk Management Steps are Transactional Agent Providers Taking to Manage Responsibility?

Managing responsibility for transactional agents can take varied forms, including contractual disclaimers and limitations, protocols that signal to third parties an agentic system’s authorization to act on a user’s behalf, and design decisions that reduce the likelihood of transactions being voided when errors occur (e.g., confirmation prompts that require users to authorize purchases).
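To make these measures concrete, the following sketch (in Python, with all names hypothetical) gates each purchase behind a spending cap and an explicit user confirmation, and appends every action to a log, mirroring the error-prevention and record-keeping measures described above.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PurchaseRequest:
    item: str
    price_cents: int

@dataclass
class TransactionalAgent:
    """Illustrative wrapper: purchases require explicit user confirmation
    and must stay under a spending cap; every action is recorded in a log."""
    spending_cap_cents: int
    action_log: list = field(default_factory=list)

    def _log(self, event: str, request: PurchaseRequest) -> None:
        self.action_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "event": event,
            "item": request.item,
            "price_cents": request.price_cents,
        })

    def attempt_purchase(self, request: PurchaseRequest,
                         user_confirmed: bool) -> bool:
        # Error prevention: refuse anything over the user's spending cap.
        if request.price_cents > self.spending_cap_cents:
            self._log("blocked_over_cap", request)
            return False
        # Confirmation prompt: the user must explicitly authorize the
        # purchase before the agent executes it.
        if not user_confirmed:
            self._log("awaiting_confirmation", request)
            return False
        self._log("purchase_executed", request)
        return True
```

The confirmation step matters legally as well as practically: it is the kind of “opportunity for the prevention or correction of an error” that UETA-style rules contemplate, while the action log provides evidence of what the agent did and when.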

Conclusion

Organizations are increasingly rolling out features that enable agentic systems to buy goods and services. These current and near-future technologies introduce uncertainty about who is responsible for agentic system transactions, including when mistakes are made, which is leading providers to integrate error prevention features, contractual disclaimers, and other legal and technical measures to manage and allocate risks. 

Looking ahead, there will be many more privacy, data governance, and risk management challenges to address. As these technologies become more autonomous, organizations must decide to what extent transactional agents should proactively infer consumer preferences and adapt their actions based on the impact on a user’s financial wellbeing. Publishers and retailers also face the challenge of deciding how to let transactional agents interact with their websites, an issue that has fed tensions over who owns the direct consumer relationship in an agentic world (e.g., is it online marketplaces and information aggregators, or the agentic system’s provider?). Even with existing laws applicable to transactional agents, the evolution of these technologies (e.g., less human oversight) and increased investment in them will create new legal challenges for practitioners to address.

FPF Retrospective: U.S. Privacy Enforcement in 2025

The U.S. privacy law landscape continues to mature as new laws go into effect, cure periods expire, and regulators interpret the law through enforcement actions and guidance. State attorneys general and the Federal Trade Commission act as the country’s de facto privacy regulators, regularly bringing enforcement actions under legal authorities both old and new. For privacy compliance programs, this steady stream of regulatory activity both clarifies existing responsibilities and raises new questions and obligations. FPF’s U.S. Policy team has compiled a retrospective looking back at enforcement activity in 2025 and outlining key trends and insights.

Looking at both substantive areas of focus in enforcement actions and the level of activity by different enforcers, the retrospective identified four notable trends in 2025: 

  1. California and Texas Lead Growing Public Enforcement of Comprehensive Privacy Laws: Comprehensive privacy laws may finally be moving from a period of legislative activity into a new era where enforcement is shaping the laws’ meaning, as 2025 saw a significant increase in the number of public enforcement actions.
  2. States Demonstrate Increasing Concern for Kids’ and Teens’ Online Privacy and Safety: As legislators continue to consider broad youth privacy and online safety legal frameworks, enforcers too are looking at how to protect young people online. Bringing claims under existing state laws, including privacy and UDAP statutes, regulators are paying close attention to opt-in consent requirements, protections for teenagers in addition to children under 13, and the online safety practices of social media and gaming services.
  3. U.S. Regulators Go Full Speed Ahead on Location and Driving Data Enforcement: Building on recent enforcement actions concerning data brokerage and location privacy, federal and state enforcers have expanded their consumer protection enforcement strategy to focus also on first-party data collectors and the collection of “driving data.”
  4. FTC Prioritizes Enforcement on Harms to Kids and Teens, and Deceptive AI Marketing, Under New Administration: The FTC transitioned leadership in 2025, moving into a new era under Chair Andrew Ferguson that included a shift toward targeted enforcement activity focused on ensuring children’s and teens’ privacy and safety, and “promoting innovation” by addressing deceptive claims about the capabilities of AI-enabled products and services.

There are several practical takeaways that compliance teams can draw from these trends: obtaining required consent prior to processing sensitive data, including through oversight of vendors’ consent practices, identification of known children, and awareness of laws with broader consent requirements; ensuring that consumer controls and rights mechanisms are operational; avoiding design choices that could mislead consumers; considering if and when to deploy age assurance technologies and how to do so in an effective and privacy-protective manner; and avoiding making deceptive claims about AI products.

2026: A Year at the Crossroads for Global Data Protection and Privacy

There are three forces twirling and swirling to create a perfect storm for global data protection and privacy this year: the surprise reopening of the General Data Protection Regulation (GDPR), which will largely play out in Brussels over the coming months; the complexity and velocity of AI developments; and the push and pull over the field by increasingly substantial adjacent digital and tech regulations.

All of this will play out with geopolitics taking center stage. At the confluence of some of these developments, two major areas of focus will be the protection of children online and cross-border data transfers, together with the other side of that coin, data localization, in the broader context of digital sovereignty.

1. The GDPR reform, with an eye on global ripple effects

The gradual reopening of the GDPR last year came as a surprise, without much public debate or consultation, if any. The Regulation had passed its periodic evaluation in the summer of 2024 with a recommendation for more guidance, better implementation suited to SMEs, and harmonization across the EU, as opposed to re-opening or amending it. Moreover, exactly one year ago, in January 2025, at the CPDP-Data Protection Day Conference in Brussels, not one but two representatives of the European Commission, in two different panels (one of which I moderated), were very clear that the Commission had no intention to re-open the GDPR.

Despite this, a minor intervention was first proposed in May to tweak the size of entities obliged to keep a register of processing activities, through one of the Commission’s simplification Omnibus packages. But this proved to just crack the door open for more significant amendments to the GDPR, proposed later under the broad umbrella of competitiveness and regulatory simplification that the Commission has been pursuing emphatically. Toward the end of the year, in November 2025, major interventions were introduced within another simplification Omnibus dedicated to digital regulation.

There are two significant policy shifts in the GDPR Omnibus proposal that should be expected to reverberate in data protection laws around the world over the next few years. First, it entertains the end of technology-neutral data protection law. AI, the technology, is imprinted all over the proposed amendments, from inconspicuous ones, like the new definition proposed for “scientific research”, to the express mention of “AI systems” in new rules created to facilitate their “training and operations”, including in relation to allowing the use of sensitive data and to recognizing a specific legitimate interest for processing personal data for this purpose.

The second policy shift, perhaps the most consequential for the rest of the data protection world, is the narrowing of what constitutes “personal data” by adding several sentences to the existing definition to transpose what resembles the relative approach to de-identification confirmed by the Court of Justice of the EU (CJEU) in the SRB case this September. To a certain degree, the proposed changes bring the definition back to pre-GDPR days, when some data protection authorities were indeed applying a relative approach in their regulatory activity.

Technically, the new definition adds that a holder of key-coded data, or of other information about an identifiable person, who does not have means reasonably likely to be used to identify that person does not process personal data, even if “potential subsequent recipients” can identify the person to whom the data relates. Processing of this data, including publishing it or sharing it with such recipients, would thus fall outside the scope of the GDPR and any accountability obligations that follow from it.

If the proposed language ends up in the GDPR, this would likely mark a narrowing of the law’s scope of application, leaving little room for supervisory authorities to apply the relative approach on a case-by-case basis following the test the CJEU proposed in SRB. This is particularly notable considering that the GDPR has successfully exported the current philosophy, and much of the wording, of the broad definition of personal data (particularly its “identifiability” component) to most data protection laws adopted or updated around the world since 2016, from California to Brazil to China to India.

The ripple effects around the world of such significant modifications of the GDPR would not be felt immediately, but in the years to come. Hence, the legislative process unfolding this year in Brussels on the GDPR Omnibus should be followed closely. 

2. The Complexity and Velocity of AI developments: Shifting from regulating data to regulating models?

There is a lot to unpack here, almost too much. And this is at the core of why AI developments have an outsized impact on data protection. There is a lot of complexity in understanding the data flows and processes underpinning the lifecycle of the various AI technologies, making it very difficult to untangle the ways in which data protection applies to them. On top of that, the speed with which AI evolves is staggering. That said, there are a couple of particularly interesting issues at the intersection of AI and data protection that must be followed this year, with an eye toward the following years too.

One of them is the intriguing question of whether AI models are the new “data” in data protection. Some of you certainly remember the big debate of 2024: do Large Language Models (LLMs) process personal data within the model? While it was largely accepted that personal data is processed during the training of LLMs and may be processed as the output of queries, it was not at all clear whether any of the informational elements of AI models post-training, like tokens, vectors, embeddings, or weights, can amount, by themselves or in some combination, to personal data. The question was supposed to be settled by an Opinion of the European Data Protection Board (EDPB), solicited by the Irish Data Protection Commission and published in December 2024.

Instead, the Opinion painted a convoluted regulatory answer by offering that “AI models trained on personal data cannot, in all cases, be considered anonymous”. The EDPB then dedicated most of the Opinion to laying out criteria that can help assess whether AI models are anonymous or not. While most, if not all, of the commentary around the Opinion focuses on the merits of these criteria, one should perhaps first stop and reflect on the framework of the analysis, namely assessing the nature of the model itself rather than the nature of the bits and pieces of information within the model.

The EDPB did not offer any exploration of what non-anonymous (so, then, personal?) AI models might mean for the broader application of data protection law, such as data subject rights. But with it, the EDPB may have, intentionally or not, started a paradigm shift for data protection in the context of AI, signaling a possible move from the regulation of personal data items to the regulation of “personal” AI models. However, the Opinion seemed largely shelved throughout last year, as it has not yet appeared in any regulatory action. I would have forgotten about it myself if not for a judgment of a court in Munich in November 2025, in an IP case related to LLMs.

The German court found that song lyrics in a training dataset for an LLM were “reproducibly contained and fixed in the model weights”, with the judgment specifically describing how the models themselves are “copies” of those lyrics within the meaning of the relevant copyright law. This is because of the model’s “memorization” of the lyrics in the training data, where weights and vectors are “physical fixations” of the lyrics. The judgment is not final, with an appeal pending. But it will be interesting to see whether this perspective of focusing on the models themselves, as opposed to bits of data within them, will gain more ground this year and in the immediately following ones, pushing for legal reform, or will fizzle out due to the complexity of fitting it within current legal frameworks.

Key AI developments that might push the limits of existing data protection and privacy frameworks to a breaking point as they descend from research to market will also warrant close attention this year.

3. A concert of laws adjacent to data protection and privacy steadily becoming the digital regulation establishment 

A third force pressing on data protection for the foreseeable future is the set of novel data- and digital-adjacent regulatory efforts solidifying into a new establishment of digital regulation, with their own bureaucracies, vocabularies, and compliance infrastructure: online safety laws, including their branch of children’s online safety laws; digital markets laws; data laws focusing on data sharing or on data strategies covering personal and non-personal data; and the proliferation of AI laws, from baseline acts to sectoral or issue-specific laws (focusing on single issues, like transparency).

It may have started in the EU five years ago, but this is now a global phenomenon. Look, for instance, at Japan’s Mobile Software Competition Act, a law regulating competition in digital markets with a focus on mobile environments, which became effective in December 2025 and draws strong comparisons with the EU Digital Markets Act. Or at Vietnam’s Data Law, which became effective in July 2025 and is a comprehensive framework for the governance of digital data, both personal and non-personal, applying in parallel to the country’s new Data Protection Law.

Children’s online safety is taking up ever more space in the world of digital regulatory frameworks, and its overlap and interaction with data protection law could not be clearer than in Brazil. A comprehensive law for children’s online safety, the Digital ECA, was passed at the end of last year and is slated to be enforced by the Brazilian Data Protection Authority starting this spring.

It brings interesting innovations, like a novel standard for triggering such laws, the “likelihood of access” to a technology service or product by minors, and “age rating” for digital services, which requires providers to maintain age rating policies and continuously assess their content against them. It also provides for “online safety by design and by default” as an obligation for digital service providers. From state-level legislation in the US on “age appropriate design” to an executive decree in the UAE on “child digital safety”, the pace of adopting online safety laws for children is ramping up. What makes these laws more impactful is that the age limits of minors falling under them are growing to capture teenagers up to 16 and even 18 years old in some places, bringing vastly more service providers into scope than first-generation children’s online safety regulations.

The overlap, intersection, and even tensions of all these laws with data protection are becoming increasingly visible. See, for instance, the recent Russmedia judgment of the CJEU, which established that an online marketplace is a joint controller under the GDPR and has obligations in relation to sensitive personal data published by a user, with consequences for intermediary liability that are expected to reverberate in practice at the intersection of the GDPR and the Digital Services Act.

The compliance infrastructure of this new generation of digital laws, and its need for resources (staff, budget), is breaking into an already stretched field of “privacy programs”, “privacy professionals”, and regulators, with the visible risk of shifting attention away from, and diluting, the meaningful measures and controls stemming from privacy and data protection laws.

4. Breaking the fourth wall: Geopolitics

While all these developments play out, it is particularly important to be aware that they unfold on a geopolitical stage that is unpredictable and constantly shifting, resulting in various notions of “digital sovereignty” taking root from Europe to Africa and elsewhere around the world. From a data protection perspective, and in the absence of a comprehensive understanding of what “digital sovereignty” might mean, this could translate into a realignment of international data transfer rules through more data localization measures, more data transfer arrangements following trade agreements, or more regional free data flow arrangements among aligned countries.

Ten years after the GDPR was adopted as a modern upgrade of 1980s-style data protection laws for the online era, successfully promoting fair information practice principles, data subject rights and the “privacy profession” around the world, data protection and privacy are at an inflection point: either hold the line and evolve to meet these challenges, or melt away in a sea of new digital laws and technological developments.