Red Lines under the EU AI Act: Understanding Manipulative Techniques and the Exploitation of Vulnerabilities
Blog 2 | Red Lines under the EU AI Act Series
This blog is the second of a series that explores prohibited AI practices under the EU AI Act and their interplay with existing EU law. You can read the first episode here and find the whole series here.
Harmful manipulation and deception through AI systems, and the exploitation of certain human vulnerabilities, are the first entries on the list of prohibited practices under Article 5 of the EU AI Act. It is apparent that the underlying goal of these provisions is to ensure that individuals maintain their ability to make autonomous decisions. This is especially important when considering that one of the goals of the AI Act is “to promote the uptake of human-centric and trustworthy AI”, while ensuring respect for safety, health and fundamental rights (see Recital 1, AI Act).
These first two prohibited practices listed in Article 5(1) specifically concern AI systems that could undermine individual autonomy and well-being through:
Deploying subliminal, purposefully manipulative, or deceptive techniques that materially distort the behavior of natural persons or group(s) of persons and cause, or are reasonably likely to cause, significant harm (Article 5(1)(a) AI Act).
Exploiting vulnerabilities due to age, disability, or a specific socio-economic situation (Article 5(1)(b) AI Act).
It is notable, though, that manipulative and deceptive practices based on the processing of personal data, and those that specifically occur through online platforms, are already strictly regulated by the EU’s General Data Protection Regulation (GDPR) and Digital Services Act (DSA). Specifically, the GDPR intervenes through obligations like ensuring fairness (Article 5(1)(a) GDPR) and data protection by design (Article 25 GDPR) for all processing of personal data, whether or not that processing occurs through AI, while the DSA prohibits providers of online platforms from designing, organising or operating their online interfaces in a way that deceives or manipulates their users (Article 25 DSA). While the relationship between the DSA obligations and those in the GDPR related to manipulative design is clear, with the DSA only being applicable where the GDPR does not apply, their relationship with the AI Act prohibitions on manipulative techniques and exploiting vulnerabilities requires further guidelines and clarification.
The Guidelines published by the European Commission to support compliance with Article 5 AI Act highlight that the two prohibitions aim to protect individuals from being reduced to “mere tools for achieving certain ends”, and to protect those who are most vulnerable or susceptible to manipulation and exploitation. Significantly, the Guidelines analyze these two prohibitions together, making it obvious that there is a nexus between them. In this sense, according to the Guidelines, they are both designed to support and protect the right to human dignity, as enshrined in the EU Charter of Fundamental Rights.
This second blog in the “Red Lines” series provides an analysis of the scope and content of the Article 5(1)(a) prohibition in Section 2, focusing on the definitions of subliminal, manipulative, and deceptive techniques. Section 3 goes on to explore the notion of vulnerability contained in the Article 5(1)(b) prohibition and in the Guidelines, while Section 4 notes the possible interplay between the two prohibitions and takes a broader view by highlighting their interplay with other EU laws, including the UCPD, GDPR, and DSA, before the conclusions in Section 5 note the following key takeaways:
There is a high threshold, including some highly subjective elements, for fulfilling the cumulative conditions required for falling under the prohibitions related to manipulative techniques and the exploitation of vulnerabilities.
The prohibition of AI practices that include manipulative techniques applies even when there is no intention of manipulation.
Compliance with other laws, including the GDPR and DSA, in relation to these two prohibitions, can help demonstrate compliance with the AI Act.
2. Understanding harmful manipulation and deception as a prohibited practice under the AI Act
Article 5(1)(a) AI Act targets those cases in which AI practices subtly manipulate human action without the individual noticing. The final text of the AI Act for this provision underwent several changes from the European Commission’s initial proposal, broadening its scope and clarifying some elements.
Following amendments submitted by the European Parliament, the final text added manipulative and deceptive techniques to the initial “subliminal techniques”, and broadened the scope of the ban to cover harmful effects not only on individuals but also on groups, in order to prevent discriminatory effects. Another modification of the initial proposal clarified that the prohibition is not limited to cases where the systems are intended to modify behaviour, but also covers cases where the modification of behaviour that led to a significant harm is a mere “effect”, even when it was not the intended objective of the AI practice in question.
2.1. Defining subliminal, purposefully manipulative or deceptive techniques
The Guidelines list four cumulative conditions to be fulfilled in order for this prohibition to be applicable, even though, in their analysis, they also include a fifth one.
The practice must constitute the ‘placing on the market’, the ‘putting into service’, or the ‘use’ of an AI system.
The AI system must deploy subliminal (beyond a person’s consciousness), purposefully manipulative, or deceptive techniques.
The techniques deployed by the AI system must have the objective or the effect of materially distorting the behavior of a person or a group of persons. The distortion must appreciably impair their ability to make an informed decision, resulting in a decision that the person or the group of persons would not have otherwise made.
The distorted behavior must cause or be reasonably likely to cause significant harm to that person, another person, or a group of persons.
The four conditions must be met cumulatively for the prohibition to be applicable. Additionally, according to the Guidelines, there must be a plausible causal link between the techniques used, the significant change in the person’s behavior, and the significant harm that resulted or is likely to result from that behavior. While the causal link is not listed among the four conditions, it is analyzed further down in the Guidelines as a self-standing, additional condition to be met, and it should be considered as the fifth point on this list.
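To illustrate the cumulative nature of this test, the minimal sketch below models the five elements as boolean flags and checks that all of them hold. The class and field names are our own illustrative shorthand rather than terms defined in the AI Act or the Guidelines, and an actual assessment remains a fact-specific legal analysis, not a checklist.

```python
from dataclasses import dataclass

@dataclass
class Article5aAssessment:
    """Illustrative (non-legal) shorthand for the cumulative conditions the
    Guidelines identify for the Article 5(1)(a) AI Act prohibition."""
    placed_on_market_put_into_service_or_used: bool
    deploys_subliminal_manipulative_or_deceptive_technique: bool
    materially_distorts_behaviour: bool
    causes_or_is_likely_to_cause_significant_harm: bool
    plausible_causal_link: bool  # the additional "fifth" condition analysed in the Guidelines

    def prohibition_potentially_triggered(self) -> bool:
        # All elements must hold cumulatively; if any one is missing, the
        # Article 5(1)(a) prohibition does not apply (though other rules,
        # e.g. the GDPR or DSA, may still be relevant).
        return all([
            self.placed_on_market_put_into_service_or_used,
            self.deploys_subliminal_manipulative_or_deceptive_technique,
            self.materially_distorts_behaviour,
            self.causes_or_is_likely_to_cause_significant_harm,
            self.plausible_causal_link,
        ])
```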
The prohibition applies to both providers and deployers of AI systems who, each within their own responsibilities, have an obligation not to place on the market, put into service, or use AI systems that impair an individual’s ability to make an informed decision on the basis of subliminal, manipulative or deceptive techniques.
The Guidelines note that while the AI Act does not directly define “subliminal techniques”, the text of Article 5(1)(a) and Recital 29 imply that such techniques are inherently covert in that they operate beyond the threshold of conscious awareness, capable of influencing decisions by bypassing a person’s rational defences. However, the Recital also explains that the prohibition covers even those cases where the person is aware that the techniques used are subliminal, but cannot resist their effect. The Guidelines clarify that the prohibition on the use of subliminal techniques is not limited to practices that influence decision-making only; it also covers techniques that influence a person’s value- and opinion-formation, a criterion that seems highly subjective and might prove difficult to apply in practice. A relevant example could be an AI system facilitating deepfakes on matters of public interest that are spread on platforms without appropriate labeling and in violation of the applicable transparency obligations (Article 50 AI Act); such use could be considered prohibited.
Subliminal techniques can use audio, visual, or tactile stimuli that are too brief or subtle to be noticed. The following techniques are among several suggested in the Guidelines (p. 20) as potentially triggering a ban, if the other conditions are also met:
Visual Subliminal Messages: an AI system may embed images or text briefly during video playback that are technically visible but flashed too quickly for the conscious mind to register, while still being capable of influencing attitudes or behaviours.
Auditory Subliminal Messages: an AI system may deploy sounds or verbal messages at low volumes or masked by other sounds, influencing the listener without conscious awareness. These sounds are still technically within the range of hearing, but are not consciously noticed by the listener due to their subtlety or masking by other audio.
Embedded Images: an AI system may hide images within other visual content that are not consciously perceived, but may still be processed by the brain and influence behaviour.
The Guidelines, referring to Recital 29 AI Act, specify that the development of new AI technologies, like neurotechnology, brain-computer interfaces, virtual reality, or even “dream-hacking” increases the potential for sophisticated subliminal manipulation and its ability to influence human behavior subconsciously.
While “purposefully manipulative techniques” are similarly not defined by the AI Act, the Guidelines fill this gap by noting that such techniques exploit cognitive biases, psychological vulnerabilities, or situational factors that make individuals more susceptible to influence. This provision covers cases where individuals are aware of the presence of a manipulative technique but cannot resist its effect and, as a result, are pushed into decisions or behaviours that they would not have otherwise made (Recital 29).
Recital 29 of the AI Act also refers to techniques that deceive or nudge individuals “in a way that subverts and impairs their autonomy, decision-making and free choices.” A direct comparison can be made with the DSA which, inter alia, prohibits providers of online platforms from deceiving or nudging recipients of their service and from distorting or impairing their autonomy, decision-making and free choice (Article 25 and Recital 67 DSA).
The manipulative capability of the technique is a key factor in determining its effect. Indeed, the Guidelines clarify that an AI system could manipulate individuals without the provider or deployer intending to cause harm. However, the provision would still apply, unless the result is incidental and appropriate preventive and mitigating measures were taken. This is consistent with the overall logic and scope of the AI Act’s prohibitions, as explored in Blog 1 of this series, in which deployers have a responsibility to reasonably foresee harms that may arise from the misuse of an AI system.
Deceptive techniques are techniques that subvert or impair a person’s autonomy, decision-making, or free choice in ways of which the person is not consciously aware or, where they are aware, can still be deceived or cannot control or resist them. In the case of deepfakes, for example, Article 50 of the AI Act requires that the deployer disclose their nature. If this transparency is absent and the deepfake is used to deceive individuals, it could fall under prohibited uses. Notably, according to the Guidelines, this provision applies even if the deception occurs without the intent of the provider or deployer. However, the Guidelines also clarify that a generative AI system that produces misleading information due to hallucinations—provided the provider has communicated this possibility—does not constitute a prohibited practice.
2.2. To fall under the AI Act’s prohibited practices, manipulative techniques must have the “objective or effect of materially distorting the behavior of a person or a group of persons”
The subliminal, manipulative and deceptive techniques must have the objective or the effect of materially distorting the behavior of a person or a group of persons. Material distortion involves a degree of coercion, manipulation, or deception that goes beyond lawful persuasion. The Guidelines note that material distortion implies a substantial impact on a person’s behavior, such that their decision-making and free choice are undermined, rather than a minor influence.
When interpreting “material distortion of behaviour” under Directive 2005/29/EC (the Unfair Commercial Practices Directive or ‘UCPD’), it is sufficient to demonstrate that a commercial practice is likely to influence (i.e., is capable of influencing) an average consumer’s transactional decision; there is no need to prove that a consumer’s economic behavior has actually been distorted. However, this requires a case-by-case assessment, considering specific facts and circumstances. Additionally, the average consumer’s perspective may not be helpful in situations where an AI system delivers highly personalized messages designed to manipulate individual behavior.
The AI Act adopts a similar understanding of “material distortion” as the UCPD, where the prohibition applies even if the material distortion of a person’s behavior occurs without the intent of the provider or deployer. The text specifies that the prohibition covers not only cases in which behavior modification is the object of the system (like in the original text of the European Commission’s proposal) but also those in which it is the mere “effect”. This change, as introduced into the final text, amplifies protection against the possible distorting effects of manipulative AI systems.
2.3. The subliminal, manipulative and deceptive techniques must be “reasonably likely to cause significant harm”
The Guidelines define harm under three broad categories:
Physical harm: any injury or damage to a person’s life or health, and material damage to property (e.g., an AI chatbot promoting self-harm to users);
Psychological harm: harm that exploits cognitive and emotional vulnerabilities, encompassing adverse effects on a person’s mental health and psychological and emotional well-being;
Financial and economic harm: financial loss, exclusion, and economic instability (an addition by the European Parliament during the AI Act negotiations).
However, the harm must be significant for the prohibition to apply. The determination of ‘significant harm’ is fact-specific, requiring a case-by-case assessment of the circumstances; still, the effects should always be material and significant in each case. According to the Guidelines, the assessment of the significance of the harm takes into consideration several factors:
The severity of the harm;
Context and cumulative effects;
Scale and intensity;
Affected persons’ vulnerability;
Duration and reversibility.
When assessing harm, the Guidelines suggest that a comprehensive approach should be taken, which considers both the immediate and direct harms associated with AI systems that deploy subliminal, deceptive, or manipulative techniques, and their cumulative and longer-term effects.
The last requirement for identifying a prohibited practice is determining the likelihood of a causal link between the manipulative technique and the distorted behavior. In that regard, to avoid falling into the category of prohibited practices, the Guidelines suggest that providers and deployers take appropriate measures such as:
Transparency and individual autonomy: integrate appropriate user control and safeguard measures to ensure that the system is not deceptive and operates within the boundaries of lawful persuasion;
Compliance with relevant legislation: this can indicate that the practice does not constitute a purposefully manipulative or deceptive practice;
State-of-the-art practices and industry standards: which can help preempt and mitigate significant unintended harms.
It is worth recalling that although the concept of significant harm is very similar to that of “significant effect” encountered within Article 22 GDPR on automated decision-making (ADM), they do not overlap perfectly, with the latter providing for a broader interpretation than the former (see here FPF’s Report on ADM case law). For example, profiling through ADM for political targeting could have a significant effect on citizens but not result in significant harm.
Not all forms of manipulation fall within the AI Act’s scope. Many persuasive techniques commonly used in advertising are legitimate because they operate transparently and respect individual autonomy. The Guidelines suggest that if an AI system appeals to emotions but remains transparent and provides accurate information, it falls outside the law’s scope.
Additionally, compliance with regulations like the GDPR helps providers and deployers demonstrate that transparency, fairness, and respect for individual rights and autonomy are upheld.
Furthermore, manipulation may be acceptable in some cases if it does not result in significant harm. For instance, in an example the Guidelines provide, an online music platform might use an emotion recognition system to detect users’ moods and recommend songs that align with their emotions while avoiding excessive exposure to depressive content.
3. The exploitation of vulnerabilities, particularly those due to age, disability or socio-economic status, as prohibited AI practice
Cases in which an AI system exploits the vulnerabilities of a single person or a specific group with the objective of distorting their behavior are designated as prohibited AI practices under Article 5(1)(b) AI Act.
There are four cumulative conditions to be fulfilled for the application of Article 5(1)(b):
The practice must constitute the ‘placing on the market’, the ‘putting into service’, or the ‘use’ of an AI system.
The AI system must exploit vulnerabilities due to age, disability, or socio-economic situations.
The exploitation enabled by the AI system must have the objective or the effect of materially distorting the behavior of a person or a group of persons.
The distorted behavior must cause or be reasonably likely to cause significant harm to that person, another person, or a group of persons.
3.1. Exploitation of vulnerabilities due to age, disability, or a specific socio-economic situation
While vulnerability is not directly defined by the AI Act, according to the Guidelines, the concept covers a wide range of categories, including cognitive, emotional, physical, and other forms of susceptibility that may impact an individual’s or group’s ability to make informed decisions or influence their behavior.
However, under the AI Act’s prohibited practices, the exploitation of vulnerabilities is only relevant if it involves individuals who are vulnerable due to their age, disability, or socio-economic circumstances. It is worth noting that a reference to an individual’s socio-economic situation was included in the final text of the AI Act after the amendments submitted by the European Parliament, which led to a wider scope of the Article 5(1)(b) prohibition in the final text, as compared to the initial European Commission proposal.
Exploiting categories of vulnerabilities other than those expressly mentioned falls outside the scope of the Article 5(1)(b) prohibition. The Guidelines note that age, disability, or socio-economic vulnerabilities may, in principle, lead to a limited capacity to recognize or resist manipulative AI practices. The prohibition aims to prevent the exploitation of cognitive limitations stemming from age or health conditions. Socio-economic status can also reduce an individual’s ability to recognize deceptive practices and may intersect with other discriminatory factors, such as belonging to an ethnic, racial, or religious minority group.
The Guidelines share a number of examples of exploitation of vulnerable people based on their age that fall under the prohibition, including:
An AI-powered toy designed to interact with children that keeps them interested in interactions with the toy by encouraging them to complete increasingly risky challenges;
An AI system used to target older people with deceptive personalized offers or scams.
In the case of exploitation of vulnerable people based on disabilities, the Guidelines include the example of a therapeutic chatbot intended to provide mental health support and coping strategies to persons with cognitive disabilities, which could exploit their limited intellectual capacities to influence them to buy expensive medical products.
When the exploitation concerns vulnerable people based on their socio-economic situation, an example mentioned is an AI-predictive algorithm that could be used to target people who live in low-income post-codes with advertisements for predatory financial products.
3.2. For the Article 5(1)(b) prohibition to apply, AI practices have to materially distort behavior and be reasonably likely to cause significant harm
As previously noted, a substantial impact is required to fall within the scope, even though intention is not a necessary element, as the provision also covers the mere effect (see Section 2.2). As with the conditions for Article 5(1)(a), explored above, the AI practice has to be reasonably likely to cause significant harm. It is worth mentioning that the harms in this case may be particularly severe and multifaceted due to the increased susceptibility of the vulnerable group in question. Risks of harm that might be deemed acceptable for adults are often considered unacceptable for children and other vulnerable groups.
4. Areas of interplay between the two prohibitions, and between the prohibitions and other EU laws, including the UCPD, GDPR, and DSA
4.1. Tiered approach to the interplay between Articles 5(1)(a) and (b)
While the Article 5(1)(a) prohibition covers mainly the use of subliminal and manipulative techniques, Article 5(1)(b) is focused on the targets of AI exploitation, particularly individuals considered vulnerable due to age, disability or socio-economic circumstances.
However, there may be instances where both Articles seem applicable. In such cases, examining the predominant aspect of the exploitation is essential. If the exploitation does not explicitly relate to one of the vulnerable groups previously discussed, Article 5(1)(a) applies, taking into consideration that it also covers the exploitation of vulnerabilities in groups outside those listed in Article 5(1)(b). When the exploitation specifically targets the groups identified in Article 5(1)(b), then the practice falls under this latter prohibition.
4.2. Interplay with the GDPR obligations to ensure fairness and data protection by design
The protection of individuals from manipulative processes is also covered in various other European laws, including the GDPR. Under the GDPR, the principle of fairness—enshrined in Article 5(1)(a)—acts as an overarching safeguard ensuring that personal data is not processed in a manner that is unjustifiably detrimental, unlawfully discriminatory, unexpected, or misleading to the data subject. Information and choices about data processing must be presented in an objective and neutral way, strictly avoiding any deceptive, manipulative language or design choices. In fact, the European Data Protection Board (EDPB) explicitly identifies the use of “dark patterns” and “nudging” as violations of this fairness mandate, as these techniques subconsciously manipulate data subjects into making decisions that negatively impact the protection of their personal data.
In its Guidelines 4/2019 on Data Protection by Design and by Default, the EDPB emphasizes that controllers must incorporate fairness into their system architectures from the outset, proactively recognizing power imbalances and granting users the highest degree of autonomy over their data. This means choices to consent to or abstain from data sharing must be equally visible, and platforms cannot use invasive default options or deceptive interfaces to lock users into unfair processing.
The profound risks of such subliminal and deceptive techniques are illustrated in the EDPB’s Binding Decision 2/2023 and the Irish Data Protection Commission’s corresponding final decision regarding TikTok. In these rulings, the authorities found that TikTok infringed the principle of fairness by utilizing deceptive design patterns to nudge child users toward public-by-default settings. TikTok has challenged these findings in a case now pending at the CJEU.
Beyond social media interfaces, the EDPB has also stressed the dangers of subliminal manipulation in democratic processes. In its Statement 2/2019 on the use of personal data in political campaigns (the Cambridge Analytica case), the EDPB warns that predictive tools used to profile people’s personality traits, moods, and points of leverage pose severe societal risks. When these sophisticated profiling techniques are used to target voters with highly personalized messaging, they not only infringe upon the fundamental right to privacy but also threaten the integrity of elections, freedom of expression, and the fundamental right to think freely without being subjected to unseen psychological manipulation.
Synthesizing EDPB decisions and guidelines: to counteract these deceptive techniques across all sectors, the fairness principle mandates that controllers respect data subject autonomy, avoid exploiting user vulnerabilities, and ensure that individuals are never coerced into abandoning their privacy through unfair technological architectures.
Importantly, these GDPR rules apply without such high thresholds, making them particularly relevant even where the conditions of the AI Act prohibitions are not met. This is why clarity about the interplay of the two regulations is essential for practical implementation.
4.3. Interplay with other EU laws: UCPD, DSA
The AI Act serves to complement or expand the provisions of existing EU law. For instance, unlike EU consumer protection laws, Articles 5(1)(a) and 5(1)(b) of the AI Act extend protection beyond consumers to encompass any individual. As a result, it must be considered alongside other legal frameworks such as the UCPD, the GDPR, the DSA, the political advertising regulation, and EU product safety legislation.
For example, the UCPD aims to protect individuals from misleading information that could lead them to purchase goods they would not otherwise have bought. It also offers greater protection to vulnerable individuals, such as the elderly and children. The UCPD overlaps partly with the Article 5(1)(a) and (b) prohibitions, though not entirely. Firstly, the UCPD is a Directive and not a Regulation under EU law, and secondly, it only protects consumers (those “acting outside their trade, business, craft or profession”). In the case of the AI Act, however, the prohibitions in Article 5 serve to protect everyone, irrespective of their “consumer” or other status, such as “patient”, “student”, or “taxpayer”, to give some examples.
Furthermore, the scope of the UCPD is limited to transactional decisions, not all decisions. For example, a surgeon persuaded by manipulative or deceptive techniques deployed by an AI system to operate on a patient in one way rather than another would not be covered by the UCPD. By contrast, both sets of rules will apply in all cases where AI systems are used to subliminally manipulate the consumer’s decision-making autonomy.
Similarly, the scope of the DSA is limited to what happens on online platforms, and its rules on deceptive design in Article 25 DSA are relevant only where the GDPR is not applicable, so the cases in which both the AI Act and the DSA apply are limited.
But there are other provisions of the DSA that could be relevant at the intersection with prohibited AI practices. For example, the DSA pays special attention to the prohibition of profiling using special categories of personal data (as defined by Article 9 GDPR) on online platforms, given the possible manipulative effect of disinformation campaigns that can lead to a negative impact on public health, public security, civil discourse, political participation, and equality (Recitals 69 and 95 DSA). Therefore, if bots and deepfakes spread information online to convince vulnerable individuals (such as the elderly, children, and economically disadvantaged individuals) to purchase high-profit financial products, both the DSA and the AI Act would apply.
Compliance with these laws can help mitigate harm and reduce manipulative effects. For example, suppose that a very large online platform has conducted a risk assessment to assess systemic risk (as required by Article 34 DSA) and a data protection impact assessment (as required by Article 35 GDPR in certain circumstances). In this case, it will be easier for such a platform to identify whether any of its AI systems may fall under the prohibited uses listed in Article 5 AI Act, and adopt mitigating measures accordingly.
5. Concluding Reflections and Key Takeaways
There is a high threshold for falling under the Articles 5(1)(a) and (b) prohibitions.
To fall under the prohibitions in Article 5(1)(a) or (b), providers and deployers would have to fulfil several cumulative conditions at once. Interpreting the Guidelines, this high threshold is designed to ensure that only very specific AI use-cases and applications would fall under the scope of the prohibitions. While a high threshold of application exists, it is worth noting that the final text of the AI Act ended up being broader in scope as compared to the European Commission’s initial proposal.
It is important to note that even where this threshold is not met, EU law would still limit some manipulative and deceptive practices, through the GDPR’s provisions on fairness and data protection by design when personal data is processed, or through some of the DSA rules when very large online platforms are involved.
The prohibition applies even when there is no intention of manipulation. Even when there is no intention to influence a person’s decision, Article 5(1) could still apply, since the provision also covers the harmful effect of manipulating and exploiting individuals or groups. In order to mitigate potential risks, the provider may adopt transparency measures and implement appropriate safeguards to prevent harmful outcomes or consequences. While doing so, it is important to keep in mind that even if the use of a specific AI system does not meet the cumulative conditions of the Article 5(1) prohibitions, it is nevertheless highly likely to be considered a high-risk AI system under Article 6 AI Act.
Compliance with other laws can help demonstrate compliance with the AI Act.
The Guidelines highlight that if the AI provider shows compliance with relevant EU legislation on transparency, fairness, risk assessment, and data protection, it may contribute to demonstrating compliance with the AI Act’s requirements.
Q&A With FPF Vice President for U.S. Policy, Matthew Reisman
In a new Q&A, our Vice President for U.S. Policy, Matthew Reisman, takes a deeper look at the privacy landscape, particularly his interests in the space, what to look forward to in the U.S. privacy and AI sectors, and what is key for stakeholders to pay attention to.
What brought you into the privacy and data policy space? What drew you into working in this field/subject matter in particular?
I was drawn to working in public policy generally because I hoped to have opportunities to improve people’s lives and the communities and societies we live in–and it’s hard to think of a space where that’s more true than data and technology. In the early years of my career, I was struck by the breathtaking pace of change in technology and the ways it was transforming our lives–and yet so many of the principles to guide its development and use remained nascent. I think that remains true today. All of us who care about building responsible public policy and governance for technology have the opportunity to create the path forward together, and I find that terrifically exciting.
You have an extensive background in the data privacy landscape across a range of issues that continue to evolve. What particular sector is one to watch in the U.S.?
As a community, we have been wrestling with how to approach privacy in the context of AI systems: the challenge is to ensure that these tools benefit as broad a spectrum of people, organizations, and society as possible while protecting the rights, freedom, and dignity of individuals. Even as we continue to work through foundational concepts for privacy in the age of AI, it is important that we anticipate the new challenges we will face as the technology continues to evolve.
To that end, it feels like we are on the cusp of major steps forward for spatial artificial intelligence – where AI systems are enabling richer interactions with the physical world. There are so many potentially beneficial applications for spatial intelligence, from autonomous vehicles, to logistics, to healthcare, just to name a few.
What else are you thinking about in the AI sector? What is the most timely issue that lawmakers, practitioners, or policymakers should consider the most in relation to AI?
AI agents have been on many folks’ minds over the past year, and I think rightly so. 2026 feels like a breakout moment for agents for both enterprise and consumer applications. I was recently experimenting with coding agents for some personal projects and experienced “wow” moments similar to those I felt when first trying text-generation LLM tools several years ago. Agents offer exciting potential benefits for individuals, organizations, and society–and to realize them, we will need to work together on principles and standards for responsible development and deployment.
You have worked within the business, government, and nonprofit sectors. Given the breadth of diverse experience that you are now bringing to FPF, what continues to surprise you about the U.S. data privacy landscape across the board?
It has been fascinating to me to see how privacy and adjacent policy issues have become prominent in everyday discourse in nearly every sector of the economy and society, and nearly every facet of our lives, from the workplace to the family dinner table. I think the factor driving this is the central role of data in virtually every system we interact with–at home, at school, and in our interactions with businesses and government agencies. It’s hard to imagine a time soon when these issues will lessen in importance, so I anticipate we’ll be talking about them with co-workers, teachers, and family and friends alike for the foreseeable future.
What do you find unique about FPF and its approach to bringing together academics, business, and thought leaders in facilitating discussion in privacy matters in the U.S. and abroad?
FPF fulfills a unique and critical role by bringing together the full range of stakeholders who are striving to ensure that technology and data are used in ways that are responsible and beneficial for individuals, organizations, and society. It is a place that embodies both timeless values and intellectual rigor: when you meet FPF’ers, you quickly realize that they carry an infectious passion for the subject matter, a commitment to excellence in analysis and research, a gift for facilitation of meaningful and productive conversations, and a deeply held belief in the potential for their work to make a difference. I admired and was inspired by FPF’s work as an external stakeholder, and now that I’m here, I only feel those sentiments more strongly. It’s a special place.
From Proposal to Passage: Enacted U.S. AI Laws, 2023–2025
Over the past three years, lawmakers across the United States have increasingly enacted AI-related laws that shape the development and deployment of AI systems. Between 2023 and 2025, the Future of Privacy Forum tracked 27 pieces of enacted AI-related legislation across 14 states, along with one federal law (the TAKE IT DOWN Act), all carrying direct or indirect implications for private-sector AI developers and deployers. Notably, most enacted AI laws are already effective as of 2026, requiring entities to begin navigating compliance obligations. To support stakeholders, FPF has compiled a resource documenting key AI laws enacted from 2023-2025, which can be found below.
These enacted laws span a wide range of policy areas, reflecting experimentation in regulatory scope among lawmakers. In 2025 alone, states enacted laws addressing frontier model risk (such as California’s SB 53 and New York’s RAISE Act), generative AI transparency, AI use in health care settings, liability standards, data privacy, innovation, and synthetic content. Additionally, one of the clearest trends among enacted laws in 2025 included the growing focus on AI chatbots. Five states (California, Maine, New Hampshire, New York, and Utah) enacted chatbot-specific laws emphasizing transparency and safety protocols, particularly for sensitive use cases involving mental health and emotional companionship.
While the majority of these AI laws have already taken effect, a small number have delayed or phased-in effective dates that stakeholders should continue to track:
Federal — S 146 (TAKE IT DOWN Act regarding nonconsensual intimate imagery): notice-and-removal requirements effective May 19, 2026.
Colorado — SB 205 (Colorado AI Act): effective June 30, 2026.
Connecticut — SB 1295 (amendments to the Connecticut Data Privacy Act regarding automated decision making): effective July 1, 2026.
New York — A 9449 (RAISE Act regarding frontier models): effective January 1, 2027.
California — AB 853 (amendments to the California AI Transparency Act): effective January 1, 2027, with additional provisions phasing in January 1, 2028.
The broad diversity within 2025 AI bill categories contrasts with 2024, when laws such as the Colorado AI Act signaled a more uniform legislative emphasis on high-risk AI systems and automated decision-making technologies (ADMT) used in consequential decisionmaking. As analyzed in FPF’s State of State AI reports from 2024 and 2025, AI legislative efforts have shifted away from broad, framework-style laws and toward narrower measures tailored to specific use cases and technologies. This trend may also offer a preview of what is to come for enacted AI regulation in 2026: increased sector-specific regulation, heightened attention to sensitive populations such as minors, and a growing emphasis on substantive requirements.
Red Lines under the EU AI Act: Understanding ‘Prohibited AI Practices’ and their Interplay with the GDPR, DSA
Blog 1 | Red Lines under the EU AI Act Series
This blog is the first of a series that explores prohibited AI practices under the EU AI Act and their interplay with existing EU law. You can find the whole series here.
The EU AI Act prohibits certain AI practices in the European Union (hereinafter also “the Union” or “the EU”), placing them at the top of the pyramid of its layered approach: harmful manipulation and deception, exploitation of vulnerabilities, social scoring, individual criminal offence risk assessment, untargeted scraping of facial images, emotion recognition, biometric categorization, and real-time remote biometric identification for law enforcement purposes. These are the “red lines” that the EU has drawn through the AI Act. “Red lines” in AI governance have been generally described as meaning “specific boundaries that AI systems must not cross”, and, in more detail, as “specific, non-negotiable prohibitions on certain AI behaviors or AI uses that are deemed too dangerous, high-risk, or unethical to permit”. Most “red lines” emerge from soft law or self-regulation, with the AI Act being the first law globally to draw such lines, exemplifying the strict AI regulatory approach that the EU is pursuing.
Prohibited AI practices are regulated by Article 5 of the AI Act, which already became applicable in February 2025 (see a full timeline of when chapters of the AI Act become applicable). Starting on 2 August 2025, this provision also became enforceable by the designated authorities at Member State level, or by the European Data Protection Supervisor – the supervisory authority for EU institutions – as the case may be. Non-compliance with it triggers administrative fines of up to 35 million euros or up to 7% of the total worldwide annual turnover for the preceding financial year, whichever is higher. However, the supervision and enforcement landscape is highly fragmented and decentralized.
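As a purely arithmetic illustration of the “whichever is higher” ceiling in the AI Act’s penalty provision for prohibited practices (Article 99(3) AI Act), the short sketch below computes the maximum possible fine for a given worldwide annual turnover. The function name and the example turnover figure are assumptions for illustration only; actual fines are determined by the competent authorities within this ceiling.

```python
def prohibited_practice_fine_ceiling(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound for administrative fines for non-compliance with Article 5 AI Act:
    up to EUR 35 million or up to 7% of total worldwide annual turnover for the
    preceding financial year, whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# Example: for a hypothetical company with EUR 2 billion annual turnover,
# the 7% branch applies, i.e. a ceiling of EUR 140 million.
print(prohibited_practice_fine_ceiling(2_000_000_000))  # 140000000.0
```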
This blog is the first of a series which will explore each prohibited AI practice and its interplay with existing EU law, such as the General Data Protection Regulation (GDPR) and the Digital Services Act (DSA), starting from the Guidelines on Prohibited Artificial Intelligence Practices under the AI Act (hereinafter ‘the Guidelines’), published by the European Commission on 4 February 2025. The aim is to understand what AI systems and practices are within the scope of Article 5 of the AI Act, and to highlight potential areas of legislative overlap or lack of clarity. This is increasingly important at a time when the European Commission has prioritized addressing the interplay of the digital regulation acquis with a view to amending parts of the AI Act and the GDPR through the Digital Omnibus initiative. While the initial proposal for the Digital Omnibus on AI does not seek to amend the AI Act’s prohibited practices requirements, multiple political groups of the European Parliament and several Member State governments are proposing amendments to expand the list of prohibited practices, particularly with regard to intimate deepfakes and Child Sexual Abuse Material.
This blog continues with an introduction to the significance of the Guidelines and the place of the prohibited practices within the broader layered architecture of the AI Act, tailored to the severity of risks (1), details about the definitions and scope of the prohibited practices (2), and an analysis of the interplay of the prohibited AI practices with the GDPR and DSA (3), before the Conclusions (4) highlight the following takeaways:
The AI Act does not prohibit technology, but uses or practices of technology that pose unacceptable risk.
Practices of “General Purpose AI Systems” may also fall under the “prohibitions” of the EU AI Act.
“In-house development” of AI systems is simultaneously excluded from the application of the AI Act and included in the “putting into service” definition in the Guidelines (an action covered by the AI Act), and thus needs further clarification.
The interplay of the prohibitions under the AI Act and the GDPR needs legal certainty, considering that the GDPR takes priority in application (the AI Act “shall not affect” the GDPR and the data protection acquis), and that some of the prohibited practices under the AI Act have already been subject to GDPR enforcement.
A year after the entry into force of the prohibited practices provisions, the competence to enforce them is highly scattered and decentralized, including at national level where multiple authorities are tasked with enforcing specific prohibitions under Article 5 of the AI Act.
1. Entry into force of prohibited AI practices under the AI Act: A year on
Prohibited practices under Article 5 of the AI Act entered into force on 2 February 2025 and became enforceable on 2 August 2025. However, so far, no enforcement or otherwise regulatory action in relation to prohibited AI practices has been announced.
About a year ago, on 4 February 2025, the European Commission released Guidelines on Prohibited Artificial Intelligence Practices under the AI Act. The AI Act regulates the placing on the market, putting into service, and use of AI systems across the Union on the basis of harmonized rules and a tiered approach based on the severity of the risks posed by some AI systems. While there are four risk categories in the AI Act, the Guidelines provide legal explanations and practical examples on AI practices that are deemed unacceptable due to their potential risks to fundamental rights and freedoms, and are therefore prohibited.
While the Guidelines are non-binding, they offer the Commission’s first interpretation of the Article 5 prohibitions as well as crucial insights into its own analysis of the interplay between core requirements of the AI Act and other EU law, including (but not limited to) the GDPR and the DSA. In publishing the Guidelines, the Commission explicitly acknowledged that any authoritative interpretation of the AI Act ultimately resides with the Court of Justice of the European Union (CJEU), and noted that the Guidelines may be reviewed or amended in light of relevant future case law or enforcement actions by market surveillance authorities. However, while enforcement actions under the AI Act are yet to emerge, analysis can be made with regard to the interplay between the Commission’s Guidelines and existing CJEU case law, as well as decisions by Data Protection Authorities (DPAs) under the GDPR.
This first blog in our series on ‘Red Lines under the EU AI Act’ highlights how the Commission’s Guidelines take a scaled approach to delineating the practices which fall within and outside of the scope of prohibited practices. The Guidelines highlight the close interplay between Articles 5 (on prohibited AI practices) and 6 (on high-risk AI systems) of the AI Act, and note that where an AI system does not fulfil the requirements for prohibition under the AI Act, it may still be unlawful or prohibited under other laws such as the GDPR.
2. From emotion recognition to social scoring via AI systems: Overview of prohibitions under Article 5 of the AI Act
The tiered regulatory approach of the AI Act takes into account four risk categories of AI systems on the basis of which scaled obligations are proposed: unacceptable risk, high risk, transparency risk, and minimal to no risk. This analysis zooms in especially on unacceptable risk, as found in Article 5 AI Act, which prohibits the placing on the EU market, putting into service or use of AI systems for manipulative, exploitative, social control or surveillance practices. Of note, Article 5 is framed such that technology or AI systems themselves are not prohibited, but “practices” involving specific AI systems that pose unacceptable risks are. This framing is different from the one in Chapter III of the AI Act, which classifies and regulates systems themselves as “high-risk AI systems.”
The prohibited practices are, by their inherent nature, deemed to be especially harmful and abusive due to their contravention of fundamental rights as enshrined in the EU Charter of Fundamental Rights. The Guidelines issued by the European Commission highlight Recital 28 of the AI Act by reiterating that the impacts of prohibited AI practices are not limited to the right to personal data protection (Article 8 EU Charter) and the right to a private life (Article 7), but they also pose an unacceptable risk to the rights to non-discrimination (Article 21), equality (Article 20), and the rights of the child (Article 24).
Prohibited AI practices under the AI Act include:
Harmful manipulation and deception (Article 5(1)(a));
Harmful exploitation of vulnerabilities (Article 5(1)(b));
Social scoring (Article 5(1)(c));
Individual criminal offence risk assessment and prediction (Article 5(1)(d));
Untargeted scraping to develop facial recognition databases (Article 5(1)(e));
Emotion recognition in the areas of workplace and education institutions (Article 5(1)(f));
Biometric categorization to infer certain sensitive characteristics (Article 5(1)(g));
Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (Article 5(1)(h)).
2.1. The Guidelines extend the scope of prohibited AI practices to include those related to general-purpose AI systems
In defining the material scope of Article 5 AI Act, the Guidelines expand upon the definitions of “placing on the market, putting into service or use” of an AI system. This is important, because all prohibited practices under Article 5(1) AI Act, from letters (a) to (h), refer to “the placing on the market, the putting into service or the use of an AI system that (…)” engages in a specific practice defined under each of the letters of the provision. Therefore, understanding the definitions of these terms is essential for the application of the “prohibitions”.
“Placing on the market” is the first making available of an AI system on the Union market, for distribution or use in the course of a commercial activity, either for a fee or free of charge (see Articles 3(9) and 3(10) AI Act for full definitions). An AI system is considered placed on the Union market regardless of the means of supply, whether through an API, direct download, the cloud, or physical copies.
“Putting into service” refers to the supply of an AI system for first use to the deployer or for own use in the Union for its intended purpose (Article 3(11)), and covers both the “supply for first use” to third parties and “in-house development or deployment”. The inclusion of in-house development in the scope of Article 3(11) is a significant extension introduced by the Guidelines, considering the definition of “putting into service” in the AI Act only refers to “the supply of an AI system for first use directly to the deployer or for own use in the Union.” This interpretation might need further clarification, especially as Article 2(8) AI Act excludes “any research, testing or development activity regarding AI systems or AI models prior to their being placed on the market or put into service” from its scope of application.
Regarding the “use” of an AI system, which is not directly defined by the AI Act, the Guidelines specify that it should be similarly broadly understood to cover the use and deployment of AI systems at any point in their lifecycle, after having been put into service or placed on the market. Importantly, the Guidelines specify that “use” also includes any “misuse” that may amount to a prohibited practice, making deployers responsible for reasonably foreseeable harms that may arise.
Given the scope of the prohibited practices, the Guidelines focus on both providers and deployers of AI systems and highlight that continuous compliance with the AI Act is required during all phases of the AI lifecycle. For each of the prohibitions, the roles and responsibilities of providers and deployers should be construed in a proportionate manner, “taking into account who in the value chain is best placed” to adopt a mitigating or preventive measure.
The Guidelines acknowledge that while harms may often arise from the ways AI systems are used in practice by deployers, providers also have a responsibility not to place on the market or put into service AI and GPAI systems that are “reasonably likely” to behave or be used in a manner prohibited by Article 5 AI Act. It is important to highlight that the Guidelines extend the scope of Article 5 to general-purpose AI systems as well, even though they are not specifically called out by the provision (see para. 40 of the Guidelines).
As highlighted above, the provision is drafted so as to target “practices” of AI, which opens the possibility that not only GPAI systems are covered, but also practices of agentic AI or any new shape or form of AI systems that result in a practice described by Article 5 AI Act. Indeed, the Guidelines specifically mention that the “prohibitions apply to any AI system, whether with an ‘intended purpose’ or ‘general purpose.’” It is worth noting, however, that the Guidelines address prohibitions in relation to general-purpose AI systems rather than models, recalling that such systems are indeed based on general-purpose AI models but “have the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems” (Article 3(66) AI Act).
2.2. Purposes that do not fall within the scope of the AI Act, and practices that do
The Guidelines note that the AI Act expressly excludes from its scope AI systems used for national security, defence, and military purposes (Article 2(3)). For this exclusion to apply, the AI system must be placed on the market, put into service or used exclusively for such purposes. This means that so-called “dual use” AI systems, which also serve civilian or law enforcement purposes, do fall within the scope of the law. A direct example from the Guidelines notes that: “if a company offers an RBI (remote biometric identification) system for various purposes, including law enforcement and national security, that company is the provider of that dual use system and must ensure its compliance” with the AI Act (emphasis added).
In addition to judicial and law enforcement cooperation with third countries, research and development activities also fall outside the scope of the AI Act. Indeed, as also recalled above, the AI Act does not apply to “any research, testing or development activity regarding AI systems or AI models prior to their being placed on the market or put into service” (Article 2(8)). The Guidelines view this exemption as a natural continuation of the AI Act’s market-based logic, which applies to AI systems once they are placed on the market. However, this raises consistency issues with how the same Guidelines include “in-house development or deployment” of AI systems in the scope of “putting into service” (see also Section 2.1. above).
It is worth noting that the Guidelines explicitly recall that the research and development exclusion does not apply to testing in real-world conditions, nor to cases where those experimental systems are eventually placed on the Union market. The testing of AI systems in real-world conditions may only be carried out in AI regulatory sandboxes, and in full compliance with other Union law, including the GDPR insofar as personal data processing is concerned.
The Guidelines also note that purely personal, non-professional activities similarly fall outside of the AI Act’s scope (Article 2(10)). This includes, for example, an individual using a facial recognition system at home. However, they are careful in noting that the facial recognition system as such does remain within the scope of the AI Act as regards the obligations of providers of such systems in ensuring compliance, even in full knowledge that the system is intended to be used by natural persons for purely non-professional purposes or activities.
The Guidelines take an overall cautious approach in delineating the purposes and practices which fall outside the scope of the AI Act through consistent reference to Recitals 22 to 25. The Recitals recall and make clear that providers and deployers of AI systems which fall outside the scope of the AI Act may nevertheless have to comply with other Union laws that continue to apply.
3. Interplay of the AI Act’s Prohibitions with the High-Risk Designation and other Union Law
3.1. A scaled approach to the interplay between high-risk AI systems and prohibited AI practices
The Guidelines highlight key areas of interplay between the different risk categories, showing a scaled approach in the AI Act’s risk designation. Importantly, the Guidelines note the close relationship between Article 5 on prohibited practices, and Article 6 on high-risk AI systems. They note that “the use of AI systems classified as high-risk may in some cases qualify as prohibited practices in specific circumstances” and, conversely, most AI systems that fall under an exception from a prohibition listed in Article 5 will qualify as high-risk. This approach clarifies yet again that Article 5 is not meant to prohibit a specific technology, but practices or uses of technology.
An example where Articles 5 and 6 of the AI Act should be considered in relation to each other is the case of AI-based scoring systems, such as credit scoring, which will be considered high-risk if they do not fulfil the conditions for the credit scoring prohibition as outlined in Article 5(1)(c). While not specifically mentioned by the Guidelines in this context, it is worth noting that Courts and DPAs across the EU have been active in cases involving automated credit scoring practices under Article 22 GDPR on automated decision-making (ADM), as well as in cases that may amount to “profiling”. The notion of “profiling” under the GDPR is particularly relevant in the context of understanding Article 5(1)(d) AI Act. As such, in addition to taking into full account the risk designations under Articles 5 and 6 AI Act, it is also crucial to note the ADM prohibition under Article 22 GDPR, as compliance with one law may not automatically equate to compliance with the other.
3.2. Interplay between the prohibited AI practices under the AI Act with the GDPR and DSA
The Guidelines acknowledge the dichotomy between the AI Act and other Union law by recalling that, as a horizontal law applying across all sectors, the Act is without prejudice to legislation on the protection of fundamental rights, consumer protection, employment, the protection of workers and product safety. They also frame the goal of the AI Act and its preventive logic in the sense that it provides additional protection by addressing potential harms arising from AI practices which may not be covered by other laws, including by addressing the earlier stages of an AI system’s lifecycle.
The Guidelines expressly highlight that where an AI system may not be prohibited under the AI Act, it may still be prohibited or unlawful under other laws because of, for example, “the failure to respect fundamental rights in a given case, such as the lack of a legal basis for the processing of personal data required under data protection law”, where, for instance, the GDPR is applicable, including extra-territorially.
Crucially, the Guidelines acknowledge that in the context of prohibitions, the interplay between the AI Act and data protection law is particularly relevant, since AI systems often process personal data. They specify that laws including the GDPR, the Law Enforcement Directive, and the EU Data Protection Regulation applying to EU institutions (EUDPR), “remain unaffected and continue to apply alongside the AI Act”, noting the complementarity of the Act with the EU data protection acquis.
This statement in the Guidelines seems to be weaker than the provision in the AI Act, which states that the AI Act “shall not affect” the GDPR, the EUDPR, the ePrivacy Directive or the Law Enforcement Directive (Article 2(7) AI Act). This technically means that the AI Act is without prejudice to the GDPR and the rest of the EU data protection acquis. This might create some complex compliance situations in practice, and will require a broad and comprehensive understanding of the EU digital rulebook as a whole, noting that its component parts cannot be read in isolation. For instance, which law prevails if a practice prohibited under the AI Act overlaps with a solely automated decision-making practice that involves personal data, legally or significantly affects an individual, and lawfully meets the exceptions under Article 22 GDPR? Based on Article 2(7), the AI Act is not designated as lex specialis.
In addition to data protection law, the Digital Services Act (DSA) is similarly deemed relevant in the context of the AI Act’s prohibitions. The Guidelines highlight that these apply in conjunction with the relevant obligations on the providers of intermediary services (defined by Article 3(g) DSA) when AI systems or models are embedded in such services. Further, the AI Act and its prohibitions do not affect the application of the DSA’s provisions on the liability of such providers, as set out in Chapter II DSA, or existing or future liability legislation at Union or national levels. In the context of liability legislation, the Guidelines refer to Directive (EU) 2024/2853 on liability for defective products, and the now withdrawn AI Liability Directive.
3.3. Notes on Enforcement of the AI Act’s Prohibitions and Penalties: Fragmentation and Decentralization
The Guidelines recall that market surveillance authorities (MSAs), as designated by EU Member States, are responsible for enforcing the AI Act and its prohibitions. Member States had until 2 August 2025 to designate one or multiple MSAs, with some countries having already assigned the role to their national DPA with regard to certain parts of the AI Act (e.g., high-risk AI systems). Competent authorities can take enforcement actions in relation to the prohibitions on their own initiative or following a complaint by any affected person or other natural or legal person. The staggered timeline between the date of applicability of the AI Act’s provisions on prohibited uses and the deadline for designating the responsible authorities to enforce them has been causing some legal uncertainty.
A review of Member States that have already appointed MSAs at the time of writing shows, for the most part, a decentralized approach to enforcing the AI Act’s prohibited practices. Such an approach, which assigns supervision and enforcement roles to a variety of authorities depending on the sector they regulate and their area of expertise, is typical for EU product safety legislation.
For example, on 4 February this year, Ireland published its Regulation of Artificial Intelligence Act 2026, the national law that, once adopted, will implement the AI Act’s provisions. On this basis, the enforcement approach proposed by the Act is to establish the AI Office of Ireland, either on or before 2 August 2026, which will act as the central coordinator and Single Point of Contact (Article 70 AI Act). Under this umbrella, the Act also proposes to assign monitoring and enforcement powers to different existing authorities for different prohibited practices: the Central Bank of Ireland will enforce prohibited practices in respect of financial services regulated by it; the Workplace Relations Commission will enforce prohibited practices used in employment (Article 5(1)(f) AI Act); the Coimisiún na Meán will be responsible for “certain” prohibited practices in respect of online platforms (as defined by the DSA); and the Irish Data Protection Commission (DPC) will also be responsible for “certain parts” of the prohibited practices. While the Act does not yet specify which “certain parts” the Irish DPC will be responsible for monitoring, the draft already gives an indication of the decentralized approach to enforcing the rules on prohibited practices at national level, with responsibility assigned to a variety of authorities.
In France, the CNIL is responsible for monitoring compliance of the prohibited practices for predictive policing, the untargeted scraping to develop facial recognition databases, emotion recognition in the workplace and education institutions, biometric categorization, and real-time remote biometric identification (Articles 5(1)(d) – (h)). Responsibility for monitoring compliance with Articles 5(1)(a) and (b) lies with the Audiovisual and Digital Communication Regulatory Authority and the Directorate General for Competition, Consumer Affairs and Fraud Control. Here we can also see responsibility for monitoring prohibited practices being assigned to more than one regulator, depending on their existing area(s) of regulatory focus.
Finally, the Guidelines state that non-compliance with the AI Act’s prohibitions constitutes the “most severe infringement” of the law and is therefore subject to the highest fines. Providers and deployers engaging in prohibited AI practices can be fined up to EUR 35 000 000 or 7% of total worldwide annual turnover, whichever is higher.
Closing reflections and key takeaways
The AI Act doesn’t prohibit technology, but uses or practices of technology that pose unacceptable risk
Article 5 of the AI Act is framed broadly, such that technologies or AI systems themselves are not directly prohibited; rather, “practices” involving specific AI systems that pose unacceptable risk are. Such systems are, in turn, tied to certain actions, specifically to “placing on the market, putting into service or use” of an AI system. These actions are also interpreted broadly such that, for example, the “use” of an AI system also includes its intended use and potential misuse. The broad framing ensures that both providers and deployers of AI systems consider all phases of the AI lifecycle and approach compliance in a proportionate manner, taking into account “who in the value chain is best placed to adopt a mitigating or preventive measure.”
Practices of “General Purpose AI Systems” may also fall under the “prohibitions” of the EU AI Act
Equally of note is that the Guidelines extend the Article 5 prohibitions to practices related to any AI system, including general-purpose AI systems (rather than models themselves), even though such systems are not expressly mentioned in the AI Act provision. The Guidelines acknowledge that while harm often arises from the way specific AI systems are used in practice, both deployers and providers have a responsibility not to place on the market or put into service AI systems, including general-purpose AI systems, that are “reasonably likely” to behave in ways prohibited by Article 5 of the AI Act.
“In-house development” is at the same time excluded from the application of the AI Act and included in the “putting into service” definition in the Guidelines, needing further clarification
As shown above, the Guidelines provide clarifications about what “placing on the market”, “putting into service” and “use” of an AI system mean, which reveal a broad interpretation of the legal definitions enshrined in the AI Act. Notably, “putting into service” is expanded to mean not only “supply for first use”, but also “in-house development or deployment” (see Section 2.1 above). At the same time, Article 2(8) of the AI Act excludes from the scope of application of the regulation any “testing or development activity” regarding AI systems and models “prior to their being placed on the market or put into service”. Further clarification from the European Commission about this part of the Guidelines is needed for legal certainty.
The interplay of the prohibitions under the AI Act and the GDPR needs legal certainty
The Commission’s Guidelines on the AI Act’s prohibitions adopt a scaled approach to delineating, based on the level of risk, which AI practices or uses may be outright prohibited and which may instead fall under the Article 6 high-risk designation. The logic of the scaled approach also extends beyond the AI Act, as the Guidelines caution that while an AI practice may not fall under the Article 5 prohibitions, it may still be unlawful under other Union laws, such as the GDPR and DSA. What is less clear, though, is what would happen if an AI practice potentially prohibited under the AI Act were otherwise allowed by other legislation designated as prevailing over the AI Act, and particularly the GDPR. For example, Data Protection Authorities have in the past allowed some facial recognition systems to be used, and have found fixable infractions related to the use of emotion recognition systems, showing that such systems could be lawful under the GDPR if all conditions highlighted in the relevant decision were met. The European Data Protection Board could support consistency of interpretation and application of the two legal regimes with dedicated guidelines.
The enforcement architecture of prohibited AI practices exhibits significant decentralization and fragmentation, including at national level
There are two layers of decentralization of the enforcement architecture for the prohibited AI practices: first, they are primarily left to national competent authorities as opposed to a centralized authority at EU level; second, at national level, multiple authorities have often been designated within one jurisdiction, as the cases of Ireland and France described above show. This level of decentralization is expected to lead to fragmentation of how the relevant provisions of the AI Act are applied. This landscape is further complicated by the interplay of the prohibitions under the AI Act and the GDPR, through the role of supervisory authorities over processing of personal data and their independence as guaranteed by Article 16(2) of the Treaty on the Functioning of the European Union and Article 8(3) of the EU Charter of Fundamental Rights.
Finally, besides the close interaction between the various provisions of the AI Act themselves, the Guidelines also highlight the significant interplay between the Act and other Union laws. The ways in which these interactions may play out in the context of the several prohibited practices, such as emotion recognition and real-time biometric surveillance, will be explored in more detail in future blog posts in this series. Meanwhile, a deep dive into the broad framing of the AI Act’s prohibited practices reveals that a similarly broad understanding of the data protection acquis and EU digital rulebook is required in order to fully make sense of, and comply with, key obligations for the development and deployment of AI systems across the Union.
Paradigm Shift in the Palmetto State: A New Approach to Online Protection-by-Design
South Carolina Governor McMaster signed HB 3431, an Age-Appropriate Design Code (AADC) -style law, on February 5, adding to the growing list of new, bipartisan state frameworks fortifying online protections for minors. Although HB 3431 is dubbed an AADC, its divergence from past models and unique blend of requirements that draw upon a variety of other state laws may signal that youth privacy- and safety-by-design frameworks are undergoing a paradigm shift away from “AADCs” and into a new model for online protections entirely. South Carolina’s novel approach evolves the online design code schema from approaches seen in other jurisdictions through its focus on both privacy and safety risks, the way covered services must address those risks, the kinds of safeguards online services should provide to users and minors, enforcement priorities, and navigating constitutional pitfalls.
For compliance teams, the need to unpack the law’s unique provisions is urgent: the law took effect upon approval by the Governor, meaning these requirements are already binding. Further complicating the timing of compliance considerations, NetChoice filed a lawsuit on February 9 challenging the constitutionality of the Act on First Amendment and Commerce Clause grounds. NetChoice has requested a preliminary injunction to block enforcement of the law as litigation progresses. However, with an unclear litigation timeline, several newly effective legal obligations, and significant enforcement provisions carrying personal liability for employees, compliance teams may be stuck between two high-stakes options: (1) a risk of insufficient action and consequential liability if entities are slower to come into compliance while monitoring litigation outcomes; or, (2) a risk of sunk compliance costs that could have been invested in other important compliance and trust and safety operations if they invest heavily in compliance now and the law is later overturned.
This blog post covers a few key takeaways, including:
Broad Scope & Thresholds: The Act applies to any legal entity (potentially including non-profits) providing online services “reasonably likely to be accessed by minors” that meets specific applicability thresholds, representing a blend of the scope of Maryland’s and Nebraska’s AADCs.
Heightened “Duty of Care”: Unlike other states that only require risk mitigation, South Carolina mandates that entities exercise reasonable care to prevent heightened risks of harm to minors, which include compulsive use, severe psychological harm, identity theft, and discrimination, among others. This sets a notably higher compliance bar than other states’ duty of care obligations.
Mandatory Tools & Universal Defaults: Despite only applying to services “reasonably likely to be accessed by minors,” services must provide all users of covered online services with expansive tools. For minors, these tools must be enabled by default, coupled with prescriptive parental monitoring requirements seemingly inspired by the proposed federal Kids Online Safety Act (KOSA).
Third-Party Audits & Public Reporting: Apparently attempting to navigate the constitutional pitfalls plaguing California’s and Maryland’s AADCs, South Carolina replaces requirements for internal data protection impact assessments (DPIAs) with mandatory annual third-party audits. These audits must be submitted to the Attorney General, who will post audit reports publicly on the state website, and must include detailed information, including descriptions of algorithms and how “covered design features” are used by the online service, potentially raising trade secret concerns.
Significant Enforcement Provisions & Personal Liability for Employees: In a novel and extreme enforcement shift, the Attorney General is authorized to hold individual officers and employees personally liable for “willful and wanton” violations, in addition to seeking treble financial damages.
Please see our comparison chart for a full side-by-side analysis of how South Carolina’s approach compares against other state law protections for minors online.
South Carolina’s Act applies to any legal entity that owns, operates, controls, or provides an online service reasonably likely to be accessed by minors. Whereas prior comparable state laws typically limited the scope to for-profit entities, South Carolina seemingly extends application to non-profit and other non-commercial entities. This approach mirrors the legal entity framing adopted in Vermont’s and Nebraska’s AADCs, though those laws include narrower applicability thresholds. With respect to applicability threshold criteria, South Carolina aligns with the model set out in Maryland’s AADC, applying to entities that meet any one of the following: (1) $25 million or more in gross annual revenue; (2) the buying, selling, receiving, or sharing of personal data of more than 50,000 individuals; or (3) deriving more than 50 percent of annual revenue from the sale or sharing of personal data.
An Evolving Approach to Design Protections & Enforcement
Duty of Care
Similar to Vermont’s AADC and state comprehensive privacy laws that incorporate heightened protections for minors, such as Connecticut and Colorado, South Carolina imposes a duty of care on covered online services. Significantly, South Carolina’s duty requires entities to exercise reasonable care to prevent heightened risks of harm to minors, including compulsive use, identity theft, discrimination, and severe psychological harm, among others. The obligation to “prevent” harms to minors diverges sharply from comparable duties of care which only require entities to “mitigate” risks–seemingly placing a higher bar on entities’ compliance efforts compared to other online protection frameworks. Moreover, South Carolina includes two disclaimers regarding the application of the duty of care, including: (1) clarifying that “harm” is limited to circumstances not precluded by Section 230; and, (2) clarifying that entities are not required to prevent minors from intentionally “searching for content related to the mitigation of the described harms.”
Mandatory Tools & Default Settings
South Carolina takes a Nebraska AADC-style approach to requiring comprehensive tools and protective default settings for minors–but with a twist. Notably, South Carolina requires covered services to provide extensive tools to all users of an online service, such as tools for disabling unnecessary design features, opting-out of personalized recommendation systems (except for tailoring based on explicit preferences), and limiting the amount of time spent on a service or platform. For minors, the Act requires covered services to enable all tools by default, functionally achieving the same goals as high default settings requirements in other frameworks, like Vermont’s and Maryland’s AADCs. Additionally, South Carolina includes prescriptive requirements for the kinds of parental tools businesses must build and provide for parents to monitor and further limit minors’ use of online services–seemingly inspired by the parental tools obligations proposed by the KOSA. Importantly, businesses in scope of several minor online protection frameworks should pay close attention to South Carolina’s expansive mandatory tools and default settings requirements–and the range of users for which these tools must be available–when assessing compliance impacts.
Processing Restrictions
South Carolina’s new law includes a common component of other minor online protection frameworks: normative processing restrictions limiting the way covered online services can collect and use minors’ data, including restrictions on profiling and geolocation data tracking and a prohibition on targeted advertising. Notably, similar to Nebraska’s AADC, South Carolina also broadly prohibits covered entities’ use of dark patterns on a service. This goes far beyond many other privacy laws that instead prohibit dark patterns only insofar as they are used in obtaining consent or collecting personal data. Although the law as a whole is subject to Attorney General enforcement, South Carolina’s Act singles out the dark patterns prohibition as a violation of the South Carolina Unfair Trade Practices Act, which includes a private right of action.
Third Party Audits
One of the key issues hampering states’ implementation of AADC frameworks has been legal challenges to requirements for service providers to perform data protection impact assessments (DPIAs). The DPIA rules typically require covered online services to assess the likelihood of harm to children. For example, California’s AADC has been subject to litigation because, among other things, it included a requirement for businesses to assess and limit the exposure of children to “potentially” harmful content. The Ninth Circuit held that assessments requiring a company to opine on content-based harms are constitutionally problematic, but it did not hold that DPIAs are entirely unconstitutional–yet the litigation caused some proponents of AADC-style laws to explore alternatives to DPIAs.
Within this dynamic constitutional landscape, South Carolina shifts away from requiring covered entities to internally assess harms through DPIAs and instead requires covered entities to undergo annual third-party audits and publicly disclose the reports. Those audits must include detailed information on various aspects of the online service as it pertains to minors, including the purpose of the online service, for what purpose the online service uses minors’ personal and sensitive data, whether the service uses “covered design features” (e.g., infinite scroll, autoplay, notifications/alerts, appearance-altering filters, etc.), and a description of algorithms (an undefined term) used by the covered online service. This shift towards public disclosure of service assessment information may cause notable compliance difficulties and raise trade secret questions for covered online services, although it is unclear whether this unique ‘third-party audits’ approach addresses the underlying constitutional concerns highlighted in state AADC litigation.
Enforcement
South Carolina authorizes the Attorney General to enforce the Act, allowing for treble financial damages for violations. Most significantly, South Carolina also authorizes the Attorney General to hold officers and employees personally liable for “willful and wanton” violations–a novel and severe enforcement mechanism not employed in comparable frameworks. However, personal liability for employees and officers is not entirely unheard of in the broader consumer protection and digital services enforcement context. For example, in an aggressive enforcement approach advanced by the Federal Trade Commission (FTC) under Chair Lina Khan, the agency pursued personal liability against senior executives at a public company for violations of the FTC Act. In a more recent example, the Kentucky Attorney General filed a consumer protection lawsuit against Character.AI and its founders alleging the company knowingly harmed minors in the operation of its companion chatbot product, exposing minors to “sexual conduct, exploitation, and substance abuse.”
Conclusion
By adopting its novel approach, South Carolina adds to a growing state-level experiment that seeks to establish obligations to address and disclose risks of harm in online services and afford greater protections for minors within constitutional constraints. South Carolina’s novel blend of different state-level models, unique take on service assessments, and unusual enforcement approach may signal a broader fragmentation of online youth protection frameworks into three increasingly defined models: (1) data management-oriented heightened protections for minors embedded in state privacy laws; (2) age appropriate design codes that impose a fiduciary duty to act in children’s best interests, require age-appropriate design, and mandate DPIAs to assess foreseeable harms; and, (3) a “protective design” model exemplified by South Carolina, which synthesizes elements observed in the first two while uniquely integrating privacy and safety obligations. It remains to be seen how the emerging protective design model may influence ongoing state legislative efforts, impact business compliance efforts, and measure up against potential constitutional scrutiny.
From Chatbot to Checkout: Who Pays When Transactional Agents Play?
Disclaimer: Please note that nothing below should be construed as legal advice.
If 2025 was the year of agentic systems, 2026 may be the year these technologies reshape e-commerce. Agentic AI systems are defined by the ability to complete more complex, multi-step tasks, and exhibit greater autonomy over how to achieve user goals. As these systems have advanced, technology providers have been exploring the nexus between AI technologies and online commerce, with many launching purchase features and partnering with established retailers to offer shopping experiences within generative AI platforms. In doing so, these companies have also relied on developments in foundational protocols (e.g., Google’s Agent Payment Protocol) that seek to enable agentic systems to make purchases on a person’s behalf (“transactional agents”). But LLM-based systems like transactional agents can make mistakes, which raises questions about what laws apply to transactional agents and who is responsible when these systems make errors.
This blog post examines the emerging ecosystem of transactional agents, including examples of companies that have introduced these technologies and the protocols underpinning them. Existing US laws governing online transactions, such as the Uniform Electronic Transactions Act (UETA), apply to agentic commerce, including in situations where these systems make errors. Transactional agent providers are complying with these laws and otherwise managing risks through various means, including contractual terms, error prevention features, and action logs.
How is the Transactional Agent Ecosystem Evolving?
Several AI and technology companies have unveiled transactional agents over the past year that enable consumers to purchase goods within their interfaces rather than having to visit individual merchants’ websites. For example, OpenAI added native checkout features into its LLM-based chatbot that hundreds of millions of consumers already use, and Perplexity introduced similar features for paid users that can find products and store payment information to enable purchases. Amazon has also released a “Buy For Me” feature, which involves an agentic system that sends payment and shipping address information to third party merchants so that Amazon’s users can buy these merchants’ goods on Amazon’s website.
Application of Existing Laws (such as the Uniform Electronic Transactions Act)
As consumer-facing tools for agentic commerce develop, questions will arise about who is responsible when transactional agents inevitably make mistakes. Are users responsible for erroneous purchases that a transactional agent may make on their behalf? In these cases, long-standing statutes governing electronic transactions apply. The Uniform Electronic Transactions Act (UETA), a model law adopted by 49 out of 50 U.S. states, sets forth rules governing the validity of contracts undertaken by electronic means, and suggests that consumer transactions conducted by an agentic system can be considered valid transactions.
First, the UETA has provisions that apply to “electronic agents,” which are defined as “a computer program or an electronic or other automated means used independently to initiate an action or respond to electronic records or performances in whole or in part, without review or action by an individual.” This is a broad, technology-neutral definition that is not reserved for AI. It encompasses a range of machine-to-machine and human-to-machine technologies, such as automated supply chain procurement and signing up for subscriptions online. The latest transactional agents can take an increasing set of actions on a user’s behalf without oversight, such as finding and executing purchases, so these technologies could potentially qualify as electronic agents under the UETA.
This means that transactional agents can probably enter into binding transactions on a person’s behalf. Section 14 of the UETA indicates that this can occur even without human review when two entities use agentic systems to transact on their behalf (e.g., an individual user of a system that buys goods on their behalf and an e-commerce platform whose system can negotiate order quantity and price). At a time when agentic systems representing distinct parties and interacting with each other are edging closer to reality, these systems could bind the user to contracts undertaken on their behalf despite the lack of human oversight. However, a significant caveat is that the UETA also says that individuals may avoid transactions entered into by transactional agents if they were not given “an opportunity for the prevention or correction of [an] error . . . .” This is true even if the user made the error.
Finally, even if an agentic transaction is deemed valid and a mistake is not made, other legal protections may apply in the event of consumer harm. For example, a transactional agent provider that requires third parties to pay for their goods to be listed by the agent, or gives preference to its own goods, may violate antitrust and consumer protection law. There is also a growing debate over the application of other longstanding common law protections, such as fiduciary duties and “agency law.”
What Risk Management Steps are Transactional Agent Providers Taking to Manage Responsibility?
Managing responsibility for transactional agents can take varied forms, including contractual disclaimers and limitations, protocols that signal to third parties an agentic system’s authorization to act on a user’s behalf, as well as design decisions that reduce the likelihood of transactions being voided when errors occur (e.g., confirmation prompts that require users to authorize purchases):
Protocols that signal the scope of a user’s authorization to third parties: Transactional agent providers should also evaluate how a third party may perceive the agent’s actions, as these perceptions may provide the basis for a third party arguing that the agent was not acting on the user’s behalf. This may take the form of using various protocols that can communicate the limits of the agentic system’s authority to conduct a purchase, including those that allow parties to separate benign from undesirable agentic systems and ensure that a system is not impersonating an individual without their authorization.
Error prevention and correction features: Organizations should address the UETA-related risk of contracts being avoided by users in the absence of pre-purchase error prevention and correction measures through the thoughtful design of UI flows and the implementation of human review steps. Transactional agent providers and others do this through various means, such as confirmation prompts, alerts, and purchase size limits. These measures are important, as organizations cannot use contractual terms (e.g., stating that the consumer is solely liable for errors made by the system) to circumvent this UETA requirement. For these reasons, many agentic platforms are still not operating fully independently.
Action logs that capture the what, when, and why of an agentic system’s decisions: Companies can create action logs that give users visibility into the system’s decision flow for a purchase to promote trust in transactional agents. Such logs could also help organizations demonstrate that a user authorized an agent to act on their behalf.
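To make the last two measures concrete, here is a minimal, hypothetical sketch in Python of a pre-purchase confirmation step combined with a what/when/why action log. The names (ActionLogEntry, confirm_purchase, record), the spend-limit logic, and the log fields are illustrative assumptions; they are not any vendor’s API, any payment protocol, or a statement of what the UETA requires.

```python
# Illustrative sketch only: the dataclass fields, function names, and spend
# limit are hypothetical and do not reflect any vendor's API or legal standard.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class ActionLogEntry:
    timestamp: str        # when the agent acted
    action: str           # what it did (e.g., "checkout", "abandoned")
    rationale: str        # why, in terms of the user's stated goal
    user_confirmed: bool  # whether the user reviewed the step


@dataclass
class PurchaseRequest:
    item: str
    quantity: int
    unit_price: float


def confirm_purchase(request: PurchaseRequest, spend_limit: float, ask_user) -> bool:
    """Pre-purchase error prevention: enforce a spend limit and require an
    explicit user confirmation before the agent commits to the transaction."""
    total = request.quantity * request.unit_price
    if total > spend_limit:
        return False  # hard stop: the order exceeds the user's configured limit
    prompt = f"Buy {request.quantity} x {request.item} for ${total:.2f} total? (y/n) "
    return ask_user(prompt).strip().lower() == "y"


def record(log: list, action: str, rationale: str, confirmed: bool) -> None:
    """Append a what/when/why entry so the purchase can be reconstructed later."""
    log.append(ActionLogEntry(
        timestamp=datetime.now(timezone.utc).isoformat(),
        action=action,
        rationale=rationale,
        user_confirmed=confirmed,
    ))


if __name__ == "__main__":
    log = []
    request = PurchaseRequest(item="wool socks", quantity=1, unit_price=12.50)
    approved = confirm_purchase(request, spend_limit=50.00, ask_user=input)
    record(log, "checkout" if approved else "abandoned",
           "user asked the agent to restock socks", approved)
    print(log[-1])
```

In practice, records like these would be retained alongside the confirmation step so that a provider could later show that a given purchase was reviewed and authorized by the user.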
Conclusion
Organizations are increasingly rolling out features that enable agentic systems to buy goods and services. These current and near-future technologies introduce uncertainty about who is responsible for agentic system transactions, including when mistakes are made, which is leading providers to integrate error prevention features, contractual disclaimers, and other legal and technical measures to manage and allocate risks.
FPF Retrospective: U.S. Privacy Enforcement in 2025
The U.S. privacy law landscape continues to mature as new laws go into effect, cure periods expire, and regulators interpret the law through enforcement actions and guidance. State attorneys general and the Federal Trade Commission act as the country’s de facto privacy regulators, regularly bringing enforcement actions under legal authorities both old and new. For privacy compliance programs, this steady stream of regulatory activity both clarifies existing responsibilities and raises new questions and obligations. FPF’s U.S. Policy team has compiled a retrospective looking back at enforcement activity in 2025 and outlining key trends and insights.
Looking at both substantive areas of focus in enforcement actions and the level of activity by different enforcers, the retrospective identified four notable trends in 2025:
California and Texas Lead Growing Public Enforcement of Comprehensive Privacy Laws: Comprehensive privacy laws may finally be moving from a period of legislative activity into a new era where enforcement is shaping the laws’ meaning, as 2025 saw a significant increase in the number of public enforcement actions.
States Demonstrate Increasing Concern for Kids’ and Teens’ Online Privacy and Safety: As legislators continue to consider broad youth privacy and online safety legal frameworks, enforcers too are looking at how to protect young people online. Bringing claims under existing state laws, including privacy and UDAP, regulators are paying close attention to opt-in consent requirements, protections for teenagers in addition to children under 13, and the online safety practices of social media and gaming services.
U.S. Regulators Go Full Speed Ahead on Location and Driving Data Enforcement: Building on recent enforcement actions concerning data brokerage and location privacy, federal and state enforcers have expanded their consumer protection enforcement strategy to focus also on first-party data collectors and the collection of “driving data.”
FTC Prioritizes Enforcement on Harms to Kids and Teens, and Deceptive AI Marketing, Under New Administration: The FTC transitioned leadership in 2025, moving into a new era under Chair Andrew Ferguson that included a shift toward targeted enforcement activity focused on ensuring children’s and teens’ privacy and safety, and “promoting innovation” by addressing deceptive claims about the capabilities of AI-enabled products and services.
There are several practical takeaways that compliance teams can draw from these trends: obtaining required consent prior to processing sensitive data, including through oversight of vendors’ consent practices, identification of known children, and awareness of laws with broader consent requirements; ensuring that consumer controls and rights mechanisms are operational; avoiding design choices that could mislead consumers; considering if and when to deploy age assurance technologies and how to do so in an effective and privacy-protective manner; and avoiding making deceptive claims about AI products.
2026: A Year at the Crossroads for Global Data Protection and Privacy
There are three forces twirling and swirling to create a perfect storm for global data protection and privacy this year: the surprise reopening of the General Data Protection Regulation (GDPR), which will largely play out in Brussels over the coming months; the complexity and velocity of AI developments; and the push and pull over the field by increasingly substantial adjacent digital and tech regulations.
All of this will play out with geopolitics taking center stage. At the confluence of some of these developments, two major areas of focus will be the protection of children online and cross-border data transfers – together with their other side of the coin, data localization, in the broader context of digital sovereignty.
1. The GDPR reform, with an eye on global ripple effects
The gradual reopening of the GDPR last year came as a surprise, without much debate or public consultation, if any. The GDPR had passed its periodic evaluation in the summer of 2024 with a recommendation for more guidance, better implementation suited to SMEs, and harmonization across the EU, as opposed to re-opening or amending the law. Moreover, exactly one year ago, in January 2025, at the CPDP-Data Protection Day Conference in Brussels, not one but two representatives of the European Commission, in two different panels (one of which I moderated), were very clear that the Commission had no intention to re-open the GDPR.
Despite this, a minor intervention was first proposed in May to tweak the size of entities under the obligation to keep a register of processing activities through one of the simplification Omnibus packages of the Commission. But this proved to just crack the door open for more significant amendments to the GDPR proposed later on, under the broad umbrella of competitiveness and regulatory simplification the Commission started to pursue emphatically. Towards the end of the year, in November 2025, major interventions were introduced within another simplification Omnibus dedicated to digital regulation.
There are two significant policy shifts the GDPR Omnibus proposes that should be expected to reverberate in data protection laws around the world in the next few years. First, it entertains the end of technology-neutral data protection law. AI, the technology itself, is imprinted all over the proposed amendments, from the inconspicuous ones, like the new definition proposed for “scientific research”, to the express mention of “AI systems” in new rules created to facilitate their “training and operations”, including in relation to allowing the use of sensitive data and to recognizing a specific legitimate interest for processing personal data for this purpose.
The second policy shift, perhaps the most consequential for the rest of the data protection world, is the narrowing of what constitutes “personal data”: several sentences added to the existing definition would transpose what resembles the relative approach to de-identification confirmed by the Court of Justice of the EU (CJEU) in the SRB case this September. To a certain degree, the proposed changes bring the definition back to pre-GDPR days, when some data protection authorities were indeed applying a relative approach in their regulatory activity.
The new definition technically adds that the holder of key-coded data or other information about an identifiable person, which does not have means reasonably likely to be used to identify that person, does not process personal data even if “potential subsequent recipients” can identify the person to whom the data relates. Processing of this data, including publishing it or sharing it with such recipients, would thus be outside of the scope of the GDPR and any accountability obligations that follow from it.
If the proposed language ends up in the GDPR, it would likely mark a narrowing of the scope of application of the law, leaving little room for supervisory authorities to apply the relative approach on a case-by-case basis following the test that the CJEU proposed in SRB. This is particularly notable, considering that the GDPR has successfully exported the current philosophy and much of the wording of the broad definition of personal data (particularly its “identifiability” component) to most data protection laws adopted or updated around the world since 2016, from California, to Brazil, to China, to India.
The ripple effects around the world of such significant modifications of the GDPR would not be felt immediately, but in the years to come. Hence, the legislative process unfolding this year in Brussels on the GDPR Omnibus should be followed closely.
2. The Complexity and Velocity of AI developments: Shifting from regulating data to regulating models?
There is a lot to unpack here, almost too much. And this is at the core of why AI developments have an outsized impact on data protection. There is a lot of complexity involved in understanding the data flows and processes underpinning the lifecycle of the various AI technologies, making it very difficult to untangle the ways in which data protection applies to them. On top of that, the speed with which AI evolves is staggering. That being said, there are a couple of particularly interesting issues at the intersection of AI and data protection that should be followed closely this year, with an eye towards the following years too.
One of them is the intriguing question of whether AI models are the new “data” in data protection. Some of you certainly remember the big debate of 2024: do Large Language Models (LLMs) process personal data within the model? While it was largely accepted that personal data is processed during training of LLMs and may be processed as the output of queries made within LLMs, it was far from clear whether any of the informational elements related to AI models post-training, like tokens, vectors, embeddings or weights, could amount, by themselves or in some combination, to personal data. The question was supposed to be settled by an Opinion of the European Data Protection Board (EDPB) solicited by the Irish Data Protection Commission, which was published in December 2024.
Instead, the Opinion painted a convoluted regulatory answer by offering that “AI models trained on personal data cannot, in all cases, be considered anonymous”. The EDPB then dedicated most of the Opinion to laying out criteria that can help assess whether AI models are anonymous or not. While most, if not all, of the commentary around the Opinion usually focuses on the merits of these criteria, one should perhaps stop and first reflect on the framework of the analysis – namely, assessing the nature of the model itself rather than the nature of the bits and pieces of information within the model.
The EDPB did not offer any exploration of what non-anonymous (so, then, personal?) AI models might mean for the broader application of data protection law, such as data subject rights. But with it, the EDPB may have, intentionally or not, started a paradigm shift for data protection in the context of AI, signaling a possible move from the regulation of personal data items to the regulation of “personal” AI models. However, the Opinion seemed to be shelved throughout last year, as it has not yet appeared in any regulatory action. I would have forgotten about it myself if not for a judgment of a Court in Munich in November 2025, in an IP case related to LLMs.
The German Court found that song lyrics in a training dataset for an LLM were “reproducibly contained and fixed in the model weights”, with the judgment specifically referring to how models themselves are “copies” of those lyrics within the meaning of the relevant copyright law. This is because of the “memorization” of the lyrics in the training data by the model, where weights and vectors are “physical fixations” of the lyrics. This judgment is not final, with a pending appeal. But it will be interesting to see whether this perspective of focusing on the models themselves as opposed to bits of data within them will find more ground this year and immediately following ones, pushing for legal reform, or will fizzle out due to over-complexity of making it fit within current legal frameworks.
Key AI developments which might push the limits of existing data protection and privacy frameworks to a breaking point, as they descend from research to market, will be:
hyper-personalization – think of the decades-old debate around targeting and individual profiling, but on steroids;
AI agents by themselves or acting together – for one thing, “control” of a person over their information is at the core of data protection, while the fundamental proposition of AI agents is to take over control in certain contexts;
World models and AI wearables – perhaps a good comparison would be a hyperbolized Internet of Things and all of its implications for bystander privacy, consent, and informational self-determination, even if that is perhaps a naive comparison, particularly if the previous two developments are layered on top of this one, which would also integrate LLMs.
3. A concert of laws adjacent to data protection and privacy steadily becoming the digital regulation establishment
A third force pressing on data protection for the foreseeable future is the set of novel data- and digital-adjacent regulatory efforts solidifying into a new establishment of digital regulation, with their own bureaucracies, vocabulary and compliance infrastructure: online safety laws (including their branch of children’s online safety laws), digital markets laws, data laws focusing on data sharing or on data strategies covering personal and non-personal data, and the proliferation of AI laws, from baseline acts to sectoral or issue-specific laws (focusing on single issues, like transparency).
It may have started in the EU five years ago, but this is now a global phenomenon. Look, for instance, at Japan’s Mobile Software Competition Act, a law regulating competition in digital markets focusing on mobile environments which became effective in December 2025 and draws strong comparisons with the EU Digital Markets Act. Or at Vietnam’s Data Law which became effective in July 2025 and is a comprehensive framework for the governance of digital data, both personal and non-personal, applying in parallel to its new Data Protection Law.
Children’s online safety is taking increasingly more space in the world of digital regulatory frameworks, and its overlap and interaction with data protection law could not be clearer than in Brazil. A comprehensive law for children’s online safety, the Digital ECA, was passed at the end of last year and it is slated to be enforced by the Brazilian Data Protection Authority starting this spring.
It brings interesting innovations, like a novel standard for triggering such laws (“likelihood of access” to a technology service or product by minors), or “age rating” for digital services, requiring providers to maintain age rating policies and continuously assess their content against them. It also provides for “online safety by design and by default” as an obligation for digital services providers. From state-level legislation in the US on “age appropriate design”, to an executive decree in the UAE on “child digital safety”, the pace of adopting online safety laws for children is ramping up. What makes these laws more impactful is also the fact that the age limits of minors falling under these rules are growing to capture teenagers up to 16 and even 18 years old in some places, bringing vastly more service providers in scope than first-generation children’s online safety regulations.
The overlap, intersection and even tensions of all these laws with data protection are becoming increasingly visible. See, for instance, the recent Russmedia judgment of the CJEU, which established that an online marketplace is a joint controller under the GDPR with obligations in relation to sensitive personal data published by a user, with consequences for intermediary liability that are expected to reverberate at the intersection of the GDPR and the Digital Services Act in practice.
The compliance infrastructure of this new generation of digital laws, and its need for resources (people, budget), is breaking into an already stretched field of “privacy programs”, “privacy professionals”, and regulators, with the visible risk of diverting attention from, and diluting, the meaningful measures and controls stemming from privacy and data protection laws.
4. Breaking the fourth wall: Geopolitics
While all these developments play out, it is particularly important to be aware that they unfold on a geopolitical stage that is unpredictable and constantly shifting, resulting in various notions of “digital sovereignty” taking root from Europe, to Africa, to elsewhere around the world. From a data protection perspective, and in the absence of a comprehensive understanding of what “digital sovereignty” might mean, this could translate into a realignment of international data transfers rules through more data localization measures, more data transfers arrangements following trade agreements, or more regional free data flows arrangements among aligned countries.
Ten years after the GDPR was adopted as a modern upgrade of 1980s-style data protection laws for the online era, successfully promoting fair information practice principles, data subject rights and the “privacy profession” around the world, data protection and privacy are at an inflection point: either hold the line and evolve to meet these challenges, or melt away in a sea of new digital laws and technological developments.
6 Privacy Tips for the Generative AI Era
Data Privacy Day, or Data Protection Day in Europe, is recognized annually on January 28 to mark the anniversary of Convention 108, the first binding international treaty to protect personal data. The Council of Europe initiated the day in 2006, with the first official celebration held on January 28, 2007, making this year the 19th anniversary of the celebration. Companies and organizations around the world often devote time to internal privacy training during this week, working to improve awareness of key data protection issues among their staff.
It’s also a good time for all of us to think about our own sharing of personal data. Nowadays, one of the most important decisions we need to make about our data is when and how we use AI-powered services. To raise awareness, we’ve partnered with Snap to create a Data Privacy Day Snapchat Lens. Check it out by scanning the Snapchat code and learn more below about privacy tips for generative AI!
Know When You’re Using Generative AI
As a first step, it’s important to know what generative AI is and when you’re using it. Generative AI is a type of artificial intelligence that creates original text, images, audio, and code in response to input. In addition to visiting dedicated generative AI platforms (such as ChatGPT), you may find that many companies’ existing products now also include generative AI capabilities. For example, a search in Google now provides answers powered by Google’s generative AI, Gemini. Other examples include Snap’s AI Lenses and AI Snaps in creative tools, while Adobe’s Acrobat and Express are now powered by Firefly, Adobe’s generative AI. X’s Grok now assists users and answers questions.
One of the best ways to identify when you’re using generative AI is to look for a symbol or disclaimer. Many organizations provide clues like symbols; companies like Snap, GitHub, and many others often use a sparkle or star icon to denote generative AI features. You might also notice labels like “AI-generated” or “Experimental” alongside results from some companies, including Meta.
Think Carefully Before You Share Sensitive or Private Information
While this is a general rule of thumb for interacting with any product, it’s especially important when using generative AI because most generative AI systems use data that users provide (such as conversation text or images) to allow their models to continuously learn and improve. While your prompts, generated images, and other pieces of data can improve the technology for all users, it also means that if you share sensitive or private information, it could potentially be shared or surfaced in connection with training and developing the algorithm.
Be especially careful when uploading files, images, or screenshots to generative AI tools. Documents, photos, or screenshots can include more information than you realize, such as metadata, background details, or information about third parties. Before uploading, consider redacting, cropping, or otherwise limiting files to include only the information necessary for your task.
Some companies promise not to use your data for training, often if you are using the paid version of their service. Others provide an option to opt out of the use of your data for training, or versions that have special protections. For example, ChatGPT’s new health service supports the upload of health records with additional privacy and security commitments, but you need to make sure you are using the specific Health tab that is being rolled out to users.
Manage Your AI’s Memory
Many generative AI tools now feature a memory function that allows them to remember details about you over time, providing more tailored responses. While this can be helpful for maintaining context in long-term projects, such as remembering your writing style, professional background, or specific project goals, it also creates a digital record of your preferences and behaviors. A recent FPF report explores these different kinds of personalization.
Fortunately, you typically have the power to control what Generative AI platforms remember. Most have settings to view, edit, or delete specific memories or to turn the feature off entirely. For instance, in ChatGPT, you can manage these details under Settings > Personalization, and Gemini allows you to toggle off “Your past chats” within its activity settings to prevent long-term tracking. Meta also provides options for deleting all chats and images from the Meta AI app. Another option is to use “Temporary” or “Incognito” modes, so you can enjoy a personalized experience without generative AI compiling data attributed to your profile.
In addition to managing memory features, it’s also helpful to understand how long Generative AI services keep your data. Some platforms store conversations, images, or files for only a short time, while others may keep them longer unless you choose to delete them. Taking a moment to review retention timelines can give you a clearer picture of how long your information sticks around, and help you decide what you’re comfortable sharing.
Define Boundaries for Agentic AI
Agentic AI, a form of generative AI that can complete tasks for users with greater autonomy, is becoming increasingly popular. For example, companies like Perplexity, OpenAI, and Amazon have unveiled agentic systems that can make purchases for consumers. While these systems can take on more tasks, they still require users to review purchases before they are final. As a best practice, you should look over the purchase to check that it aligns with your expectations (e.g., ordering 1 pair of socks and not 10). It is also important to keep in mind that since agentic systems can pull information from third party sources, there is a risk that the system will rely on inaccurate information about a product during purchases (e.g., that an item is in stock).
As agentic systems become more embedded in our lives, you should also be mindful about how much information you share with them. Consumers are already disclosing sensitive details about themselves to more basic chatbots, which businesses, the government, and other third parties may want to access. When interacting with agentic systems, keep this in mind and pay attention to what you disclose about yourself and others. You may similarly want to consider what type of access to provide to the agentic AI product, and rely on the principle of least privilege–only providing the minimum access needed for your use. For example, if an agentic system is going to manage your calendar, think through options for narrowing the access so that your entire calendar is not shared and so that other apps connected to your calendar, like your email, are not shared unless necessary.
Review How Generative AI Products Handle Privacy and Safety
It’s important to regularly review the privacy and security practices of any company with which you share information, and this applies similarly to companies offering generative AI products. This can include checking what data is collected and how, as well as how that information is used and stored.
Snap has a Snapchat Privacy Center where you can review your settings. You can find those choices here.
ChatGPT’s privacy controls are available in the ChatGPT display, and OpenAI has a Data Controls FAQ that outlines where to find the settings and what options are available.
Gemini has the Gemini Privacy Hub, as well as an area to read about and configure your settings for Gemini Apps, which includes options for turning your Gemini history off.
Claude has a Privacy Settings & Controls page that outlines how long they store your data, how you can delete it, and more.
Copilot provides an array of options for reviewing and updating your privacy settings, including how to delete specific memories and how your data is used. These settings are available on Microsoft’s website, here, and Microsoft also provides a detailed Privacy FAQ page.
Keep in mind that Generative AI products change quickly, and new features may introduce new data uses, defaults, or controls. Periodically revisiting privacy and safety settings can help ensure your preferences continue to reflect how the product works today, rather than how it worked when you first configured it.
Explore and Have Fun!
LLMs can often provide useful data protection advice, so ask them questions about AI and privacy. Just be sure to double-check sources and accuracy, especially for important topics!
Data Privacy Day is a reminder that privacy is a shared responsibility. By bringing together FPF’s expertise in privacy research and policy with Snap’s commitment to building products with privacy and safety in mind, this collaboration aims to help people better understand how AI works and how to use it thoughtfully.
FPF Releases Updated Infographic on Age Assurance Technologies, Emerging Standards, and Risk Management
The Future of Privacy Forum is releasing an updated version of its Age Assurance: Technologies and Tradeoffs infographic, reflecting how rapidly the technical and policy landscape has evolved over the past year. As lawmakers, platforms, and regulators increasingly converge on age assurance as a governance tool, the updated infographic sharpens the focus on proportionality, privacy risk, and real-world deployment challenges.
What’s New
The updated infographic introduces several key changes that reflect the current state of age assurance technology and policy:
A Fourth Category: Inference. The original infographic outlined three approaches to age assurance: declaration, estimation, and verification. This update adds a fourth category—inference—which draws reasonable conclusions about a user’s age range based on behavioral signals, account characteristics, or financial transactions. For example, an email address linked to workplace applications, a mortgage lender, or a 401(k) provider, combined with login patterns during business hours, may support an inference that a user is an adult.
Relatedly, the updated version intentionally downplays age declaration as a standalone solution. While declaration remains useful for low-risk contexts and as an entry point in layered systems, experience and enforcement history continue to show that it is easily bypassed and insufficient where legal or safety obligations attach to age thresholds. The infographic now situates declaration primarily as an initial step within a waterfall or layered approach, rather than as a meaningful assurance mechanism on its own.
The update also highlights several new and emerging risks associated with modern age assurance systems. If not properly addressed, these include loss of anonymity through data linkage, increased breach impact from improperly secured retained assurance data, secondary use of assurance data, and circumvention risks such as presentation attacks or shared-device misuse.
In parallel, the infographic expands its coverage of risk management tools that can mitigate these concerns when age assurance is warranted. These include tokenization and zero-knowledge proofs to limit data disclosure, on-device processing and immediate deletion of source data, separation of processing across third parties, user-binding through passkeys or liveness detection, and emerging standards such as ISO/IEC 27566 and IEEE 2089.1. The emphasis is not on eliminating risk—which is rarely possible—but on aligning technical controls with the specific harms a service is attempting to address.
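To illustrate how tokenization can limit data disclosure, here is a minimal, hypothetical sketch in Python. A shared-key HMAC stands in for the digital signature a real issuer would use, and the claim format is invented for illustration rather than drawn from ISO/IEC 27566 or any particular product. The point is that the service accepting the token learns only an over/under-threshold result, never the underlying date of birth or identity document.

```python
# Illustrative sketch of a tokenized age assertion (not a real standard or API).
# The relying service sees only "over 18: yes/no", never the source data.
import hmac, hashlib, json, secrets

ISSUER_KEY = secrets.token_bytes(32)  # held by the age assurance provider


def issue_age_token(over_18: bool) -> dict:
    claim = {"over_18": over_18, "nonce": secrets.token_hex(8)}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}


def verify_age_token(token: dict) -> bool:
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"]) and token["claim"]["over_18"]


# Issued once, e.g. after an on-device estimation, then reused without re-checking ID.
token = issue_age_token(over_18=True)
print(verify_age_token(token))  # True: access granted without sharing a birthdate
```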
As with prior versions, the updated infographic reinforces a core message: there is no one-size-fits-all age assurance solution. Effective approaches are risk-based, use-case-specific, and privacy-preserving by design, balancing assurance goals against the rights and expectations of users. By clarifying the role of inference, contextualizing declaration, and surfacing both new risks and mitigation strategies, this update aims to support more informed decision-making across policy, product, and engineering teams.
Emerging Age Assurance Concepts. The field has advanced considerably, and the updated infographic now includes a dedicated section on emerging technologies, covering Age Signals and Age Tokens, User-Binding, Zero-Knowledge Proofs (ZKPs), Double-Blind Models, and One-Time vs. Reusable Credentials.
Updated Risks and Risk Management Approaches. The infographic now presents a more comprehensive view of the risks and challenges associated with age assurance—including excessive data collection and retention, secondary data use, lack of interoperability, false positives and negatives, data breaches, and user acceptance challenges. Correspondingly, the risk management section highlights both established and emerging mitigations: on-device processing, tokenization and zero-knowledge proofs, anti-circumvention measures (such as Presentation Attack Detection), standards (ISO/IEC 27566-1, IEEE 2089.1), and certification and auditing.
Practical Example. The updated infographic includes a detailed use case following “Miles,” a 16-year-old accessing an online gaming service. The scenario illustrates how multiple age assurance methods can work together in a layered “waterfall” approach—starting with low-assurance age declaration for basic access, escalating to facial age estimation for age-restricted features, and offering authoritative inference or parental consent as inclusive fallbacks when estimation results are inconclusive and a formal ID is not available. The example also demonstrates token binding with passkeys, ensuring that even if Miles shares his phone with a younger friend, the age credential cannot be accessed without the correct PIN, pattern, or biometric.
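For readers who want to see the logic of that waterfall spelled out, the following is a simplified, hypothetical sketch in Python. The thresholds, the order of checks, and the return values are assumptions made for illustration; they are not the infographic’s specification or any product’s actual policy.

```python
# Simplified sketch of a layered "waterfall" age assurance flow (illustrative only).
from typing import Optional


def assure_age(declared_age: int,
               estimated_age: Optional[float],
               feature_min_age: int) -> str:
    # Step 1: self-declaration gates basic, low-risk access.
    if declared_age < feature_min_age:
        return "deny"
    # Step 2: age-restricted features escalate to facial age estimation.
    if estimated_age is not None:
        if estimated_age >= feature_min_age + 2:   # clearly above the threshold
            return "allow"
        if estimated_age < feature_min_age - 2:    # clearly below the threshold
            return "deny"
    # Step 3: inconclusive results fall back to more inclusive options.
    return "fallback: authoritative inference or parental consent"


# A user like Miles: declares 16, estimation lands near the 16+ threshold,
# so the flow routes him to an inclusive fallback rather than a hard denial.
print(assure_age(declared_age=16, estimated_age=16.5, feature_min_age=16))
```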