Red Lines under the EU AI Act: Understanding Manipulative Techniques and the Exploitation of Vulnerabilities
Blog 2/ Red Lines under the EU AI Act Series
This blog is the second of a series that explores prohibited AI practices under the EU AI Act and their interplay with existing EU law. You can read the first episode here and find the whole series here.
Harmful manipulation and deception through AI systems and exploiting certain human vulnerabilities are the first on the list of prohibited practices under Article 5 of the EU AI Act. It is apparent that the underlying goal of these provisions is to ensure that individuals maintain their ability to make autonomous decisions. This is especially important when considering that one of the goals of the AI Act is “to promote the uptake of human-centric and trustworthy AI”, while ensuring respect for safety, health and fundamental rights (see Recital 1, AI Act).
These first two prohibited practices listed in Article 5(1) specifically concern AI systems that could undermine individual autonomy and well-being through:
- Deploying subliminal, purposefully manipulative or deceptive techniques that are significantly harmful and materially influence the behavior of natural persons or group(s) of persons (Article 5(1)(a) AI Act).
- Exploiting vulnerabilities due to age, disability, or a specific socio-economic situation (Article 5(1)(b) AI Act).
It is notable, though, that manipulative and deceptive practices based on the processing of personal data, and those that occur specifically through online platforms, are already strictly regulated by the EU’s General Data Protection Regulation (GDPR) and Digital Services Act (DSA). The GDPR intervenes through obligations such as ensuring fairness (Article 5(1)(a)) and data protection by design (Article 25), which apply to all processing of personal data regardless of whether that processing occurs through AI, while the DSA prohibits providers of online platforms from designing, organising or operating their online interfaces in a way that deceives or manipulates their users (Article 25). While the relationship between the DSA obligations and those in the GDPR related to manipulative design is clear, with the DSA only being applicable where the GDPR does not apply, their relationship with the AI Act prohibitions on manipulative techniques and exploiting vulnerabilities requires further guidelines and clarification.
The Guidelines published by the European Commission to support compliance with Article 5 AI Act highlight that the two prohibitions aim to protect individuals from being reduced to “mere tools for achieving certain ends”, and to protect those who are most vulnerable or susceptible to manipulation and exploitation. Significantly, the Guidelines analyze these two prohibitions together, making it obvious that there is a nexus between them. In this sense, according to the Guidelines, they are both designed to support and protect the right to human dignity, as enshrined in the EU Charter of Fundamental Rights.
This second blog in the “Red Lines” series provides an analysis of the scope and content of the Article 5(1)(a) prohibition in Section 2, focusing on the definitions of subliminal, manipulative, and deceptive techniques. Section 3 goes on to explore the notion of vulnerability contained in the Article 5(1)(b) prohibition and in the Guidelines, while Section 4 notes the possible interplay between the two prohibitions. Section 5 takes a broader view by highlighting the interplay between the prohibitions and other EU laws, including the GDPR and the DSA, before the conclusions in Section 6 note the following key takeaways:
- There is a high threshold, including some highly subjective elements, for fulfilling the cumulative conditions required for falling under the prohibitions related to manipulative techniques and the exploitation of vulnerabilities.
- The prohibition on AI practices that involve manipulative techniques applies even when there is no intention to manipulate.
- Compliance with other laws, including the GDPR and DSA, in relation to these two prohibitions, can help demonstrate compliance with the AI Act.
2. Understanding harmful manipulation and deception as a prohibited practice under the AI Act
Article 5(1)(a) AI Act targets those cases in which AI practices subtly manipulate human action without the individual noticing. The final text of the AI Act for this provision underwent several changes from the European Commission’s initial proposal, broadening its scope and clarifying some elements.
Following amendments submitted by the European Parliament, the final text sought to add manipulative and deceptive techniques to the initial “subliminal techniques”, and broaden the scope of the ban to cover not only harmful effects on individuals but also on groups, in order to prevent discriminatory effects. Another modification of the initial proposal added that the prohibition should not be limited to cases where the systems are intended to modify behaviour, but also to cases where the modification of the behaviour that led to a significant harm is a mere “effect”, even when it was not the intended objective of the AI practice in question.
2.1. Defining subliminal, purposefully manipulative or deceptive techniques
The Guidelines list four cumulative conditions for this prohibition to apply, although their analysis effectively adds a fifth.
- The practice must constitute the ‘placing on the market’, the ‘putting into service’, or the ‘use’ of an AI system.
- The AI system must deploy subliminal (beyond a person’s consciousness), purposefully manipulative, or deceptive techniques.
- The techniques deployed by the AI system should have the objective or the effect of materially distorting the behavior of a person or a group of persons. The distortion must appreciably impair their ability to make an informed decision, resulting in a decision that the person or the group of persons would not have otherwise made.
- The distorted behavior must cause or be reasonably likely to cause significant harm to that person, another person, or a group of persons.
The four conditions must be met cumulatively for the prohibition to be applicable. Additionally, according to the Guidelines, there must be a plausible causal link between the techniques used, the significant change in the person’s behavior, and the significant harm that resulted or is likely to result from that behavior. While the causal link is not listed among the four conditions, it is analyzed further down in the Guidelines as a self-standing, additional condition to be met, and it should be considered as the fifth point on this list.
The prohibition applies to both providers and deployers of AI systems who, each within their own responsibilities, have an obligation not to place on the market, put into service, or use AI systems that impair an individual’s ability to make an informed decision on the basis of subliminal, manipulative or deceptive techniques.
The Guidelines note that while the AI Act does not directly define “subliminal techniques”, the text of Article 5(1)(a) and Recital 29 imply that such techniques are inherently covert: they operate beyond the threshold of conscious awareness and can influence decisions by bypassing a person’s rational defences. The Recital also explains that the prohibition covers cases where the person is aware that subliminal techniques are being used but cannot resist their effect. The Guidelines clarify that the prohibition is not limited to practices that influence decision-making, but also covers techniques that influence a person’s value- and opinion-formation, a criterion that seems highly subjective and may prove difficult to apply in practice. A relevant example could be an AI system facilitating deepfakes on matters of public interest that are spread on platforms without appropriate labeling, in violation of the applicable transparency obligations (Article 50 AI Act); their use could be considered prohibited.
Subliminal techniques can use audio, visual, or tactile stimuli that are too brief or subtle to be noticed. The following techniques are among several suggested in the Guidelines (p. 20) as potentially triggering a ban, if the other conditions are also met:
- Visual Subliminal Messages: an AI system may show or embed images or text flashed briefly during video playback which are technically visible, but flashed too quickly for the conscious mind to register, while still being capable of influencing attitudes or behaviours.
- Auditory Subliminal Messages: an AI system may deploy sounds or verbal messages at low volumes or masked by other sounds, influencing the listener without conscious awareness. These sounds are still technically within the range of hearing, but are not consciously noticed by the listener due to their subtlety or masking by other audio.
- Embedded Images: an AI system may hide images within other visual content that are not consciously perceived, but may still be processed by the brain and influence behaviour.
The Guidelines, referring to Recital 29 AI Act, specify that the development of new AI technologies, like neurotechnology, brain-computer interfaces, virtual reality, or even “dream-hacking” increases the potential for sophisticated subliminal manipulation and its ability to influence human behavior subconsciously.
While “purposefully manipulative techniques” are similarly not defined by the AI Act, the Guidelines fill this gap by noting that such techniques exploit cognitive biases, psychological vulnerabilities, or situational factors that make individuals more susceptible to influence. This provision covers cases where individuals are aware of the presence of a manipulative technique but cannot resist its effect and, as a result, are pushed into decisions or behaviours that they would not have otherwise made (Recital 29).
Recital 29 of the AI Act also refers to techniques that deceive or nudge individuals “in a way that subverts and impairs their autonomy, decision-making and free choices.” A direct comparison can be made with the DSA which, inter alia, prohibits providers of online platforms from deceiving or nudging recipients of their service and from distorting or impairing their autonomy, decision-making and free choice (Article 25 and Recital 67 DSA).
The manipulative capability of the technique is a key factor in determining its effect. Indeed, the Guidelines clarify that the AI system could manipulate individuals without the provider or deployer intending to cause harm. However, the provision would still apply, unless the result is incidental and appropriate preventive and mitigating measures were taken. This is consistent with the overall logic and scope of the AI Act’s prohibitions, as explored in Blog 1 of this series, in which deployers have a responsibility to reasonably foresee harms that may arise from the misuse of an AI system.
Deceptive techniques are techniques that subvert or impair a person’s autonomy, decision-making, or free choice in ways of which the person is not consciously aware or, where they are aware, can still be deceived or cannot control or resist them. In the case of deepfakes, for example, Article 50 of the AI Act requires that the deployer disclose their nature. If this transparency is absent and the deepfake is used to deceive individuals, it could fall under prohibited uses. Notably, according to the Guidelines, this provision applies even if the deception occurs without the intent of the provider or deployer. However, the Guidelines also clarify that a generative AI system that produces misleading information due to hallucinations—provided the provider has communicated this possibility—does not constitute a prohibited practice.
2.2 To fall under the AI Act’s prohibited practices, manipulative techniques have to have the “objective or effect of materially distorting the behavior of a person or a group of persons”
The subliminal, manipulative and deceptive techniques must have the objective or the effect of materially distorting the behavior of a person or a group of persons. Material distortion involves a degree of coercion, manipulation, or deception that goes beyond lawful persuasion. The Guidelines note that material distortion implies a substantial impact on a person’s behavior, such that their decision-making and free choice are undermined, rather than a minor influence.
When interpreting “material distortion of behaviour” under Directive 2005/29/EC (the Unfair Commercial Practices Directive or ‘UCPD’), it is sufficient to demonstrate that a commercial practice is likely (i.e., capable) of influencing an average consumer’s transactional decision; there is no need to prove that a consumer’s economic behavior has been distorted. However, this requires a case-by-case assessment, considering specific facts and circumstances. Additionally, the average consumer’s perspective may not be helpful in situations where an AI system delivers highly personalized messages designed to manipulate individual behavior.
The AI Act adopts an understanding of “material distortion” similar to the UCPD’s: the prohibition applies even if the material distortion of a person’s behavior occurs without the intent of the provider or deployer. The text specifies that the prohibition covers not only cases in which behavior modification is the objective of the system (as in the original text of the European Commission’s proposal) but also those in which it is the mere “effect”. This change, as introduced into the final text, amplifies protection against the possible distorting effects of manipulative AI systems.
2.3 The subliminal, manipulative and deceptive techniques must be “reasonably likely to cause significant harm”
The Guidelines define harm under three broad categories:
- Financial and economic harm: financial loss, exclusion, and economic instability (an addition by the European Parliament during the AI Act negotiations);
- Physical harm: any injury or damage to a person’s life or health, as well as material damage to property (e.g., an AI chatbot promoting self-harm to users);
- Psychological harm: harm that exploits cognitive and emotional vulnerabilities, encompassing adverse effects on a person’s mental health and psychological and emotional well-being.
However, the harm must be significant for the prohibition to apply. The determination of ‘significant harm’ is fact-specific and requires a case-by-case assessment of the circumstances. According to the Guidelines, the assessment of the significance of the harm takes several factors into consideration:
- The severity of the harm;
- Context and cumulative effects;
- Scale and intensity;
- Affected persons’ vulnerability;
- Duration and reversibility.
When assessing harm, the Guidelines suggest taking a comprehensive approach that considers both the immediate and direct harms associated with AI systems deploying subliminal, deceptive, or manipulative techniques and their cumulative effects over time.
The last requirement for identifying a prohibited practice is determining the likelihood of a causal link between the manipulative technique and the distorted behavior. In that regard, to avoid falling into the category of prohibited practices, the Guidelines suggest that providers and deployers take appropriate measures such as:
- Transparency and individual autonomy: integrate appropriate user control and safeguard measures to ensure that the system is not deceptive and operates within the boundaries of lawful persuasion;
- Compliance with relevant legislation: which indicates that the practice does not constitute a purposefully manipulative or deceptive practice;
- State-of-the-art practices and industry standards: which can help preempt and mitigate significant unintended harms.
It is worth recalling that although the concept of significant harm is very similar to that of “significant effect” found in Article 22 GDPR on automated decision-making (ADM), the two do not overlap perfectly, with the latter allowing for a broader interpretation than the former (see here FPF’s Report on ADM case law). For example, profiling through ADM for political targeting could have a significant effect on citizens without resulting in significant harm.
Not all forms of manipulation fall within the AI Act’s scope. Many persuasive techniques commonly used in advertising are legitimate because they operate transparently and respect individual autonomy. The Guidelines suggest that if an AI system appeals to emotions but remains transparent and provides accurate information, it falls outside the law’s scope.
Additionally, compliance with regulations like the GDPR helps providers and deployers demonstrate that transparency, fairness, and respect for individual rights and autonomy are upheld.
Furthermore, manipulation may be acceptable in some cases if it does not result in significant harm. For instance, in an example the Guidelines provide, an online music platform might use an emotion recognition system to detect users’ moods and recommend songs that align with their emotions while avoiding excessive exposure to depressive content.
3. The exploitation of vulnerabilities, particularly those due to age, disability or socio-economic status, as prohibited AI practice
Cases in which an AI system exploits the vulnerabilities of a single person or a specific group with the objective of distorting their behavior are designated as prohibited AI practices under Article 5(1)(b) AI Act.
There are four cumulative conditions to be fulfilled for the application of Article 5(1)(b):
- The practice must constitute the ‘placing on the market’, the ‘putting into service’, or the ‘use’ of an AI system.
- The AI system must exploit vulnerabilities due to age, disability, or a specific socio-economic situation.
- The exploitation enabled by the AI system must have the objective or the effect of materially distorting the behavior of that person or group of persons.
- The distorted behavior must cause or be reasonably likely to cause significant harm to that person, another person, or a group of persons.
3.1. Exploitation of vulnerabilities due to age, disability, or a specific socio-economic situation
While vulnerability is not directly defined by the AI Act, according to the Guidelines, the concept covers a wide range of categories, including cognitive, emotional, physical, and other forms of susceptibility that may impact an individual’s or group’s ability to make informed decisions or influence their behavior.
However, under the AI Act’s prohibited practices, the exploitation of vulnerabilities is only relevant if it involves individuals who are vulnerable due to their age, disability, or socio-economic circumstances. It is worth noting that a reference to an individual’s socio-economic situation was included in the final text of the AI Act after the amendments submitted by the European Parliament, which led to a wider scope of the Article 5(1)(b) prohibition in the final text, as compared to the initial European Commission proposal.
Exploiting categories of vulnerabilities other than those expressly mentioned falls outside the scope of the Article 5(1)(b) prohibition. The Guidelines note that age, disability, or socio-economic vulnerabilities may, in principle, lead to a limited capacity to recognize or resist manipulative AI practices. The prohibition aims to prevent the exploitation of cognitive limitations stemming from age or health conditions. Socio-economic status can likewise reduce an individual’s ability to recognize deceptive practices and may intersect with other discriminatory factors, such as belonging to an ethnic, racial, or religious minority group.
The Guidelines share a number of examples in cases of exploitation of vulnerable people based on their age that fall under prohibited practices, including:
- An AI-powered toy designed to interact with children that keeps them interested in interactions with the toy by encouraging them to complete increasingly risky challenges;
- An AI system used to target older people with deceptive personalized offers or scams.
In the case of exploitation of vulnerable people based on disabilities, the Guidelines mention the example of a therapeutic chatbot aimed at providing mental health support and coping strategies to persons with cognitive disabilities, which could exploit their limited intellectual capacities to influence them to buy expensive medical products.
When the exploitation concerns people who are vulnerable based on their socio-economic situation, an example mentioned is a predictive AI algorithm used to target people living in low-income postcodes with advertisements for predatory financial products.
3.2. For the Article 5(1)(b) prohibition to apply, AI practices have to materially distort behavior and be reasonably likely to cause significant harm
As previously noted, a substantial impact is required for a practice to fall within the scope, even though intention is not a necessary element, since the provision also covers the mere “effect” (see Section 2.3). Similarly to the conditions for Article 5(1)(a), as explored above, the AI practice has to be reasonably likely to cause significant harm. It is worth mentioning that the harms in this case may be particularly severe and multifaceted due to the increased susceptibility of the vulnerable group in question. Risks of harm that might be deemed acceptable for adults are often considered unacceptable for children and other vulnerable groups.
4. Areas of interplay between the two prohibitions, and between the prohibitions and other EU laws, including the UCPD, GDPR, and DSA
4.1. Tiered approach to the interplay between Articles 5(1)(a) and (b)
Where the Article 5(1)(a) prohibition covers mainly the use of subliminal and manipulative techniques, Article 5(1)(b) is focused on the targets of AI exploitation, particularly individuals considered vulnerable due to age, disability or socio-economic circumstances.
However, there may be instances where both Articles seem applicable. In such cases, examining the predominant aspect of the exploitation is essential. If the exploitation does not explicitly relate to one of the vulnerable groups previously discussed, Article 5(1)(a) applies, taking into consideration that it also covers the exploitation of vulnerabilities in groups outside those listed in Article 5(1)(b). When the exploitation specifically targets the groups identified in Article 5(1)(b), then the practice falls under this latter prohibition.
4.2. Interplay with the GDPR obligations to ensure fairness and data protection by design
The protection of individuals from manipulative processes is also covered in various other European laws, including the GDPR. Under the GDPR, the principle of fairness—enshrined in Article 5(1)(a)—acts as an overarching safeguard ensuring that personal data is not processed in a manner that is unjustifiably detrimental, unlawfully discriminatory, unexpected, or misleading to the data subject. Information and choices about data processing must be presented in an objective and neutral way, strictly avoiding any deceptive, manipulative language or design choices. In fact, the European Data Protection Board (EDPB) explicitly identifies the use of “dark patterns” and “nudging” as violations of this fairness mandate, as these techniques subconsciously manipulate data subjects into making decisions that negatively impact the protection of their personal data.
In its Guidelines 4/2019 on Data Protection by Design and by Default, the EDPB emphasizes that controllers must incorporate fairness into their system architectures from the outset, proactively recognizing power imbalances and granting users the highest degree of autonomy over their data. This means choices to consent to or abstain from data sharing must be equally visible, and platforms cannot use invasive default options or deceptive interfaces to lock users into unfair processing.
The profound risks of such subliminal and deceptive techniques are illustrated in the EDPB’s Binding Decision 2/2023 and the Irish Data Protection Commission’s corresponding final decision regarding TikTok. In these rulings, the authorities found that TikTok infringed the principle of fairness by utilizing deceptive design patterns to nudge child users toward public-by-default settings. TikTok has challenged these findings in a case now pending at the CJEU.
Beyond social media interfaces, the EDPB has also stressed the dangers of subliminal manipulation in democratic processes. In its Statement 2/2019 on the use of personal data in political campaigns (the Cambridge Analytica case), the EDPB warns that predictive tools used to profile people’s personality traits, moods, and points of leverage pose severe societal risks. When these sophisticated profiling techniques are used to target voters with highly personalized messaging, they not only infringe upon the fundamental right to privacy but also threaten the integrity of elections, freedom of expression, and the fundamental right to think freely without being subjected to unseen psychological manipulation.
Synthesizing EDPB decisions and guidelines: to counteract these deceptive techniques across all sectors, the fairness principle mandates that controllers respect data subject autonomy, avoid exploiting user vulnerabilities, and ensure that individuals are never coerced into abandoning their privacy through unfair technological architectures.
Importantly, these GDPR rules apply without such high thresholds, making them particularly relevant even where the conditions of the AI Act prohibitions are not met. This is why clarity about the interplay of the two regulations is essential for practical implementation.
4.3. Interplay with other EU laws: UCPD, DSA
The AI Act serves to complement or expand the provisions of existing EU law. For instance, unlike EU consumer protection laws, Articles 5(1)(a) and 5(1)(b) of the AI Act extend protection beyond consumers to encompass any individual. As a result, it must be considered alongside other legal frameworks such as the UCPD, the GDPR, the DSA, the political advertising regulation, and EU product safety legislation.
For example, the UCPD aims to protect individuals from misleading information that could lead them to purchase goods they would not otherwise have bought. It also offers greater protection to vulnerable individuals, such as the elderly and children. The UCPD overlaps partly, though not entirely, with the Article 5(1)(a) and (b) prohibitions. Firstly, the UCPD is a Directive and not a Regulation under EU law, and secondly, it only protects consumers (those “acting outside their trade, business, craft or profession”). The prohibitions in Article 5 AI Act, by contrast, protect everyone, irrespective of their status as “consumer”, “patient”, “student”, or “taxpayer”, to give some examples.
Furthermore, the scope of the UCPD is limited to transactional decisions, not all decisions. For example, a surgeon persuaded by an AI system’s manipulative or deceptive techniques to operate on a patient in one way rather than another would not be covered by the UCPD. Both sets of rules will, however, apply in all cases where AI systems are used to subliminally manipulate a consumer’s decision-making autonomy.
By analogy, the scope of the DSA is also limited to what happens on online platforms, and when it comes to deceptive design and the rules in Article 25 DSA – it is relevant only where the GDPR is not applicable, so the cases in which both the AI Act and the DSA apply are limited.
But there are other provisions of the DSA that could be relevant at the intersection with prohibited AI practices. For example, the DSA pays special attention to the prohibition of profiling using special categories of personal data (as defined by Article 9 GDPR) on online platforms, given the possible manipulative effect of disinformation campaigns that can lead to a negative impact on public health, public security, civil discourse, political participation, and equality (Recitals 69 and 95 DSA). Therefore, if bots and deepfakes spread information online to convince vulnerable individuals (such as the elderly, children, and economically disadvantaged individuals) to purchase high-profit financial products, both the DSA and the AI Act would apply.
Compliance with these laws can help mitigate harm and reduce manipulative effects. For example, suppose that a very large online platform has conducted a risk assessment to assess systemic risk (as required by Article 34 DSA) and a data protection impact assessment (as required by Article 35 GDPR in certain circumstances). In this case, it will be easier for such a platform to identify whether any of its AI systems may fall under the prohibited uses listed in Article 5 AI Act, and adopt mitigating measures accordingly.
5. Concluding Reflections and Key Takeaways
There is a high threshold for falling under the Articles 5(1)(a) and (b) prohibitions.
To fall under the prohibitions in Article 5(1)(a) or (b), an AI practice would have to fulfil several cumulative conditions. Interpreting the Guidelines, this high threshold is designed to ensure that only very specific AI use cases and applications fall within the scope of the prohibitions. That said, the final text of the AI Act ended up being broader in scope than the European Commission’s initial proposal.
It is important to note that even where this threshold is not met, EU law would still limit some manipulative and deceptive practices: through the GDPR provisions on fairness and data protection by design whenever personal data is processed, or through some of the DSA rules where very large online platforms are involved.
The prohibition applies even when there is no intention of manipulation. Even absent a deliberate intention to influence a person’s decision, Article 5(1) could still apply, since the provision also covers the harmful effect of manipulating and exploiting individuals or groups. To mitigate potential risks, providers may adopt transparency measures and implement appropriate safeguards to prevent harmful outcomes. While doing so, it is important to keep in mind that even if the use of a specific AI system does not meet the cumulative conditions of the Article 5(1) prohibitions, it is nevertheless highly likely to be considered a high-risk AI system under Article 6 AI Act.
Compliance with other laws can help demonstrate compliance with the AI Act.
The Guidelines highlight that if the AI provider shows compliance with relevant EU legislation on transparency, fairness, risk assessment, and data protection, it may contribute to demonstrating compliance with the AI Act’s requirements.