Red Lines under the EU AI Act: Understanding ‘Prohibited AI Practices’ and their Interplay with the GDPR and the DSA
Blog 1 of the ‘Red Lines under the EU AI Act’ Series
The EU AI Act prohibits certain AI practices in the European Union (hereinafter also “the Union” or “the EU”), placed at the top of the pyramid of its layered approach: harmful manipulation and deception, harmful exploitation of vulnerabilities, social scoring, individual criminal offence risk assessment, untargeted scraping of facial images, emotion recognition, biometric categorisation, and real-time remote biometric identification for law enforcement purposes. These are the “red lines” that the EU has drawn through the AI Act. “Red lines” in AI governance have been generally described as “specific boundaries that AI systems must not cross” and, in more detail, as “specific, non-negotiable prohibitions on certain AI behaviors or AI uses that are deemed too dangerous, high-risk, or unethical to permit”. Most “red lines” emerge from soft law or self-regulation; the AI Act is the first law globally to draw such lines, exemplifying the strict regulatory approach to AI that the EU is pursuing.
Prohibited AI practices are regulated by Article 5 of the AI Act, which already became applicable in February 2025 (see a full timeline of when chapters of the AI Act become applicable). Starting on 2 August 2025, this provision also became enforceable by the designated authorities at Member State level or, as the case may be, by the European Data Protection Supervisor – the supervisory authority for EU institutions. Non-compliance triggers administrative fines of up to 35 million euros or up to 7% of total worldwide annual turnover for the preceding financial year, whichever is higher. However, the supervision and enforcement landscape is highly fragmented and decentralized.
This blog is the first in a series that will explore each prohibited AI practice and its interplay with existing EU law, such as the General Data Protection Regulation (GDPR) and the Digital Services Act (DSA), starting from the Guidelines on Prohibited Artificial Intelligence Practices under the AI Act (hereinafter ‘the Guidelines’), published by the European Commission on 4 February 2025. The aim is to understand which AI systems and practices fall within the scope of Article 5 of the AI Act, and to highlight potential areas of legislative overlap or lack of clarity. This is increasingly important at a time when the European Commission has prioritized addressing the interplay of the digital regulation acquis, with a view to amending parts of the AI Act and the GDPR through the Digital Omnibus initiative. While the initial proposal for the Digital Omnibus on AI does not seek to amend the AI Act’s prohibited practices requirements, multiple political groups in the European Parliament and several Member State governments are proposing amendments to expand the list of prohibited practices, particularly with regard to intimate deepfakes and Child Sexual Abuse Material.
This blog continues with an Introduction to the significance of the Guidelines and the place of the prohibited practices within the broader layered architecture of the AI Act, tailored to the severity of risks (1), details on the definitions and scope of the prohibited practices (2), and an analysis of the interplay of the prohibited AI practices with the GDPR and DSA (3), before Conclusions (4) highlight the following takeaways:
- The AI Act does not prohibit technology, but uses or practices of technology that pose unacceptable risk.
- Practices of “General Purpose AI Systems” may also fall under the “prohibitions” of the EU AI Act.
- “In-house development” of AI systems is at the same time excluded from the application of the AI Act and included in the Guidelines’ definition of “putting into service”, an action covered by the AI Act; this tension needs further clarification.
- The interplay of the prohibitions under the AI Act and the GDPR needs legal certainty, considering that the GDPR takes priority in application (the AI Act “shall not affect” the GDPR and the data protection acquis), and that some of the prohibited practices under the AI Act have already been subject to GDPR enforcement.
- A year after the entry into force of the prohibited practices provisions, the competence to enforce them is highly scattered and decentralized, including at national level where multiple authorities are tasked with enforcing specific prohibitions under Article 5 of the AI Act.
1. Entry into force of prohibited AI practices under the AI Act: A year on
Prohibited practices under Article 5 of the AI Act entered into force on 2 February 2025 and became enforceable on 2 August 2025. However, so far, no enforcement or other regulatory action in relation to prohibited AI practices has been announced.
About a year ago, on 4 February 2025, the European Commission released Guidelines on Prohibited Artificial Intelligence Practices under the AI Act. The AI Act regulates the placing on the market, putting into service, and use of AI systems across the Union on the basis of harmonized rules and a tiered approach calibrated to the severity of the risks posed by AI systems. While there are four risk categories in the AI Act, the Guidelines provide legal explanations and practical examples of AI practices that are deemed unacceptable due to their potential risks to fundamental rights and freedoms, and are therefore prohibited.
While the Guidelines are non-binding, they offer the Commission’s first interpretation of the Article 5 prohibitions, as well as crucial insights into its analysis of the interplay between core requirements of the AI Act and other EU law, including (but not limited to) the GDPR and the DSA. In publishing the Guidelines, the Commission explicitly acknowledged that any authoritative interpretation of the AI Act ultimately rests with the Court of Justice of the European Union (CJEU), and noted that the Guidelines may be reviewed or amended in light of relevant future case law or enforcement actions by market surveillance authorities. While enforcement actions under the AI Act are yet to emerge, the interplay between the Commission’s Guidelines and existing CJEU case law, as well as decisions by Data Protection Authorities (DPAs) under the GDPR, can already be analysed.
This first blog in our series on ‘Red Lines under the EU AI Act’ highlights how the Commission’s Guidelines take a scaled approach to delineating which practices fall within and outside the scope of the prohibitions. The Guidelines highlight the close interplay between Articles 5 (on prohibited AI practices) and 6 (on high-risk AI systems) of the AI Act, and note that where an AI system does not fulfil the requirements for prohibition under the AI Act, it may still be unlawful or prohibited under other laws such as the GDPR.
2. From emotion recognition to social scoring via AI systems: Overview of prohibitions under Article 5 of the AI Act
The tiered regulatory approach of the AI Act takes into account four risk categories of AI systems, on the basis of which scaled obligations are imposed: unacceptable risk, high risk, transparency risk, and minimal to no risk. This analysis zooms in on unacceptable risk, as addressed in Article 5 AI Act, which prohibits the placing on the EU market, putting into service or use of AI systems for manipulative, exploitative, social control or surveillance practices. Of note, Article 5 is framed in such a way that technology or AI systems themselves are not prohibited; rather, “practices” involving specific AI systems that pose unacceptable risks are. This framing differs from that of Chapter III of the AI Act, which classifies and regulates systems themselves as “high-risk AI systems.”
The prohibited practices are, by their inherent nature, deemed especially harmful and abusive because they contravene fundamental rights enshrined in the EU Charter of Fundamental Rights. The Guidelines issued by the European Commission highlight Recital 28 of the AI Act, reiterating that the impacts of prohibited AI practices are not limited to the right to personal data protection (Article 8 EU Charter) and the right to respect for private life (Article 7), but that such practices also pose an unacceptable risk to the rights to non-discrimination (Article 21), equality (Article 20), and the rights of the child (Article 24).
Prohibited AI practices under the AI Act include:
- Harmful manipulation and deception (Article 5(1)(a));
- Harmful exploitation of vulnerabilities (Article 5(1)(b));
- Social scoring (Article 5(1)(c));
- Individual criminal offence risk assessment and prediction (Article 5(1)(d));
- Untargeted scraping to develop facial recognition databases (Article 5(1)(e));
- Emotion recognition (Article 5(1)(f));
- Biometric categorisation (Article 5(1)(g));
- Real-time remote biometric identification (RBI) (Article 5(1)(h)).
2.1. The Guidelines extend the scope of prohibited AI practices to include those related to general-purpose AI systems
In defining the material scope of Article 5 AI Act, the Guidelines expand upon the definitions of “placing on the market, putting into service or use” of an AI system. This is important, because all prohibited practices under Article 5(1) AI Act, from letters (a) to (g), refer to “the placing on the market, the putting into service or the use of an AI system that (…)” engages in a specific practice defined under each of the letters of the provision. Therefore, understanding the definitions of these terms is essential for the application of the “prohibitions”.
“Placing on the market” is the first making available of an AI system on the Union market, for distribution or use in the course of a commercial activity, either for a fee or free of charge (see Articles 3(9) and 3(10) AI Act for the full definitions). An AI system is considered placed on the Union market regardless of the means of supply, whether through an API, direct download, cloud services or physical copies.
“Putting into service” refers to the supply of an AI system for first use to the deployer or for own use in the Union for its intended purpose (Article 3(11)), and, according to the Guidelines, covers both the “supply for first use” to third parties and “in-house development or deployment”1. The inclusion of in-house development within the scope of Article 3(11) is a significant extension introduced by the Guidelines, considering that the definition of “putting into service” in the AI Act only refers to “the supply of an AI system for first use directly to the deployer or for own use in the Union.” This interpretation might need further clarification, especially as Article 2(8) AI Act excludes “any research, testing or development activity regarding AI systems or AI models prior to their being placed on the market or put into service” from the Act’s scope of application.
Regarding the “use” of an AI system, which is not directly defined by the AI Act, the Guidelines specify that it should be similarly broadly understood to cover the use and deployment of AI systems at any point in their lifecycle, after having been put into service or placed on the market. Importantly, the Guidelines specify that “use” also includes any “misuse” that may amount to a prohibited practice, making deployers responsible for reasonably foreseeable harms that may arise.
Given the scope of the prohibited practices, the Guidelines focus on both providers and deployers of AI systems and highlight that continuous compliance with the AI Act is required during all phases of the AI lifecycle. For each of the prohibitions, the roles and responsibilities of providers and deployers should be construed in a proportionate manner, “taking into account who in the value chain is best placed” to adopt a mitigating or preventive measure.
The Guidelines acknowledge that while harms may often arise from the ways AI systems are used in practice by deployers, providers also have a responsibility not to place on the market or put into service AI and GPAI systems that are “reasonably likely” to behave or be used in a manner prohibited by Article 5 AI Act. It is important to highlight that the Guidelines extend the scope of Article 5 to general-purpose AI systems as well, even though they are not specifically called out by the provision (see para. 40 of the Guidelines).
As highlighted above, the provision is drafted to target “practices” of AI, which opens the possibility that not only GPAI systems are covered, but also practices of agentic AI or any new shape or form of AI system that results in a practice described by Article 5 AI Act. Indeed, the Guidelines specifically mention that the “prohibitions apply to any AI system, whether with an ‘intended purpose’ or ‘general purpose.’” It is worth noting, however, that the Guidelines address prohibitions in relation to general-purpose AI systems rather than models, recalling that such systems are based on general-purpose AI models but “have the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems” (Article 3(66) AI Act).
2.2. Purposes that do not fall within the scope of the AI Act, and practices that do
The Guidelines note that the AI Act expressly excludes from its scope AI systems used for national security, defence, and military purposes (Article 2(3)). For this exclusion to apply, the AI system must be placed on the market, put into service or used exclusively for such purposes. This means that so-called “dual use” AI systems, which also serve civilian or law enforcement purposes, do fall within the scope of the law. A direct example from the Guidelines notes that: “if a company offers a RBI (remote biometric identification) system for various purposes, including law enforcement and national security, that company is the provider of that dual use system and must ensure its compliance” with the AI Act (emphasis added).
In addition to judicial and law enforcement cooperation with third countries, research and development activities also fall outside the scope of the AI Act. Indeed, as also recalled above, the AI Act does not apply to “any research, testing or development activity regarding AI systems or AI models prior to their being placed on the market or put into service” (Article 2(8)). The Guidelines view this exemption as a natural continuation of the AI Act’s market-based logic, which applies to AI systems once they are placed on the market. However, this raises consistency issues with how the same Guidelines include “in-house development or deployment” of AI systems in the scope of “putting into service” (see also Section 2.1. above).
It is worth noting that the Guidelines are explicit that the research and development exclusion does not apply to testing in real-world conditions, nor to cases where experimental systems are eventually placed on the Union market. The testing of AI systems in real-world conditions is itself regulated by the AI Act, including through its provisions on AI regulatory sandboxes, and must be carried out in full compliance with other Union law, including the GDPR insofar as personal data processing is concerned.
The Guidelines also note that purely personal, non-professional activities similarly fall outside of the AI Act’s scope (Article 2(10)). This includes, for example, an individual using a facial recognition system at home. However, the Guidelines are careful to note that the facial recognition system as such remains within the scope of the AI Act as regards the obligations of providers of such systems to ensure compliance, even in full knowledge that the system is intended to be used by natural persons for purely non-professional purposes or activities.
The Guidelines take an overall cautious approach in delineating the purposes and practices which fall outside the scope of the AI Act through consistent reference to Recitals 22 to 25. The Recitals recall and make clear that providers and deployers of AI systems which fall outside the scope of the AI Act may nevertheless have to comply with other Union laws that continue to apply.
3. Interplay of the AI Act’s Prohibitions with the High-Risk Designation and other Union Law
3.1. A scaled approach to the interplay between high-risk AI systems and prohibited AI practices
The Guidelines highlight key areas of interplay between the different risk categories, showing a scaled approach in the AI Act’s risk designation. Importantly, the Guidelines note the close relationship between Article 5 on prohibited practices, and Article 6 on high-risk AI systems. They note that “the use of AI systems classified as high-risk may in some cases qualify as prohibited practices in specific circumstances” and, conversely, most AI systems that fall under an exception from a prohibition listed in Article 5 will qualify as high-risk. This approach clarifies yet again that Article 5 is not meant to prohibit a specific technology, but practices or uses of technology.
An example where Articles 5 and 6 of the AI Act should be considered in relation to each other is that of AI-based scoring systems, such as credit scoring, which will be considered high-risk if they do not fulfil the conditions of the social scoring prohibition outlined in Article 5(1)(c). While not specifically mentioned by the Guidelines in this context, it is worth noting that Courts and DPAs across the EU have been active in cases involving automated credit scoring practices under Article 22 GDPR on automated decision-making (ADM), as well as in cases that may amount to “profiling”. The notion of “profiling” under the GDPR is particularly relevant for understanding Article 5(1)(d) AI Act. As such, in addition to taking full account of the risk designations under Articles 5 and 6 AI Act, it is also crucial to note the ADM prohibition under Article 22 GDPR, as compliance with one law may not automatically amount to compliance with the other.
3.2. Interplay of the prohibited AI practices under the AI Act with the GDPR and DSA
The Guidelines acknowledge the relationship between the AI Act and other Union law by recalling that, as a horizontal law applying across all sectors, the Act is without prejudice to legislation on the protection of fundamental rights, consumer protection, employment, the protection of workers and product safety. They also frame the goal of the AI Act and its preventive logic in the sense that it provides additional protection by addressing potential harms arising from AI practices which may not be covered by other laws, including by addressing the earlier stages of an AI system’s lifecycle.
The Guidelines expressly highlight that where an AI system may not be prohibited under the AI Act, it may still be prohibited or unlawful under other laws because of, for example, “the failure to respect fundamental rights in a given case, such as the lack of a legal basis for the processing of personal data required under data protection law”, where, for instance, the GDPR is applicable, including extra-territorially.
Crucially, the Guidelines acknowledge that in the context of prohibitions, the interplay between the AI Act and data protection law is particularly relevant, since AI systems often process personal data. They specify that laws including the GDPR, the Law Enforcement Directive, and the EU Data Protection Regulation applying to EU institutions (EUDPR), “remain unaffected and continue to apply alongside the AI Act”, noting the complementarity of the Act with the EU data protection acquis.
This statement in the Guidelines seems weaker than the corresponding provision in the AI Act, which states that the AI Act “shall not affect” the GDPR, the EUDPR, the ePrivacy Directive or the Law Enforcement Directive (Article 2(7) AI Act). This technically means that the AI Act is without prejudice to the GDPR and the rest of the EU data protection acquis. This might create complex compliance situations in practice, and will require a broad and comprehensive understanding of the EU digital rulebook as a whole, noting that its component parts cannot be read in isolation. For instance, which law prevails if a practice prohibited under the AI Act overlaps with a solely automated decision-making practice that involves personal data, legally or similarly significantly affects an individual, and lawfully meets one of the exceptions under Article 22 GDPR? The AI Act is not designated as lex specialis, based on Article 2(7).
In addition to data protection law, the Digital Services Act (DSA) is similarly deemed relevant in the context of the AI Act’s prohibitions. The Guidelines highlight that these apply in conjunction with the relevant obligations on the providers of intermediary services (defined by Article 3(g) DSA) when AI systems or models are embedded in such services. Further, the AI Act and its prohibitions do not affect the application of the DSA’s provisions on the liability of such providers, as set out in Chapter II DSA, or existing or future liability legislation at Union or national levels. In the context of liability legislation, the Guidelines refer to Directive (EU) 2024/2853 on liability for defective products, and the now withdrawn AI Liability Directive.
3.3. Notes on Enforcement of the AI Act’s Prohibitions and Penalties: Fragmentation and Decentralization
The Guidelines recall that market surveillance authorities (MSAs), as designated by EU Member States, are responsible for enforcing the AI Act and its prohibitions. Member States had until 2 August 2025 to designate one or multiple MSAs, with some countries having already assigned the role to their national DPA with regard to certain parts of the AI Act (e.g., high-risk AI systems). Competent authorities can take enforcement actions in relation to the prohibitions on their own initiative or following a complaint by any affected person or other natural or legal person. The staggered timeline between the date of applicability of the AI Act’s provisions on prohibited uses and the deadline for designating the responsible authorities to enforce them has been causing some legal uncertainty.
A review of Member States that have already appointed MSAs at the time of writing shows, for the most part, a decentralized approach to enforcing the AI Act’s prohibited practices. Such an approach, which assigns supervision and enforcement roles to a variety of authorities depending on the sector they regulate and their area of expertise, is typical of EU product safety legislation.
For example, on 4 February this year, Ireland published its Regulation of Artificial Intelligence Act 2026, the national law that, once adopted, will implement the AI Act’s provisions. On this basis, the enforcement approach proposed by the Act is to establish the AI Office of Ireland, either on or before 2 August 2026, which will act as the central coordinator and Single Point of Contact (Article 70 AI Act). Under this umbrella, the Act also proposes to assign monitoring and enforcement powers to different existing authorities for different prohibited practices: the Central Bank of Ireland will enforce prohibited practices in respect of financial services regulated by it; the Workplace Relations Commission will enforce prohibited practices used in employment (Article 5(1)(f) AI Act); the Coimisiún na Meán will be responsible for “certain” prohibited practices in respect of online platforms (as defined by the DSA); and the Irish Data Protection Commission (DPC) will also be responsible for “certain parts” of the prohibited practices. While the Act does not yet specify which “certain parts” the Irish DPC will be responsible for monitoring, the draft already gives an indication of the decentralized approach to enforcing the rules on prohibited practices at national level, with responsibility assigned to a variety of authorities.
In France, the CNIL is responsible for monitoring compliance of the prohibited practices for predictive policing, the untargeted scraping to develop facial recognition databases, emotion recognition in the workplace and education institutions, biometric categorization, and real-time remote biometric identification (Articles 5(1)(d) – (h)). Responsibility for monitoring compliance with Articles 5(1)(a) and (b) lies with the Audiovisual and Digital Communication Regulatory Authority and the Directorate General for Competition, Consumer Affairs and Fraud Control. Here we can also see responsibility for monitoring prohibited practices being assigned to more than one regulator, depending on their existing area(s) of regulatory focus.
Finally, the Guidelines state that non-compliance with the AI Act’s prohibitions constitutes the “most severe infringement” of the law, and is therefore subject to the highest fines. Providers and deployers engaging in prohibited AI practices can be fined up to EUR 35 000 000 or 7% of total worldwide annual turnover, whichever is higher.
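For readers who want to see how the “whichever is higher” cap plays out in practice, the following minimal sketch illustrates the arithmetic; the turnover figures are purely hypothetical.

```python
def max_fine_for_prohibited_practice(worldwide_annual_turnover_eur: float) -> float:
    """Upper limit of the fine for engaging in a prohibited AI practice:
    EUR 35 million or 7% of total worldwide annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

# Hypothetical company with EUR 100 million turnover: 7% is EUR 7 million,
# so the EUR 35 million ceiling is the higher of the two and applies.
print(max_fine_for_prohibited_practice(100_000_000))    # 35000000

# Hypothetical company with EUR 1 billion turnover: 7% is EUR 70 million,
# which exceeds EUR 35 million and therefore sets the ceiling.
print(max_fine_for_prohibited_practice(1_000_000_000))  # 70000000.0
```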
4. Closing reflections and key takeaways
The AI Act does not prohibit technology, but uses or practices of technology that pose unacceptable risk
Article 5 of the AI Act is framed broadly, so that technologies or AI systems themselves are not directly prohibited; rather, “practices” involving specific AI systems that pose unacceptable risk are. Such systems are, in turn, tied to certain actions, specifically the “placing on the market, putting into service or use” of an AI system. These actions are also interpreted broadly such that, for example, the “use” of an AI system includes both its intended use and potential misuse. The broad framing ensures that both providers and deployers of AI systems consider all phases of the AI lifecycle and approach compliance in a proportionate manner, taking into account “who in the value chain is best placed to adopt a mitigating or preventive measure.”
Practices of “General Purpose AI Systems” may also fall under the “prohibitions” of the EU AI Act
Equally of note is that the Guidelines extend the Article 5 prohibitions to practices related to any AI system, including general-purpose AI systems (rather than models themselves), even though such systems are not expressly mentioned in the AI Act provision. The Guidelines acknowledge that while harm often arises from the way specific AI systems are used in practice, both deployers and providers have a responsibility not to place on the market or put into service AI systems, including general-purpose AI systems, that are “reasonably likely” to behave in ways prohibited by Article 5 of the AI Act.
“In-house development” is at the same time excluded from the application of the AI Act and included in the “putting into service” definition in the Guidelines, needing further clarification
As shown above, the Guidelines provide clarifications about what “placing on the market”, “putting into service” and “use” of an AI system mean, which reveal a broad interpretation of the legal definitions enshrined in the AI Act. Notably, “putting into service” is expanded to mean not only “supply for first use”, but also “in-house development or deployment” (see Section 2.1 above). At the same time, Article 2(8) of the AI Act excludes from the scope of application of the regulation any “testing or development activity” regarding AI systems and models “prior to their being placed on the market or put into service”. Further clarification from the European Commission about this part of the Guidelines is needed for legal certainty.
The interplay of the prohibitions under the AI Act and the GDPR needs legal certainty
The Commission’s Guidelines on the AI Act’s prohibitions adopt a scaled approach to delineating, based on the level of risk, which AI practices or uses are outright prohibited and which may instead fall under the Article 6 high-risk designation. The logic of the scaled approach also extends beyond the AI Act, as the Guidelines caution that while an AI practice may not fall under the Article 5 prohibitions, it may still be unlawful under other Union laws, such as the GDPR and DSA. What is less clear is what would happen if an AI practice potentially prohibited under the AI Act were otherwise allowed by other legislation designated as prevailing over the AI Act, particularly the GDPR. For example, Data Protection Authorities have in the past allowed some facial recognition systems to be used, and have found remediable infringements related to the use of emotion recognition systems, indicating that such systems could be lawful under the GDPR if all conditions highlighted in the relevant decisions were met. The European Data Protection Board could support consistency in the interpretation and application of the two legal regimes with dedicated guidelines.
The enforcement architecture of prohibited AI practices exhibits significant decentralization and fragmentation, including at national level
There are two layers of decentralization in the enforcement architecture for the prohibited AI practices: first, enforcement is primarily left to national competent authorities rather than to a centralized authority at EU level; second, at national level, multiple authorities have often been designated within one jurisdiction, as the cases of Ireland and France described above show. This level of decentralization is expected to lead to fragmentation in how the relevant provisions of the AI Act are applied. The landscape is further complicated by the interplay of the prohibitions under the AI Act and the GDPR, given the role of supervisory authorities over the processing of personal data and their independence as guaranteed by Article 16(2) of the Treaty on the Functioning of the European Union and Article 8(3) of the EU Charter of Fundamental Rights.
Finally, besides the close interaction between the various provisions of the AI Act themselves, the Guidelines also highlight the significant interplay between the Act and other Union laws. The ways in which these interactions may play out in the context of the several prohibited practices, such as emotion recognition and real-time biometric surveillance, will be explored in more detail in future blog posts in this series. Meanwhile, a deep dive into the broad framing of the AI Act’s prohibited practices reveals that a similarly broad understanding of the data protection acquis and EU digital rulebook is required in order to fully make sense of, and comply with, key obligations for the development and deployment of AI systems across the Union.
- See para. 13 of the Guidelines, p. 4. ↩︎