Red Lines under the EU AI Act: Unpacking the Prohibition of Individual Risk Assessment for the Prediction of Criminal Offences
Blog 4 | Red Lines under the EU AI Act Series
This blog is the fourth of a series that explores prohibited AI practices under the EU AI Act and their interplay with existing EU law. You can find the whole series here.
The fourth blog in the “Red lines under the EU AI Act” series focuses on unpacking the prohibition on individual risk assessment and the prediction of criminal offences, as contained in Article 5(1)(d) AI Act and explored in the European Commission’s Guidelines on the topic. Our analysis led to three key takeaways:
- As this provision is limited in its scope, it does not entirely prohibit crime prediction or forecasting AI technologies – rather, it focuses on prohibiting individual risk assessments to predict criminal offences based solely on profiling or personality assessments. The provision relies on the well-established GDPR definition of ‘profiling’;
- Similarly to other prohibitions explored in this series, when an AI system does not meet all of the conditions for the Article 5(1)(d) prohibition to apply, it will nevertheless be classified as a high-risk AI system and be subject to specific requirements and safeguards, including human oversight;
- Given the particularly sensitive context of crime prediction, and the inherently “forward-looking” nature of risk assessments, the Guidelines note that engaging in such activities may perpetuate or reinforce biases and erode public trust in law enforcement.
With this context in mind, this blog post begins with an overview of the logic and scope of the prohibition on individual risk assessment in the EU AI Act (Section 2), and continues in Section 3 with an analysis of the understandings of “risk” elaborated in the Commission’s Guidelines. Section 4 expands on the notion of “profiling”, including the prohibition on assessing a natural person’s personality traits and characteristics; Section 5 outlines the exceptions to the Article 5(1)(d) prohibition; Section 6 explores the cases in which the provision applies to private-sector actors; and Section 7 offers concluding reflections and key takeaways.
2. The ban is limited in its scope, applying only to AI systems used to assess or predict criminal offences based solely on profiling or personality assessments
Article 5(1)(d) AI Act establishes a crucial prohibition on AI systems that assess or predict the likelihood of natural persons committing criminal offences based solely on profiling or personality assessment. The prohibition targets risk assessments relating specifically and exclusively to the commission of criminal offences, reflecting the fundamental principle that individuals should be judged on their actual behaviour rather than on predicted conduct, and reinforcing the principle of legal certainty in EU criminal law.
Importantly, the prohibition does not apply when AI systems support human assessment regarding a person’s involvement in a criminal activity (offending or re-offending), such as when the assessment is already based on objective and verifiable facts directly linked to criminal activity. In such cases, the AI system serves as a supportive tool rather than the primary decision-maker. These systems are instead classified as high-risk AI systems (Annex III, point 6, letter (d) AI Act).
The provision does not entirely outlaw crime prediction and risk assessment practices but, rather, imposes specific conditions under which the use of certain AI systems in specific contexts is prohibited. The Guidelines clarify that three cumulative conditions must all be met, creating a high threshold for the prohibition to apply (see the schematic sketch after this list):
- The practice must involve the placing on the market, the putting into service for the specific purpose of assessing or predicting the likelihood of natural persons committing criminal offences, or the use of an AI system.
- The AI system must make a risk assessment, i.e. assess or predict the risk of a natural person committing a criminal offence.
- The risk assessment or the prediction must be based solely on either, or both, of the following:
a. The profiling of a natural person,
b. Assessing a natural person’s personality traits and characteristics.
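To make the cumulative character of this test concrete, here is a minimal schematic sketch in Python (all names are our own, invented purely for illustration and not drawn from the Act or the Guidelines): because the conditions are cumulative, the prohibition is triggered only when all three hold, which reduces to a simple conjunction.

```python
from dataclasses import dataclass


@dataclass
class PracticeUnderAssessment:
    """Hypothetical record of an AI practice; field names are illustrative only."""
    placed_on_market_put_into_service_or_used: bool   # condition 1
    assesses_or_predicts_individual_offending_risk: bool  # condition 2
    based_solely_on_profiling_or_personality: bool    # condition 3


def falls_under_article_5_1_d(p: PracticeUnderAssessment) -> bool:
    # The three conditions are cumulative: if any one fails, the prohibition
    # does not apply (though the system may still be high-risk under
    # Annex III, point 6(d) AI Act).
    return (
        p.placed_on_market_put_into_service_or_used
        and p.assesses_or_predicts_individual_offending_risk
        and p.based_solely_on_profiling_or_personality
    )
```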
The prohibition applies to law enforcement authorities and to any entity using such systems on their behalf, as well as to Union institutions, bodies, offices, or agencies that support law enforcement authorities. Providers and deployers alike therefore bear the responsibility not to place on the market, put into service, or use AI systems that meet the above conditions. The rationale behind the prohibition is that natural persons should be judged on the basis of their actual behaviour rather than on (AI-)predicted behaviour. While the Guidelines do not directly invoke the principle of legal certainty when setting out this rationale, that principle should play a role in the implementation of the prohibition: it is a primary principle of the rule of law in the EU, alongside equality before the law, the prohibition of the arbitrary exercise of executive power, and effective judicial protection.
It is also worth highlighting that the Article 5(1)(d) prohibition applies to criminal offences only, with administrative offences falling outside of the scope of the prohibition. Under EU criminal law, the determination of the criminal nature of an offence most often depends on national law and, as such, may include offences that are not covered by Union law. Given possible differences at national level across the EU, the use of AI systems for the risk assessment and prediction of criminal offences might require further clarification, particularly with regard to which actions amount to “criminal offences” under national law. Indeed, the Guidelines highlight that, for offences that are not directly regulated under EU law, the national qualification of the offence is nevertheless subject to scrutiny by the CJEU on a case-by-case basis, since the concept of “criminal offence” has autonomous meaning within EU law and should be interpreted consistently across EU Member States.
3. Notions of “risk” in the AI Act’s prohibitions, while uncertain, are closely tied to harm and to ensuring individuals are assessed only on the basis of actual (not predicted) behaviour
According to the Commission’s Guidelines, risk assessments are understood broadly and can be conducted at any stage of law enforcement activities, such as during crime prevention, detection, investigation, prosecution, execution of criminal penalties, and during the process of an individual’s reintegration into society. Such risk assessments are often referred to as individual “crime prediction” or “crime forecasting” which, according to the Guidelines, refer to “advanced AI technologies and analytical methods applied to large amounts of often historical data… which, in combination with criminology theories, are used to forecast crime as a basis to inform police and law enforcement strategies and action to combat, control, and prevent crime.”
In practice, there are two major areas where law enforcement applies AI risk assessments: predictive policing and recidivism risk assessment. Predictive policing involves law enforcement using predictive analytics and other algorithmic techniques to identify patterns related to the occurrence of crime and unsafe situations, and to proactively prevent crime on the basis of those insights. This approach has been adopted by several Member States. Recidivism risk assessment, by contrast, is used to predict the likelihood of individuals reoffending.
Crime prediction or crime forecasting AI systems identify patterns within historical data, associating indicators with the likelihood of a crime occurring, and then generate risk scores as predictive outputs. The Guidelines seem to expand on the notion of “risk” contained in Article 5(1)(d), noting the inherently “forward-looking” nature of risk assessments used for crime prediction or forecasting.
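To illustrate the mechanics described above, the following deliberately simplified sketch (assuming scikit-learn, with entirely invented features and data) shows how a classifier fitted on historical records emits a probability-style risk score for a new individual. It is purely didactic: an individual assessment based solely on profiling-style features of this kind is precisely what Article 5(1)(d) prohibits.

```python
# Simplified illustration of a risk-scoring pipeline; the features and
# data are invented. NOT a real or lawful crime-prediction tool.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical records: each row is a profiled individual, each column a
# profiling-style indicator (e.g. age, number of prior police contacts).
X_hist = np.array([[25, 2], [40, 0], [19, 5], [33, 1]])
y_hist = np.array([1, 0, 1, 0])  # 1 = offence recorded, 0 = none

model = LogisticRegression().fit(X_hist, y_hist)

# A "forward-looking" risk score for a new person: a probability derived
# purely from patterns in past data about *other* people.
new_person = np.array([[22, 3]])
risk_score = model.predict_proba(new_person)[0, 1]
print(f"Predicted offending risk: {risk_score:.2f}")
```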
In this context, they note that using historical data on past crimes to predict other persons’ future behaviour may perpetuate or reinforce biases and undermine public trust in law enforcement and the justice system. Indeed, risk is by definition uncertain: it may or may not materialise into harm, and any decision based solely on a risk score can rest on a wrong assumption about the actual commission of a criminal offence. In a recent case, the Dutch Ministry of Justice and Security instructed the probation service in the Netherlands to either adjust or stop using the OxRec algorithm which, following an investigation, was found to have misjudged the risk of recidivism in a quarter of cases. Used around 44,000 times per year, OxRec was found to rely on outdated data, to breach privacy legislation, and to pose a risk of discrimination.
As Recital 42 of the AI Act explains, natural persons in the EU should always be assessed on the basis of their actual behaviour, and risk assessments carried out solely on the basis of profiling or an assessment of personality traits or characteristics should be prohibited. This aligns with the presumption of innocence until proven guilty under the law (Article 48 EU Charter of Fundamental Rights) and the principle of legal certainty as enshrined in EU law. Indeed, in their final section analyzing the interplay of this prohibition with other Union law, the Guidelines acknowledge the indirect link between the prohibition and Directive (EU) 2016/343 on the presumption of innocence.
4. The prohibition relies on the GDPR’s definition of ‘profiling’, and takes a broad understanding of ‘personality traits’ and ‘characteristics’
The Guidelines clarify that the prohibition applies regardless of whether the AI system profiles or assesses the personality traits and characteristics of a single natural person or of a group of natural persons simultaneously. In this context, group profiling can consist of, for example, an AI system assessing and predicting the risk of other persons committing similar offences, based on constructed or historical data about crimes previously committed by others.
Similarly to the prohibition in Article 5(1)(c) AI Act, explored in Blog 3 of the “Red Lines” series, profiling is understood by reference to its definition in Article 4(4) GDPR. Further, the Guidelines highlight that the predictive policing prohibition is without prejudice to Article 11(3) of the Law Enforcement Directive (LED), which prohibits profiling on the basis of special categories of personal data which results in direct or indirect discrimination.
The risk assessments covered by this provision are prohibited only when they are based solely on the profiling of a person or on the assessment of their personality traits and characteristics. This means that where a human assessment exists, normally based on relevant objective and verifiable facts, and the AI assessment merely supports that human assessment, the prohibition does not apply. The Guidelines clarify that “personality traits” and “characteristics” are to be understood broadly, and that the examples contained in Recital 42 are not exhaustive.
However, according to the Guidelines, the term “solely” leaves open the possibility that various other elements, beyond profiling and personality traits or characteristics, are taken into account in the risk assessment, which will need to be assessed on a case-by-case basis. To avoid circumvention of the prohibition and ensure its effectiveness, the Guidelines submit that any such other elements must be real, substantial, and meaningful in order to justify the conclusion that the prohibition does not apply. In this context, both providers and deployers of such systems will have to document their decision-making processes so as to justify choosing one course of action over another, particularly in highly sensitive contexts such as crime prediction, where the risks of producing legal effects can be imminent and significant.
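As a purely hypothetical illustration of what such documentation might capture (the structure and field names below are our own assumptions, not prescribed by the Act or the Guidelines), a deployer could keep a structured record of the non-profiling elements that informed each assessment:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List


@dataclass
class AssessmentRecord:
    """Hypothetical audit entry for one AI-supported risk assessment."""
    case_id: str
    ai_risk_score: float
    # The "other elements" beyond profiling that the assessment relied on;
    # per the Guidelines these must be real, substantial, and meaningful.
    objective_facts: List[str] = field(default_factory=list)
    human_reviewer: str = ""
    rationale: str = ""
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


# Example entry: the AI output is documented as one input alongside
# verifiable facts, not as the sole basis of the assessment.
record = AssessmentRecord(
    case_id="2025-0042",
    ai_risk_score=0.37,
    objective_facts=["witness statement on file", "verified CCTV footage"],
    human_reviewer="case officer A",
    rationale="AI score used only to support review of documented facts",
)
```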
5. Exception(s) to the prohibition: When a ‘predictive policing’ AI system is not prohibited, but may nonetheless be classified as ‘high-risk’
The last phrase of Article 5(1)(d) AI Act clarifies that the prohibition does not apply to AI systems used to support the human assessment of a person’s involvement in a criminal activity. This exception applies only insofar as the human assessment is based on objective and verifiable facts directly linked to the criminal activity at hand. While neither the AI Act nor the Guidelines directly defines what may constitute “objective and verifiable facts”, the Guidelines provide some examples in which these conditions for the exception may be fulfilled.
For example, this is the case for an AI system used for the profiling and categorization of actual behaviour, such as “reasonably suspicious dangerous behaviour in a crowd that someone is preparing and likely to commit a crime, and there is a meaningful human assessment of the AI classification” (emphasis added). This latter requirement for ensuring that any AI system used in this context is only acting in support of human assessment echoes the GDPR’s right to obtain human intervention in automated decision-making contexts.
In the highly sensitive context of crime prediction, the requirement that the “human assessment” be based on objective and verifiable facts linked to a specific criminal activity is an important precursor to the exercise of the right to an effective remedy (Article 47 EU Charter of Fundamental Rights). While the Guidelines do not expressly refer to the EU Charter, they draw on case law of the Court of Justice of the EU (CJEU) in their understanding and interpretation of the concept of “human assessment.” In the Ligue des droits humains judgment, delivered in June 2022, the CJEU noted that any human assessment “must rely on objective criteria … and to ensure the non-discriminatory nature of automated processing.”
Additionally, according to the Dutch DPA (AP), human intervention ensures that a decision is made carefully and prevents people from being (unintentionally) excluded or discriminated against by the outcome of an algorithm. Hence, human intervention must contribute meaningfully to the decision-making process, rather than serving a merely symbolic function.
It is worth noting that while the Guidelines are specific in their interpretation of the exception contained in Article 5(1)(d), they also mention that this express exclusion from the prohibition may not be the only one. However, the Guidelines do not further elaborate on what other exceptions may apply and in which contexts. It is likely that such exceptions may have to be assessed on a case-by-case basis and, in any case, be real, substantial, and meaningful. Nevertheless, what the Guidelines do clarify is that when the system falls within the scope of the exclusion from the prohibition, it will be classified as a high-risk AI system and be subject to specific requirements and safeguards, including with regard to human oversight as referred to in Articles 14 and 26 AI Act.
Finally, it is worth noting that AI systems used in the context of national security are excluded from the scope of the AI Act as referred to in Article 2(3) and further explained in Recital 24. This means that an AI system that falls under the ‘predictive policing’ prohibition may nevertheless be permitted exclusively for national security purposes. In this context, the Guidelines do not clarify the distinction between national security and law enforcement activities, which could be crucial for delineating the boundaries of the prohibition of individual risk assessment.
This is particularly relevant with regard to ‘dual-use systems’ – AI systems that can be used both for law enforcement purposes and for the prevention of national security threats. Recital 24 provides a clarification for such cases, stating that ‘AI systems placed on the market or put into service for an excluded purpose, namely military, defence or national security, and one or more non-excluded purposes, such as civilian purposes or law enforcement, fall within the scope of this Regulation and providers of those systems should ensure compliance with this Regulation.’ Hence, if an AI system is placed on the market or put into service for both national security and law enforcement purposes, it must nevertheless comply with the AI Act.
6. The prohibition can apply to private actors when they are entrusted by law to exercise public authority and public powers
Notably, the ‘predictive policing’ prohibition does not apply exclusively to law enforcement authorities. The prohibition may be assumed to apply, in particular, when private actors are entrusted by law to exercise public authority and public powers for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties. Private actors may also be explicitly requested, on a case-by-case basis, to act on behalf of law enforcement authorities and carry out individual crime risk predictions. In those cases, the activities of those private actors could also fall within the scope of the Article 5(1)(d) prohibition.
The prohibition may apply to private entities assessing or predicting the risk of a person committing a crime where this is objectively necessary for compliance with a legal obligation to which the private operator is subject (for example, a banking institution obliged by Union anti-money laundering legislation to screen and profile customers for money-laundering offences).
The Guidelines also outline what is explicitly excluded from this prohibition or out of its scope, namely:
- Location-based crime prediction without individual profiling (e.g. predicting the likelihood of criminality in certain areas of a city);
- AI systems that support human assessments based on objective and verifiable facts linked to a criminal activity, as explored above;
- Administrative offence prediction, on the basis that the prosecution of such offences is less intrusive on individuals’ fundamental rights and freedoms; and
- Risk assessments of legal entities (unless targeting specific individuals).
While the Guidelines do not expressly address the issue, it is worth noting that, even where certain exemptions exist for the use of AI technologies in the law enforcement context, the mere fact that such uses occur in the context of determining criminal activity does not absolve a private entity from complying with legal obligations beyond the AI Act, including under the GDPR. In a case that led to a fine of more than €30 million imposed on Clearview AI by the Dutch AP in September 2024 under the GDPR, the company argued that it was acting in the interest of potential third-party users of its facial recognition database, in this case overwhelmed law enforcement authorities (paragraph 88 of the Dutch AP’s decision). The company also identified “responsible organizations charged with protecting society” (paragraph 88), which may include private actors, as justifying the interest of third parties in using its service.
In assessing whether the interests of third parties in combating crime, tracing victims, and carrying out other public duties qualify as legitimate interests, the Dutch AP notes that “such interests do not qualify as a legitimate interest of a third party” within the meaning of Article 6(1)(f) GDPR. The Dutch AP adds that, similarly, Dutch and European regulators cannot rely on legitimate interests under Article 6(1) GDPR for the purposes of exercising their duties of preserving and protecting society-wide interests (paragraph 92).
With this in mind, caution must be exercised to ensure a reading of the AI Act’s prohibitions that is contextualized within the broader set of EU rules regulating technology development and deployment. In this sense, the Guidelines could have expanded on Section 5.4 (Interplay with other Union law) by referring to at least one specific instance in which regulatory authorities, applying already applicable and relevant laws, have addressed technology uses that relate directly to the prohibition at hand. This would have helped reinforce legal certainty regarding the applicability and scope of the prohibition, by noting instances in which uses not expressly covered by the AI Act are nonetheless covered by other EU laws.
7. Concluding Reflections and Key Takeaways
As Article 5(1)(d) is limited in its scope, it does not entirely prohibit crime prediction or forecasting AI technologies
As explored throughout this blog post, the Article 5(1)(d) prohibition is limited and targeted in its scope: it does not entirely prohibit crime prediction or forecasting AI technologies. Rather, it focuses on prohibiting (individual) risk assessments for the prediction of criminal offences based solely on profiling or personality assessments. The prohibition draws on the logic and legal foundations of EU fundamental rights law and, in particular, on Article 47 (right to an effective remedy and to a fair trial) and Article 48 (presumption of innocence and right of defence) of the EU Charter of Fundamental Rights.
When an AI system does not meet all of the conditions for the prohibition to apply, it will nevertheless be classified as a high-risk AI system
As with the prohibitions analysed in previous blog posts, we find that when an AI system does not meet all of the conditions for the prohibition to apply, it will nevertheless be classified as a high-risk AI system. This reflects the AI Act’s scaled approach to delineating and classifying risk, and the close interplay between Articles 5 and 6 of the AI Act.
The Guidelines note that engaging in crime prediction activities may perpetuate or reinforce biases and erode public trust in law enforcement
Finally, given the particularly sensitive context and nature of applying AI technologies in the area of crime prediction and forecasting, wherein risk assessments can lead to significant legal effects and consequences for individuals, the Guidelines acknowledge that such activities may perpetuate or reinforce biases and erode public trust in law enforcement.