Red Lines under the EU AI Act: Restricting Real-time Remote Biometric Identification Systems for Law Enforcement Purposes
Blog 8 | Red Lines under the EU AI Act Series
This blog is the eighth of a series that explores prohibited AI practices under the EU AI Act and their interplay with existing EU law. You can find the whole series here.
1. Introduction
The eighth blog in the “Red lines under the EU AI Act” series examines the general prohibition on the use of real-time remote biometric identification (RBI) systems in publicly accessible spaces for law enforcement purposes imposed by Article 5(1)(h) of the EU AI Act, the three narrow exceptions to the prohibition available to Member States, and how these obligations fit in the broader context of real-time biometric identification in the EU.
There are a few key takeaways from our analysis of this provision:
- The prohibition on the use of RBI systems in public spaces is narrowly tailored. All of the factors must be present for the prohibition to be triggered; otherwise, the collection and use of biometric information falls into the “high-risk” category of AI systems.
- RBI systems can create a risk to the rights and freedoms of individuals simply by being deployed. The European Commission Guidelines and the AI Act Recitals both emphasize the risk of a “chilling effect” on the exercise of public freedoms that can come from a perception of ubiquitous surveillance.
- The Guidelines and the AI Act itself make a significant effort to distinguish banned “remote biometric identification” from permitted uses of biometric identification, such as device-level identity verification.
- Mileage may vary – because the offenses for which an exception to the RBI prohibition may be sought are defined in Member State criminal law, actual application of the prohibition and its exceptions may diverge significantly across Member States.
With these key takeaways in mind, Section 2 of this blog examines the reasoning behind the prohibition on RBI, while Section 3 explores the specific elements that must all be present to bring a processing activity within the provision’s scope. Section 4 outlines the important but limited exceptions to the prohibition, while Section 5 examines how this provision interacts with other relevant areas of EU law, such as Article 9 of the General Data Protection Regulation (GDPR). Section 6 includes closing thoughts and takeaways along with a brief examination of salient activity by DPAs.
2. Why the prohibition? Specific risks associated with RBI for law enforcement purposes
As noted earlier in this blog series, the creation and use of large-scale biometric identification systems has long been an area of serious concern for EU authorities. This concern is particularly acute in the context of such systems’ deployment for law enforcement purposes; the Guidelines recognize the potential impact that widespread deployment of these technologies has on the rights and freedoms of individuals. The Guidelines further warn that the “feeling of constant surveillance” the deployment of RBI systems in public spaces may elicit risks “indirectly dissuad[ing] the exercise of freedom of assembly and other fundamental rights,” and that technical failures in AI systems may also produce discriminatory effects based on sensitive personal characteristics such as age, ethnicity, race, sex, or disability status.
3. Verification vs. Identification: what systems are captured by the RBI prohibition?
The Guidelines walk through a number of questions that must be examined in order to understand whether a given system falls within the prohibition’s scope:
- Does this system qualify as “remote biometric identification”?
- Is the system “real time”?
- Is the space “publicly accessible”?
- Is the system used for law enforcement purposes?
It is critical to note that all of these criteria must be present for a system to be caught by the ban set forth in Article 5 – a cumulative test illustrated in the sketch below.
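To make the cumulative nature of the test concrete, here is a minimal Python sketch that models the four criteria as boolean flags. The class and function names are our own invention, not anything drawn from the Act or the Guidelines, and each flag stands in for what is in reality a fact-intensive legal assessment.

```python
from dataclasses import dataclass

@dataclass
class RBIAssessment:
    """Illustrative model of the four cumulative Article 5(1)(h) criteria.

    Each flag abstracts a fact-intensive legal inquiry; in practice none
    of these can be reduced to a simple boolean.
    """
    is_remote_biometric_identification: bool  # Article 3(41)
    is_real_time: bool                        # Recital 17
    in_publicly_accessible_space: bool        # Article 3(44)
    for_law_enforcement_purpose: bool         # Article 3(46)

def prohibition_applies(a: RBIAssessment) -> bool:
    # The ban is triggered only when ALL four criteria are met;
    # if any one is absent, the system falls outside Article 5(1)(h)
    # (though it may still be regulated as "high-risk").
    return (
        a.is_remote_biometric_identification
        and a.is_real_time
        and a.in_publicly_accessible_space
        and a.for_law_enforcement_purpose
    )

# Example: a retrospective ("post") system is not "real time",
# so the Article 5(1)(h) prohibition does not apply to it.
post_rbi = RBIAssessment(True, False, True, True)
assert prohibition_applies(post_rbi) is False
```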
Article 3(41) of the AI Act defines an RBI system as an “AI system for the purpose of identifying natural persons, without their active involvement, typically at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database.”
Whether a system qualifies as an RBI system depends on:
- Whether the system captures “biometric data”
- Whether the system is “remote”
- Whether the system is used for identification
The Act and Guidelines consider biometric data to be machine-readable representations of individuals’ measurable physical characteristics – for example, eye distance and size, or nose length – or behavioral characteristics, such as gait or voice print. This is broader than the definition of biometric data provided in Article 4(14) of the GDPR, which covers information arising from specific technical processing of physical, physiological, or behavioral characteristics of a natural person in a way that permits the unique identification of that person. This last part of the GDPR definition (“unique identification”) is absent from the AI Act concept, as further analyzed in Blog 6 and Blog 7 of this series. However, “identification” plays a key part in defining RBI systems.
Whether a system qualifies as “remote” turns on its functioning at a distance and on the absence of an individual’s choice to interact with it (or possibly even awareness of its existence). “Identification” is critical in that it is distinguished from “verification”: identification establishes the identity of a natural person by comparing that person’s biometric data against the biometric data of many individuals stored in a reference database (a one-to-many comparison), whereas verification confirms that a specific person is who they claim to be, for example by matching sensor data to an on-device record (a one-to-one comparison). The sketch below makes this distinction concrete.
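The one-to-one versus one-to-many distinction can be illustrated in code. The following Python sketch is a deliberate toy: the template format, similarity function, and threshold are all invented for this example, and real biometric matching involves far more sophisticated feature extraction and scoring.

```python
# Illustrative only: "templates" are stand-in feature vectors and the
# similarity function is a toy; real systems use specialised models.

def similarity(a: list[float], b: list[float]) -> float:
    # Toy similarity score: higher means more alike.
    return 1.0 / (1.0 + sum((x - y) ** 2 for x, y in zip(a, b)))

THRESHOLD = 0.9  # invented value for illustration

def verify(probe: list[float], enrolled: list[float]) -> bool:
    """Verification (1:1): is this person who they claim to be?

    Compares the live sample against a single stored record, e.g. an
    on-device template. This is NOT "identification" under the AI Act.
    """
    return similarity(probe, enrolled) >= THRESHOLD

def identify(probe: list[float], database: dict[str, list[float]]) -> str | None:
    """Identification (1:N): who, among many, is this person?

    Searches a reference database of many individuals. It is this
    one-to-many comparison that characterises an RBI system.
    """
    best_id, best_score = None, 0.0
    for person_id, template in database.items():
        score = similarity(probe, template)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= THRESHOLD else None

db = {"alice": [0.1, 0.2], "bob": [0.9, 0.8]}
assert identify([0.11, 0.19], db) == "alice"      # one-to-many search
assert verify([0.11, 0.19], db["alice"]) is True  # one-to-one check
```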
Per Recital 17 of the AI Act, a system operates in “real time” if it captures and processes biometric data “instantaneously, near-instantaneously or in any event without significant delay.” This determination is a fact-based inquiry, ensuring that an artificial, “minor” delay cannot be introduced in order to take a prohibited system outside the ban. The Commission also notes that the same device may well be capable of both “real-time” and “post” (retrospective) identification functions – the prohibition’s application is technology-agnostic.
“Publicly accessible space” is defined in Article 3(44) of the AI Act as “any publicly or privately owned physical space accessible to an undetermined number of natural persons, regardless of whether certain conditions for access may apply, and regardless of the potential capacity restrictions.” The Act and Guidelines emphasize this status is also a fact-based inquiry and cannot be evaded by mere signage or official designation; this component of the prohibition is clearly tied to the potential risk posed by RBI deployments to the exercise of fundamental political freedoms such as the freedom to assemble.
Finally, Article 3(46) of the AI Act defines “law enforcement purpose” as those “activities carried out by law enforcement authorities or on their behalf for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including safeguarding against and preventing threats to public security.” This definition is consistent with the Law Enforcement Directive (LED). The Commission is careful to note in the Guidelines that non-law-enforcement entities acting on their own behalf to detect crime would not fall afoul of the prohibition, but rather must comply with the Article 6 regime governing “high-risk” AI systems.
4. When is RBI processing for law enforcement permitted?
Recital 33 of the AI Act emphasizes that any exceptions to the prohibition on using RBI systems for law enforcement purposes must be limited to “exhaustively listed and narrowly defined situations.” Three are set out in Article 5(1)(h)(i)-(iii) of the AI Act (exception (iii)’s offence-severity test is sketched in code after the list):
(i) the targeted search for specific victims of abduction, trafficking in human beings or sexual exploitation of human beings, as well as the search for missing persons;
(ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or a genuine and present or genuine and foreseeable threat of a terrorist attack;
(iii) the localisation or identification of a person suspected of having committed a criminal offence, for the purpose of conducting a criminal investigation or prosecution or executing a criminal penalty for offences referred to in Annex II and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least four years.
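Exception (iii) embeds a concrete two-part test: the offence must appear in Annex II and must carry a maximum custodial sentence or detention order of at least four years in the Member State concerned. A minimal sketch of that test follows; the offence catalogue and its entries are invented for illustration, and the real determination of course depends on each Member State’s criminal code.

```python
# Hypothetical catalogue: offence name -> (listed in Annex II?,
# maximum custodial sentence in years under national law).
# All entries are invented for illustration.
NATIONAL_OFFENCE_CATALOGUE = {
    "armed_robbery": (True, 12),
    "pickpocketing": (False, 2),   # not an Annex II offence
    "minor_fraud": (True, 3),      # Annex II offence, but penalty too low
}

def exception_iii_available(offence: str) -> bool:
    """Both limbs must be satisfied: Annex II listing AND a maximum
    penalty of at least four years in the Member State concerned."""
    in_annex_ii, max_sentence_years = NATIONAL_OFFENCE_CATALOGUE[offence]
    return in_annex_ii and max_sentence_years >= 4

assert exception_iii_available("armed_robbery") is True
assert exception_iii_available("minor_fraud") is False
```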
Article 5(2)-(7) of the AI Act provides additional limitations on the exceptions, expanded on in Section 10 of the Guidelines. Key limitations, modeled in the sketch after this list, include:
- ‘Single target’ – RBI systems can only be deployed for the purpose of confirming the identity of a specifically targeted individual (except for the circumstances involving a genuine and present or foreseeable terrorist attack);
- Seriousness – assessment of the possible harm and consequences against the interference with fundamental rights, and inclusion of the offense in Annex II of the AI Act;
- Scale – the number and category of persons affected by interference;
- Probability – the likelihood that the negative event will occur;
- Geographic restriction – where the system will be deployed or the event may occur;
- Personal scope – defining the categories of persons concerned by the deployment;
- Time limit – duration of deployment must be limited to what is strictly necessary.
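To show how these limitations might combine into a single authorization record, here is a hypothetical Python data structure. The field names, the 72-hour cap, and the validation logic are entirely our own and do not reflect any official template; the substantive review is, in reality, a necessity and proportionality assessment by a judicial or independent administrative authority.

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class DeploymentAuthorizationRequest:
    """Hypothetical record mirroring the Article 5(2)-(7) limitations."""
    target_individual: str | None      # 'single target' (None only in
                                       # terrorist-attack scenarios)
    offence_in_annex_ii: bool          # seriousness
    persons_affected_estimate: int     # scale
    threat_probability: str            # probability, e.g. "imminent"
    geographic_area: str               # geographic restriction
    categories_of_persons: list[str]   # personal scope
    max_duration: timedelta            # time limit: strictly necessary

def passes_basic_checks(req: DeploymentAuthorizationRequest,
                        terrorist_attack_scenario: bool = False) -> bool:
    # Coarse illustration only: real review is a substantive legal
    # assessment, not a mechanical rule check.
    if req.target_individual is None and not terrorist_attack_scenario:
        return False  # deployment must confirm a specific target
    if not req.offence_in_annex_ii:
        return False
    return req.max_duration <= timedelta(hours=72)  # invented cap
```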
Each enumerated exception fulfills a public objective – and is consistent with the overall philosophy of both the AI Act and the GDPR of balancing the inherent interest of individuals in the exercise of fundamental rights against the risk of significant harm to the public in specific, factual scenarios. The exceptions to the RBI prohibition also represent an area of deference to the Member States, as they do not function automatically and must be authorized by Member State national laws. As a result, not all Member States will necessarily permit precisely the same types of RBI system usage in law enforcement contexts.
5. LED, GDPR and additional safeguards – how does the prohibition interact with other laws?
A significant element of the RBI prohibition is that the prohibited activity is explicitly tied to RBI systems deployed for law enforcement purposes – and law enforcement authorities themselves are, per Article 2(2)(d) of the GDPR, excluded from the scope of that regulation. Instead, national laws implemented by EU Member States to operationalize the LED constitute the pre-existing restrictions on the use of RBI technologies for law enforcement. The Guidelines do specifically observe that, where Member States have made missing persons inquiries an administrative matter and not a criminal one, the Article 5 RBI prohibition would not apply and the use of RBI systems in such searches would be governed by the GDPR instead.
The use of RBI systems for law enforcement pursuant to a relevant exception is permitted only if the law enforcement authority has completed a fundamental rights impact assessment as provided for in Article 27 of the AI Act (which imposes the obligation to conduct Fundamental Rights Impact Assessments (FRIAs) in relation to high-risk AI systems) and has registered the system in the EU database according to Article 49 of the AI Act. A FRIA must generally be completed before an RBI system is deployed – it cannot be created as an after-the-fact rationale for a pre-determined deployment. The Guidelines note that their treatment of FRIAs applies only to the Article 5 prohibition on RBI and not to FRIAs required in connection with high-risk AI systems generally; the latter will also be informed by a still-forthcoming Commission guidance document and template for FRIAs, currently expected this year. The Guidelines also highlight that the FRIA requirement does not replace any Data Protection Impact Assessment (DPIA) that may be required under provisions of the LED, GDPR, or the Data Protection Regulation for EU institutions and bodies (EUDPR), depending on the specific system in question.
The Guidelines attempt to differentiate between a DPIA, which focuses specifically on the risks to rights and freedoms stemming from the processing of individuals’ personal data, and a FRIA, which is a “more general” assessment of how an AI system could impact fundamental rights. The Commission offers additional detail on each of the categories of information a FRIA must contain, modeled in the sketch after this list, which include:
- A description of the RBI use and the deployer’s processes for the use, together with the intended purpose of use;
- The period of use and frequency of use;
- The categories of persons and groups affected by the system;
- The specific risks of harm to the affected persons;
- Human oversight measures; and
- Risk mitigation measures.
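As an aide-mémoire, the six required content categories can be read as fields of a record. The following dataclass is purely illustrative – field names and the completeness check are our own, and this is emphatically not the Commission’s forthcoming official template.

```python
from dataclasses import dataclass, field

@dataclass
class FRIARecord:
    """Illustrative container for the six categories of information a
    FRIA must contain (not an official template)."""
    use_description: str                  # RBI use, deployer processes,
                                          # and intended purpose of use
    period_and_frequency: str             # when and how often
    affected_persons: list[str]           # categories of persons/groups
    specific_risks_of_harm: list[str]     # risks to those persons
    human_oversight_measures: list[str]
    risk_mitigation_measures: list[str] = field(default_factory=list)

def is_complete(fria: FRIARecord) -> bool:
    # A FRIA must be completed BEFORE deployment; every category
    # must be substantively addressed, not left empty.
    return all([
        fria.use_description,
        fria.period_and_frequency,
        fria.affected_persons,
        fria.specific_risks_of_harm,
        fria.human_oversight_measures,
        fria.risk_mitigation_measures,
    ])
```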
Article 5(3) of the AI Act imposes a key further limitation on Member States that wish to deploy RBI systems – each individual use of the system must receive prior authorization from either a judicial or independent administrative authority, and automated decision-making producing an adverse legal effect cannot be based solely on a system’s output. This prior authorization requirement has an extremely limited exception for emergency situations where it is “effectively and objectively impossible to obtain an authorization before commencing use” of the RBI system; even then, authorization must still be requested within 24 hours of the use of the system. The Commission makes clear that the “double assessment” requirement of both the FRIA and the prior-use “necessity and proportionality” authorization is an intended consequence of the Act. Member States are also provided guidance on the necessity of deleting any data gathered under a use of the “emergency” authorization exception.
Whether a decision with adverse legal effect is based solely on an RBI system’s output is linked to the human oversight requirements set out in Article 14 of the AI Act. The Commission emphasizes that even with prior authorization, no decision producing an adverse legal effect may be taken solely on the basis of an individual’s identification by an RBI system, without further checks (for example, arrest and imprisonment based on the identification alone). Specifically, two natural persons with the necessary competence, training, and authority must separately verify and confirm an identification by an RBI system before action is taken on the basis of that identification, as the sketch below illustrates. Furthermore, each use of an RBI system must be notified to both the market surveillance authority and the national data protection authority.
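The two-person verification requirement amounts to a simple gating rule: no action on a match until two distinct, qualified reviewers have separately confirmed it. A hedged sketch follows; the reviewer representation and function names are invented for this illustration.

```python
from dataclasses import dataclass

@dataclass
class Reviewer:
    name: str
    competent_and_trained: bool  # has the "necessary competence,
                                 # training and authority"

def action_permitted(rbi_match_confirmed: bool,
                     reviewers: list[Reviewer]) -> bool:
    """No adverse action may rest solely on the system's output:
    two qualified natural persons must separately verify the match."""
    qualified = [r for r in reviewers if r.competent_and_trained]
    # Two SEPARATE verifications by distinct qualified persons.
    return rbi_match_confirmed and len({r.name for r in qualified}) >= 2

# A single reviewer, however qualified, is not enough.
assert action_permitted(True, [Reviewer("A", True)]) is False
assert action_permitted(True, [Reviewer("A", True), Reviewer("B", True)]) is True
```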
6. Relevant Enforcement and Key Takeaways
Pre-AI Act data protection enforcement activity relating to law enforcement use of real-time RBI systems in public spaces has been limited. So far, topically related enforcement has exclusively been directed at private-sector biometric identification activity, notably in the constellation of cases connected to the activities of Clearview AI. Of particular note (and discussed further in Blog 4 and Blog 5 of this series) are enforcement actions by the Dutch DPA rejecting an alleged third-party interest in combating crime as a valid lawful basis for processing biometric data, and by Italy’s Garante finding violations of core data protection principles related to fairness and transparency, both resulting in large fines.
The requirement for Member State implementation may still cause significant divergence in practice
Because each Member State must enact a separate law specifying which of the three exception categories it opts into, which crimes from Annex II it authorizes, and which authority grants case-by-case approval, significant groundwork is required before a single deployment can lawfully occur – and because there is no Europe-wide shared definition of serious criminal offenses, operational consequences may vary.
Forthcoming guidelines will be critical to understanding the operational environment
Due to the required “double assessment” structure for deploying RBI systems pursuant to one of the exceptions, potential deployers – even assuming the Member State legal authorization and review process is satisfied – will still need to complete the required Fundamental Rights Impact Assessment before any lawful deployment of an RBI system can commence. Completion of that step will hinge on a template and guidance document that the Commission has not yet published.
Limits are a feature, not a bug
Taken together, the limited exceptions to the RBI prohibition and the detailed, overlapping requirements for their use are clearly designed to create an extremely narrow environment for authorizing the deployment of RBI systems, subject to significant oversight by actors outside their operational environments, given the systems’ potential to impact the fundamental rights and freedoms of individuals. This follows the logic of Article 10 of the LED, which permits processing biometric data for uniquely identifying a natural person only where strictly necessary and authorized by Member State law.