Red Lines under EU AI Act: Unpacking the prohibition of emotion recognition in the workplace and education institutions
Blog 6 | Red Lines under the EU AI Act Series
This blog is the sixth in a series exploring prohibited AI practices under the EU AI Act and their interplay with existing EU law. You can find the whole series here.
The sixth blog in the “Red lines under the EU AI Act” series unpacks the prohibition on emotion recognition in the workplace and education institutions, as contained in Article 5(1)(f) AI Act and explored in the Commission’s Guidelines on the topic. The analysis reveals a number of key takeaways:
- Not all emotion recognition AI systems are prohibited. The AI Act prohibits only the use of emotion recognition AI systems in the workplace or in relation to education institutions;
- The main rationale for the prohibition in the areas of the workplace and education institutions is the power imbalance and the asymmetric relationships in these contexts, where both workers and students are in particularly vulnerable positions;
- Emotion recognition systems that are not prohibited under this provision are classified as high-risk;
- Emotion recognition systems used for medical and safety purposes in the workplace or education and training institutions are excluded from this prohibition;
- The provision prohibits only the inference of emotions. The inference of intentions, which is included in the definition of “emotion recognition systems” in Article 3(39) AI Act, appears to be left outside the prohibition.
Article 5(1)(f) AI Act prohibits the placing on the market, the putting into service for this specific purpose, and the use of AI systems to infer the emotions of a natural person in the areas of the workplace and education institutions on the basis of biometric data, with specific exceptions for medical and safety purposes. Recital 44 AI Act invokes the lack of a scientific basis for the functioning of such systems and their key shortcomings, such as limited reliability, lack of specificity and limited generalisability, which may lead to discriminatory outcomes and can be intrusive to the rights and freedoms of the persons concerned.
Acknowledging the power imbalances in these environments, which, combined with the intrusive nature of these systems, could lead to detrimental or unfavourable treatment of certain natural persons or whole groups, the prohibition aims to protect individuals from potentially invasive emotional surveillance. It is important to note that AI systems for emotion recognition that are not put into use in the areas of the workplace or educational institutions do not fall within the scope of this prohibition and instead qualify as ‘high-risk’ under Annex III, point 1(c) AI Act.
It is unclear whether AI systems that do not have the identification or inference of emotions as their primary aim, but offer it as a secondary functionality, are covered by the prohibition. Consider, for example, an AI system primarily intended to transcribe meetings that can also infer emotions or intentions, or an AI system that monitors students during a test while also identifying their emotions.
1. Limited scope: The provision does not prohibit ‘emotion recognition systems’, but only ‘AI systems to infer emotions’
The Guidelines highlight that the prohibition in Article 5(1)(f) AI Act does not refer to emotion recognition systems more generally, but only to “AI systems to infer emotions of a natural person”.
Article 3(39) AI Act defines an ‘emotion recognition system’ as “an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data”. Three elements can be identified in this definition:
- The AI system must be used for identifying or inferring;
- Emotions or intentions of natural persons;
- Based on the biometric data of natural persons.
Hence, the targets of these AI systems are the emotions and intentions of individuals, which may be identified or inferred (the latter being a lower threshold). The identification or inference must be based on the biometric data of the individuals. This definition appears to cover the use of emotion recognition on individuals, leaving out groups.
The prohibition in Article 5(1)(f) AI Act does not refer to ‘emotion recognition systems’, but only to ‘AI systems to infer emotions of a natural person’. Recital 44 further clarifies that the prohibition covers AI systems ‘to identify or infer emotions’. ‘Intentions’ are mentioned neither in Article 5(1)(f) nor in Recital 44. Hence, it appears that while the definition of ‘emotion recognition systems’ in Article 3(39) can serve as a reference point for this prohibition, it does not equate to what the prohibition covers: the prohibition is narrower in scope.
Certain cumulative conditions must be fulfilled for the prohibition to apply:
- The practice must constitute the “placing on the market”, “putting into service for this specific purpose,” or the “use” of an AI system;
- The AI system is used specifically to infer emotions;
- The AI system is used in the area of the workplace or education and training institutions;
- The AI system does not fall under the exception for systems intended for medical or safety reasons.
All the cumulative conditions listed above must be met simultaneously to trigger the prohibition, an approach applied consistently across the AI Act’s full set of prohibited practices, and one which narrows the scope of the prohibition down to very specific use cases.
Recital 18 AI Act provides a non-exhaustive list of the emotions referred to in this definition, including happiness, sadness, anger, surprise, disgust, embarrassment, excitement, shame, contempt, satisfaction, and amusement. The Recital clarifies that physical states, such as pain or fatigue, are not included in the definition: systems used to detect fatigue in professional pilots or drivers for the purpose of preventing accidents, for example, are explicitly excluded. The mere detection of “readily apparent expressions, gestures, or movements” is not included either, unless it is used for identifying or inferring emotions; this covers basic facial expressions such as a frown or a smile, gestures such as movements of the hands, arms, or head, and characteristics of a person’s voice, such as a raised voice or whispering. The Guidelines further clarify that inferring emotions from a written text does not fall within the scope of the prohibition.
What remains unclear about the prohibition’s scope is that there is a thin line between emotions, other readily apparent expressions, and physical states such as pain or fatigue, which also result in expressions that can be mistaken for emotions. The distinction between mood and emotion (both of which can manifest in different ways) is similarly not made, leaving it unclear whether mood detection falls within the prohibition. Drawing these distinctions would require a detailed analysis of many factors and circumstances on a case-by-case basis, rather than of biometric data alone, making the application of this prohibition in practice challenging and complex.
2. It is unclear whether the inference of intentions is also prohibited
The AI Act does not clarify what it means by ‘intentions’ or how to differentiate them from ‘emotions’ in cases where the same system can identify both. This distinction is important because the prohibition applies only to emotions and does not refer to intentions, whereas the definition of ‘emotion recognition systems’ covers both.
None of the examples provided in Recital 18 seems to fall within the notion of intentions. While the emotions listed in the Recital represent reactions to situations or environments, the notion of ‘intention’ has a predictive quality, pointing to future conduct. Additionally, the Commission Guidelines seem to focus solely on emotions, without providing any clarification or definition of ‘intentions’. However, in a non-exhaustive list of examples of emotion recognition, they include “Systems inferring from voice or body gestures, that a student is furious and about to become violent”. While ‘furious’ seems to fall within the notion of ‘emotion’, ‘about to become violent’ is a prediction of a future action based on the (automatically) detected emotion. This prediction might fall within the notion of an ‘intention’ to commit an action in the future, but it might also be considered the identification of a transition from a passive state (emotions) to an active one (a combination of emotions and intentions), making it difficult to determine whether such a prediction falls within the prohibition. The Guidelines, however, do not seem to make such a distinction.
An example of ‘intentions’ in the workplace could be the detection of an employee’s intention to resign from the job based on their facial expressions during meetings or videocalls. A wide range of intentions, such as intentions to commit a crime, intentions to drop out of school, or even suicidal intentions could fall within this prohibition when detected in the area of the workplace or education. The Guidelines also note that the concept of emotions or intentions should be understood in a broad sense, noting that attitude and emotion are equivalent for the purposes of this prohibition, thus preventing circumvention through changes in terminology.
A further difference between the prohibition and the definition of ‘emotion recognition systems’ is that the prohibition refers only to the ‘inference’ of emotions, whereas the definition includes both the ‘identification’ and the ‘inference’ of emotions.
The Dutch Data Protection Authority (AP) interprets the prohibition as covering both the inference and the identification of emotions and intentions. The Guidelines distinguish between “identification” and “inferring”, clarifying that identification occurs where the processing of the biometric data of a natural person (for example, of the voice or a facial expression) allows an emotion to be directly compared with, and identified against, one that has been pre-programmed into the emotion recognition system. “Inferring”, by contrast, involves deduction through analytical processes, including machine learning approaches that learn from data how to detect emotions.
3. The prohibition refers only to emotions deriving strictly from biometric data
According to the Guidelines, this prohibition has a scope similar to that of the rules applicable to other emotion recognition systems¹, but it is limited to inferences based on a person’s biometric data, in line with the definition in Article 3(39) AI Act. Hence, the prohibition in Article 5(1)(f) covers only emotions derived strictly from biometric data.
Biometric data is defined in Article 3(34) AI Act as “personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, such as facial images or dactyloscopic data”. The relationship between the AI Act definition of biometric data and that provided in the GDPR is explored below in section 6 of this blog.
Because the ban is limited to inferences based on biometric data, it excludes AI systems that perform emotion recognition through inferences not based on biometric data, such as AI systems for crowd control, as well as AI systems inferring physical states such as pain and fatigue.
4. A broad interpretation of the ‘workplace’ and ‘education institutions’ contexts
The prohibition in Article 5(1)(f) AI Act is limited to emotion recognition systems specifically in the “areas of workplace and educational institutions”. The Guidelines submit that the workplace context should be interpreted broadly, covering any physical or virtual space where work is performed. The Guidelines also specifically mention training institutions as covered by this prohibition, even though training institutions are not mentioned in the AI Act’s provision or in any of the related Recitals. Beyond simply listing training institutions alongside educational institutions, the Guidelines do not provide any further details as to what constitutes a training institution.
Interestingly, the Guidelines clarify that hiring processes also fall within the workplace context for the purposes of this prohibition. Similarly, the Guidelines clarify that educational institutions encompass all types and levels of education, including admissions procedures, and should likewise be interpreted broadly, without limitation as to the types or ages of pupils or students or to a specific environment.
Based on the text of Recital 44, the prohibition also covers AI systems for emotion recognition in situations related to the workplace and education. This broadens the area of applicability of the prohibition while leaving room for interpretation as to what “related to the workplace” might consist of; such an interpretation may require a case-by-case analysis. The Dutch AP has interpreted this broad notion as including, for example, home working environments, online or distance learning, and the application of emotion recognition to recruitment and selection or to applications for education. Further clarification through clearer guidelines may be necessary here to ensure legal certainty, while keeping in mind the volatile nature of the ‘workplace’.
It is important to note that emotion recognition systems installed in a work environment can also be used to detect the emotions of customers rather than employees, for example to detect suspicious customers. The AI Act prohibition does not apply to such cases. However, the same system might detect the emotions of employees simultaneously with those of customers. The Guidelines touch upon this scenario only superficially, in an example stating that in such cases it should be ensured that “no employees are being tracked and there are sufficient safeguards”.
5. Exception(s) to the prohibition: medical and safety reasons
In a Joint Opinion on the AI Act, the European Data Protection Board (EDPB) and European Data Protection Supervisor (EDPS) state that the “use of AI to infer emotions of a natural person is highly undesirable and should be prohibited.” In this statement, and in a later EDPS Opinion, they further note that exceptions should be made for “certain well-specified use-cases, namely for health or research purposes”.
The Guidelines note that the exception to this prohibition should be interpreted narrowly, limited to what is strictly necessary and proportionate, including limits in time, personal application and scale, and accompanied by sufficient safeguards to ensure a high level of fundamental rights protection. The recitals of the AI Act stress that the exception applies to AI systems used strictly for medical or safety reasons, for example a system intended for therapeutic use. Accordingly, the therapeutic uses mentioned in Recital 44 AI Act as an exception should apply only to CE-marked medical devices. The Guidelines note that the exception does not extend to general well-being monitoring, such as stress or burnout detection.
Additionally, the Guidelines note that safety exceptions should be limited to protecting life and health, excluding other interests such as property protection or fraud prevention. “Explicit need” is also mentioned as a requirement for the exception to apply. The Guidelines also highlight that data collected and processed in this context cannot be used for any other purpose, hence linking to the GDPR’s purpose limitation principle.
6. Interplay with the GDPR
The Guidelines mention, in a footnote, that there is a distinction between the definition of biometric data in the AI Act and the definition in the GDPR. The definition provided in the AI Act “does not include the wording ‘which allow or confirm the unique identification’ (the functional use of biometric data)”, contrary to the definition of biometric data in the GDPR, which includes this requirement. The Guidelines therefore conclude that “the GDPR definition of biometric data will apply under data protection rules with regard to the processing of personal data (and when for example Article 9(1) and 9(2) GDPR would be applicable)”. This would mean that the AI Act definition applies in AI contexts, whereas the GDPR definition applies in data protection contexts.
In reaching this conclusion, the Commission appears to have disregarded Recital 14 AI Act, according to which the concept of biometric data in the AI Act must be interpreted “in the light of” the concept of biometric data in the GDPR. Clarifying this gap is crucial, given the high bar that must be met for unique identification.
Under the AI Act definition, the notion of ‘biometric data’ is broader, covering most emotion recognition systems that use biometric data without necessarily being able to identify the individual. Such systems would be left outside the prohibition if the GDPR definition of biometric data were applied instead. This raises the question of whether a categorisation of biometric data is needed for the purposes of Article 5(1)(f), to clarify the notion further and to determine without doubt which practices count as emotion recognition under this prohibition.
Data protection regulators were already treating emotion recognition systems, including those in the workplace, as high-risk under data protection law before the AI Act became applicable. In the Budapest Bank case, the Hungarian DPA found that AI emotion recognition in the workplace poses fundamental rights risks and ordered the Bank to modify its data processing practices to comply with the GDPR, specifically by refraining from analysing emotions during voice analysis. With regard to the emotion recognition of employees based on voice analysis, the DPA stressed the need for a separate balancing of interests, taking into account their vulnerable position resulting from their subordinate status. The reasoning behind the ban imposed by Article 5(1)(f) AI Act is noticeably reminiscent of the reasoning in this case.
However, it is interesting to note that in the Budapest Bank case, the DPA did not explicitly classify the voice-derived data as biometric data under Article 9 GDPR. The DPA treated the data as ordinary personal data, attracting heightened scrutiny due to the AI risk, rather than as special category biometric data triggering the application of Article 9 outright. Nevertheless, the Hungarian DPA specifically referenced the EDPB-EDPS Joint Opinion 5/2021 on the AI Act proposal, highlighting that AI-based emotion recognition systems pose a high risk to the fundamental rights of data subjects.
7. Concluding Reflections
The AI Act’s prohibition is consistent with previous case law of DPAs under the GDPR
The prohibition’s premise regarding the power imbalance and the vulnerable position of employees and students is reminiscent of a DPA fine in a similar case: in the Budapest Bank case of 2022, the Hungarian DPA found that the use of AI for emotion recognition of employees and customers breached several of the GDPR’s key principles and obligations.
The definition of ‘biometric data’ in the AI Act does not seem to be aligned with the definition of the same concept in the GDPR, creating confusion as to which definition applies in this prohibition.
The Guidelines give precedence to the definition of biometric data in the AI Act for the purpose of Article 5(1)(f) prohibition, whereas the AI Act itself, in its Recital 14, seems to give precedence to the concept of biometric data in the GDPR.
The prohibition expressly differentiates between emotions derived from biometric data and those derived from other types of analysis not involving biometric data, the latter practice being excluded from the prohibition.
Biometric data is defined as “personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, such as facial images or dactyloscopic data”. The narrow scope of the ban, covering only inferences based on biometric data, excludes AI systems that perform emotion recognition through inferences not based on biometric data.
¹ See Annex III, point 1(c), and Article 50 AI Act. ↩︎