GDPR and the AI Act interplay: Lessons from FPF’s ADM Case-Law Report
In May 2022, the Future of Privacy Forum (FPF) launched a comprehensive Report analyzing case-law applying the General Data Protection Regulation (GDPR) to real-life cases involving Automated Decision-Making (ADM). Our research highlighted that the GDPR’s protections for individuals against forms of ADM and profiling go significantly beyond Article 22 – which provides for the right of individuals not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects on them – and are currently being applied by courts and Data Protection Authorities (DPAs) alike. These range from detailed transparency obligations, to applying the fairness principle to avoid situations of discrimination, to strict conditions for valid consent in ADM cases.
As EU lawmakers are now discussing the amendments they would like to include in the European Commission (EC)’s Artificial Intelligence (AI) Act Proposal, what lessons can be drawn from GDPR enforcement precedents – as outlined in the Report – when deciding on the scope and obligations of the Act?
This blog will explore: the link between the GDPR’s provisions as relevant for ADM and the AI Act Proposal (1); how the AI Act’s concepts of providers and users fare compared to the GDPR’s controllers and processors (2); how the AI Act facilitates GDPR compliance for the deployers of AI systems (3); the opportunities to enhance or clarify obligations under the AI Act through the lens of ADM jurisprudence (4); the overlaps between GDPR enforcement precedents and the AI Act’s prohibited practices or high-risk use cases (5); the issue of redress under the GDPR and the AI Act (6); and a compilation of lessons learned from the FPF Report in the context of the debates around the AI Act (7).
Note: when referring to case numbers in this blog, the author is using the numbering of cases in the FPF Report.
- Both the GDPR and the proposed AI Act are grounded on Article 16 TFEU for the protection of personal data
One of the two legal bases used by the EC to justify the AI Act Proposal is Article 16 of the Treaty on the Functioning of the European Union (TFEU), which mandates the EU to lay down the rules relating to the protection of individuals with regard to the processing of personal data. This means that, at least to some extent, the AI Act’s rules would complement the protections afforded to data subjects under the GDPR, which is also based on Article 16 TFEU. In fact, in their 2021 Joint Opinion on the AI Act, the European Data Protection Supervisor (EDPS) and the European Data Protection Board (EDPB) suggested making GDPR compliance a precondition for allowing an AI system to enter the European market as a CE-marked product under the AI Act.
AI systems that would be regulated under the initial Proposal of the AI Act are the ones that rely on the techniques and approaches mentioned in Annex I of the Proposal (such as machine learning and logic-based approaches). Such techniques and approaches could constitute or enable ADM schemes to be implemented by controllers covered by the GDPR. However, the AI Act is generally agnostic regarding the decision-making scheme vis-à-vis individuals when the deployment of an AI system is at stake. This means that its scope is broader than ADM falling in or outside of the Article 22 GDPR prohibition (i.e., ‘qualifying’ or ‘non-qualifying’ ADM).
- As an illustration, Article 3(1) AI Act mentions that software that can produce “predictions and recommendations” – which courts and DPAs have generally not considered to be fully automated decision-making thus far – may constitute an AI system covered by the AI Act if it uses one or more of the techniques and approaches mentioned in Annex I.
- Moreover, the AI Act’s rules under Title III, Chapters 2 and 3 on high-risk AI systems (i.e., the ones that are intended to be used as a safety component of a product, or are themselves products, covered by the Union harmonization legislation listed in Annex II, or that fall under the Annex III list) apply to the systems’ providers, users, importers or distributors even if the final decision that affects a natural person is taken by a human on the user’s side, on the basis of the suggestions, signals, prompts or recommendations provided by the AI system.
However, even where the AI Act does not provide for specific protections for the rights of individuals where AI systems are underpinned by, or result in, solely automated decision-making having a legal or similarly significant effect on them, the safeguards provided by Article 22 GDPR will nevertheless kick in. On the other hand, and as mentioned in the Report, even in cases where Article 22 GDPR does not apply to a particular AI system, the rest of the GDPR applies to non-qualifying ADM and generally to the processing of personal data via AI systems. This could include the AI system’s training, validation and input data, as well as the AI system’s outputs, if they qualify as ‘personal data’ under the GDPR, regardless of whether the data are processed by the system’s providers or users. It is also noteworthy that there may be instances of ADM covered by the GDPR that do not involve any use of AI systems, but rather other forms of automating decisions.
- The AI Act’s users are generally the GDPR’s controllers in the AI system’s deployment phase
Instead of focusing on the role of the parties in the AI value chain with regard to the processing of personal data, the AI Act Proposal focuses on the entities that develop and market AI systems or that use them for commercial purposes, i.e. ‘providers’ and ‘users’ respectively. Each is assigned specific obligations, but most of the regulatory burden is placed on providers, in particular when it comes to high-risk AI systems (HRAIS) and their conformity assessments.
This way of defining the main actors subject to obligations under the future AI Act may create inconsistency with the GDPR’s definitions, roles, and responsibilities for covered entities. It has also led the EDPB and the EDPS to ask EU policymakers to ensure the AI Act’s obligations are consistent with the roles of controller and processor when personal data processing is concerned. Indeed, ‘providers’ under the AI Act will likely not be considered the data controllers under the GDPR during their AI systems’ deployment phase.
Under the GDPR, the bulk of obligations, liability and accountability for how personal data are processed are assigned to ‘controllers’. It is the AI Act’s ‘users’ who will rather be the ‘controllers’ under the GDPR in the deployment phase of AI systems. Therefore, even if they have a very limited set of duties under the Act (e.g., Article 29), they remain liable and accountable under the GDPR for how personal data is used by, or results from the use of, AI systems. It is more likely that providers would be qualified as “processors” under the GDPR, processing personal data on behalf of users, notably if they provide support or maintenance services for AI systems involving the processing of personal data on users’ behalf and under their instructions.
The situation is different in the development phase of AI systems, where ‘providers’ will likely be considered “controllers” under the GDPR whenever they build AI systems relying on collection, analysis, or any other processing of personal data. The same will be the case for the testing phase of AI systems (e.g. bias monitoring) and for post-market monitoring purposes, which may be legally mandatory under Articles 10 and 61 of the AI Act. In these cases, the status of the provider as controller would derive from the law that imposes data processing duties (i.e., the AI Act), as mentioned in EDPB guidance on the concept of controller (para. 24).
Complex questions further arise in relation to potential joint controllership situations under the GDPR, between “users” and “providers” under the AI Act, in the deployment phase of AI systems. For instance, does the legal obligation that providers have under the AI Act to determine the AI system’s intended purpose and technical data collection lead to the qualification of providers as joint controllers with their customers (users), even if they do not obtain access to the AI system’s input or output data, especially after the Court of Justice of the European Union (CJEU)’s Jehovan todistajat ruling?
- The AI Act facilitates, to a certain extent, GDPR compliance for ‘users’ of AI systems
References to GDPR compliance in the AI Act proposal are scarce. An example is the authorization granted to providers to process special categories of data covered by Articles 9(1) and 10 GDPR when conducting bias monitoring, detection, and correction in relation to HRAIS (under Article 10(5) of the Act). Another is the obligation for users to rely on the information contained in the providers’ instructions of use for HRAIS when carrying out Data Protection Impact Assessments (DPIAs) under the GDPR, as per Article 29(6) AI Act. However, as the crux of GDPR obligations for controllers in the commercial deployment phase of AI systems will arguably rest with the systems’ users, it is worth exploring whether the AI Act’s obligations imposed on providers of HRAIS may put users in a better position to comply with the GDPR.
Under Article 13 AI Act, providers have extensive transparency obligations towards their customers (i.e., ‘users’ and, most likely, GDPR ‘controllers’ as explained above), with a view to enabling the latter ‘to interpret the system’s output and use it appropriately.’ This transparency comes in the form of instructions of use that should specify, inter alia, the HRAIS’s intended purpose, level of accuracy, performance, specifications for input data, and implemented human oversight measures (as detailed in Article 14 AI Act). Additionally, the HRAIS’s technical documentation that the provider is required to draw up under Article 11 AI Act – and whose elements are listed under Annex IV AI Act – will provide insight about the HRAIS’s general logic, key design choices, main classification choices, relevance of the different parameters, training data sets, and potentially discriminatory impacts, among other features.
Regardless of the wording under Article 29(6) AI Act, users may use the information obtained from providers under Article 13 AI Act and the HRAIS’s technical documentation not only to comply with their duty to carry out a DPIA, but also to ensure broader alignment with the GDPR and its transparency imperatives. Such information may also prove useful for complying with other GDPR obligations, such as providing notice to data subjects about profiling and ADM and completing their records of AI-powered data processing activities under Article 30 GDPR.
However, it should be noted that, under the AI Act Proposal, duties for providers only exist with regard to HRAIS, whereas the users’/controllers’ above-mentioned obligations under the GDPR may apply even where the underlying AI systems are not qualified as such. For example, under the EDPB’s criteria, controllers could still be obliged to carry out a DPIA on an AI system that is not included in the Annex III AI Act list of HRAIS, such as a social media recommender system or an AI system used in the context of online behavioral advertising.
Specifically, with regard to controllers’ notice obligations on profiling and ADM, the FPF Report shows that DPAs and courts in Europe agree that Articles 13(2)(f) and 14(2)(g) GDPR provide for an ex-ante obligation to proactively inform data subjects about the system’s underlying logic, significance and envisaged consequences, and not about the concrete automated decisions that affect them. On the other hand, an obligation to provide decision-level explanations exists where the data subject exercises his or her data access right under Article 15(1)(h) GDPR, as is illustrated by two Austrian DPA decisions (cases 14 and 21 in the Report) and an Icelandic DPA decision (case 38 in the Report). In such instances, DPAs ordered controllers to disclose specific elements of information regarding automated credit or marketing scores attributed to data subjects, notably the algorithm’s parameters or input variables, their effect on the score, and an explanation of why the data subject was assigned a particular score.
Thus, when complying with the GDPR’s transparency obligations, controllers who qualify as users under the AI Act would find immense value in leveraging the sort of information that Articles 11 and 13 of the AI Act mandate providers to make available with regard to their HRAIS. Recent GDPR case law on ADM could make a case for extending providers’ transparency duties beyond HRAIS, and for ensuring that the standard of intelligibility for the information AI providers should make available to users is one that enables the latter to comply with their GDPR obligations.
A possible avenue to be considered through the legislative process would be to create a general duty under the AI Act for providers to assist users in their GDPR compliance efforts in relation to the AI systems they sell, even in cases where providers would not act as data processors for the users. Some impetus for this approach may be found in Article 9(4) AI Act, which mandates providers to inform users about the risks that may emerge from the use of the AI system, and to provide them with appropriate training.
- The GDPR’s ADM case law calls for further development or clarification of obligations under the AI Act
Some of the decisions analyzed in the FPF Report may provide indications that the AI Act’s obligations for providers and users – at least when HRAIS are at stake – need further development or clarifications.
Accuracy and transparency: In a landmark ruling, the Slovak Constitutional Court (case 4 in the Report) established that local law should require additional measures to protect individuals when automated assessments are carried out by State agencies. According to the Court, such measures could include: (i) checking the AI system’s quality, including its error rate; (ii) ensuring that the criteria, models, or underlying databases are up-to-date, reliable, and non-discriminatory; and (iii) making individuals aware of the existence, scope and impacts of automated assessments affecting them.
- Measures (i) and (ii) seem to be very close to the data quality, accuracy, robustness, and cybersecurity requirements proposed under Articles 10 and 15 AI Act. However, these obligations are geared towards HRAIS’s providers, and not users/controllers. In its decisions against Deliveroo and Foodinho (cases 3 and 6 in the Report), the Italian DPA fined the controllers for not verifying the accuracy and correctness of their automated rider-management decisions and underlying datasets, although these are not explicit requirements under the GDPR’s Article 22(3). Therefore, and at least for HRAIS, the EU legislator could eliminate legal uncertainty by incorporating data quality and accuracy requirements into Article 29, which sets out users’ obligations in this context. Such a requirement could go beyond merely checking whether the HRAIS’s input data is relevant in view of the intended purpose, as Article 29(3) AI Act currently requires.
- With regard to measure (iii), it should be noted that making individuals aware of the scope and impact of an automated assessment that falls outside of Article 22 GDPR goes beyond Articles 13(2)(f) and 14(2)(g) GDPR. As a rule, DPAs and courts have agreed with the EDPB that the detailed transparency requirements under those provisions only apply to ‘qualifying’ ADM (see Chapter 1.6.c of the Report). Additionally, the Slovak Constitutional Court’s requirement goes further than Article 52 AI Act, which contains disclosure duties for users who deploy certain AI systems that the Proposal considers to be ‘low-risk’, such as emotion recognition systems, biometric categorization systems, and ‘deepfakes’. In the initial text of the AI Act, there are no transparency requirements towards affected persons when HRAIS are at stake, and there are no such obligations for non-high-risk systems other than the ones set out in Article 52 AI Act. Incorporating transparency rights for affected persons in the AI Act, even if only in HRAIS use cases, could reduce information asymmetries between individuals and organizations when decisions are not fully automated (e.g., an AI system whose recommendations merely support human decision-making).
Lawful grounds for data processing: the AI Act sporadically mentions the interplay with the GDPR’s rules on lawful grounds and exemptions from the prohibition on processing special categories of data. Most notably, Article 54 AI Act creates stringent conditions for further processing of personal data in the context of AI regulatory sandboxes, which do not have an obvious connection to the purpose compatibility test in Article 6(4) GDPR. Additionally, Article 10(5) AI Act authorizes providers of HRAIS to tackle potential biases through the processing of special categories of data, as long as appropriate safeguards are in place. However, it fails to elaborate on the conditions for the collection of personal data from publicly available sources for mandatory training, validation, and testing of HRAIS. The narrow interpretation of ‘manifestly making data public’ assumed by the EDPB and (more recently) by Advocate General Rantos of the CJEU, together with the enforcement actions against Clearview AI (cases 10 to 13 in the Report), may significantly hinder the possibilities for AI providers to scrape data from the web to test their AI models against bias. Obtaining consent from data subjects for those purposes is often unfeasible, and the legitimate interests lawful ground often plays a limited role when sensitive data are at stake.
Article 10(5) of the AI Act could also potentially facilitate compliance with Article 9(2)(g) GDPR, which allows for the processing of special categories of personal data where such processing is necessary for reasons of substantial public interest, as long as it is based on Union or Member State law. Provided that countering bias qualifies as a “substantial public interest”, the Union law specifically providing for the obligation to process sensitive data – which in this case would be the AI Act – needs to provide for suitable measures to safeguard fundamental rights.
This complexity could offer an opportunity for the EU legislator to set boundaries and clear rules on the collection and use of personal data for training, validation, and testing of AI systems, at least for HRAIS.
AI risk management through the lens of ADM: Article 9 AI Act requires providers to establish and maintain a risk management system in relation to their HRAIS, including the ‘identification and analysis of [their] known and foreseeable risks.’ National court decisions on ‘legal or similarly significant effects’ of ADM under Article 22 GDPR may provide useful criteria that providers should consider when conducting risk analysis of HRAIS that could affect natural persons. In its Uber and Ola rulings (cases 17 to 19 in the Report), the District Court of Amsterdam analyzed the impact on drivers of the algorithms the companies were relying on for the functioning of their respective mobile applications providing ride-hailing services. The Court looked into: (i) the sensitivity of the data sets or inferences at stake; (ii) the temporary or definitive nature of the effects on data subjects (or their immediacy); (iii) the effects they would have on the drivers’ conduct or choices; and (iv) the seriousness of the financial impacts potentially involved for individuals. Factors such as these could be codified into Article 9 AI Act as valuable guidance for HRAIS providers’ risk management exercises.
Incorporating human oversight does not rule out the Article 22 GDPR prohibition: a question arises about whether the Article 14 AI Act requirements for the provider to set up human oversight tools for HRAIS would bring such systems outside of Article 22 GDPR. The answer is ‘not necessarily.’ While the prohibition in Article 22 GDPR may apply to AI systems that are not considered ‘high-risk’ under the AI Act, when HRAIS are indeed at stake, Article 14 AI Act only requires providers to incorporate features that enable human oversight, but not to ensure human oversight as a default.
It will be up to the user of the HRAIS (i.e., most likely the controller under the GDPR) to ensure such oversight via organizational arrangements. If it does not, it may be in breach of Article 22 GDPR, provided its ADM scheme is covered by the prohibition. Moreover, we have learned from the decision of the Portuguese DPA against a university that used proctoring software to monitor its students during exams (case 26 in the Report), and the court cases involving Dutch gun applicants and Austrian jobseekers (cases 8 and 9 in the Report), that merely having a human in the loop with the power and competence to make final decisions does not necessarily mean that the decision will not be considered ‘solely’ automated, and thus that Article 22 GDPR does not apply. For that, human decision-makers need to receive clear instructions or training about why and when they should follow the AI system’s recommendations or not.
In that respect, the EU legislator could envision that users have an obligation to inform their human decision-makers about the elements listed under Article 14(4) AI Act, so that the latter are able to make informed decisions based on the HRAIS’s output, and avoid so-called ‘automation bias’.
- Prohibited AI practices and HRAIS overlap with GDPR enforcement precedents on ADM
In general, the AI Act’s Annex III list of HRAIS seems to be based on litigated uses of AI systems by private and public bodies, including some that were analyzed by courts and DPAs under the GDPR and deemed unlawful under it for a variety of reasons, as the FPF Report shows. Some examples include:
- Cases of biometric identification and categorization, where DPAs have often found the underlying collection and storage of data, as well as the training of the AI system, to be in breach of the GDPR’s rules on lawful grounds (e.g., the Clearview AI enforcement cases);
- Systems that select students for university admissions (case 25 in the Report), or that assess or test them (cases 7 and 26);
- Some uses of AI systems in recruitment processes, which may be justified under the exception in Article 22(2)(a) GDPR (case 3 in the Report);
- AI systems used for worker management or for managing ride-sharing apps and the services provided by gig workers, which were the focus of the Italian DPA (cases 3 and 6) and of Dutch courts (cases 17 to 19 in the Report);
- Uses of AI to manage public services and benefits (cases 9, 20, 27, and 32 in the Report), where DPAs agree that strong data quality and bias monitoring requirements are essential;
- Automated creditworthiness checks through AI (cases 14, 15, 22, 23, and 36 to 39 in the Report), although Annex III excludes from its scope AI systems that are developed by small-scale providers for their own use.
Despite that significant overlap, some AI use cases that were investigated by European DPAs and courts have not been included in the Annex III list, including recommender and content moderation systems (case 24 in the Report), online behavioral advertising systems, and systems used by tax authorities to detect potential fraud (cases 4 and 27 in the Report). Likewise, the EC did not include commercial emotion recognition systems in the HRAIS list, in spite of the recent Hungarian DPA decision against a bank that used an AI system to detect the emotions of the customers who contacted its help center, and the fact that such systems are included in paragraph 6(b) of the Annex when used for law enforcement purposes. These enforcement actions could be an early indicator of a potential future enlargement of the HRAIS use cases outlined in Annex III.
Some of these use cases may already be prohibited under the AI Act’s Article 5(1), notably where they would rely on ‘subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behavior’ and may lead to harm, or where they would constitute “social scoring”. The latter could be the case of the SyRI algorithm, which the Dutch Government used when trying to detect instances of benefits fraud in neighborhoods hosting poor or minority groups, and which both the District Court of The Hague and the Dutch DPA deemed to be unlawful (case 27 in the Report). In any case, some of the concepts under Article 5 AI Act (like ‘detrimental or unfavorable treatment’) could be clarified to make the prohibition more certain for AI providers and users, lest the latter struggle with it as much as they do with Article 22 GDPR’s definition of ‘legal or similarly significant effects.’
- The issue of ensuring redress where AI systems not underpinned by personal data may significantly affect individuals and communities
It is undeniable that the GDPR provides meaningful rights to individuals who are subject to or affected by AI systems underpinned by the processing of personal data. For example, they can access a detailed description of how they were profiled through Article 15 GDPR, as well as obtain human intervention, express their point of view, and contest decisions when ‘qualifying ADM’ is at stake. Additionally, providers and users of AI systems have a priori obligations when they act as ‘controllers’ under the GDPR to implement data minimization, storage limitation, purpose limitation, confidentiality and, most relevantly, fairness requirements in how they collect and use personal data, to name just a few of their obligations.
However, when ‘qualifying ADM’ is not at stake, ‘controllers’ do not have an obligation to ensure human intervention or the possibility to contest an AI-powered decision. This is a gap that could potentially be tackled through the AI Act.
On this note, although the AI Act is largely not rights-oriented, Article 52 AI Act on disclosure duties for certain AI systems creates a precedent for enshrining justiciable rights for individuals who are exposed to or targeted by AI systems.
On the other hand, the Slovak Constitutional Court ruling (case 4 in the Report) required the local legislature to enshrine redress rights for individuals to effectively defend themselves against errors of the automated system at issue. Such actions could be possible under the broad judicial redress avenue provided by Article 82 GDPR, but only where the underlying processing of personal data is conducted in breach of GDPR rules such as transparency, fairness, and purpose limitation. For those cases where processing of personal data is not involved, but where AI systems may still significantly impact individuals, the AI Act could potentially fill a redress gap.
- Lessons from the ADM GDPR case law in the AI Act context
Some of our findings in the ADM GDPR Case-Law Report are useful for the debate around the AI Act, as shown above. Putting them in the context of the European Commission’s Proposal to regulate AI and the ensuing legislative process, we found that:
- The AI Act and the GDPR are bound to work in tandem – they are both grounded on Article 16 TFEU and they have many areas where they complement each other, as well as areas where they could be better coordinated so that both their goals are achieved. For instance, the obligations for ‘users’ and ‘providers’ under the AI Act could be further attuned to the GDPR, by enhancing and clarifying the transparency requirements of ‘providers’ towards users or by including a general best-efforts obligation for providers to facilitate GDPR compliance where ‘users’ are deemed ‘controllers’.
- Data accuracy and data quality requirements under the GDPR could be strengthened with cross-references in the AI Act for HRAIS, such as in Article 29.
- Further safeguards for the processing of sensitive personal data to counter bias in AI systems could be laid out more clearly for the purpose of protecting fundamental rights.
- Obligations to train ‘humans in the loop’ on what criteria they should take into account to rely on or to overturn a decision resulting from the application of a HRAIS could complement the protections the GDPR affords through Article 22.
- The existing and future GDPR case-law on ADM may be a source for the future enhancement of the HRAIS list provided in Annex III of the AI Act, considering that most of the existing HRAIS list overlaps with ADM systems that were already subject to GDPR enforcement.
- While the GDPR’s right to effective judicial redress can potentially cover most situations where AI systems relying on or producing personal data infringe principles like fairness, transparency, and purpose limitation, there is room to consider options for further redress, such as in situations where AI systems rely on non-personal data and significantly affect the rights of individuals or communities.