FPF Report: Automated Decision-Making Under the GDPR – A Comprehensive Case-Law Analysis
On May 17, the Future of Privacy Forum launched a comprehensive Report analyzing case-law under the General Data Protection Regulation (GDPR) as applied to real-life cases involving Automated Decision-Making (ADM). The Report is informed by extensive research covering more than 70 court judgments and Data Protection Authority (DPA) decisions, as well as specific guidance and other policy documents issued by regulators.
The GDPR contains a specific provision applicable to decisions based solely on automated processing of personal data, including profiling, which produce legal effects concerning an individual or similarly significantly affect that individual: Article 22. This provision enshrines one of the “rights of the data subject”, namely the right not to be subject to decisions of that nature (i.e., ‘qualifying ADM’), which DPAs have interpreted as a prohibition rather than a prerogative that individuals can exercise.
However, the GDPR’s protections for individuals against forms of automated decision-making and profiling go significantly beyond Article 22. Several other safeguards apply to such processing activities, notably those stemming from the general data processing principles in Article 5, the legal grounds for processing in Article 6, the rules on processing special categories of data (such as biometric data) under Article 9, the specific transparency and access requirements regarding ADM under Articles 13 to 15, and the duty to carry out data protection impact assessments in certain cases under Article 35.
This new FPF Report outlines how national courts and DPAs in the European Union (EU)/European Economic Area (EEA) and the UK have interpreted and applied the relevant EU data protection law provisions on ADM so far – both before and after the GDPR became applicable – as well as notable trends and outliers in this respect. To compile the Report, we looked into publicly available judicial and administrative decisions and regulatory guidelines across EU/EEA jurisdictions and the UK. It draws from more than 70 cases – 19 court rulings and more than 50 enforcement decisions, individual opinions, or pieces of general guidance issued by DPAs – spanning 18 EEA Member States, the UK, and the European Data Protection Supervisor (EDPS). To complement the facts of the cases discussed, we also looked into press releases, DPAs’ annual reports, and media stories.
Some examples of ADM and profiling activities assessed by EU courts and DPAs and analyzed in the Report include:
- School access and attendance control through facial recognition technologies
- Online proctoring in universities and automated grading of students
- Automated screening of job applications
- Algorithmic management of platform workers
- Distribution of social benefits and tax fraud detection
- Automated credit scoring
- Content moderation decisions in social networks
Our analysis shows that the GDPR as a whole is relevant to ADM cases and has been effectively applied to protect the rights of individuals, even where the ADM at issue did not meet the high threshold established by Article 22 GDPR. Among these protections, we found detailed transparency obligations concerning the parameters that led to an individual automated decision, a broad reading of the fairness principle to prevent discrimination, and strict conditions for valid consent in cases of profiling and ADM.
Moreover, we found that when enforcers assess the threshold of applicability for Article 22 (“solely” automated, and “legal or similarly significant effects”), the criteria they use are increasingly sophisticated. Specifically:
- Courts and DPAs are looking at the entire organizational environment where ADM takes place – from the controller’s organizational structure to reporting lines and the effective training of staff – in order to decide whether a decision was “solely” automated or had meaningful human involvement; and
- Similarly, when assessing the second criterion for the applicability of Article 22, enforcers are looking at whether the input data for an automated decision includes inferences about the behavior of individuals, and whether the decision affects the conduct and choices of the persons targeted, among other multi-layered criteria.
A preliminary ruling request sent by an Austrian court to the Court of Justice of the European Union (CJEU) in February 2022 may soon help clarify these concepts, as well as others related to the information controllers must give data subjects about ADM’s underlying logic, significance, and envisaged consequences for the individual.
The findings of this Report may also inform discussions about pending EU legislative initiatives that regulate technologies or business practices fostering, relying on, or relating to ADM and profiling, such as the AI Act, the Consumer Credits Directive, and the Platform Workers Directive.
On May 20, the authors of the Report joined prominent European data protection experts for an FPF roundtable discussing some of the most impactful decisions analyzed. These include cases related to the algorithmic management of platform workers in Italy and the Netherlands, the use of automated recruitment and social assistance tools, and creditworthiness assessment algorithms. The discussion also covered pending questions sent by national courts to the CJEU on matters of algorithmic transparency under the GDPR. View a recording of the conversation here, and download the slides here.