The GDPR and the AI Act Interplay: Highlights from FPF and Ada Lovelace Institute’s Joint Event
Authored by Christina Michelakaki, FPF Intern for Global Policy
On November 9, 2022, FPF, along with the Ada Lovelace Institute (Ada), organized a closed roundtable in Brussels where experts met to discuss the lessons that can be drawn from General Data Protection Regulation (GDPR) enforcement precedents when deciding on the scope and obligations of the European Commission (EC)’s Artificial Intelligence (AI) Act Proposal. The event brought together representatives from the European Parliament, civil society organizations, Data Protection Authorities (DPAs), and industry.
The roundtable discussion was based on a comprehensive report launched by FPF in May 2022 analyzing case law under the GDPR applied to real-life cases involving Automated Decision-Making (ADM). This blog outlines the speakers’ main conclusions on four topics:
- the complementarity between the AI Act and the GDPR;
- the need for clarity on transparency requirements and AI training;
- the difference between the risk-based approach and an exhaustive high-risk AI list; and
- the need for effective enforcement and redress avenues for affected persons.
FPF’s Managing Director for Europe, Dr. Rob van Eijk, gave opening remarks followed by a short introduction by FPF EU Policy Counsel Sebastião Barros Vale and Ada’s European Adviser, Alexandru Circiumaru.
As outlined in a recent analysis on the matter published by FPF, Barros Vale highlighted that:
- The AI Act and the GDPR share Article 16 of the Treaty on the Functioning of the European Union (TFEU) as a legal basis, so data protection considerations are important when it comes to regulating AI systems under the AI Act Proposal.
- Regarding the AI value chain, the AI Act’s ‘users’ will generally be the ‘controllers’ under the GDPR in the deployment phase of AI systems; during the development phase, providers will likely be the GDPR controllers where they build AI systems relying on the collection, analysis, or any other processing of personal data.
- Where no processing of personal data is involved, or where no GDPR breaches occur, but AI systems still significantly impact individuals, the AI Act could potentially fill redress gaps.
- The existing and future GDPR case law on ADM may inspire the future enlargement of the list of high-risk AI systems (HRAIS) provided in Annex III of the AI Act.
This was followed by short interventions from the participants and a discussion.
- The AI Act and the GDPR have different but complementary scopes and obligations.
Speakers seemed to agree that the AI Act’s departure from the GDPR’s distinction between data controllers and processors stems from the EC’s aim of regulating AI systems entering the EU market from a product safety perspective.
Industry representatives further explained that trying to understand the requirements of the AI Act from a data protection point of view does not make practical sense. They added that having specific types of AI systems in the Annex III list would not make them lawful per se, notably if they breach GDPR requirements.
Other panelists noted that the AI Act has a broader scope than the GDPR regarding AI systems, as it also covers systems that involve no personal data processing but may nonetheless negatively affect individuals.
Moreover, participants questioned whether the AI Act’s human oversight requirements would render Article 22 GDPR on ADM useless – notably, the rights for individuals to obtain human review of automated decisions. Some participants argued that this does not seem to be the case because users are not required by the AI Act Proposal to implement meaningful human oversight when they deploy AI systems. With regard to human oversight, the current draft of the AI Act only requires providers to embed measures into their AI systems that enable users to include human oversight in their deployment scheme.
Additionally, some experts claimed that the interplay between the AI Act and the GDPR was neglected in the original Proposal, which may raise further issues. For example, the initial text seems to overlook the fact that users (likely controllers under the GDPR) often depend heavily on AI providers for practical GDPR compliance. Moreover, speakers worried about issues arising in the AI system’s deployment phase – where AI users are in control – and pointed to the approach proposed in the Council of Europe’s draft Convention on AI as more tailored to the responsibilities of each party.
For another speaker, incorporating broader fundamental rights impact assessments into the AI Act could complement Data Protection Impact Assessments (DPIAs) under the GDPR.
- Transparency requirements and AI training need further clarity
With regard to certain requirements set forth by the AI Act Proposal, speakers conveyed that:
- Training AI systems using sensitive data, as allowed under Article 10(5) of the AI Act, may be challenging for business-to-business companies that do not usually process sensitive data, since they would need to ask third parties for access to such data.
- Transparency requirements under the AI Act need to ensure information is intelligible: sharing information that a well-trained expert understands will probably be useless for the average person. On the other hand, sharing too much information about how AI systems work may enable malicious actors to circumvent them (e.g., fraud detection algorithms).
- The difference between the risk-based approach and an exhaustive high-risk AI list
Pushing back against having a specific set of strictly regulated high-risk AI use cases in Annex III and prohibited practices in Article 5 of the AI Act, some voices suggested mimicking the GDPR’s open clauses and risk-based approach. More specifically, a few speakers agreed that replacing the closed list with a set of overarching principles and risk assessment requirements would increase providers’ accountability and enable enforcers to verify compliance in a more flexible manner.
Speakers who advocated for such a solution agreed that the concept of ‘risk’ and its underlying assessment criteria should be read in line with the GDPR, which could also provide less prescriptive indications of how to mitigate detected risks.
For other experts in the room, a clear definition of AI along with the Annex III list is preferable, as it avoids enshrining subjective risk assessment criteria. According to this view, relying on providers’ self-assessments to determine what counts as high-risk could benefit larger players with the financial incentives and legal resources to take a less cautious approach to AI development.
Among the participants who defended keeping a high-risk AI list, some called for the ability to update it easily, since novel risky AI use cases are constantly surfacing. One expert disagreed that it should fall on the EC to update the list, given the political pressures the institution faces; instead, the speaker called for a bottom-up approach involving regulators and public participation. Other voices also advocated for adding emotion recognition systems and the analysis of biometrics-based data to the list of high-risk AI use cases.
- A need for effective enforcement and redress avenues for affected persons
Lastly, participants touched upon the AI Act’s governance mechanisms and redress avenues.
- With regard to enforcement, some speakers stressed the need for cooperation between the different regulators competent on AI, arguing that DPAs should not be the sole or main enforcers of the AI Act, as they often lack the resources and expertise to deal with AI systems; other participants, however, supported the idea that DPAs should be the main enforcers. Participants also advised against letting EU Member States each choose their own national authority, to avoid fragmentation in regulatory approaches, and some called for centralizing the enforcement of AI rules at the EU level while not excluding a potential role for DPAs.
- When it comes to redress mechanisms, some speakers called for enshrining rights for individuals affected by AI systems, notably a right to complain to a supervisory authority and to file judicial claims, including class actions. Others highlighted that the recently proposed AI Liability and revised Product Liability Directives could empower individuals to obtain compensation for harms caused by AI systems.
Further reading:
- FPF’s report: “Automated Decision-Making Under the GDPR: Practical Cases from Courts and Data Protection Authorities,” May 2022
- FPF’s blog: “The GDPR and AI Act interplay: lessons from FPF’s ADM case law report,” November 2022
- Ada’s blog: “People, risk and the unique requirements of AI,” March 2022
- Ada’s blog: “Regulating AI in Europe: Four problems and four solutions,” March 2022