Five ways in which the DPDPA could shape the development of AI in India
India enacted the Digital Personal Data Protection Act, 2023 (DPDPA) on August 11, 2023. The comprehensive data protection law is the culmination of a landmark Supreme Court decision recognizing a constitutional right to privacy in India and of discussions on multiple drafts spanning over half a decade.1
The law comes at a time when, globally, there has been an exponential growth in artificial intelligence applications and use-cases, including consumer-facing generative AI systems. As a comprehensive data protection law, the DPDPA will significantly impact how organizations use and process personal data, which in turn affects the development and use of AI. Specifically, AI model developers and deployers will need to carefully consider the DPDPA’s regulatory scope concerning the processing of personal data, the limited grounds for processing, the rights of individuals in respect of their personal data, and the possible exemptions available to train and develop AI systems.
While the Central Government has yet to notify subordinate legislation to the DPDPA (the DPDP Rules), which will operationalize key provisions of the law, we can analyze the DPDPA for an early idea of how it could be applied to AI. While the new law may create challenges for AI training and development through its consent-centric regime, it also contains exemptions for publicly available data, exemptions for research, a limited territorial scope, and a risk-based approach to the classification of obligations: an overall approach that is likely to significantly shape the development of AI in India.
1. DPDPA’s consent-centric regime may pose challenges for AI training and development
The DPDPA recognizes consent and ‘certain legitimate uses’ as the two grounds for processing personal data. Section 7 of the DPDPA specifies scenarios where personal data can be processed without consent. These include situations where the data principal has voluntarily provided their personal data and has not objected to its use for a specific purpose, as well as cases involving natural disasters, medical emergencies, employment-related matters, and the provision of government services and benefits.
This means that the DPDPA creates a consent-centric regime for personal data processing. Notably, it does not recognize alternative legal bases to consent for processing personal data, such as contractual necessity and legitimate interests, that are provided under other leading data protection laws internationally, such as the General Data Protection Regulation (GDPR) in the EU and Brazil’s Lei Geral de Proteção de Dados (LGPD). Previous work by FPF has identified challenges – for both organizations and individuals – in relying on consent as the primary basis for processing, especially in ensuring that it is provided meaningfully. In the context of AI development, FPF’s report on generative AI governance frameworks in the APAC region highlights the challenges of relying on consent for web crawling and scraping (however, this may not be an issue under the DPDPA for publicly available data – see point 2 below). Specifically, without an established legal relationship with the individuals whose data is scraped, it is practically impossible to identify and contact them to obtain their consent.
Certain sector-specific AI applications and generative AI systems that require curated personal data to develop AI models will need to be trained on personal data that is not publicly available. In such a context, data fiduciaries (i.e., “data controllers” or entities that determine the purposes and means of processing personal data) will likely need to rely on consent as the primary ground for processing personal data. As per the DPDPA, data fiduciaries — in this case, AI developers or deployers — must ensure that consent is accompanied by a notice clearly outlining the personal data being sought, the purpose of processing, and the rights available to the data principal. Furthermore, for personal data collected before the enactment of the DPDPA, data fiduciaries are required to provide notice informing the “data principal” (i.e., data subject, or the person whose personal data are collected or otherwise processed).
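For AI developers building consent flows, the notice requirement can be read as a minimal data contract between the fiduciary and the data principal. The Python sketch below is a hypothetical illustration only: the DPDPA and the yet-to-be-notified DPDP Rules do not prescribe any particular format, and every field and function name here is an assumption made for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the information a DPDPA-style consent notice must
# surface to a data principal: the personal data sought, the purpose of
# processing, and the rights available. Field names are illustrative, not
# prescribed by the DPDPA or the (yet-to-be-notified) DPDP Rules.
@dataclass
class ConsentNotice:
    personal_data_sought: list[str]   # e.g., ["name", "email", "voice samples"]
    purpose_of_processing: str        # e.g., "fine-tuning a speech model"
    data_principal_rights: list[str] = field(default_factory=lambda: [
        "access", "correction", "erasure", "grievance redressal",
    ])
    language: str = "en"              # the DPDPA contemplates notices in Indian constitutional languages

def record_consent(notice: ConsentNotice, data_principal_id: str) -> dict:
    """Pair a notice with an affirmative act of consent for audit purposes."""
    return {
        "data_principal": data_principal_id,
        "notice": notice,
        # Consent under the DPDPA must be free, specific, informed,
        # unconditional, and unambiguous.
        "consent_given": True,
    }
```

A real implementation would also need to handle withdrawal of consent, which the DPDPA requires to be as easy as giving it, and the separate notices for legacy data collected before the Act’s enactment.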
2. Exemptions for publicly available data could facilitate training AI models on scraped data, but require caution
A significant provision of the DPDPA is the complete exclusion of publicly available data from the scope of regulation. According to Section 3(c)(ii) of the DPDPA, the DPDPA does not apply to data that is made publicly available by the data principal or by any other person legally obligated to make the data publicly available.
This blanket exemption goes further than similar provisions in other data protection laws, which, for instance, only exempt organizations from the obligation to obtain individuals’ consent for processing of their personal data if the data is publicly available. This is the case in Singapore, where Section 13 of the Personal Data Protection Act (PDPA), read with the Act’s First Schedule, exempts organizations from the requirement to obtain consent to process personal data if the data is publicly available. However, unlike under the DPDPA, data protection obligations under the PDPA continue to apply even when processing publicly available data.
Similarly, Article 13 of China’s Personal Information Protection Law (PIPL), which, broadly, specifies the grounds for processing personal data, allows the processing of personal data without consent if the data has been disclosed by the individual concerned or has been lawfully disclosed. Such processing must be within reasonable scope and must balance the rights and interests of the individual and the larger public interest.
In Canada, the relevant exemption under the Personal Information Protection and Electronic Documents Act (PIPEDA) only applies to the processing of publicly available information in the circumstances mentioned in the Regulations Specifying Publicly Available Information, SOR/2001-7 (13 December 2000). The Canadian data protection regulator provides guidance on the interpretation of what could be considered as publicly available.
Of note, the EU’s GDPR does not include any exemptions or even tailored rules applying to publicly available personal data: the whole regulation applies equally to all personal data, including the provisions related to lawful grounds for processing. For instance, with regard to giving notice to data subjects, the GDPR has a dedicated article that requires notice to be given when personal data was not collected directly from data subjects (Article 14). However, this obligation has an exception where “the provision of such information proves impossible or would involve a disproportionate effort, in particular for processing for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes”. There is an ongoing debate among European regulators on whether publicly available personal data, particularly data obtained through scraping, can lawfully be processed without the consent of individuals under the GDPR, with no clear answer yet.2
Globally, the scraping of webpages has come under increased regulatory scrutiny. In August 2023, members of the Global Privacy Assembly’s International Enforcement Cooperation Working Group issued a joint statement urging social media companies and other websites to guard against unlawful scraping of personal information from web pages. In May 2024, the European Data Protection Board’s ChatGPT Taskforce noted in its report that information automatically collected and extracted from webpages might contain personal data, including sensitive categories of personal data, which could “carry peculiar risks for the fundamental rights and freedoms” of individuals.
Processing of publicly available personal data would not be subject to obligations under the DPDPA to the extent that any personal data contained in the datasets was made publicly available by the data principal or by someone legally required to do so – this may include, for example, personal data from social media platforms and company directories. However, organizations will still need to incorporate appropriate safeguards to ensure that only permissible personal data is scraped and that the scraped data does not violate any other applicable laws. At the same time, questions may arise with regard to the applicability of the DPDPA to publicly available personal data that was collected for an initial processing operation, such as training an AI model, but which is no longer publicly available after being collected.
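To illustrate the kind of safeguard discussed above, the following Python sketch gates a scraping pipeline on an allow-list of vetted sources and on robots.txt. It is an assumed design for illustration: neither the allow-list approach nor the names used here come from the DPDPA, and passing such a check is not a guarantee of compliance.

```python
import urllib.robotparser
from urllib.parse import urlparse

# Hypothetical safeguard for a scraping pipeline: only fetch from sources
# vetted as carrying data made public by the data principal (or by a person
# under a legal obligation), and honor robots.txt. An assumed sketch, not
# legal advice.
VETTED_SOURCES = {"example-company-directory.com"}  # assumed allow-list

def may_scrape(url: str, user_agent: str = "research-crawler") -> bool:
    host = urlparse(url).netloc
    if host not in VETTED_SOURCES:
        return False  # source not vetted as permissibly public
    robots = urllib.robotparser.RobotFileParser()
    robots.set_url(f"https://{host}/robots.txt")
    robots.read()  # network call; wrap in error handling in real use
    return robots.can_fetch(user_agent, url)

# Example: may_scrape("https://example-company-directory.com/people/123")
```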
3. Exemptions for research purposes with clear technical and ethical standards could promote AI research and development
Section 17(2)(b) of the DPDPA also exempts processing of personal data for “research, archiving or statistical purposes” from obligations under the DPDPA. However, this exemption only applies if such processing complies with standards prescribed by the Central Government and is not done to take “any decision specific to a [d]ata [p]rincipal”. To date, the Central Government has not released any standards relating to this provision.
By contrast, data protection laws in most jurisdictions do not specifically provide an exemption for processing personal data for research purposes. Instead, they recognize research as a secondary use that does not require a lawful basis distinct from the one originally relied on, or they permit non-consensual processing for research, subject to certain conditions.
For instance, in the EU, under the GDPR, secondary use of personal data for archiving, statistical, or scientific research purposes is permissible, provided that ‘appropriate safeguards’ are in place to protect the rights of the data subject. These safeguards include technical and organizational measures aimed at ensuring data minimization. Furthermore, the GDPR allows the processing of sensitive categories of personal data when necessary for scientific or historical research purposes.
In Japan, the Act on the Protection of Personal Information (APPI) exempts organizations from consent requirements for the secondary collection and use of personal data if the data is obtained from an academic research institution and processed jointly with that institution. However, such processing must not be solely for commercial purposes and must not infringe upon the individual’s rights and interests.
In Singapore, the PDPA provides a limited additional basis for the use, collection, and disclosure of personal data for research purposes, if the organization can satisfy the following conditions: (a) the research purpose requires personally identifiable information; (b) there is a clear public benefit to the research; (c) the research results will not be used to make decisions affecting individuals; and (d) the published results do not identify individuals.
It is unclear at this stage whether the research exemption under the DPDPA will extend only to academic institutions or also to private entities that engage in research. While such an exemption could help create quality data sets for model development, it is crucial that the prescribed technical and ethical standards are clearly defined so as to prevent privacy harms.
4. Limited nature of DPDPA’s territorial scope may allow offshore providers of AI systems to engage in unregulated processing of personal data of data principals in India
Like many other global data protection frameworks, the DPDPA has extraterritorial applicability. Section 3(b) of the DPDPA indicates that the DPDPA applies to entities that process personal data outside India, if such processing is connected to any activity which is related to the offering of “goods or services” to data principals in India.
This provision is narrower in scope than similar provisions under other global data protection laws. For example, the GDPR, unlike the DPDPA, also applies extraterritorially to processing which involves “the monitoring of behaviour” of data subjects within the European Union. In fact, data protection authorities in Europe have fined foreign entities for unlawfully processing the personal data of EU residents, even when those entities have no presence in the region. Of note, under the EU’s AI Act, AI systems used in high-risk use cases3 “should be considered to pose significant risks of harm to the health, safety or fundamental rights if the AI system implies profiling” as defined by the GDPR (Recital 53), thus linking “profiling” as a component of an AI system to heightened risks to the rights of individuals. Interestingly, the Personal Data Protection Bill, 2019, which was introduced in the Indian Parliament and withdrawn in 2022, as well as the Joint Parliamentary Committee’s version of the data protection bill, extended extraterritorial applicability to any processing that involved the “profiling of data principals within the territory of India”.
This narrower scope permits offshore providers of AI systems, which do not provide goods and services to data principals in India, to profile and monitor the behavior of data principals in India without being subject to any obligations following from the DPDPA. Additionally, such companies may engage in unregulated scraping of publicly available data to train their AI systems, beyond the exception explored above. As highlighted in point 2, publicly available personal data that has not been made available by the data principal or by any other person under a legal obligation still falls under the DPDPA’s scope of regulation. This could include personal data shared by others on blog pages, social media websites, or in public directories, among others. However, these DPDPA obligations do not extend to offshore organizations, as long as they do not engage in activities related to offering goods or services in India.
For the same types of data, all other data fiduciaries must ensure that the data is processed based on permissible grounds and is protected by appropriate security safeguards. Additionally, for personal data collected through consent, data fiduciaries must ensure that data principals are afforded the rights to access, correct, or erase their personal data held by the fiduciary.
5. Classification of significant data fiduciaries with objective criteria would allow a balanced and risk-based approach to data protection obligations relevant to AI systems
The DPDPA adopts a risk-based approach to imposing obligations by introducing a category of data fiduciaries known as ‘Significant Data Fiduciaries’ (SDFs). The DPDPA empowers the Central Government to designate any data fiduciary or class of data fiduciaries as an SDF based on the following factors:
The volume and sensitivity of personal data processed;
The risk posed to the rights of data principals;
The potential impact on the sovereignty and integrity of India;
Risk to electoral democracy;
Security of the state; and
Public order.
In addition to complying with the obligations for data fiduciaries, SDFs are required to:
appoint a Data Protection Officer based in India, who will serve as the primary point of contact for grievance resolution under the mandatory grievance redressal mechanism; and
designate an independent data auditor to conduct regular audits and evaluate compliance with data protection obligations, and carry out periodic Data Protection Impact Assessments (DPIAs).
The DPIA obligation is particularly relevant to identifying and mitigating risks to privacy and other rights that may be impacted by processing of personal data in the context of training or deploying an AI system.
The Central Government also has the power to impose additional obligations on SDFs. Conversely, it is empowered to exempt certain data fiduciaries or classes of data fiduciaries, “including startups”, from obligations relating to notice, data retention limitation, and accuracy.
It is important to note that the DPDPA does not specify objective criteria, such as the categories of personal data that may be considered sensitive, or the volume of data or users required, for the classification of SDFs or for the easing of certain obligations for data fiduciaries. In the absence of specific quantitative thresholds, the classification of AI-driven companies could be influenced by the Central Government’s perception of the potential threats posed by specific AI applications.
Conclusion
With the AI market in India growing at 25-35% annually and projected to reach a market size of around $17 billion by 2027, the Indian government has recognized this opportunity by allocating over $1.2 billion for the IndiaAI Mission, aimed at developing domestic capabilities to boost the growth of AI in the country. As AI continues to evolve and integrate into various sectors, the DPDPA provides a crucial framework that will influence how organizations develop and deploy AI technologies in India. The law’s exemptions for publicly available data, its over-reliance on consent, and a graded approach to obligations for data fiduciaries present both opportunities and challenges.
The provisions of the DPDPA will only take effect once the government issues a notification under Section 1(2) of the DPDPA. The forthcoming DPDP Rules are expected to clarify and operationalize key aspects of the Act. These include the form and manner of providing notices, breach notification procedures, how data principals can exercise their rights under the DPDPA, and the provisions on the procedure and operations of the Data Protection Board. The effectiveness of the law in balancing privacy protections and the prevention of harms, on one hand, with harnessing the benefits that AI could bring for people and society, on the other, will become clearer once these rules are in place.
Edited by: Gabriela Zanfir-Fortuna, Josh Lee Kok Thong, and Dominic Paulger
You can refer to FPF’s previous blogs (here and here) for a brief history and overview of the DPDPA.
Does the GDPR Need Fixing? The European Commission Weighs In
The European Commission published its second Report on the General Data Protection Regulation (GDPR) on July 25, 2024, assessing the GDPR’s impact and the effectiveness of its application since the Commission’s first Report, published in June 2020. The second Report acknowledges the relative success of the GDPR in protecting individuals and supporting businesses, while also highlighting areas for improvement: supporting stakeholders’ compliance efforts, providing clearer and more actionable guidance from data protection authorities (DPAs), and achieving more consistent interpretation and enforcement of the GDPR across EU Member States.
This blog surfaces key takeaways from the Commission’s second Report on the GDPR, with an overview and analysis of the findings from various stakeholders, including DPAs. The Report draws conclusions following the past years of GDPR enforcement and applicability, exploring enforcement and the use of cooperation and consistency mechanisms; implementation of the GDPR by Member States and an overview of the exercise of the data subject rights; the GDPR as a cornerstone of the EU’s new legislative rulebook; and international transfers and global cooperation.
1. Enforcement and the use of cooperation and consistency mechanisms are on a growth trend, bringing total fines of 4.2 billion EUR and increased use of corrective measures
In 2020, the Commission’s first Report highlighted the need for a more efficient and harmonized handling of cross-border cases across the EU, resulting in the 2023 Commission proposal for a Regulation on additional procedural rules currently being negotiated by EU legislators.
In its second Report, the Commission assessed recent enforcement activity under the GDPR, highlighting a trend of increased cooperation between DPAs, increased use of the GDPR consistency mechanism and the growing intervention of the European Data Protection Board (EDPB) via its Opinions, with the following highlights:
Almost 2,400 case entries were registered in the EDPB’s information exchange system as of 3 November 2023;
Lead DPAs issued approximately 1,500 draft decisions, with over 990 resulting in final decisions finding GDPR infringements (as of 3 November 2023);
DPAs from 7 Member States participated in 5 joint operations; and
DPAs from 18 Member States raised 289 relevant and reasoned objections, 101 of which were raised by German authorities, with a success rate in reaching consensus varying from 15% (German authorities) to 100% (Polish DPA).
The cases submitted to dispute resolution addressed the legal bases for processing data for behavioral advertising on social media and processing children’s data online.
Regarding the consistency mechanism, the report notes that:
The EDPB has adopted 190 consistency opinions;
9 binding decisions were adopted in dispute resolution, all of which instructed the lead DPA to amend its draft decision, with some resulting in significant fines;
5 DPAs adopted provisional measures under the urgency procedure (Germany, Finland, Italy, Norway and Spain); and
2 DPAs requested an urgent binding decision by the EDPB under Article 66(2) GDPR, and the EDPB ordered urgent final measures in one case.
The Commission pointed to more robust enforcement activity by DPAs in recent years. DPAs use corrective measures and adopt infringement decisions in both complaint-based and own-initiative cases. The Report stated that DPAs have imposed “substantial fines in landmark cases against ‘big tech’”. In total, DPAs have imposed over 6,680 fines amounting to approximately EUR 4.2 billion, with Ireland accounting for the highest total fines (EUR 2.8 billion), followed by Luxembourg (EUR 746 million) and France (EUR 131 million). Liechtenstein, Estonia, and Lithuania were reported to have imposed the lowest total fines: EUR 9,600, EUR 201,000, and EUR 435,000, respectively. The highest number of fines were imposed in Germany (2,106) and Spain (1,596); the fewest were imposed in Liechtenstein (3), Iceland (15), and Finland (20). Most fines were imposed for (i) infringement of the principles of lawfulness and security of processing, (ii) infringement of the provisions related to the processing of special categories of personal data, and (iii) failure to comply with individuals’ rights (Chapter III of the GDPR).
The Report showed that DPAs effectively used “amicable settlement” procedures, with over 20,000 complaints resolved, even though such procedures are not available in all Member States. This procedure was commonly used in Austria, Hungary, Luxembourg, and Ireland.
Furthermore, DPAs launched over 20,000 own-initiative investigations and collectively received over 100,000 complaints yearly. In 2022, nine DPAs received over 2,000 complaints. Germany (32,300), Italy (30,880), Spain (15,128), the Netherlands (13,133), and France (12,193) registered the highest number of complaints, while Liechtenstein (40), Iceland (140), and Croatia (271) registered the lowest number. The median time to handle complaints from receipt to closure ranges from 1 to 12 months.
The Report notes that German DPAs launched the highest number of own-initiative investigations (7,647), followed by Hungary with 3,332, Austria with 1,681, and France with 1,571 investigations.
Besides fines, DPAs used corrective measures such as warnings, reprimands, and orders to comply with the GDPR. In 2022, German DPAs adopted the highest number of decisions imposing corrective measures (3,261), followed by Spain (774), Estonia (332) and Lithuania (308). The lowest number of corrective measures was imposed in Liechtenstein (8), Czechia (8), Iceland (10), the Netherlands (17) and Luxembourg (22). Controllers and processors frequently challenge decisions in national courts, most commonly on procedural grounds. For instance, in Romania, all 26 decisions finding an infringement were challenged before the national court, while in the Netherlands, the rate of challenge was reported to be 23%.
2. Implementation of the GDPR by Member States continues to be fragmented
Similar to the 2020 Report, stakeholders still reported fragmentation in the national application of the GDPR, ranging from national legislation to diverging interpretations of the GDPR by DPAs. The concerns relate in particular to:
The minimum age for a child’s consent in relation to the offer of information society services to the child;
Introduction by Member States of further conditions concerning the processing of genetic data, biometric data or data concerning health; and
Processing of personal data relating to criminal convictions and offenses.
However, the Report mentions that Member States consider that a limited degree of fragmentation may be acceptable. The specification clauses provided by the GDPR remain beneficial, particularly for processing by public authorities (the Council position states that “the margins left for national legislation to define specific framework for certain type of processing activities, for example when it comes to article 85 and 86 of the GDPR regarding the freedom of expression and information and the right of public access to official documents, remain beneficial and relevant notably for public authorities given the specificity of their processing activities”).
Notably, the Report points out that the interpretation of the GDPR by national DPAs remains fragmented as DPAs continue to adopt diverging interpretations of key data protection concepts, creating legal uncertainty and disrupting the free movement of personal data. Some of the specific issues raised by stakeholders include different views on the appropriate legal basis for processing personal data, diverging opinions on whether an entity is a controller or processor, and, in some cases, DPAs not following the EDPB guidelines or publishing conflicting national guidelines. Some stakeholders also consider that certain DPAs and the EDPB adopt interpretations that deviate from the risk-based approach of the GDPR, mentioning areas such as the interpretation of anonymization, the legal bases of legitimate interest and consent, and the exceptions to the prohibition of automated individual decision-making.
The Commission highlights that it monitors the implementation of the GDPR on an ongoing basis, having launched infringement procedures against Member States on issues concerning the independence of DPAs (e.g., Belgium) or the right to an effective judicial remedy where the DPA does not handle a complaint (e.g., Finland and Sweden). The Commission also regularly requests confidential updates from DPAs on significant cross-border cases, particularly those involving large tech companies.
3. Two-thirds of Europeans have heard of the GDPR, and they are increasingly exercising their Data Subject Rights
A noteworthy mention is that individuals are increasingly familiar with, and actively exercise, their rights under the GDPR: 72% have heard of the GDPR, with 40% knowing what it is. Awareness is highest in Sweden (92%) and lowest in Bulgaria (59%). Additionally, 68% are aware of a DPA responsible for data protection, with 24% knowing which authority it is. Awareness of DPAs is highest in the Netherlands (82%) and lowest in Austria (56%) and Spain (58%) (2024 Eurobarometer survey, as referenced by the Commission’s Report). While these statistics show an increased awareness of the existence of data protection rights, understanding of the GDPR still needs to be improved, as evidenced by the many trivial or unfounded complaints received by DPAs.
Nonetheless, several user-friendly digital tools have been developed to make it easier for data subjects to exercise their rights. Additionally, by adopting the Data Governance Act, the Commission hopes to increase the number of such tools. Industry stakeholders have stated that the right to erasure is increasingly used, while the right to rectification and the right to object are rarely used.
Right of access: The most frequently invoked right is the right of access (Art. 15 GDPR). Controllers report that they are challenged with “unfounded or excessive requests”, managing high volumes of requests, and dealing with requests unrelated to data protection. Civil society organizations note that responses to access requests are often delayed or incomplete, while the data received is not always in a readable format. Public authorities claim to have difficulties resolving the interaction between the right of access and rules on public access to documents.
Right to portability: The Commission has adopted initiatives that facilitate easier switching between services, supporting competition, innovation, and user choice with respect to the right to data portability. The Report makes reference to the role of the Data Act in enhancing data portability for users of smart devices, requiring products or related services to support this technically, and to the Digital Markets Act, which mandates effective data portability for users of core platform services, particularly those provided by “gatekeepers”. Other initiatives, such as the Platform Work Directive, the European Health Data Space Regulation, and the Framework for Financial Data Access Regulation, aim to bolster portability rights in specific sectors. Interestingly, the Report does not include any data on portability-related requests under the GDPR or complaints related to portability.
Right to lodge a complaint: The large number of complaints received shows that there is broad awareness of the right to lodge complaints with DPAs. However, civil society organizations continue to point out inconsistencies in how complaints are handled across Member States. The Commission maintains that its legislative proposal on procedural rules should address these issues. Regarding collective redress, although few Member States have allowed non-profit bodies to take independent action under GDPR Article 80(2), the Representative Actions Directive, effective from June 2023, is expected to harmonize this process by facilitating collective actions for GDPR breaches.
Protection of children’s data: The EU and national authorities have increasingly implemented measures to safeguard children online, notably with the introduction of the Digital Services Act and its provisions to enhance children’s privacy and safety on online platforms. This policy priority has equally been reflected in the data protection field, with DPAs working together to promote child protection in advertising and recently fining social media companies for GDPR violations when processing children’s data. Other key developments include the upcoming EDPB guidelines on children’s data processing, and the creation of a task force on age verification to support the development of an EU-wide approach to age verification, under the auspices of the Digital Services Act Board. Age verification will be included in the European Digital Identity Wallet, which should be available to all EU citizens and residents in 2026.
4. The position of DPOs and the availability of soft law tools need improvement
The Commission’s Report focuses on the GDPR’s role in establishing a level playing field, noting how companies have embraced an internal data protection culture and recognize it as a key competitive factor, thanks to the GDPR’s flexible compliance framework built on soft law tools such as Codes of Conduct, certification mechanisms, and standard contractual clauses (SCCs). However, several shortcomings are identified, both from the perspective of stakeholders and of regulators. Companies note that the use of soft law tools needs improvement, arguing that the development of Codes of Conduct has been limited by bureaucracy and a lack of engagement from DPAs. In particular, SMEs report that, despite the benefits of tailored support by DPAs, they still perceive compliance as complex and fear enforcement, as inconsistent approaches remain across Member States. The Report calls on DPAs to engage more proactively and to provide practical tools and guidance.
EU data protection officers (DPOs) are also addressed by the Commission’s Report: despite being well-regarded as independent experts, DPOs face several challenges, such as difficulties in their appointment, lack of resources, additional non-data protection tasks, and insufficient seniority. The EDPB calls for enhanced awareness-raising and support from DPAs to ensure that DPOs can effectively perform their duties under the GDPR.
5. The GDPR is described as a cornerstone for the EU’s new legislative rulebook in the digital sphere
Since the 2020 Report, several EU legislative initiatives have complemented or specified GDPR rules to address emerging areas, some of them being proposed specifically to enhance data sharing. The Commission highlights several files, some completed, some still under legislative action: the Digital Services Act, the Digital Markets Act, the AI Act, the Directive on Platform Work, the Political Advertising Regulation, the Interoperable Europe Act, the anti-money laundering package, the Data Governance Act, the Data Act, and the European Health Data Space. Notably, the Commission includes the proposed e-Privacy regulation among the digital policy initiatives building on the GDPR. The report highlights that all new legislation must align with the GDPR and the Court of Justice case law interpreting it.
With multiple digital rules on the horizon, cooperation across various regulatory areas, such as data protection, competition law, consumer law, and cybersecurity, is needed. In its Report, the Commission notes that close cooperation is crucial when addressing issues such as the compatibility of “pay or OK” models with EU law.
New digital regulations often establish specialized structures, such as the Digital Markets Act high-level group and the European Data Innovation Board, to coordinate enforcement. DPAs actively engage with other regulatory bodies through groups and task forces to ensure coherent and complementary actions. However, there is a need for more structured and efficient cooperation, especially for cross-border issues affecting many individuals, while ensuring that each authority remains responsible for compliance within their jurisdiction. The Report highlights that Member States should enhance national-level collaboration to support this.
6. Global ambitions continue with new adequacy decisions, trade agreements featuring data protection provisions, and enforcement cooperation agreements with third countries
The Commission assesses that, since 2020, the concept of “international transfers” under the GDPR has been updated to reflect the CJEU’s Schrems II ruling, which clarified the level of protection that different transfer instruments must provide to ensure that the GDPR is not undermined, as well as how that level of protection is to be assessed: data exporters must consider both the safeguards set out in the transfer instrument and the relevant aspects of the legal system in which the data importer is located. The Report also notes that the Schrems II ruling has been reflected in the guidance of the EDPB, which updated its “adequacy referential”.
The Commission therefore provides a comprehensive update on the next steps in its global cooperation efforts since the Schrems II ruling. Following the invalidation of the adequacy decision for the EU-US Privacy Shield, the EU and the US developed the EU-US Data Privacy Framework, introduced in the US through an Executive Order on Enhancing Safeguards for United States Signals Intelligence Activities; the Commission followed suit by adopting an adequacy decision, with a first review set to take place in 2024.
New adequacy decisions in conformity with the latest interpretation have also been adopted, while others are expected soon: The Commission has adopted adequacy decisions for South Korea and the UK (with a “sunset clause” expiring in 2025). Adequacy talks are ongoing with Brazil, Kenya, and international organizations such as the European Patent Organisation. The Commission is also engaging with various countries globally to expand the network of adequacy decisions. Periodic reviews of existing decisions are also taking place, the most recent being Japan in 2024. The Commission also highlights the role played by these decisions as a strategic tool for improving EU relations and promoting regulatory convergence with third countries.
The Report calls for streamlining of the BCR approval process
The Report also praises the development of additional instruments beyond adequacy decisions, such as new SCCs, which introduce updated safeguards aligning with GDPR requirements, a modular approach offering a single entry-point covering various transfer scenarios, increased flexibility for the use by multiple parties, and a practical toolbox to comply with the Schrems II decision. The SCCs were welcomed by stakeholders, with feedback indicating that the SCCs remain the most used tool for transfers by EU data exporters.
Stakeholder feedback also indicates that model clauses are increasingly central to global data flows: several jurisdictions have endorsed the EU SCCs as a transfer mechanism under their own data protection laws, with limited formal adaptations to their domestic legal order (for instance, the UK and Switzerland), while other countries have adopted model clauses that share important common features with the EU SCCs (for example, New Zealand and Argentina). Moreover, the Report points to the creation of model clauses by other international and regional organizations or networks, such as the Council of Europe’s Consultative Committee of Convention 108, the Ibero-American Data Protection Network, and the Association of Southeast Asian Nations (ASEAN), noting that this opens up new opportunities to facilitate data flows between different regions based on model clauses, with the EU-ASEAN Guide on the EU SCCs and ASEAN model clauses as a concrete example.
In addition to SCCs, binding corporate rules (BCRs) remain prominent for data transfers between members of corporate groups or among enterprises engaged in a joint economic activity: since the adoption of the GDPR, the EDPB has adopted 80 positive opinions on national decisions approving BCRs. However, the Report calls on DPAs to streamline the BCR approval process, which stakeholders describe as long, complex, and detrimental to their broader adoption.
Privacy and Data Protection will Continue to be Featured in Trade Agreements
Highlighting the successful inclusion of data protection safeguards in recent EU agreements with, for example, the UK and Canada, the Report argues that data protection safeguards, which help ensure effective and secure data flows, will continue to be featured in future international agreements, pointing to the Second Additional Protocol to the Cybercrime Convention and the EU-US bilateral negotiations on an agreement on cross-border access to electronic evidence in criminal matters.
The Report also highlights the Commission’s position as a proponent of strong provisions to protect privacy and boost digital trade at the World Trade Organization, in the ongoing negotiations on the Joint Statement Initiative on electronic commerce, noting that since the GDPR came into force, privacy and data flow provisions have been consistently included in EU free trade agreements, notably in the EU-UK Trade and Cooperation Agreement and in the agreements with Chile, Japan and New Zealand. At the same time, discussions are ongoing with Singapore and South Korea.
The Commission plans to negotiate enforcement cooperation agreements with third countries, such as the G7 members
The Report also details that the Commission has maintained an active role in global privacy discussions at both the bilateral level (with national governments, regulators, international organizations, and especially EU candidate countries) and the multilateral level (contributing to the Consultative Committee on the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (Convention 108), engaging in discussions at the G20 and G7, and working with regional organizations like ASEAN and the African Union). Over the following years, it remains to be seen how the Commission takes such engagement further, particularly with regard to negotiating enforcement cooperation agreements.
7. Concluding Reflections: next steps for the GDPR?
The Report concludes that to achieve the twin goals of the GDPR – strong protection for individuals while ensuring the free flow of personal data within the EU and safe data flows outside the EU – there needs to be a focus on:
Robust enforcement: accelerate the adoption of GDPR procedural rules;
Support: proactive support from DPAs to assist SMEs and stakeholders in GDPR compliance;
Consistency: ensure uniform GDPR interpretation and application across the EU;
Effective cooperation: enhance collaboration among regulators;
Global action: advance the Commission’s international strategy on data protection.
The Report notes that the EDPB and DPAs are invited to make full use of the cooperation tools under the GDPR so that dispute resolution is used only as a last resort, and Member States are called on to ensure that DPAs maintain full independence and receive adequate resources, including technical expertise, to address emerging technologies and new responsibilities in the context of a growing body of digital legislation. Within this ecosystem, the Commission will address the need for effective cross-regulatory cooperation to ensure consistent application of EU digital rules while respecting DPAs’ roles in the supervision of personal data processing.
Notably, after taking stock of the GDPR’s successes and shortcomings in this second Report, the Commission is not calling for the reopening and updating of the GDPR.
Editors: Dr. Gabriela Zanfir-Fortuna, Bianca-Ioana Marcu
The World’s First Binding Treaty on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law: Regulation of AI in Broad Strokes
FPF has published a Two-Page Fact Sheet overview of the Framework Convention on AI.
While efforts to regulate the development and deployment of Artificial Intelligence (AI) systems have, for the most part, unfolded at the national or regional level, there has been increased focus on the steps taken by the international community to negotiate and design cross-border regulatory frameworks. The data protection community, technology lawyers, and AI experts thus now have the crucial task of increasingly looking beyond regional borders for a holistic view of legislative frameworks aiming to regulate AI.
The Framework Convention on AI is one such significant initiative, spearheaded by the Council of Europe (CoE), an international organization founded in 1949 with the goal of promoting and advocating for human rights, democracy, and the rule of law. Recognizing that AI systems are developed and deployed across borders, an ad-hoc intergovernmental Committee on Artificial Intelligence (CAI) was established under the auspices of the CoE in January 2022 and tasked with elaborating a binding legal framework on the development, design, and application of AI systems.
There are several key reasons why the treaty is a significant and influential development in the field of global AI law and governance, not only in the context of the CoE and its Member States, but around the world.
Firstly, the Framework Convention was drafted by the CAI, composed not only of Ministers representing the CoE’s 46 Member States, but also of Ministers or high-level representatives from the Governments of the United States, Canada, Mexico, Japan, Israel, Ecuador, Peru, Uruguay, and Argentina. In addition to representatives of prominent human rights groups, the meetings of the CAI and the drafting of the Framework Convention included representatives of the European Commission, the European Data Protection Supervisor, and the private sector. Inter-governmental and multi-stakeholder participation in the drafting of a cross-border, binding instrument is often a critical factor in determining its impact. Crucially, the Framework Convention will also be open for ratification by countries that are not members of the CoE.
Secondly, the importance of the Framework Convention lies in its scope and content. In addition to general obligations to respect and uphold human rights, it aims to establish a risk-based approach to regulating AI and a number of common principles related to activities within the entire lifecycle of AI systems. Its general principles include, among others, respect for human dignity; transparency and oversight; accountability and responsibility; non-discrimination; and privacy and personal data protection. States Parties to the Framework Convention will have to adopt appropriate legislative and administrative measures which give effect to the provisions of this instrument in their domestic laws. In this way, the Framework Convention has the potential to affect ongoing national and regional efforts to design and adopt binding AI laws, and may be uniquely positioned to advance interoperability.
With this brief overview in mind, this blog post contextualizes the work and mandate of the CAI in the context of the CoE and international law. It goes on to provide an outline of the Framework Convention, its scope, applicability, and key principles, including its risk-based approach. It then highlights the Convention’s approach to fostering international cooperation in the field of cross-border AI governance through the establishment of a ‘Conference of the Parties.’ The post also draws some initial points of comparison with the EU AI Act and the CoE’s Convention for the Protection of Individuals with Regard to the Processing of Personal Data, otherwise known as Convention 108.
1. Human Rights Are At The Center of the Council of Europe’s Work, Including the Mandate of the Committee on Artificial Intelligence (CAI)
The CoE comprises 46 Member States, 27 of which are Member States of the European Union, and includes Turkey, Ukraine and the United Kingdom. In addition to its Member States, a number of countries hold the status of “Observer States”, meaning that they can cooperate with the CoE, be a part of its Committees (including the CAI), and become Parties to its Conventions. Observer States include Canada, the United States, Japan, Mexico, and the Holy See. Through the Observer State mechanism, CoE initiatives have an increasingly broader reach well beyond the confines of European borders.
As an International Organization, the CoE has played a key role in the development of binding human rights treaties, including the European Convention on Human Rights (ECHR), and Convention 108. Leveraging its experience in advancing both human rights and a high level of personal data protection, among other issues, the CoE has been well-placed to bring members of the international community together to begin to define the parameters of an AI law that is cross-border in nature.
Since its inception in January 2022, the CAI’s work has fallen under the human rights pillar of the CoE, as part of the Programme on the Effective Implementation of the ECHR and the sub-Programme on the freedom of expression and information, media and data protection. It is therefore grounded in existing human rights obligations, including the rights to privacy and personal data protection. To grasp the possible impacts of such a treaty, it is crucial to understand how it will function under international law, while drawing a comparison between the Framework Convention on AI and Convention 108.
1.1. International Law in Action to Protect People in the Age of Computing: From Convention 108 to the Framework Convention
Traditionally, international law governs relations between States. It defines States’ legal responsibilities in their conduct with each other, within the States’ boundaries, and in their treatment of individuals. One of the ways in which international law governs the conduct and relations between States is through the drafting and ratification of international conventions or treaties. Treaties are legally binding instruments that govern the rights, duties, and obligations of participating States. Through treaties, international law encompasses many areas including human rights, world trade, economic development, and the processing of personal data.
It is on the basis of this treaty mechanism under international law that the CoE Convention 108 opened for signature on 28 January 1981 as the first legally binding, international instrument in the data protection field. Under Convention 108, States Parties to the treaty are required to take the necessary steps in their domestic legislation to apply its principles to ensure respect in their territory for the fundamental rights of all individuals with regard to the processing of their personal data.
In 2018, the CoE finalized the modernization of Convention 108 through the Amending Protocol CETS No. 223. While the principle-based Convention 108 was designed to be technology-neutral, its modernization was deemed necessary for two key reasons: 1) to address challenges resulting from the use of new information and communication technologies, and 2) to strengthen the Convention’s effective implementation.
Through the process of modernization, Convention 108 is now known as Convention 108+ and, as of January 2024, has 55 States Parties. The modernized Convention 108+ is also better aligned with the EU General Data Protection Regulation (GDPR), particularly through the expansion of its Article 9 on the rights of the data subject, which now includes the individual right “not to be subject to a decision significantly affecting him or her based solely on automated processing of personal data” (automated decision-making).
As the only international, binding treaty on personal data protection, Convention 108 is an important reference point for the Framework Convention on AI. Already in its Preamble, the Framework Convention makes reference to the privacy rights of individuals and the protection of personal data, as applicable through Convention 108. Furthermore, both Conventions are similarly grounded in human rights and recognize the close interplay between new technologies, personal data processing, and the possible impacts of these on people’s rights.
Notably, and unlike Convention 108, the Framework Convention on AI takes the form of a so-called “framework convention”, a type of legally binding treaty which establishes broader commitments for its parties. In essence, a framework convention serves as an umbrella document which lays down principles and objectives, while leaving room for stricter and more prescriptive standards and their implementation to domestic legislation.
Framework conventions are effective in creating a coherent treaty regime, while elevating the political will for action and leaving room for consensus on the finer details for a later stage. In this way, and considering that the Framework Convention on AI will also be open for ratification to non-Member States of the CoE, the instrument may become more attractive to a greater number of countries.
2. The Framework Convention on AI Proposes a Risk-Based Approach and General Principles Focusing on Equality and Human Dignity
2.1. A Harmonized Definition of an AI System
One of the first challenges of international cooperation and rule-making is the need to agree on common definitions. This has been particularly relevant in the context of AI governance and policy, as national, regional and international bodies have consistently negotiated to agree on a common definition for AI. The Framework Convention on AI addresses this in its Article 2, adopting the OECD’s definition of an AI system as a “machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that may influence physical or virtual environments. Different artificial intelligence systems vary in their levels of autonomy and adaptiveness after deployment.”
Promoted by one of the leading International Organizations in the global AI governance conversation, the OECD’s definition of an AI system has also been relevant in regional contexts. For example, the EU’s Artificial Intelligence Act (EU AI Act), which was given the final green light on 21 May 2024, adopts a very similar definition of an AI system. Similarly, Brazil’s draft AI Bill also adopts the OECD’s definition, showing the country’s intention to align its legislation with the mounting international consensus on a common definition for AI. It is also worth noting that US President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence and the recently enacted Colorado AI Act adopt an AI definition that is similar in scope to the OECD definition.
The alignment on definitions is not insignificant, as it is by first agreeing on the subject matter of rule-making that a body of specific, intentional rules and principles can emerge. Furthermore, an initial alignment on definitions can already help establish common ground for facilitating interoperability between different AI governance frameworks internationally.
2.2. The Framework Convention Only Applies to Public Authorities and Private Actors Acting on Their Behalf
Before outlining the principles and obligations elaborated by the Framework Convention, it is important to establish the treaty’s scope and applicability. Its Article 3 states that the Convention covers “the activities within the lifecycle of artificial intelligence systems that have the potential to interfere with human rights, democracy and the rule of law.”
Notably, the draft of the Framework Convention on AI from 18 December 2023, which formed the basis for negotiations until the treaty’s final adoption in May 2024, made repeated references to the lifecycle of an AI system as including the design, development, use, and decommissioning stages. However, the finalized Framework Convention on AI makes reference to these stages only once, in its Preamble. With the treaty’s signature and implementation later this year, it remains to be seen how the lifecycle of an AI system will be interpreted by States Parties in practice, and how this will affect the scope of applicability of the Convention in different countries’ domestic laws.
Regarding scope, Article 3(1)(a) elaborates that each Party to the Framework Convention on AI will have to apply its principles and obligations to activities within the lifecycle of AI systems undertaken by public authorities, or by private actors acting on their behalf. Private actors will only fall under the scope of the Convention if they meet two requirements: 1) the country in which they are established, or in which they develop or deploy their AI products and services, is a State Party to the Convention; and 2) they are designing, developing, or deploying artificial intelligence systems on behalf of that State Party’s public authorities.
Therefore, once ratified by States Parties, the Framework Convention does not by itself impose obligations on all private actors with a role in the lifecycle of AI systems, unless States Parties decide to extend its scope in national law.
In addition to defining what falls within its scope, the Framework Convention also defines matters that do not fall under its purview. Article 3(2) provides that a Party to the Convention shall not be required to apply its obligations to activities within the lifecycle of AI systems related to the protection of its national security interests. States Parties nevertheless remain under an obligation to comply with applicable international laws and human rights obligations, including for purposes of national security.
The Framework Convention will similarly not apply to research and development activities regarding AI systems not yet made available for use, unless their testing has the potential to interfere with human rights, democracy and the rule of law (Article 3(3)). Finally, the Framework Convention will not apply to matters relating to national defence (Article 3(4)).
2.3. General Obligations and Common Principles Include Accountability, Individual Autonomy, and Safe Innovation
Rather than opting for more prescriptive requirements, the Framework Convention on AI establishes a broader, umbrella approach for international AI law, while making specific and continued reference to existing obligations, such as those found in international human rights law.
Articles 4 and 5 of the Framework Convention on AI address the requirements to ensure that activities within the lifecycle of AI systems are consistent with obligations to protect human rights, that they are not used to undermine democratic processes, and that they respect the rule of law. This includes seeking to protect individuals’ fair access and participation in public debate, and their ability to freely form opinions.
In addition, Articles 7 to 13 elaborate seven common principles that apply in relation to activities within the lifecycle of AI systems:
Respect for human dignity and individual autonomy (Article 7);
Maintain measures to ensure that adequate transparency and oversight requirements tailored to specific contexts and risks are in place (Article 8);
Adopt or maintain measures to ensure accountability and responsibility for adverse impacts on human rights, democracy and the rule of law (Article 9);
Ensure that activities within the lifecycle of AI systems respect equality, including gender equality, and the prohibition of discrimination as provided under applicable international or domestic legislation; Article 10 also goes further by including a positive obligation to maintain measures aimed at overcoming inequalities to achieve fair, just and equitable outcomes in relation to the lifecycle of AI systems (Article 10);
Adopt or maintain measures to ensure that the privacy of individuals and their personal data are protected, including through international laws, standards and frameworks, and that effective guarantees and safeguards are put in place (Article 11);
Take measures to promote the reliability of AI systems and trust in their outputs, which could include requirements related to adequate quality and security (Article 12);
Establish controlled environments for developing, experimenting and testing AI systems under the supervision of competent authorities (Article 13).
The agreed-upon principles attempt to strike a balance between stipulating broad yet effective principles on the one hand, and leaving the determination of more specific requirements to States Parties’ discretion within their own jurisdictions and domestic legislation on the other.
Notably, the draft of the Framework Convention from 18 December 2023 included a general principle related to adopting and maintaining measures to preserve health, with the option of adopting a clause to include the protection of the environment in the scope of the principle. Similarly, in the same draft text, the previous iteration of the above-mentioned Article 12 also included options to specify more prescriptive requirements regarding accuracy, performance, data quality, data integrity, data security, governance, cybersecurity, and robustness. Both provisions were amended during negotiations and did not make it into the final text of the Convention.
A separate Article 21 specifically states that nothing in the Framework Convention shall be construed as limiting, derogating from or otherwise affecting human rights and obligations that may already be guaranteed under other relevant laws. Article 22 goes further to state that the Convention also does not limit the possibility of a State Party to grant wider protection in their domestic law. This is an important addition to the text, particularly at a time in which many countries and regions are drafting and adopting AI legislation.
2.4. The Risk-Based Approach Is Different from That of the EU AI Act, and Focuses on Mitigating Adverse Impacts of AI Systems
In its Article 1 on the object and purpose of the treaty, the Framework Convention on AI elaborates that measures implemented in the lifecycle of AI systems shall be “graduated and differentiated as may be necessary in view of the severity and probability of the occurrence of adverse impacts on human rights, democracy and the rule of law” (emphasis added). In this way, the Framework Convention on AI captures the risk-based approach that has become a familiar component of regulatory discussions and frameworks for AI thus far.
Article 16(1) further outlines what the risk-based approach will entail in practice. It provides that each State Party shall adopt or maintain measures for the identification, assessment, prevention and mitigation of risks posed by AI systems by considering actual and potential harms to human rights, democracy, and the rule of law. Article 16(2) proposes a set of broad requirements for assessing and mitigating risks, including to:
Take due account of the context and intended use of an AI system (Article 16(2)(a));
Take due account of the severity and probability of potential impacts (Article 16(2)(b));
Consider, where appropriate, the perspective of all relevant stakeholders, in particular persons whose rights may be impacted (Article 16(2)(c));
Apply the risk-management requirements iteratively and throughout the lifecycle of AI systems (Article 16(2)(d));
Include monitoring for risks and adverse impacts (Article 16(2)(e));
Include documentation of risks, actual and potential impacts, and on the risk management approach (Article 16(2)(f));
Require testing of artificial intelligence systems before making them available for first use and when they are significantly modified (Article 16(2)(g)).
The risk-based approach principles adopted by the Framework Convention on AI have similarities with obligations we see in the EU AI Act, particularly in relation to requirements for risk monitoring, documentation, and testing. However, the Framework Convention does not take a layered approach to risk (from limited risk to high risk) and as such it does not prescribe contexts or use-cases in which AI systems may be prohibited or banned. Rather, in its Article 16(4), the Framework Convention on AI leaves it to each State Party to assess the need for a moratorium, ban, or other appropriate measures in respect of certain uses of AI that may be incompatible with human rights.
2.5. A Newly Created Body Will Promote International Cooperation on AI Governance
International cooperation and coordination in the field of AI governance has been called for by many regional and international organizations and fora. Cross-border cooperation is consistently identified as a priority in the work of the OECD, forming one of the core tenets of the OECD AI Principles. Similarly, the United Nations’ High-Level Body on Artificial Intelligence is tasked with advancing an international, multi-stakeholder governance of AI, and calls for interoperability of AI frameworks and continued cooperation. The United Nations Human Rights Office of the High Commissioner recently released its Taxonomy of Human Rights Risks Connected to Generative AI, in the interests of stimulating international dialogue and agreement. At the intergovernmental level, the Group of 7 (G7) approved an international set of guiding principles on AI and a voluntary Code of Conduct for AI developers as part of the Hiroshima AI Process.
The Framework Convention on AI aims to establish its own proposal for furthering international cooperation, on the basis of a two-pronged approach: the first, encompassed in its Article 23, calls for the formation of a “Conference of the Parties”, composed of representatives of the Parties to the treaty; and the second, encompassed in its Article 25, requires Parties to exchange relevant information among themselves and to assist States that are not Parties to the Convention in acting consistently with its requirements, with a view to their becoming Parties to it. The Preamble similarly recognizes the value of fostering cooperation and of extending such cooperation to other States that share the same values.
In this way, the Framework Convention on AI would encourage both continued cooperation and dialogue at the State Party level, as well as codify the requirement to take an inclusive stance towards countries which are not (yet) Parties to the treaty. This inclusive approach also extends to involving relevant non-State actors in the exchange of information on aspects of AI systems that may have an impact on human rights, democracy, and the rule of law, suggesting ongoing cooperation and exchange with public and private actors.
For insight into how such continued cooperation may work in practice under the auspices of the Conference of the Parties, we can draw a useful example from the Consultative Committee established under Convention 108. The Consultative Committee is composed of representatives of Parties to the Convention, and observers such as non-Member States, representatives of international organizations, and non-governmental organizations. The Consultative Committee meets three times a year and is responsible for the interpretation of Convention 108 and for improving its implementation, ensuring that it remains fit-for-purpose and adapts to an ever-growing set of challenges posed by new data processing systems.
Closing Reflections: Future Areas of Interplay?
As the world’s first treaty on artificial intelligence, the CoE’s Framework Convention on AI can help codify the key principles that any national or regional frameworks should include. With a strong foundation in human rights law, including respect for equality and non-discrimination, human dignity and individual autonomy, privacy and personal data protection, the concept behind the Framework Convention on AI is to act as a foundational, umbrella treaty beyond which more prescriptive rules can be adopted at country level.
In this way, complementarity can be achieved between, for example, the Framework Convention on AI and the EU AI Act, or between the Framework Convention and Convention 108: the EU AI Act and Convention 108 are both instruments that go beyond principles and into prescriptive requirements for the regulation of AI systems and the processing of personal data, respectively. From 5 September 2024, when the Framework Convention formally opens for signature and ratification by States, the breadth of adoption of the treaty beyond CoE Member States should be closely monitored, as should how the mechanisms for international cooperation on AI regulation progress in practice.
FPF has published a Two-Page Fact Sheet outlining the scope, key terms, general obligations and common principles, risk-based approach requirements, and guidance on international cooperation.
FPF at CPDP.ai 2024: From Data Protection to Governance of Artificial Intelligence – A Global Perspective
Drawing inspiration from the latest developments in assessing the impacts and regulation of Artificial Intelligence (AI) technologies, the Brussels-based annual Computers, Privacy and Data Protection (CPDP) conference amended its acronym: the 17th edition, taking place on 22-24 May, became CPDP.ai, for Computers, Privacy, Data Protection and Artificial Intelligence.
To govern or to be governed, that is the question: this year, the main theme focused on the key questions of AI governance globally, and a vibrant programme explored current digital regulatory frameworks while navigating the complexity of their interplay with privacy and data protection.
The Future of Privacy Forum (FPF) was present once again, organizing a panel on Global Approaches to AI Regulation: Towards an International Law on AI? FPF staff members also contributed to the conference as speakers in several other panels, having the opportunity to engage on key topics with a great variety of stakeholders from academia, industry, civil society, and regulatory authorities.
The CPDP.ai organizers recorded all the sessions, which are available here.
On May 23, FPF’s Policy Manager for Global Privacy, Bianca-Ioana Marcu, moderated the FPF-organized panel on Global Approaches to AI Regulation: Towards an International Law on AI? Joining the conversation were Audrey Plonk, Head of the Digital Economy Policy Division at the OECD; Emma Redmond, Associate General Counsel at OpenAI; Bruno Bioni, Director and Founder at Data Privacy Brasil; and Gregory Smolynec, Deputy Commissioner, Policy and Promotion, at the Office of the Privacy Commissioner of Canada (OPC).
This multi-stakeholder, comparative panel explored what we can learn from regional and international approaches to AI regulation, and how these may facilitate a more global, interoperable approach to AI laws. Panelists shared key perspectives:
Bruno Bioni noted that Brazil’s approach to AI regulation is nuanced and context-specific, considering existing asymmetric cultural and power dynamics in Brazil. As such, it incorporates a stronger rights-based approach than, for example, the EU AI Act, by including specific concepts of vulnerability and clauses on the protection of vulnerable groups.
Commissioner Smolynec highlighted Canada’s approach to AI regulation through a strategic plan that outlines the OPC’s priorities, including protecting privacy with maximum impact using existing laws, addressing and advocating for privacy in a time of rapid technological change, fostering a culture of privacy and privacy-by-design, and promoting innovation while leveraging it to protect fundamental rights.
Emma Redmond noted that regulatory alignment and the concept of global harmonization are crucial and that while each piece of regulation has its place and purpose, areas of commonality have to be found.
Audrey Plonk added that in order to talk about coherent approaches to AI regulation across countries and regions, we have to start with agreeing on definitions and terminology, such as the OECD’s definition of an AI system which can now be found in different regional AI laws.
Photo description: Panel titled Global Approaches to AI Regulation: Towards an International Law on AI? (May 23, CPDP.ai)
On May 22, Andreea Șerban, FPF’s Global Privacy and AI Analyst, contributed to a panel titled Fundamental Rights Protection and Artificial Intelligence, organized by Encrypt, a project dedicated to creating a GDPR-friendly, privacy-preserving framework for big data processing. Speakers included Marco Bassini, Assistant Professor at Tilburg Law School; Simona Demková, Assistant Professor at Universiteit Leiden; Michèle Finck, Professor of Law and Artificial Intelligence at the University of Tübingen; Andreea Șerban of the Future of Privacy Forum; and Giovanni de Gregorio, PLMJ Chair in Law and Technology at Católica Global School of Law, who moderated the panel.
The discussions focused on procedural safeguards for AI-driven decision-making as the key approach to safeguarding fundamental rights, the role of Fundamental Rights Impact Assessments under the EU AI Act, and lessons learned from the GDPR experience that could be leveraged for the implementation of the AI Act, further exploring the interplay between the GDPR and the AI Act from a global perspective.
Photo description: Panel titled Fundamental Rights Protection and Artificial Intelligence (May 22, CPDP.ai)
On May 23, Christina Michelakaki, Policy Counsel for Global Privacy at FPF was part of the panel organized by the Centre for IT & IP Law (CiTiP) at KU Leuven, titled Transforming GDPR into a Risk-Based Harm Tool Alongside Specific AI Regulation. Meeting Separate but Complementary Needs, together with Felix Bieker, Legal Researcher at Unabhängiges Landeszentrum für Datenschutz, Nadya Purtova, Professor of Law, Innovation, and Technology at Utrecht University, and moderated by Michiel Fierens, Doctoral researcher at Centre for IT & IP Law, KU Leuven.
The panel explored the challenges in providing legal interoperability and synergies between specific concepts from the GDPR and the EU AI Act. In the ever-developing AI governance regulatory landscape, with a particular focus on the EU AI Act, privacy and data protection norms remain the tools of choice to regulate personal data processing. In this regard, Christina Michelakaki highlighted that the EU AI Act sets a foundational standard, yet it is up to the entities developing and deploying AI technology to keep track of the national initiatives that further develop these provisions, such as Italy’s new draft AI law, as new internal frameworks could create country-specific obligations to be met by these entities.
Photo description: Panel titled Transforming GDPR into a Risk-Based Harm Tool Alongside Specific AI Regulation. Meeting Separate but Complementary Needs? (May 23, CPDP.ai)
On May 24, Rob van Eijk, FPF’s Managing Director for Europe, was part of the panel Where are we heading? Looking into the EU Strategy for Data through the Lens of AI and Data Protection, organized by Meta, together with Luca Bolognini, President of the Italian Institute for Privacy and Data Valorisation, Peter Craddock, Partner at Keller and Heckman, Patricia Vidal, Partner at Uría Menéndez and moderated by Cecilia Alvarez, EMEA Privacy Policy Director at Meta.
The panel discussed AI in the context of a data-oriented regulatory framework, focusing on how the EU could foster AI-driven innovation and competitiveness while ensuring equitable access and benefits. Rob van Eijk presented one of the latest FPF resources, a detailed EU AI Act timeline, and provided an overview of the current EU data-related legislation, the role of the EU AI Act in this framework, and its expected enforcement. The panel recording can be found here.
Photo description: Panel titled Where are we heading? Looking into the EU Strategy for Data through the Lens of AI and Data Protection (May 24, CPDP.ai)
Photo description: Presentation of the FPF EU AI Act Timeline (May 24, CPDP.ai)
Lastly, on May 20, FPF’s Bianca-Ioana Marcu moderated a panel session in the CPDP.ai pre-event on the Global Impact of the EU’s Regulations on Platform, AI and Data Governance: The Case of Brazil, organized by the Law, Science, Technology & Society (LSTS) Research Group at the Vrije Universiteit Brussel and the Fundação Getulio Vargas (FGV) Law School. The event coincided with the launch of FPF’s Issue Brief on the Regulatory Strategies and Priorities of Data Protection Authorities in Latin America: 2024 and Beyond.
Photo description: Panel moderated by FPF’s Bianca-Ioana Marcu, with Alessandro Mantelero (Polytechnic University of Turin); Laura Schertel Mendes (University of Brasilia); Frederico Oliveira da Silva (BEUC); and Marco Almada (European University Institute).
Overall, the CPDP.ai 2024 conference brought together key stakeholders in the privacy and digital field for another successful gathering of minds, delivering engaging and challenging discussions on the future of the regulatory landscape and on how best to address the innovative and disruptive challenges posed by technological developments, with a special highlight on AI and its interplay with data protection.
Editor: Bianca-Ioana Marcu
Event Recap: FPF X nasscom Webinar Series – Breaking Down Consent Requirements under India’s DPDPA
Following the enactment of India’s Digital Personal Data Protection Act 2023 (DPDPA), the Future of Privacy Forum (FPF) and nasscom (National Association of Software and Service Companies), India’s largest industry association for the information technology sector, co-hosted a 2-part webinar series focused on the consent-centric regime under the DPDP Act. Spread across two days (November 9, 2023 and January 29, 2024), the webinar series comprised four panels that brought together experts from industry, governments, civil society, and the global data privacy community to share their perspectives on operationalizing consent under the DPDPA. This blog post provides an overview of these discussions.
Panel 1 – Designing notices and requests for meaningful consent
The first panel was co-moderated by Bianca Marcu (Policy Manager for Global Privacy, FPF) and Ashish Aggarwal (Vice President for Public Policy, nasscom). They were joined by the following panelists:
Paul Breitbarth, Data Protection Lead, Catawiki & Member of the Data Protection Authority, Jersey.
Eduardo Ustaran, Partner, Global Co-Head of Privacy & Cybersecurity, Hogan Lovells.
Eunjung Han, Consultant, Rouse, Vietnam.
Swati Sinha, APAC, Japan and China Privacy Officer & Senior Counsel, Cisco.
The panel began with a short presentation by Priyanshi Dixit (Senior Policy Associate, nasscom) that introduced the concepts of notice and consent under the DPDPA. During the discussion, panelists emphasized the importance of clear, understandable written notices and discussed other design choices to ensure that consent is “free, specific, informed, unconditional, and unambiguous”. To this end, Swati Sinha highlighted consent notices for different categories of cookies under the EU General Data Protection Regulation (GDPR), and granular notices with separate tick boxes in South Korea and China, as examples of how data fiduciaries under the DPDPA could design notices to enable individuals to make informed decisions. However, Swati also stressed that consent forms should not bundle different purposes or come with pre-ticked boxes. Eduardo Ustaran observed that the introduction of strict consent requirements in many new data protection laws internationally has transformed the act of giving consent from a passive action into a more active and affirmative one. Eduardo also stressed the importance of ensuring that consent is clearly and freely given and of maintaining clear records.
Adding to this, Paul Breitbarth suggested that visuals such as videos and images could help make the information in notices more accessible, particularly given that long text-based notices might not be convenient for individuals using mobile devices. Paul used the example of airline safety videos as an effective method for presenting notices, with voiceovers and subtitles to ensure accessibility for a broader audience. However, Paul cautioned that it is always advisable to include written notices alongside such visual representations.
The panelists also highlighted challenges to relying on consent as a basis for processing personal data, such as varying levels of digital literacy, the risk of “consent fatigue,” and the use of deceptive design choices (such as pre-ticked consent boxes). The discussions therefore considered alternatives to consent under different data protection laws. The panelists highlighted that in Europe, consent is not always the most commonly relied-upon legal basis for processing personal data, as, under the GDPR, consent is one of several equal bases for processing. The panelists also considered that in jurisdictions whose data protection laws emphasize consent over other legal bases, organizations may face difficulties in ensuring that consent is meaningful. Eunjung Han cited Vietnam’s recent Personal Data Protection Decree as an example of a framework that emphasizes consent and could potentially limit businesses’ ability to process personal data for their operations. She also noted that industry stakeholders in Vietnam are engaging in conversations with the government to share global practices where business necessity serves as a legal basis for processing.
Regarding regulatory actions, the panelists noted that regulators initially offer guidance and support to industry but over time, may transition to initiating enforcement actions. As final takeaways, panelists stressed the importance of accountability and emphasized the need to clearly identify usage of personal data, only collect personal data that is necessary for a specific purpose, and adhere to data protection principles.
Panel 2 – Examining consent and its alternatives
The second panel was co-moderated by Gabriela Zanfir-Fortuna (Vice President for Global Privacy, FPF) and Ashish Aggarwal (Vice President for Public Policy, nasscom). They were joined by the following panelists:
Francis Zhang, Deputy Director, Data Policy, PDPC Singapore.
Leandro Y. Aguirre, Deputy Privacy Commissioner, Philippines National Privacy Commission.
Kazimierz Ujazdowski, Member of Cabinet, European Data Protection Supervisor.
Varun Sen Bahl (Manager, nasscom) set the context for the panel discussion through a brief presentation, outlining various alternatives to consent under the DPDP Act: legitimate uses (section 7) and exemptions (sections 17(1) and 17(2)).
Throughout the discussion, the panelists drew from their experiences with their respective data protection laws: Singapore’s Personal Data Protection Act (PDPA), the Philippines’ Data Privacy Act (DPA), and the EU’s GDPR. In particular, a common experience shared by the three panelists was that they had all faced questions on the interpretation of alternative bases to consent in their respective jurisdictions. They noted that this was an evolving trend and suggested that it would likely extend to India as well.
Panelists noted that some data protection authorities were proactively promoting alternative legal bases to consent. This need arose because organizations in their jurisdictions were over-relying on consent as the de facto default legal basis for processing personal data, leading to “consent fatigue” for data subjects. For instance, Francis Zhang explained that Singapore amended its PDPA in 2020 to include new alternatives to consent that aim to strike a balance between individual and business interests.
Gabriela highlighted the similarities between section 15(1) of Singapore’s PDPA and section 7(a) of the DPDP Act. Both provisions allow consent to be deemed where an individual voluntarily shares their personal data with an organization. In this context, Francis Zhang shared Singapore’s experience with this provision and explained that it was intended to apply in scenarios where consent can be inferred from the individual’s conduct, such as sharing payment details in a transaction or health information during a health check-up.
Reflecting on his experience in Europe, Kazimierz Ujazdowski observed that data protection authorities tend to be reactive as they are constrained by the resources at their disposal. He suggested that Indian regulators could be better prepared than the ones in Europe at the time of the enactment of the GDPR by proactively identifying practices that are likely to adversely affect users. He also highlighted the importance of taking a strategic approach to map areas of risk requiring regulatory attention. Deputy Commissioner Aguirre emphasized the need for India’s Data Protection Board to establish effective mechanisms to offer guidance regarding the interpretation of key legal provisions and how to comply with them. He highlighted that effective communication between regulators and industries was crucial for anticipating lapses and promoting compliance. He also explained that complaints and awareness efforts during the transition period before the Philippines’ DPA took effect helped to refine the Philippines’ data protection legal frameworks.
Panel 3 – Realizing the ‘consent manager’ model
The third panel was focused on the novel concept of consent managers introduced under the DPDPA and was moderated by Malavika Raghavan (Senior Fellow, FPF) and Varun Sen Bahl (nasscom). They were joined by the following panelists:
Vikram Pagaria, Joint Director, National Health Authority of India.
Bertram D’Souza, CEO, Protean AA and Convener, AA Steering Committee, Sahamati Foundation.
Malte Beyer-Katzenberger, Policy Officer, European Commission.
Rahul Matthan, Partner – TMT, Trilegal.
Ashish Aggarwal, Head of Public Policy, nasscom.
To kick off the discussions, Varun Sen Bahl provided a quick overview of the provisions on “consent managers” under the DPDPA. The law defines a “consent manager” as a legal entity or individual who acts as a single point of contact for data principals (i.e., data subjects) to give, manage, review, and withdraw consent through an accessible, transparent, and interoperable platform. Consent managers must be registered with the Data Protection Board of India (once established) and will be subject to obligations under forthcoming subordinate legislation to the DPDPA.
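For a concrete, if purely illustrative, picture of the give/review/withdraw lifecycle this definition describes, a consent manager’s core record-keeping could be sketched as follows. All class, field, and method names below are hypothetical and are not drawn from the Act or the forthcoming Rules:

```python
# Minimal sketch of consent-record bookkeeping, assuming a hypothetical
# ConsentManager acting as the data principal's single point of contact.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    data_principal_id: str           # the individual giving consent
    data_fiduciary_id: str           # the entity requesting consent
    purpose: str                     # specific purpose disclosed in the notice
    given_at: datetime
    withdrawn_at: datetime | None = None

    @property
    def active(self) -> bool:
        return self.withdrawn_at is None

class ConsentManager:
    """Hypothetical single point of contact for a data principal's consents."""
    def __init__(self) -> None:
        self._records: list[ConsentRecord] = []

    def give(self, principal: str, fiduciary: str, purpose: str) -> ConsentRecord:
        record = ConsentRecord(principal, fiduciary, purpose,
                               datetime.now(timezone.utc))
        self._records.append(record)
        return record

    def review(self, principal: str) -> list[ConsentRecord]:
        # Lets the data principal see every consent they have given.
        return [r for r in self._records if r.data_principal_id == principal]

    def withdraw(self, principal: str, fiduciary: str, purpose: str) -> None:
        # Marks the matching active consent as withdrawn.
        for r in self.review(principal):
            if r.data_fiduciary_id == fiduciary and r.purpose == purpose and r.active:
                r.withdrawn_at = datetime.now(timezone.utc)
```

In practice, the DPDP Rules are expected to layer registration, interoperability, and accountability requirements on top of anything resembling this sketch.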
As the concept of a consent manager is not found in other legislation in India or internationally, there has been a great deal of speculation as to what form consent managers will take, and what role they will play in India’s technology ecosystem, once the DPDPA and its subordinate legislation are fully implemented.
The discussion among panelists touched upon the evolving role of consent managers and their potential impact under the DPDPA.
Rahul Matthan highlighted two existing consent-management frameworks in India that could serve as potential operational models for consent managers under the DPDPA: the “account aggregator” framework in the financial sector, and the National Health Authority’s Ayushman Bharat Digital Mission (ABDM) in the health sector. He suggested that these initiatives could facilitate data portability, even though the DPDPA does not expressly recognize such a right, and anticipated that forthcoming subordinate legislation would clarify how these existing initiatives will interface with consent managers under the DPDPA.
Bertram D’Souza and Vikram Pagaria provided background on how these two sectoral initiatives function in India.
Bertram noted that in India’s financial sector, account aggregators currently enable users to manage their consent with over 100 financial institutions, including banks, mutual funds, and pension funds. Several different account aggregators exist on the market today, but each must register with the Reserve Bank of India to obtain an operational license.
Vikram highlighted how ABDM enables users in the health sector to access their health records and consent to requests from various entities (such as hospitals, laboratories, clinics, or pharmacies) to access that data. Users can also control the type of health record to be shared and the duration for which the data needs to be shared. Vikram also noted that approximately 500 million individuals have consented to the creation of their Health IDs (Ayushman Bharat Health Account), with around 300 million health records linked to these IDs.
Malte Beyer-Katzenberger drew parallels between these existing sectoral initiatives in India and the EU’s Data Governance Act (DGA), a regulation that establishes a framework to facilitate data-sharing across sectors and between EU countries. He explained how the DGA evolved from business models trying to solve problems around personal data management and consent management. In this context, he noted that EU regulators are keen to collaborate with India on the shared objectives of empowering users with their data and enabling data portability.
Ashish highlighted that the value of consent managers lies in providing users a technological means to seamlessly give and withdraw consent. He also saw scope for data fiduciaries to rely on consent managers as a tool to safeguard against liability and regulatory action. When asked what business model consent managers would adopt, Bertram noted that it is an evolving space and that the market in which consent managers will operate is extremely fragmented. While he anticipated, based on his experience with account aggregators, that consent managers would initially be funded by India’s technology ecosystem, they may eventually shift to a user-paid model. The panelists also highlighted the need to obtain “buy-in” from data fiduciaries and to ensure that they remain accountable towards users. Malte also pondered how consent managers could achieve scale in the absence of a legislative mandate requiring their use.
Rahul Matthan highlighted the immense potential of the market for consent managers in India, noting that as of January 2024, account aggregators had processed 40 million consent requests, twice the number from August of the previous year. Though account aggregators are not mandatory for users, Rahul noted that the convenience and efficiency they offer is likely to encourage people to opt into using these services, whether they are within the formal financial system or outside it. Agreeing with this, Bertram highlighted the need for consent managers to focus on enhancing user experience and fostering cross-sectoral collaborations.
In his concluding remarks, Ashish underscored the importance of striking a balance by allowing the industry to develop the existing account aggregators framework while ensuring that use of this framework is optional for consumers. He agreed that the account aggregator framework is likely to influence the development of consent managers under the DPDPA, and suggested that there may also be use cases for similar frameworks in other areas and sectors, such as in e-commerce, to address deceptive design patterns.
Panel 4 – Operationalizing ‘verifiable parental consent’ in India
The final panel in the webinar series was focused on examining the requirements for verifiable consent for processing the personal data of children under the DPDPA. The panel was co-moderated by Christina Michelakaki (Policy Counsel for Global Privacy, FPF) and Varun Sen Bahl and they were joined by the following panelists:
Kieran Donovan, Founder, k-ID.
Rakesh Maheshwari, Former Head of the Cyber Laws and Data Governance Division, Ministry of Electronics and Information Technology.
Iqsan Sirie, Partner, TMT, Assegaf Hamzah & Partners, Indonesia.
Vrinda Bhandari, Advocate – Supreme Court of India.
Varun Sen Bahl presented a brief overview of verifiable parental consent under the DPDPA. Specifically, the legislation requires data fiduciaries to seek verifiable consent from the parent or lawful guardian when processing the personal data of minors under eighteen years of age or of persons with disabilities. However, the Act empowers India’s Central Government to:
exempt specific classes of data fiduciaries from this requirement for certain purposes; and/or
reduce the age of consent for data fiduciaries that can prove they process children’s personal data in a ‘verifiably safe’ manner.
The forthcoming subordinate legislation under the DPDPA is expected to provide further detail on how these provisions will be implemented.
Building on the presentation, the panelists shed light on the complexities surrounding parental consent requirements under different data protection laws. Iqsan Sirie drew parallels between India’s DPDPA and Indonesia’s recently enacted Personal Data Protection Law, which also introduced parental consent requirements for processing children’s data that will only be clarified through enactment of secondary regulation. Iqsan cited guidelines issued by Indonesia’s Child Protection Commission as “soft law” which businesses could refer to when developing online services.
Rakesh Maheshwari explained that the Indian Government’s intent in introducing these measures in the DPDPA was to address concerns regarding children’s safety, albeit while providing the Central Government flexibility in implementing these measures.
Vrinda Bhandari focused on the forthcoming subordinate legislation to the DPDPA and stressed that any method for verifying parental consent must be risk-based and proportionate. Specifically, she highlighted privacy risks and low digital literacy as challenges in introducing such tech-based solutions. First, she pointed out that biometric-based verification methods, such as India’s national resident ID number (Aadhaar) or any other government-issued ID that captures sensitive personal data, could pose security risks, depending on who can access this information. Second, she noted that the majority of Indians belong to a mobile-first generation, where parents may not be digitally literate. Although Vrinda cited tokenization as a good alternative, she questioned whether it would be feasible to implement it in India, given the costs and technical complexity of deploying this solution.
Drawing from his expertise at k-ID, which helps developers safely authenticate children and safeguard their online privacy, Kieran Donovan highlighted the array of methods for implementing age-gating, ranging from simple email verifications to advanced third-party services aimed at privacy preservation. He discussed the use of payment transactions, SMS 2-factor authentication, electronic signatures, and question-based approaches designed to gauge user maturity. He also pointed out that only 4 of the 103 countries requiring parental consent specify the exact method for verifying it, and spoke about the challenges businesses face in implementing age-gating measures, including the cost per transaction and user resistance to sophisticated verification methods.
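As a rough sketch of the risk-proportionate selection among the verification methods the panel discussed, consider the tiering below; the risk categories and their boundaries are invented for illustration and do not reflect k-ID’s products or any DPDPA requirement:

```python
# Hypothetical sketch of risk-proportionate parental-consent verification,
# loosely inspired by the methods mentioned on the panel (email confirmation,
# SMS 2FA, payment-card checks). Tier definitions are invented assumptions.
from enum import Enum

class Risk(Enum):
    LOW = 1       # e.g., no profiling, minimal data collected
    MEDIUM = 2    # e.g., persistent accounts, behavioral data
    HIGH = 3      # e.g., sensitive data or open communication features

def verification_method(risk: Risk) -> str:
    # Heavier-weight (costlier, higher-friction) checks are reserved
    # for higher-risk processing, echoing the panel's proportionality point.
    if risk is Risk.LOW:
        return "email confirmation sent to the parent"
    if risk is Risk.MEDIUM:
        return "SMS two-factor confirmation on the parent's number"
    return "payment-card micro-transaction or government-ID check"

print(verification_method(Risk.MEDIUM))
```

The design point this illustrates is the one Vrinda and Kieran both raised: verification friction and privacy risk scale with the method, so the method should scale with the processing risk.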
Comparing India’s DPDPA with the United States’ Children’s Online Privacy Protection Act (COPPA), FPF’s Bailey Sanchez noted that the age of consent in this context is 13 years in the US and applies only to services directed at children. Bailey also observed that it is not straightforward to demonstrate compliance under COPPA. However, the Federal Trade Commission proactively updates the approved methods for parental verification and works with industry to review new methods that reflect technological advancements. Christina spoke about the legal position on children’s consent under the GDPR in the EU, and the challenges in relying on other legal bases for processing children’s data.
As final takeaways, the discussion touched on the importance of regulatory guidance and risk-based intervention that incentivizes stakeholders to participate actively. Overall, panelists noted that a nuanced approach balancing privacy protection and practical considerations is essential for effective implementation of parental consent requirements globally.
To conclude the webinar series, Josh Lee Kok Thong (Managing Director for APAC, FPF) expressed his gratitude to all the panelists, viewers, and hosts from FPF and nasscom for their active participation and contributions.
Conclusion
In the run-up to the notification of the subordinate legislation that will give effect to key provisions of the DPDPA, the FPF x nasscom webinar series aimed to foster an active discussion capturing the insights of regulators, industry, academia, and civil society from within India and beyond. Going forward, FPF will play an active role in building on these conversations.
RECs Report: Towards a Continental Approach to Data Protection in Africa
On July 28, 2022, the African Union (AU) released its long-awaited African Union Data Policy Framework (DPF), which strives to advance the use of data for development and innovation, while safeguarding the interests of African countries. The DPF’s vision is to unlock the potential of data for the benefit of Africans, to “improve people’s lives, safeguard collective interests, protect (digital) rights and drive equitable socio-economic development.” One of the key mechanisms that the DPF seeks to leverage to achieve this vision is the harmonization of member states’ digital data governance systems to create a single digital market for Africa. It identifies a range of focus areas that would greatly benefit from harmonization, including data governance, personal information protection, e-commerce, and cybersecurity.
In order to promote cohesion and harmonization of data-related regulations across Africa, the DPF recommends leveraging existing regional institutions and associations to create unified policy frameworks for their member states. In particular, the framework emphasizes the role of Africa’s eight Regional Economic Communities (RECs) in harmonizing data policies and serving as a strong pillar for digital development by drafting model laws, supporting capacity building, and engaging in continental policy formulation. This report provides an overview of these regional and continental initiatives, seeking to clarify the state of data protection harmonization in Africa and to educate practitioners about future harmonization efforts through the RECs. Section 1 begins by providing a brief history of policy harmonization in Africa before introducing the RECs and explaining their connection to digital regulation. Section 2 dives into the four regional data protection frameworks created by some of the RECs and identifies key similarities and differences between the instruments. Finally, Section 3 analyzes regional developments in the context of the Malabo Convention through a comparative and critical analysis, and provides a roadmap for understanding future harmonization trends. It concludes that while policy harmonization remains a key imperative on the continent, divergences and practical limitations exist in the current legal frameworks of member states.
The seventh edition of the Brussels Privacy Symposium, co-organized by the Future of Privacy Forum and the Brussels Privacy Hub, took place at the U-Residence of the Vrije Universiteit Brussel campus on November 14, 2023. The Symposium presented a key opportunity for a global, interdisciplinary convening to discuss one of the most important topics facing Europe’s digital society today and in the years to come: “Understanding the EU Data Strategy Architecture: Common Threads – Points of Juncture – Incongruities.”
Through the Symposium’s program, the organizers aimed to explore three key topics that cut across the EU’s Data Strategy legislative package and the General Data Protection Regulation (GDPR), painting an intricate picture of interplay that leaves room for tension, convergence, and the balancing of the different interests and policy goals pursued by each new law. Throughout the day, participants debated the possible paradigm shift introduced by the push for access to data in the Data Strategy Package, the network of impact assessments from the GDPR to the Digital Services Act (DSA) and EU AI Act, and the future of enforcement of a new set of data laws in Europe. Attendees were welcomed by Dr Gianclaudio Malgieri, Associate Professor of Law & Technology at Leiden University and co-Director of the Brussels Privacy Hub, and Jules Polonetsky, CEO of the Future of Privacy Forum. In addition to three expert panels, the Symposium opened with keynote addresses by Didier Reynders, European Commissioner for Justice, and Wojciech Wiewiórowski, the European Data Protection Supervisor. Commissioner Reynders highlighted that the GDPR remains the “cornerstone of the EU digital regulatory framework” when it comes to the processing of personal data, while Supervisor Wiewiórowski cautioned that “we need to ensure the data protection standards that we fought for, throughout many years, will not be adversely impacted by the new rules.” In the afternoon, attendees engaged in a brainstorming exercise across four breakout sessions, and the Vice-Chair of the European Data Protection Board (EDPB), Irene Loizidou Nikolaidou, gave closing remarks to end the conference.
The following Report outlines some of the most important outcomes from the day’s conversations, highlighting the ways and places in which the EU Data Strategy Package overlaps, interacts, supports, or creates tension with key provisions of the GDPR. The Report is divided into six sections: the above general introduction; the ensuing section which provides a summary of the Opening Remarks; the next three sections which provide insights into the panel discussions; and the sixth and final section which provides a brief summary of the EDPB Vice-Chair’s Closing Remarks.
FPF and OneTrust Release Collaboration on Conformity Assessments under the proposed EU AI Act: A Step-by-Step Guide & Infographic
Today, the Future of Privacy Forum (FPF) and OneTrust released a collaboration on Conformity Assessments under the proposed EU AI Act: A Step-by-Step Guide and accompanying Infographic. Conformity Assessments are a key and overarching accountability tool introduced in the proposed EU Artificial Intelligence Act (EU AIA or AIA) for high-risk AI systems.
Conformity Assessments are expected to play a significant role in the governance of AI in the EU. The Guide and Infographic provide a step-by-step explanation of what a Conformity Assessment is, designed for individuals at organizations responsible for the legal obligation to perform one, along with a roadmap outlining the series of steps for conducting a Conformity Assessment.
The Guide and Infographic can serve as an essential resource for organizations that want to prepare for compliance with the EU AIA’s final text, which is expected to be adopted by the end of 2023 and to become applicable in late 2025.
Information and background about the proposed EU AI Act & Conformity Assessments. The proposed EU AIA is a risk-based regulation with enhanced obligations for high-risk AI systems, including the obligation to conduct Conformity Assessments. In the EU context, the Conformity Assessment obligation is not new: the EU AIA aims to align with the processes and requirements found in laws that fall under the New Legislative Framework (NLF), and Conformity Assessments are also part of several EU laws on product safety, such as the General Product Safety Regulation, the Machinery Regulation, or the in vitro diagnostic Medical Devices Regulation.
The Conformity Assessment applicability for AI systems. A Conformity Assessment is the process of verifying and/or demonstrating that a high-risk AI system complies with the requirements enumerated under Title III, Chapter 2 of the EU AIA. The first step in the Conformity Assessment journey is determining whether an organization’s AI system falls under the Conformity Assessment legal obligation, and the Guide and Infographic include a flowchart of questions for an organization to answer in order to determine whether they need to comply with the Conformity Assessment obligation.
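The Guide’s actual flowchart should be consulted directly; purely as a hypothetical paraphrase of how such an applicability check might be encoded, the sketch below assumes, in simplified form, that high-risk status turns on coverage by Annex II product-safety legislation or an Annex III use case under the proposed AIA:

```python
# Hypothetical, simplified paraphrase of a high-risk applicability check
# under the proposed EU AIA. The authoritative questions are in the
# FPF/OneTrust Guide and the draft AIA itself; field names are invented.
from dataclasses import dataclass

@dataclass
class AISystem:
    is_ai_system_under_aia: bool           # meets the AIA's definition of an AI system
    safety_component_of_nlf_product: bool  # covered by Annex II product-safety law
    listed_high_risk_use_case: bool        # matches an Annex III use case

def conformity_assessment_required(s: AISystem) -> bool:
    """True if this sketch's simplified tests flag the system as high-risk."""
    if not s.is_ai_system_under_aia:
        return False
    return s.safety_component_of_nlf_product or s.listed_high_risk_use_case

example = AISystem(True, False, True)           # e.g., an Annex III use case
print(conformity_assessment_required(example))  # True
```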
Conformity Assessment requirements for high-risk AI systems. The Guide describes each Conformity Assessment requirement, its meaning, and at what phase of the AI system’s life cycle each requirement should be met. These requirements include Risk Management System; Data and Data Governance; Technical Documentation; Record Keeping; Transparency Obligations; Human Oversight; Accuracy, Robustness and Cybersecurity.
Overview of EU Plans for Standards & Presumption of Conformity. The European Commission is looking to obtain standards that provide “procedures and processes for conformity assessment activities related to AI systems and quality management systems of AI providers.” Such standards will be crucial to developing operational guidance for the implementation of Conformity Assessments and are expected to facilitate compliance with the technical obligations prescribed by the EU AIA. Given that the EU AIA is still under negotiation, the draft standardization request that was issued by the European Commission in December 2022 may be amended when the AIA is finally adopted.
For more information about the EU AIA, Conformity Assessments, and the Guide and Infographic, please contact Katerina Demetzou at [email protected].
How Data Protection Authorities are De Facto Regulating Generative AI
The Istanbul Bar Association IT Law Commission published Dr. Gabriela Zanfir-Fortuna’s article, “How Data Protection Authorities are De Facto Regulating Generative AI,” in its August monthly AI Working Group Bulletin, “Law in the Age of Artificial Intelligence” (Yapay Zekâ Çağında Hukuk).
Generative AI took the world by storm in the past year, with services like ChatGPT becoming “the fastest growing consumer application in history.” For generative AI applications to be trained and to function, immense amounts of data, including personal data, are necessary. It should be no surprise that Data Protection Authorities (‘DPAs’) were the first regulators around the world to take action, from opening investigations to issuing orders suspending services where they found breaches of data protection law.
Their concerns include: the lack of a justification (a lawful ground) for processing the personal data used to train AI models; a lack of transparency about that training data and about how personal data collected while users interact with the AI service is used; the absence of avenues to exercise data subject rights such as access, erasure, and objection; the impossibility of exercising the right to correct inaccurate personal data in the output generated by such AI services; insufficient data security measures; the unlawful processing of sensitive personal data and children’s data; and the failure to apply data protection by design and by default.
Global Overview of DPA Investigations into Generative AI
Defined broadly, DPAs are supervisory authorities vested with the power to enforce comprehensive data protection law in their jurisdictions. In the past six months, as the popularity of generative AI grew among consumers and businesses around the world, DPAs started opening investigations into how the providers of such services comply with legal obligations governing how personal data are collected and used, as provided in their respective national data protection laws. Their efforts currently focus on OpenAI as the provider of ChatGPT. To date, only two of the investigations, in Italy and South Korea, have resulted in official enforcement action, albeit preliminary. Here is a list of known open investigations, their timeline, and key concerns:
The Italian DPA (Garante) issued an emergency order on 30 March 2023, to block OpenAI from processing personal data of people in Italy. The Garante laid out several potential violations of provisions of the General Data Protection Regulation (‘GDPR’), including lawfulness, transparency, rights of the data subject, processing personal data of children, and data protection by design and by default. It lifted the prohibition a month later, after OpenAI announced changes as required by the DPA. An investigation on substance is still ongoing.
In the aftermath of the Italian order, the European Data Protection Board created a task force on 13 April 2023 to “foster cooperation and exchange information” in relation to handling complaints and investigations into OpenAI and ChatGPT at the EU level.
Canada’s federal Office of the Privacy Commissioner (OPC) announced on 4 April 2023 that it had launched an investigation into ChatGPT following a complaint that the service processes personal data without consent. On 25 May, the OPC announced that it would investigate ChatGPT jointly with the provincial privacy authorities of British Columbia, Quebec, and Alberta, expanding the investigation to also examine whether OpenAI has respected obligations related to openness and transparency, access, accuracy, and accountability, as well as purpose limitation.
The Ibero-American Network of DPAs, reuniting supervisory authorities from 21 Spanish and Portuguese-speaking countries in Latin America and Europe, announced on 8 May 2023 that it initiated a coordinated action in relation to ChatGPT.
Japan’s Personal Information Protection Commission (PPC) published a warning issued to OpenAI on 1 June 2023, highlighting that it should not collect sensitive personal data from users of ChatGPT or other persons without obtaining consent, and that it should give notice in Japanese about the purposes for which it collects personal data from users and non-users.
The Brazilian DPA announced on 27 July 2023 that it has started an investigation into how ChatGPT is complying with the Lei Geral de Proteção de Dados (LGPD) after receiving a complaint, and after reports in the media arguing that the service as provided is not compliant with the country’s comprehensive data protection law.
The US Federal Trade Commission (FTC) opened an investigation into ChatGPT in July 2023 to determine whether its provider has engaged in “unfair or deceptive privacy or data security practices or engaged in unfair or deceptive practices relating to risks of harm to consumers” in violation of Section 5 of the FTC Act.
The South Korean Personal Information Protection Commission (PIPC) announced on 27 July 2023 that it imposed an administrative fine of 3.6 million KRW (approximately 3,000 USD) against OpenAI for failure to notify a data breach in relation to its payment procedure. At the same time, the PIPC issued a list of instances of non-compliance with the country’s Personal Information Protection Act related to transparency, lawful grounds for processing (absence of consent), lack of clarity related to the controller-processor relationship, and issues related to the absence of parental consent for children younger than 14. The PIPC gave OpenAI a month and a half, until 15 September 2023, to bring the processing of personal data into compliance.
This survey of investigations into how a single generative AI service provider is complying with data protection law across jurisdictions reveals significant commonalities among the applicable legal obligations and how they apply to the processing of personal data through this new technology. There is also overlap among the concerns DPAs have about generative AI’s impact on people’s rights in relation to their personal data. This provides good ground for collaboration and coordination among supervisory authorities as regulators of generative AI.
G7 DPAs Issue Statement on Generative AI, Distilling Key Data Protection Concerns Across Jurisdictions
In this spirit, the DPAs of the G7 members adopted in Tokyo, on 21 June 2023, a Statement on generative AI which lays out their key areas of concern related to how the technology processes personal data. The Commissioners started their statement by acknowledging that “there are growing concerns that generative AI may present risks and potential harms to privacy, data protection, and other fundamental human rights if not properly developed and regulated.”
The key areas of concern highlighted in the Statement considered the use of personal data at various stages of developing and deploying AI systems, including the datasets used to train, validate, and test generative AI models, individuals’ interactions with generative AI tools, and the content generated by them. For each of these stages, the issue of a lawful ground for processing was raised. Security safeguards against inverting a generative AI model to extract or reproduce personal data originally processed in training data sets were also flagged as a key area of concern, as was putting in place mitigation and monitoring measures to ensure that personal data generated through such tools are accurate, complete, and up-to-date, and free from discriminatory, unlawful, or otherwise unjustifiable effects.
Other areas of concern mentioned were transparency to promote openness and explainability; production of technical documentation across the AI development lifecycle; technical and organizational measures in the application of the rights of individuals such as access, erasure, correction, and the right not to be subject to solely automated decision-making that has a significant effect on the individual; accountability measures to ensure appropriate levels of responsibility across the AI supply chain; and limiting collection of personal data to what is necessary to fulfill a specified task.
A key recommendation spelled out in the Statement, but also emerging from the investigations above, is for developers and providers to embed privacy in the design, conception, operation, and management of new products and services that use generative AI technologies, and to document their choices in a Data Protection Impact Assessment.
EU’s Digital Services Act Just Became Applicable: Outlining Ten Key Areas of Interplay with the GDPR
DSA: What’s in a Name?
The European Union’s (EU) Digital Services Act (DSA) is a first-of-its-kind regulatory framework with which the bloc hopes to set an international benchmark for regulating online intermediaries and improving online safety. The DSA establishes a range of legal obligations, from content removal requirements and prohibitions on manipulative design and on displaying certain online advertising targeted at users profiled on the basis of sensitive characteristics, to sweeping accountability obligations requiring audits of algorithms and assessments of systemic risks for the largest platforms.
The DSA is part of the EU’s effort to expand its digital regulatory framework to address the challenges posed by online services. It reflects the EU’s regulatory approach of comprehensive legal frameworks which strive to protect fundamental rights, including in digital environments. The DSA should not be read by itself: it is applicable on top of the EU’s General Data Protection Regulation (GDPR), alongside the Digital Markets Act (DMA), as well as other regulations and directives of the EU’s Data Strategy legislative package.
The Act introduces strong protections against both individual and systemic harms online, and also places digital platforms under a unique new transparency and accountability framework. To address the varying levels of risks and responsibilities associated with different types of digital services, the Act distinguishes online intermediaries depending on the type of business service, size, and impact, setting up different levels of obligations.
Given the structural and “systemic” significance of certain firms in the digital services ecosystem, the regulation places stricter obligations on Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs). These firms will have to abide by higher transparency standards, provide access to (personal) data to competent authorities and researchers, and identify, analyze, assess, and mitigate systemic risks linked to their services. Such systemic risks have been classified into four different categories (Recitals 80-84): illegal content; fundamental rights (freedom of expression, media pluralism, children’s rights, consumer protection, and non-discrimination, inter alia); public security and electoral/democratic processes; and public health protection, with a specific focus on minors, physical and mental well-being, and gender-based violence.
The European Commission designated VLOPs and VLOSEs earlier this year (see Table 1), based on criteria laid out in the DSA and a threshold of 45 million monthly users across the EU. The DSA obligations for these designated online platforms became applicable on August 25, 2023, with the exception of a transparency database whose publication was postponed for a month following complaints. The full regulation becomes applicable to all covered entities starting on February 17, 2024.
Several of the designated companies have challenged their ‘VLOP’ designation; the European General Court will address these challenges and determine whether the European Commission’s designations are upheld.
However, VLOPs and VLOSEs are not the only regulated entities. All intermediaries that offer their services to users based in the EU, including online platforms such as app stores, collaborative economy platforms, and social media platforms, fall within the scope of the regulation, regardless of their number of users. Notably, micro and small-sized enterprises that do not meet the VLOP/VLOSE criteria, as defined by EU law, are exempted from some of the legal obligations. While “regular” online platforms may have scaled down requirements compared to VLOPs/VLOSEs, their new legal obligations are nonetheless significant and include, among others, transparency regarding their recommendation systems, setting up internal complaint-handling mechanisms, prohibitions on designing their platforms in a way that deceives or manipulates users, and prohibitions on presenting ads based on profiling using special categories of personal data, including personal data of minors.
All providers of intermediary services, including online platforms, covered by the DSA are also “controllers” under the GDPR to the extent that they process personal data and decide on the means and purposes of such processing. As a consequence, they have to comply with both these legal frameworks at the same time. While the DSA stipulates, pursuant to Recital 10, that the GDPR and the ePrivacy Directive serve as governing rules for personal data protection, some DSA provisions intertwine with GDPR obligations in complex ways, requiring further analysis. For instance, some of the key obligations in the DSA refer to “profiling” as defined by the GDPR, while others create a legal requirement for VLOPs and VLOSEs to give access to personal data to researchers or competent authorities.
After a brief overview of the scope of application of the DSA and a summary of its key obligations based on the type of covered entity (see Table 2), this blog maps out ten key areas where the DSA and the GDPR interact in consequential ways and reflects on the impact of this interaction on the enforcement of the DSA. The ten interplay areas we are highlighting are:
Manipulative design in online interfaces;
Targeted advertising based on sensitive data;
Targeted advertising and protection of minors;
Recommender systems free-of-profiling;
Recommender systems and advertising transparency;
Access to data for researchers and competent authorities;
Takedown of illegal content;
Risk assessments;
Compliance function and the DSA legal representative;
Intermediary liability and the obligation to provide information.
The DSA Applies to Intermediary Services of Various Types and Sizes and Has Broad Extraterritorial Effect
The DSA puts in place a horizontal framework of layered responsibilities targeted at different types of online intermediary services, including:
(1) Intermediary services, including “mere conduit services” (e.g., internet access, content delivery networks, WiFi hotspots); “caching services” (e.g., automatic, intermediate, and temporary storage of information); and “hosting services” (e.g., cloud and web-hosting services).
(2) Online platform services: providers bringing together sellers and consumers, such as online marketplaces, app stores, collaborative economy platforms, social media platforms, and providers that disseminate information to the public.
(3) Very Large Online Platforms (VLOPs): online platforms reaching at least 45 million active recipients in the EU on a monthly basis (10% of the EU population).
(4) Very Large Online Search Engines (VLOSEs): online search engines reaching at least 45 million active recipients in the EU on a monthly basis (10% of the EU population).
Recitals 13 and 14 of the DSA highlight the importance of “disseminating information to the public” as a benchmark for which online platforms fall under the scope of the Regulation and the specific category of hosting services. For instance, Recital 14 explains that emails or private messaging services fall outside the definition of online platforms “as they are used for interpersonal communication between a finite number of persons determined by the sender of the communication.” However, the DSA obligations for online platforms may still apply to them if such services “allow the making available of information to a potentially unlimited number of recipients, … such as through public groups or open channels.”
Important carve-outs are made in the DSA for micro and small-sized enterprises, as defined by EU law, that do not meet the VLOP/VLOSE criteria. These firms are exempted from some of the legal obligations: in particular, from making available an annual report on the content moderation they engage in; from the more substantial obligations imposed on providers of online platforms in Articles 20 to 28, such as the prohibition on displaying ads based on profiling conducted on special categories of personal data; and from the obligations in Articles 29 to 32 for platforms allowing consumers to conclude distance contracts with traders.
These carve-outs come in contrast with the broad applicability of the GDPR to entities of all sizes. This means, for instance, that even if micro and small-sized enterprises that are online platforms do not have to comply with the prohibitions related to displaying ads based on profiling using special categories of personal data and profiling of minors, they continue to fall under the scope of the GDPR and its requirements that impact such profiling.
The DSA has extra-territorial effect and global coverage, similar to the GDPR, since it captures companies regardless of whether they are established in the EU or not, as long as the recipients of their services have their place of establishment or are located in the EU (Article 2).
The DSA Just Became Applicable to VLOPs and VLOSEs and Will Continue to Roll Out to All Online Platforms
The Act requires that platforms and search engines publish their average monthly number of active users/recipients within the EU-27 (Article 24; see the European Commission’s guidance on the matter). The first round of sharing those numbers was due on February 17, 2023. Based on the information shared through that exercise, the Commission designated the VLOPs and VLOSEs that carry additional obligations because of the “systemic risks that they pose to consumers and society” (Article 33). The designation announcement was made public on April 25.
Four months after the designation, on August 25, 2023, the DSA provisions became applicable to VLOPs and VLOSEs through Article 92. This means that the designated platforms must already implement their obligations, such as conducting risk assessments, increasing transparency of recommender systems, and offering an alternative feed of content not subject to recommender systems based on profiling (see an overview of their obligations in Table 2).
As of February 17, 2024, all providers of intermediary services must comply with a set of general obligations (Articles 11-32), with certain exceptions for micro and small enterprises as explained above.
Table 2 – List of DSA Obligations as Distributed Among Different Categories of Intermediary Service Providers (each obligation is listed with the categories of providers to which it applies, grouped under the DSA’s main pillars)

Transparency measures
Transparency reporting (Article 15): intermediary services, hosting services, online platforms, and VLOPs/VLOSEs
Requirements on terms and conditions with regard to fundamental rights (Article 14): intermediary services, hosting services, online platforms, and VLOPs/VLOSEs
Statement of reasons (Article 17): hosting services, online platforms, and VLOPs/VLOSEs
Notice-and-action and obligation to provide information to users (Article 16): hosting services, online platforms, and VLOPs/VLOSEs
Recommender system transparency (Articles 27 and 38): online platforms and VLOPs/VLOSEs
User-facing transparency of online advertising (Article 24): online platforms and VLOPs/VLOSEs
Online advertising transparency (Article 39): VLOPs/VLOSEs
User choice for access to information (Article 42): VLOPs/VLOSEs

Oversight structure to address the complexity of the online intermediary services ecosystem
Cooperation with national authorities following orders (Article 11): intermediary services, hosting services, online platforms, and VLOPs/VLOSEs
Points of contact for recipients of service (Article 12) and, where necessary, legal representatives (Article 13): intermediary services, hosting services, online platforms, and VLOPs/VLOSEs
Internal complaint-handling system (Article 20), redress mechanism (Article 32), and out-of-court dispute settlement (Article 21): online platforms and VLOPs/VLOSEs
Independent auditing and public accountability (Article 37): VLOPs/VLOSEs
Option for recommender systems not based on profiling (Article 38): VLOPs/VLOSEs
Supervisory fee (Article 43): VLOPs/VLOSEs
Crisis response mechanism and cooperation process (Article 36): VLOPs/VLOSEs

Manipulative Design
Online interface design and organization (Article 25): online platforms and VLOPs/VLOSEs

Measures to counter illegal goods, services, or content online
Trusted flaggers (Article 22): online platforms and VLOPs/VLOSEs
Measures and protection against misuse (Article 23): online platforms and VLOPs/VLOSEs
Targeted advertising based on sensitive data (Article 26): online platforms and VLOPs/VLOSEs
Online protection of minors (Article 28): online platforms and VLOPs/VLOSEs
Traceability of traders (Articles 30-32): online platforms and VLOPs/VLOSEs
Reporting criminal offenses (Article 18): hosting services, online platforms, and VLOPs/VLOSEs
Risk management obligations and compliance officer (Article 41): VLOPs/VLOSEs
Risk assessment and mitigation of risks (Articles 34-35): VLOPs/VLOSEs
Codes of conduct (Articles 45-47): VLOPs/VLOSEs

Access to data for researchers
Data sharing with authorities and researchers (Article 40): VLOPs/VLOSEs
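Read as a whole, the table reflects the Act’s cumulative design: an obligation attaches at a given tier and carries up to every stricter tier. A minimal Python sketch, using invented names and only a handful of the obligations above, can make this layering concrete; it is purely illustrative and deliberately ignores the micro and small-enterprise carve-outs discussed earlier.

```python
from enum import IntEnum

# Purely illustrative sketch of the DSA's layered design: tiers are ordered
# so that a provider in a higher tier inherits every obligation attaching
# at or below its level. All names and the obligation selection are ours.
class Tier(IntEnum):
    INTERMEDIARY = 1  # mere conduit, caching, hosting
    HOSTING = 2       # hosting services, including online platforms
    PLATFORM = 3      # online platforms
    VLOP_VLOSE = 4    # designated VLOPs and VLOSEs

# Hypothetical mapping: each obligation is tagged with the lowest tier
# at which it attaches.
OBLIGATIONS = {
    "Transparency reporting (Art. 15)": Tier.INTERMEDIARY,
    "Notice-and-action (Art. 16)": Tier.HOSTING,
    "No ads profiled on sensitive data (Art. 26(3))": Tier.PLATFORM,
    "Systemic risk assessment (Art. 34)": Tier.VLOP_VLOSE,
}

def applicable(entity_tier: Tier) -> list[str]:
    """Return every obligation attaching at or below the entity's tier."""
    return [name for name, tier in OBLIGATIONS.items() if tier <= entity_tier]

# A regular online platform picks up the first three obligations,
# but not the VLOP/VLOSE-only risk assessment duty.
print(applicable(Tier.PLATFORM))
```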
From Risk Assessments to Profiling and Transparency Requirements – Key Points of Interplay Between the DSA and GDPR
While the DSA and the GDPR serve different purposes and objectives at face value, ultimately both aim to protect fundamental rights in a data-driven economy and society, on the one hand, and reinforce the European single market, on the other hand. The DSA aims to establish rules for digital services and their responsibilities toward content moderation and combating systemic risks, so as to ensure user safety, safeguard fairness and trust in the digital environment, and enhance a “single market for digital services.” Notably, providing digital services is inextricably linked to processing data, including personal data. The GDPR seeks to protect individuals in relation to how their personal data is processed, ensuring that such processing respects their fundamental rights, while at the same time seeking to promote the free movement of personal data within the EU.
While the two regulations do not have the same taxonomy of regulated actors, the GDPR’s definitions of “controllers” and “processing of personal data” are broad enough that all intermediaries covered by the DSA are also controllers under the GDPR in relation to any processing of personal data for which they establish the means and purposes. Some intermediaries might also be “processors” under the GDPR in specific situations, a fact that needs to be assessed on a case-by-case basis. Overall, this overlap triggers the application of both regulations, with the GDPR seemingly taking precedence over most of the DSA (Recital 10 of the DSA); the exception is the DSA’s intermediary liability rules, which update the eCommerce Directive and take precedence over the GDPR (Article 2(4) of the GDPR).
The DSA mentions the GDPR 19 times across its recitals and articles, with “profiling” as defined by the GDPR playing a prominent role in core obligations for all online platforms. These include the two prohibitions on displaying ads based on profiling that uses sensitive personal data or the data of minors, and the obligation for VLOPs and VLOSEs that use recommender systems to provide at least one option for their recommender systems that is not based on profiling. The GDPR plays an additional role in defining sensitive data (“special categories of data”) in its Article 9, to which the DSA specifically refers for the prohibition on displaying ads based on profiling of such data. Beyond these cross-references, where it will be essential to apply the two legal frameworks consistently, there are other areas of overlap that create complexity for compliance, at a minimum, but also risks of inconsistencies (such as the DSA risk assessment processes and the GDPR Data Protection Impact Assessment). Further overlaps may leave individuals uncertain about which legal framework to rely on for removing their personal data from online platforms: the DSA sets up a framework for takedown requests for illegal content that may also include personal data, while the GDPR provides individuals with the right to obtain erasure of their personal data in specific contexts.
In this complex web of legal provisions, here are the elements of interaction between the two legal frameworks that stand out. As the applicability of the DSA rolls out on top of GDPR compliance programs and mechanisms, other such areas may surface.
Manipulative Design (or “Dark Patterns”) in Online Interfaces
These are practices that “materially distort or impair, either on purpose or in effect, the ability of recipients of the service to make autonomous and informed choices or decisions,” per Recital 67 of the DSA. Both the GDPR and the DSA address such practices, either directly or indirectly. The GDPR, for its part, offers protection against manipulative design wherever processing of personal data is involved. The relevant protections stem from provisions detailing lawful grounds for processing, requiring data minimization, setting out how valid consent can be obtained and withdrawn, and requiring controllers to apply Data Protection by Design and by Default when building their systems and processes.
Building on this ground, Article 25 of the DSA, read in conjunction with Recital 67, prohibits providers of online platforms from designing, organizing, or operating “their online interfaces in a way that deceives or manipulates the recipients of their service or in a way that otherwise materially distorts or impairs the ability of the recipients of their service to make free and informed decisions.” The ban appears to apply only to online platforms as defined in Article 3(i) of the DSA, a subcategory of the wide spectrum of intermediary services. Importantly, the DSA specifies that the ban on dark patterns does not apply to practices covered by the Unfair Commercial Practices Directive (UCPD) or the GDPR. Article 25(3) of the DSA empowers the Commission to issue guidelines on how the ban on manipulative design applies to specific practices, so further clarity is expected. And since the protection the GDPR vests against manipulative design will remain relevant and primarily applicable, it will be essential for consistency that these guidelines are developed in close collaboration with Data Protection Authorities (DPAs).
Targeted Advertising Based on Sensitive Data
Article 26(3) and Recital 68 of the DSA prohibit providers of online platforms from “presenting” ads to users based on profiling them, as defined by Article 4(4) of the GDPR, using sensitive personal data, as defined by Article 9 of the GDPR. Such personal data include race, religion, health status, and sexual orientation, among others on a limited list. However, it is important to mention that case law from the Court of Justice of the EU (CJEU) may further complicate the application of this provision. In particular, Case C-184/20 OT, in a judgment published a year ago, expanded “special categories of personal data” under the GDPR to also cover any personal data from which a sensitive characteristic may be inferred. Additionally, the very recent CJEU judgment in Case C-252/21 Meta v. Bundeskartellamt makes important findings regarding how social media services, as a category of online platforms, can lawfully engage in profiling of their users pursuant to the GDPR, including for personalized ads. While the DSA prohibition is concerned with “presenting” ads based on profiling using sensitive data, rather than with the activity of profiling itself, it must be read in conjunction with the obligations in the GDPR for processing personal data for profiling and with the relevant CJEU case law. To this end, the European Data Protection Board has published relevant guidelines on automated decision-making and profiling in general, as well as specifically on the targeting of social media users.
Targeted Advertising and Protection of Minors
Recital 71 of the GDPR already provides that solely automated decision-making, including profiling, with legal or similarly significant effects should not apply to children, a rule relevant in any context, such as educational services, and not only for online platforms. The DSA enhances this protection for online platforms, prohibiting them from presenting ads on their interface based on profiling using personal data of users “when they are aware with reasonable certainty that the recipient of the service is a minor” (Article 28 of the DSA). Additionally, in line with the principle of data minimization in Article 5(1) of the GDPR, this DSA prohibition should not lead the provider of the online platform to “maintain, acquire or process” more personal data than it already has in order to assess if the recipient of the service is a minor. While this provision addresses all online platforms, VLOPs and VLOSEs are expected to take “targeted measures to protect the rights of the child, including age verification and parental control tools” as part of their obligation under Article 35(1)(j) to put in place mitigation measures tailored to their specific systemic risks identified through the risk assessment process. As highlighted in a recent FPF infographic and report on age assurance technology, age verification measures may require processing more personal data than the functioning of the online service otherwise requires, which could be at odds with the data minimization principle in the absence of additional safeguards. This is an example where the two regulations complement each other.
In recent years, DPAs have been increasingly regulating the processing of personal data of minors. For instance, in the EU, the Irish Data Protection Commission published Fundamentals for a Child-Oriented Approach to Data Processing, the Italian Garante often includes the protection of children in its high-profile enforcement decisions (see, for instance, the TikTok and ChatGPT cases), and the CNIL in France published recommendations to enhance the protection of children online and launched several initiatives to enhance digital rights of children. This is another area where collaboration with DPAs will be very important for consistent application of the DSA.
Recommender Systems and Advertising Transparency
A significant area of overlap between the DSA and the GDPR relates to transparency. A key purpose of the DSA is to increase overall transparency around online platforms, which manifests through several obligations, while transparency about how one’s personal data are processed is an overarching principle of the GDPR. That principle is anchored in Article 5 of the GDPR and elaborated through extensive notice obligations in Articles 13 and 14, data access obligations in Article 15, and the modalities for communicating with individuals in Article 12. Two of the DSA obligations that increase transparency are laid out in Article 27, which requires providers of online platforms to be transparent about how their recommender systems work, and in Article 26, which imposes transparency requirements for advertising on online platforms. To implement the latter obligation, the DSA requires, per Recital 68, that the “recipients of a service should have information directly accessible from the online interface where the advertisement is presented, on the main parameters used for determining that a specific advertisement is presented to them, providing meaningful explanations of the logic used to that end, including when this is based on profiling.”
As for transparency related to recommender systems, Recital 70 of the DSA explains that online platforms should consistently ensure that users are appropriately informed about how recommender systems impact the way information is displayed and can influence how information is presented to them. “They should clearly present the parameters for such recommender systems in an easily comprehensible manner” to ensure that the users “understand how information is prioritized for them,” including where information is prioritized “based on profiling and their online behavior.” Notably, Articles 13(2)(f) and 14(2)(g) of the GDPR require that notices to individuals whose personal data is processed include “meaningful information about the logic involved, as well as the significance and the envisaged consequences” of automated decision-making, including profiling. These provisions should be read and applied together, complementing each other, to ensure consistency. This is another area where collaboration between DPAs and the enforcers of the DSA would be desirable. To understand the way in which DPAs have been applying this requirement so far, this case-law overview on automated decision-making under the GDPR published by the Future of Privacy Forum last year is helpful.
Recommender Systems Free-of-Profiling
“Profiling” as defined by the GDPR also plays an important role in one of the key obligations of VLOPs and VLOSEs: to offer users an alternative feed of content not based on profiling. Technically, this stems from an obligation in Article 38 of the DSA for VLOPs and VLOSEs to “provide at least one option for each of their recommender systems which is not based on profiling.” The DSA explains in Recital 70 that a core part of an online platform’s business is the manner in which information is prioritized and presented on its online interface to facilitate and optimize access to information for users: “This is done, for example, by algorithmically suggesting, ranking and prioritizing information, distinguishing through text or other visual representations, or otherwise curating information provided by recipients.”
The DSA text further explains that “such recommender systems can have a significant impact on the ability of recipients to retrieve and interact with information online, including to facilitate the search of relevant information,” as well as playing an important role “in the amplification of certain messages, the viral dissemination of information and the stimulation of online behavior.” Additionally, as part of their obligations to assess and mitigate risks on their platforms, VLOPs and VLOSEs may need to adjust the design of their recommender systems. Recital 94 of the DSA explains that they could achieve this “by taking measures to prevent or minimize biases that lead to the discrimination of persons in vulnerable situations, in particular where such adjustment is in accordance with Article 9 of the GDPR,” where Article 9 establishes conditions for processing sensitive personal data.
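The contrast between a profiled and a non-profiled feed can be illustrated with a short, purely hypothetical sketch; neither the DSA nor the GDPR prescribes any particular implementation, and every name below is invented.

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative sketch only: real recommender systems are far more complex.

@dataclass
class Post:
    author: str
    created_at: datetime

@dataclass
class UserProfile:
    followed: set  # authors the user follows: a toy profiling signal

    def affinity(self, post: Post) -> float:
        return 1.0 if post.author in self.followed else 0.0

def profiled_feed(posts: list, profile: UserProfile) -> list:
    # Ranking driven by a user profile: "profiling" in the sense of
    # Article 4(4) GDPR, since it evaluates personal aspects of the user.
    return sorted(posts, key=profile.affinity, reverse=True)

def non_profiled_feed(posts: list) -> list:
    # The kind of alternative Article 38 DSA points to: here, a simple
    # reverse-chronological feed using no personal data about the viewer.
    return sorted(posts, key=lambda p: p.created_at, reverse=True)

posts = [Post("alice", datetime(2024, 1, 1)), Post("bob", datetime(2024, 1, 2))]
print([p.author for p in non_profiled_feed(posts)])  # ['bob', 'alice']
```

Offering something like the second option alongside the first is, in essence, what Article 38 asks of VLOPs and VLOSEs.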
Access to Data for Researchers and Competent Authorities
Article 40 of the DSA obliges VLOPs and VLOSEs to provide competent authorities (the Digital Services Coordinator designated at the national level in the EU Member State of their establishment, or the European Commission) with access to the data necessary to monitor their compliance with the regulation. This includes access to data related to algorithms, based on a reasoned request and within a reasonable period specified in the request. They also have an obligation to provide access to vetted researchers, following a request from their Digital Services Coordinator of establishment, “for the sole purpose of conducting research that contributes to the detection, identification, and understanding of systemic risks” in the EU and “to the assessment of the adequacy, efficiency, and impacts of the risk mitigations measures.” This obligation presupposes that platforms may be required to explain the design, the logic of the functioning, and the testing of their algorithmic systems, in accordance with Article 40 and its corresponding Recital 34.
Providing access to online platforms’ data entails, in virtually all cases, providing access to personal data as well, which brings this processing under the scope of the GDPR and triggers its obligations. Recital 98 of the DSA highlights that providers and researchers alike should pay particular attention to safeguarding the rights of individuals related to the processing of personal data granted by the GDPR. Recital 98 adds that “providers should anonymize or pseudonymize personal data except in those cases that would render impossible the research purpose pursued.” Notably, the data access obligations in the DSA are subject to further specification through delegated acts, to be adopted by the European Commission. These acts are expected to “lay down the specific conditions under which such sharing of data with researchers can take place” in compliance with the GDPR, as well as “relevant objective indicators, procedures and, where necessary, independent advisory mechanisms in support of sharing of data.” This is another area where the DPAs and the DSA enforcers should closely collaborate.
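As a purely illustrative aside, one common pseudonymization technique in this context is keyed hashing of direct identifiers before data is shared. The sketch below is our own construction and is not prescribed by either regulation; all names in it are invented.

```python
import hashlib
import hmac

# Minimal sketch of one common pseudonymization technique (keyed hashing),
# illustrating the idea in Recital 98 of replacing direct identifiers
# before sharing data with vetted researchers.
SECRET_KEY = b"example-key-kept-by-the-provider"  # never shared with researchers

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable, keyed pseudonym."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"user_id": "alice@example.com", "views": 42}
shared = {"user_id": pseudonymize(record["user_id"]), "views": record["views"]}
print(shared)  # researchers receive a pseudonym, not the e-mail address
```

Since the provider retains the key linking pseudonyms back to individuals, the shared data generally remains personal data under the GDPR, which is why Recital 98 treats pseudonymization as a safeguard rather than an exemption.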
Takedown of Illegal Content
Core to the DSA are obligations for hosting services, including online platforms, to remove illegal content: Article 16 of the DSA outlines this obligation based on a notice-and-action mechanism initiated at the notification of any individual or entity. The GDPR confers rights on individuals to request erasure of their personal data (Article 17 of the GDPR) under certain conditions, as well as the right to request rectification of their data (Article 16 of the GDPR). These rights of the “data subject” under the GDPR aim to strengthen individuals’ control over how their personal data is collected, used, and disseminated. Article 3(h) of the DSA defines “illegal content” as “any information that, in itself or in relation to an activity … is not in compliance with Union law or the law of any Member State…, irrespective of the precise subject matter or nature of that law.” As a result, to the extent that “illegal content” as defined by the DSA is also personal data, an individual may potentially use either of the avenues, depending on how the overlap of the two provisions is further clarified in practice. Notably, one of the grounds for obtaining erasure of personal data is if “the personal data has been unlawfully processed,” and therefore processed not in compliance with the GDPR, which is Union law.
Article 16 of the DSA requires hosting services, including online platforms, to put mechanisms in place to facilitate the submission of sufficiently precise and adequately substantiated notices. Article 12 of the GDPR, on the other hand, requires controllers to facilitate the exercise of data subject rights, including erasure, and to communicate information on the action taken without undue delay and in any case no later than one month after receiving the request. The DSA does not prescribe a specific timeline for dealing with notices for removal of illegal content, other than “without undue delay.” All hosting services and online platforms whose activity falls under the GDPR have internal processes set up to respond to data subject requests, which could potentially be leveraged in setting up mechanisms to remove illegal content pursuant to notices under the DSA. However, a key differentiator is that under the DSA content removal requests can also come from authorities (see Article 9 of the DSA) and from “trusted flaggers” (Article 22), in addition to any individual or entity, each under its own conditions. In contrast, erasure requests under the GDPR can only be submitted by data subjects (individuals whose personal data is processed), either directly or through intermediaries acting on their behalf. DPAs may also impose the erasure of personal data, but only as a measure pursuant to an enforcement action.
VLOPs/VLOSEs will have to additionally design mitigation measures ensuring the adoption of content moderation processes, including the speed and quality of processing notices related to specific types of illegal content and its expeditious removal.
Risk Assessments
The DSA, pursuant to Article 34, obliges VLOPs/VLOSEs to conduct a risk assessment at least once per year to identify, analyze, and assess “systemic risks stemming from the design or functioning of their service and its related systems,” including algorithmic systems. The same entities are very likely subject to the obligation to conduct a Data Protection Impact Assessment (DPIA) under Article 35 of the GDPR, as at least some of their processing operations, like using personal data for recommender systems or profiling users based on personal data to display online advertising, meet the criteria that trigger the DPIA obligation. A DPIA is required in particular where processing of personal data “using new technologies, and taking into account the nature, scope, context, and purposes of the processing, is likely to result in a high risk to the rights and freedoms of natural persons.”
There are four categories of systemic risk that the DSA requires the risk assessment to cover: dissemination of illegal content; any actual or foreseeable negative effects on the exercise of specific fundamental rights, among which the right to respect for private life and the right to the protection of personal data are mentioned; any actual or foreseeable negative effects on civic discourse, electoral processes, and public security; and any actual or foreseeable negative effects in relation to gender-based violence, the protection of public health and minors, and serious negative consequences to a person’s physical and mental well-being.
Among the elements that a DPIA under the GDPR must include is “an assessment of the risks to the rights and freedoms of data subjects” that may arise from how controllers process personal data through new technologies, such as algorithmic systems. Other elements that must be included are the measures envisaged to address these risks, similar to how Article 35 of the DSA requires VLOPs/VLOSEs to put mitigation measures in place tailored to the identified risks. The EDPB has also published guidelines on how to conduct DPIAs.
When conducting the risk assessments required by the DSA, VLOPs/VLOSEs must take into account whether and how specific factors enumerated in Article 34(2) influence any of the systemic risks mentioned. Most factors to consider are linked to how VLOPs/VLOSEs process personal data, such as the design of their algorithmic systems, the systems for selecting and presenting advertisements, and generally their data-related practices.
Both DSA risk assessments and DPIAs are ex-ante risk assessment obligations, and both involve some level of engagement with supervisory authorities. The scope of the two assessments differs: the DSA focuses on systemic risks, including risks that go beyond impact on fundamental rights, while the GDPR’s DPIA focuses on any risks that novel processing of personal data may pose to fundamental rights and freedoms, together with assessments unique to data protection. However, they also have areas of clear overlap where processing of personal data is involved. DPIAs can potentially feed into DSA risk assessments, and the two processes should be implemented consistently.
Compliance Function and the DSA Legal Representative
Under the DSA, in accordance with Article 41, the designated VLOPs/VLOSEs will be obliged to establish a “compliance function,” which can be composed of several compliance officers. This function must be (i) independent from their operational functions; (ii) allocated with sufficient authority, stature and resources; and must have (iii) access to the management body of the provider to monitor the compliance of that provider with the DSA. On top of that, the compliance function will have to cooperate with the Digital Services Coordinator of the establishment, ensure that all risks are identified through the risk assessments and that the mitigation measures are effective, as well as inform and advise the management and employees of the provider in relation to DSA obligations.
All providers of the services designated as VLOPs and VLOSEs who are also controllers under the GDPR are under an obligation to appoint a Data Protection Officer (DPO), as they very likely meet the criteria in Article 37 of the GDPR given the nature and scope of their processing activities involving personal data. There are similarities between the compliance function and the DPO, including their independence, their reporting to the highest management level, their key task of monitoring compliance with the whole regulation that creates their role, and their task of cooperating with the competent supervisory authorities. Appointing two independent roles that hold a powerful internal position and whose remits may overlap to a certain extent will require consistency and coordination, which can be supported by further guidance from DPAs and DSA supervisory authorities.
Another role in the application of the two regulations that has many similarities is the role of a “representative” in the EU, in the situations of extraterritorial applicability of the DSA and the GDPR covering entities that do not have an establishment in the EU. In the DSA, this obligation pertains to all online service providers, pursuant to Article 13. If they are processing personal data in the context of targeting their services to individual recipients in the EU or if they monitor the recipients’ behavior, the service provider triggers the extraterritorial application of the GDPR as well. In such cases, they also need to appoint a GDPR representative, in accordance with Article 27. Under the GDPR, the representative acts as a mere “postal box” or point of correspondence between the non-EU controller and processor on one hand and DPAs or data subjects on the other hand, with liability that does not go beyond its own statutory obligations. In contrast, Article 13(3) of the DSA suggests that the “legal representative” could be held liable for failures of the intermediary service providers to comply with the DSA. Providers must mandate their legal representatives for the purpose of being addressed “in addition to or instead of” them by competent authorities, per Article 13(2) of the DSA.
Recital 44 of the DSA clarifies that the obligation to appoint a “sufficiently mandated” legal representative “should allow for the effective oversight and, where necessary, enforcement of this regulation in relation to those providers.” The legal representative must have “the necessary powers and resources to cooperate with the relevant authorities” and the DSA envisages that there may be situations where providers even appoint in this role “a subsidiary undertaking of the same group as the provider, or its parent undertaking, if that subsidiary or parent undertaking is established in the Union.” Recital 44 of the DSA also clarifies that the legal representative may also only function as a point of contact, “provided the relevant requirements of this regulation are complied with.” This could mean that if other structures are in place to ensure an entity on behalf of the provider can be held liable for non-compliance by a provider with the DSA, the representative can also function just as a “postal box.”
Intermediary Liability and the Obligation to Provide Information
Finally, the GDPR and the DSA intersect in areas where data protection, privacy, and intermediary liability overlap.
The GDPR, per Article 2, stresses that its provisions shall be read without prejudice to the e-Commerce Directive (2000/31/EC), in particular to “the liability rules of intermediary service providers in Articles 12 to 15 of that Directive.” However, the DSA, pursuant to its Article 89, deletes Articles 12 to 15 of the e-Commerce Directive and stipulates that relevant “references to Articles 12 to 15 of Directive 2000/31/EC shall be construed as references to Articles 4, 5, 6 and 8 of this Regulation, respectively.”
The DSA deals with the liability of providers of intermediary services mainly through Articles 4 to 10. With respect to Article 10, which addresses orders to provide information, the DSA emphasizes the strong cooperation envisaged between intermediary service providers, national authorities, and the Digital Services Coordinators as enforcers. This could involve the sharing of information, including, in certain cases, personal data that has already been collected, in order to combat illegal content online. The GDPR effectively passes the baton on intermediary liability to the DSA, but where such data sharing and processing occurs, intermediary service providers must ensure that they comply with the protections of the GDPR (in particular sections 2 and 3). This overlap signals yet another instance where the two regulations complement each other, this time in the case of intermediary liability and the obligation to provide information.
The DSA Will Be Enforced Through a Complex Web of Authorities, and the Interplay with the GDPR Complicates It
Enforcement in such a complex space will be challenging. In a departure from the approach of the GDPR, where enforcement happens primarily at the national level, coordinated through the One Stop Shop mechanism and the European Data Protection Board for cross-border cases, the DSA centralizes enforcement at the EU level for VLOPs and VLOSEs, leaving it in the hands of the European Commission. Member States will nonetheless play a role in enforcing the DSA against intermediary service providers that are not VLOPs or VLOSEs. Each Member State must designate one or more competent authorities for the enforcement of the DSA and, if it designates more than one, must appoint one of them as its Digital Services Coordinator (DSC). The deadline to designate DSCs is February 2024. Challenges arise because the designation of national competent authorities is left to the Member States, and there appears to be no consistent approach as to what type of authority is best positioned to enforce the Act. Not all Member States have appointed their DSCs yet, and the broad spectrum of enforcers that Member States plan to rely on is creating a scattered landscape.
Table 3 – Authorities Designated or Considered for Designation as Digital Services Coordinators Across the EU Member States (Source: Euractiv)
Media Regulator: Belgium, Hungary, Ireland, and Slovakia
Consumer Protection Authority: Finland and the Netherlands
Telecoms Regulator: Czech Republic, Germany, Greece, Italy, Poland, Slovenia, and Sweden
Competition Authority: Spain
The Digital Services Coordinators will collaborate and coordinate closely with the European Board for Digital Services, which will act in an advisory capacity (Articles 61-63 of the DSA), in order to ensure consistent cross-border enforcement. Member States are also tasked with adopting national rules on penalties applicable to infringements of the DSA, including fines that can reach up to 6% of the annual worldwide turnover of the provider of intermediary services concerned in the preceding financial year (Article 52 of the DSA). Complaints can be submitted to DSCs by recipients of the services and by any body, organization, or association mandated to exercise the rights conferred by the DSA on recipients. With respect to VLOPs and VLOSEs, the European Commission can issue fines not exceeding 6% of the annual worldwide turnover in the preceding year, following decisions of non-compliance, which can also require platforms to take the measures necessary to remedy the infringements. Moreover, the Commission can order interim measures before an investigation is completed where there is urgency due to the risk of serious damage to the recipients of the service.
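To put the 6% ceiling in perspective, a purely hypothetical calculation (the turnover figure below is invented, not drawn from any real case) shows the scale involved:

```python
# Purely hypothetical illustration of the DSA fine ceiling (Article 52).
annual_worldwide_turnover_eur = 10_000_000_000  # assumed EUR 10 billion turnover
max_fine_rate = 0.06                            # ceiling of 6% under the DSA

max_fine_eur = annual_worldwide_turnover_eur * max_fine_rate
print(f"Maximum fine: EUR {max_fine_eur:,.0f}")  # EUR 600,000,000
```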
Recipients of the service, including users of online platforms, also have a right to seek compensation from providers of intermediary services for damages or loss suffered due to infringements of the DSA (Article 54 of the DSA). The DSA further provides for out-of-court dispute settlement mechanisms with regard to decisions of online platforms related to illegal content (Article 21 of the DSA), independent audits of how VLOPs/VLOSEs comply with their obligations (Article 37 of the DSA), and voluntary codes of conduct adopted at the Union level to tackle various systemic risks (Article 45), including codes of conduct for online advertising (Article 46) and for accessibility of online services (Article 47).
The newly established European Centre for Algorithmic Transparency (ECAT) also plays a role in this enforcement equation. The ECAT will be supporting the Commission in its assessment of VLOPs/VLOSEs with regard to risk management and mitigation obligations. Moreover, it will be particularly relevant to issues pertaining to recommender systems, information retrieval, and search engines. The ECAT will use a principles-based approach to assessing fairness, accountability, and transparency. However, the DSA is not the only regulation relevant to the use of algorithms and AI by platforms: the GDPR, the upcoming Digital Markets Act, the EU AI Act, and the European Data Act add to this complicated landscape.
The various areas of interplay between the DSA and the GDPR outlined above require consistent interpretation and application of the law. However, there is no formal role recognized in the enforcement and oversight structure of the DSA for cooperation or coordination, specifically among DPAs, the European Data Protection Board, or the European Data Protection Supervisor. This should not be an impediment to setting up processes for such cooperation and coordination within their respective competencies, as the rollout of the DSA will likely reveal the complexity of the interplay between the two legislative frameworks even beyond the ten areas outlined above.