Event Recap: FPF X nasscom Webinar Series – Breaking Down Consent Requirements under India’s DPDPA

Following the enactment of India’s Digital Personal Data Protection Act 2023 (DPDPA), the Future of Privacy Forum (FPF) and nasscom (National Association of Software and Service Companies), India’s largest industry association for the information technology sector, co-hosted a 2-part webinar series focused on the consent-centric regime under the DPDP Act. Spread across two days (November 9, 2023 and January 29, 2024), the webinar series comprised four panels that brought together experts from industry, governments, civil society, and the global data privacy community to share their perspectives on operationalizing consent under the DPDPA. This blog post provides an overview of these discussions. 

Panel 1 – Designing notices and requests for meaningful consent 

The first panel was co-moderated by Bianca Marcu (Policy Manager for Global Privacy, FPF) and Ashish Aggarwal (Vice President for Public Policy, nasscom). They were joined by the following panelists: 

  1. Paul Breitbarth, Data Protection Lead, Catawiki & Member of the Data Protection Authority, Jersey.
  2. Eduardo Ustaran, Partner, Global Co-Head of Privacy & Cybersecurity, Hogan Lovells.
  3. Eunjung Han, Consultant, Rouse, Vietnam.
  4. Swati Sinha, APAC, Japan and China Privacy Officer & Senior Counsel, Cisco.

The panel began with a short presentation by Priyanshi Dixit (Senior Policy Associate, nasscom) that introduced the concepts of notice and consent under the DPDPA. During the discussion, panelists emphasized the importance of clear, understandable written notices and discussed other design choices to ensure that consent is “free, specific, informed, unconditional, and unambiguous”. To this end, Swati Sinha highlighted consent notices for different categories of cookies under the EU General Data Protection Regulation (GDPR), and granular notices with separate tick boxes in South Korea and China, as examples of how data fiduciaries under the DPDPA could design notices that enable individuals to make informed decisions. However, Swati also stressed that consent forms should not bundle different purposes or come with pre-ticked boxes. Eduardo Ustaran observed that the introduction of strict consent requirements in many new data protection laws internationally has transformed the act of giving consent from a passive action into a more active and affirmative one. Eduardo also stressed the importance of ensuring that consent is clearly and freely given and of maintaining clear records of consent. 

Adding to this, Paul Breitbarth suggested that visuals such as videos and images could help make the information in notices more accessible, particularly given that long text-based notices might not be convenient for individuals using mobile devices. Paul used the example of airline safety videos as an effective method for presenting notices, with voiceovers and subtitles to ensure accessibility for a broader audience. However, Paul cautioned that it is always advisable to include written notices alongside such visual representations. 

The panelists also highlighted challenges to relying on consent as a basis for processing personal data, such as varying levels of digital literacy, the risk of “consent fatigue,” and the use of deceptive design choices (such as pre-ticked consent boxes). The discussions therefore considered alternatives to consent under different data protection laws. The panelists highlighted that in Europe, consent is not always the most popular legal basis for processing personal data, as under the GDPR consent is one of several co-equal legal bases. The panelists also considered that in jurisdictions whose data protection laws emphasize consent over other legal bases, organizations may face difficulties in ensuring that consent is meaningful. Eunjung Han cited Vietnam’s recent Personal Data Protection Decree as an example of a framework that emphasizes consent and could potentially limit businesses’ ability to process personal data for their operations. She also noted that industry stakeholders in Vietnam are engaging in conversations with the government to share global practices where business necessity serves as a legal basis for processing.

Regarding regulatory actions, the panelists noted that regulators initially offer guidance and support to industry but, over time, may transition to initiating enforcement actions. As final takeaways, panelists stressed the importance of accountability and emphasized the need to clearly identify how personal data is used, to collect only the personal data necessary for a specific purpose, and to adhere to data protection principles. 

Panel 2 – Examining consent and its alternatives

The second panel was co-moderated by Gabriela Zanfir-Fortuna (Vice President for Global Privacy, FPF) and Ashish Aggarwal (Vice President for Public Policy, nasscom). They were joined by the following panelists:

  1. Francis Zhang, Deputy Director, Data Policy, PDPC Singapore.
  2. Leandro Y. Aguirre, Deputy Privacy Commissioner, Philippines National Privacy Commission.
  3. Kazimierz Ujazdowski, Member of Cabinet, European Data Protection Supervisor.

Varun Sen Bahl (Manager, nasscom) set the context for the panel discussion through a brief presentation, outlining various alternatives to consent under the DPDP Act: legitimate uses (section 7) and exemptions (sections 17(1) and 17(2)).

Throughout the discussion, the panelists drew from their experiences with their respective data protection laws: Singapore’s Personal Data Protection Act (PDPA), the Philippines’ Data Privacy Act (DPA), and the EU’s GDPR. In particular, a common experience shared by the three panelists was that they had all faced questions on the interpretation of alternative bases to consent in their respective jurisdictions. They noted that this was an evolving trend and suggested that it would likely extend to India as well. 

Panelists noted that some data protection authorities were proactively promoting alternative legal bases to consent. This need arose because organizations in their jurisdictions were over-relying on consent as the de facto default legal basis for processing personal data, leading to “consent fatigue” for data subjects. For instance, Francis Zhang explained that Singapore amended its PDPA in 2020 to include new alternatives to consent that aim to strike a balance between individual and business interests. 

Gabriela highlighted the similarities between section 15(1) of Singapore’s PDPA and section 7(a) of the DPDP Act. Both provisions allow consent to be deemed where an individual voluntarily shares their personal data with an organization. In this context, Francis Zhang shared Singapore’s experience with this provision and explained that it was intended to apply in scenarios where consent can be inferred from the individual’s conduct, such as sharing payment details in a transaction or health information during a health check-up.

Reflecting on his experience in Europe, Kazimierz Ujazdowski observed that data protection authorities tend to be reactive, as they are constrained by the resources at their disposal. He suggested that Indian regulators could be better prepared than their European counterparts were at the time of the GDPR’s enactment by proactively identifying practices that are likely to adversely affect users. He also highlighted the importance of taking a strategic approach to mapping areas of risk requiring regulatory attention. Deputy Commissioner Aguirre emphasized the need for India’s Data Protection Board to establish effective mechanisms to offer guidance on the interpretation of key legal provisions and how to comply with them. He highlighted that effective communication between regulators and industry was crucial for anticipating lapses and promoting compliance. He also explained that complaints and awareness efforts during the transition period before the Philippines’ DPA took effect helped to refine the country’s data protection framework.

Panel 3 – Realizing the ‘consent manager’ model

The third panel was focused on the novel concept of consent managers introduced under the DPDPA and was moderated by Malavika Raghavan (Senior Fellow, FPF) and Varun Sen Bahl (nasscom). They were joined by the following panelists:

  1. Vikram Pagaria, Joint Director, National Health Authority of India. 
  2. Bertram D’Souza, CEO, Protean AA and Convener, AA Steering Committee, Sahamati Foundation. 
  3. Malte Beyer-Katzenberger, Policy Officer, European Commission. 
  4. Rahul Matthan, Partner – TMT, Trilegal.
  5. Ashish Aggarwal, Head of Public Policy, nasscom.

To kick off the discussions, Varun Sen Bahl provided a quick overview of the provisions on “consent managers” under the DPDPA. The law defines a “consent manager” as a legal entity or individual who acts as a single point of contact for data principals (i.e., data subjects) to give, manage, review, and withdraw consent through an accessible, transparent, and interoperable platform. Consent managers must be registered with the Data Protection Board of India (once established) and will be subject to obligations under forthcoming subordinate legislation to the DPDPA.

As the concept of a consent manager is not found in other legislation in India or internationally, there has been a great deal of speculation as to what form consent managers will take, and what role they will play in India’s technology ecosystem, once the DPDPA and its subordinate legislation are fully implemented. 

The discussion among panelists touched upon the evolving role of consent managers and their potential impact under the DPDPA. 

Rahul Matthan highlighted two existing consent management frameworks in India that could serve as potential operational models for consent managers under the DPDPA: the “account aggregator” framework in the financial sector, and the National Health Authority’s Ayushman Bharat Digital Mission (ABDM) in the health sector. He suggested that these initiatives could facilitate data portability, even though the DPDPA does not expressly recognize such a right, and anticipated that forthcoming subordinate legislation would clarify how these existing initiatives will interface with consent managers under the DPDPA.

Bertram D’Souza and Vikram Pagaria provided background on how these two sectoral initiatives function in India.

Bertram noted that in India’s financial sector, account aggregators currently enable users to manage their consent with over 100 financial institutions, including banks, mutual funds, and pension funds. Several different account aggregators exist on the market today, but each must register with the Reserve Bank of India to obtain an operational license. 

Vikram highlighted how ABDM enables users in the health sector to access their health records and consent to requests from various entities (such as hospitals, laboratories, clinics, or pharmacies) to access that data. Users can also control the type of health record to be shared and the duration for which the data is shared. Vikram also noted that approximately 500 million individuals have consented to create Health IDs (Ayushman Bharat Health Accounts), with around 300 million health records linked to these IDs.

Malte Beyer-Katzenberger drew parallels between these existing sectoral initiatives in India and the EU’s Data Governance Act (DGA), a regulation that establishes a framework to facilitate data-sharing across sectors and between EU countries. He explained how the DGA evolved from business models trying to solve problems around personal data management and consent management. In this context, he noted that EU regulators are keen to collaborate with India on the shared objectives of empowering users with their data and enabling data portability.  

Ashish highlighted that the value of consent managers lies in providing users with a technological means to seamlessly give and withdraw consent. He also saw scope for data fiduciaries to rely on consent managers as a tool to safeguard against liability and regulatory action. When asked about the business model consent managers would adopt, Bertram noted that it is an evolving space and that the market in which consent managers will operate is extremely fragmented. Based on his experience with account aggregators, he anticipated that consent managers would initially be funded by India’s technology ecosystem but may eventually shift to a user-paid model. The panelists also highlighted the need to obtain “buy-in” from data fiduciaries and to ensure that they are accountable towards users. Malte also pondered how consent managers could achieve scale in the absence of a legislative mandate requiring their use.

Rahul Matthan highlighted the immense potential of the market for consent managers in India, noting that as of January 2024, account aggregators had processed 40 million consent requests, twice the number from August of the previous year. Though account aggregators are not mandatory for users, Rahul noted that the convenience and efficiency they offer are likely to encourage people to opt into using these services, whether they are within the formal financial system or outside it. Agreeing with this, Bertram highlighted the need for consent managers to focus on enhancing user experience and to foster cross-sectoral collaboration. 

In his concluding remarks, Ashish underscored the importance of striking a balance by allowing the industry to develop the existing account aggregator framework while ensuring that use of this framework remains optional for consumers. He agreed that the account aggregator framework is likely to influence the development of consent managers under the DPDPA, and suggested that there may also be use cases for similar frameworks in other areas and sectors, such as e-commerce, to address deceptive design patterns.

Panel 4 – Operationalizing ‘verifiable parental consent’ in India

The final panel in the webinar series was focused on examining the requirements for verifiable consent for processing the personal data of children under the DPDPA. The panel was co-moderated by Christina Michelakaki (Policy Counsel for Global Privacy, FPF) and Varun Sen Bahl and they were joined by the following panelists:

  1. Kieran Donovan, Founder, k-ID. 
  2. Rakesh Maheshwari, Former Head of the Cyber Laws and Data Governance Division, Ministry of Electronics and Information Technology.
  3. Iqsan Sirie, Partner, TMT, Assegaf Hamzah & Partners, Indonesia. 
  4. Vrinda Bhandari, Advocate – Supreme Court of India. 
  5. Bailey Sanchez, Senior Counsel, Youth & Education Privacy, Future of Privacy Forum. 

Varun Sen Bahl presented a brief overview of verifiable parental consent under the DPDPA. Specifically, the legislation requires data fiduciaries to seek verifiable consent from the parent or lawful guardian when processing the personal data of minors under the age of eighteen or of persons with disabilities. However, the Act empowers India’s Central Government to exempt certain classes of data fiduciaries from these requirements and to notify a lower age threshold for data fiduciaries that process children’s personal data in a verifiably safe manner. 

The forthcoming subordinate legislation under the DPDPA is expected to provide further detail on how these provisions will be implemented.

Building on the presentation, the panelists shed light on the complexities surrounding parental consent requirements under different data protection laws. Iqsan Sirie drew parallels between India’s DPDPA and Indonesia’s recently enacted Personal Data Protection Law, which also introduced parental consent requirements for processing children’s data that will only be clarified through enactment of secondary regulation. Iqsan cited guidelines issued by Indonesia’s Child Protection Commission as “soft law” which businesses could refer to when developing online services. 

Rakesh Maheshwari explained that the Indian Government’s intent in introducing these measures in the DPDPA was to address concerns regarding children’s safety, albeit while providing the Central Government flexibility in implementing these measures. 

Vrinda Bhandari focused on the forthcoming subordinate legislation to the DPDPA and stressed that any method for verifying parental consent must be risk-based and proportionate. Specifically, she highlighted privacy risks and low digital literacy as challenges in introducing such tech-based solutions. First, she pointed out that biometric-based verification methods, such as India’s national resident ID number (Aadhaar) or any other government-issued ID that captures sensitive personal data, could pose security risks, depending on who can access this information. Second, she noted that the majority of Indians belong to a mobile-first generation, where parents may not be digitally literate. Although Vrinda cited tokenization as a good alternative, she questioned whether it would be feasible to implement it in India, given the costs and technical complexity of deploying this solution.

Drawing from his expertise at k-ID, which helps developers safely authenticate children and safeguard their online privacy, Kieran Donovan highlighted the array of methods for implementing age-gating, ranging from simple email verification to advanced third-party services aimed at preserving privacy. He discussed the use of payment transactions, SMS two-factor authentication, electronic signatures, and question-based approaches designed to gauge user maturity. He also pointed out that only 4 of the 103 countries requiring parental consent specify the exact method for verifying it. Finally, he spoke about the challenges businesses face in implementing age-gating measures, including the cost per transaction and user resistance to sophisticated verification methods. 

Comparing India’s DPDPA with the US Children’s Online Privacy Protection Act (COPPA), Bailey Sanchez noted that the age of consent in this context is 13 years in the US and applies only to services directed at children. Bailey also observed that it is not straightforward to demonstrate compliance under COPPA. However, the Federal Trade Commission proactively updates the approved methods for parental verification and also works with industry to review new methods that reflect technological advancements. Christina spoke about the legal position on children’s consent in the EU under the GDPR, and the challenges in relying on other legal bases for processing children’s data. 

As final takeaways, the discussion touched on the importance of regulatory guidance and risk-based intervention that incentivizes stakeholders to participate actively. Overall, panelists noted that a nuanced approach balancing privacy protection and practical considerations is essential for effective implementation of parental consent requirements globally.

To conclude the webinar series, Josh Lee Kok Thong (Managing Director for APAC, FPF) thanked all the panelists, viewers, and hosts (from FPF and nasscom) for their active participation and contributions.

Conclusion

In the run-up to the notification of the subordinate legislation that will give effect to key provisions of the DPDPA, the FPF x nasscom webinar series aimed to foster an active discussion that captured the insights of regulators, industry, academia, and civil society from within India and beyond. Going forward, FPF will play an active role in building on these conversations.

RECs Report: Towards a Continental Approach to Data Protection in Africa

On July 28, 2022, the African Union (AU) released its long-awaited African Union Data Policy Framework (DPF), which strives to advance the use of data for development and innovation, while safeguarding the interests of African countries. The DPF’s vision is to unlock the potential of data for the benefit of Africans, to “improve people’s lives, safeguard collective interests, protect (digital) rights and drive equitable socio-economic development.” One of the key mechanisms that the DPF seeks to leverage to achieve this vision is the harmonization of member states’ digital data governance systems to create a single digital market for Africa. It identifies a range of focus areas that would greatly benefit from harmonization, including data governance, personal information protection, e-commerce, and cybersecurity.  

In order to promote cohesion and harmonization of data-related regulations across Africa, the DPF recommends leveraging existing regional institutions and associations to create unified policy frameworks for their member states. In particular, the framework emphasizes the role of Africa’s eight Regional Economic Communities (RECs) in harmonizing data policies and serving as a strong pillar for digital development by drafting model laws, supporting capacity building, and engaging in continental policy formulation.

This report provides an overview of these regional and continental initiatives, seeking to clarify the state of data protection harmonization in Africa and to educate practitioners about future harmonization efforts through the RECs. Section 1 begins by providing a brief history of policy harmonization in Africa before introducing the RECs and explaining their connection to digital regulation. Section 2 dives into the four regional data protection frameworks created by some of the RECs and identifies key similarities and differences between the instruments. Finally, Section 3 analyzes regional developments in the context of the Malabo Convention through a comparative and critical analysis and provides a roadmap for understanding future harmonization trends. It concludes that while policy harmonization remains a key imperative on the continent, divergences and practical limitations exist in the current legal frameworks of member states.

Brussels Privacy Symposium 2023 Report

The seventh edition of the Brussels Privacy Symposium, co-organized by the Future of Privacy Forum and the Brussels Privacy Hub, took place at the U-Residence of the Vrije Universiteit Brussel campus on November 14, 2023. The Symposium presented a key opportunity for a global, interdisciplinary convening to discuss one of the most important topics facing Europe’s digital society today and in the years to come: “Understanding the EU Data Strategy Architecture: Common Threads – Points of Juncture – Incongruities.” 

With the program of the Symposium, the organizers aimed to transversally explore three key topics that cut through the EU’s Data Strategy legislative package and the General Data Protection Regulation (GDPR), painting an intricate picture of interplay that leaves room for tension, convergence, and the balancing of different interests and policy goals pursued by each new law. Throughout the day, participants debated the possible paradigm shift introduced by the push for access to data in the Data Strategy Package, the network of impact assessments from the GDPR to the Digital Services Act (DSA) and the EU AI Act, and the future of enforcement of a new set of data laws in Europe.

Attendees were welcomed by Dr Gianclaudio Malgieri, Associate Professor of Law & Technology at Leiden University and co-Director of the Brussels Privacy Hub, and Jules Polonetsky, CEO at the Future of Privacy Forum. In addition to three expert panels, the Symposium opened with keynote addresses by Didier Reynders, European Commissioner for Justice, and Wojciech Wiewiórowski, the European Data Protection Supervisor. Commissioner Reynders specifically highlighted that the GDPR remains the “cornerstone of the EU digital regulatory framework” when it comes to the processing of personal data, while Supervisor Wiewiórowski cautioned that “we need to ensure the data protection standards that we fought for, throughout many years, will not be adversely impacted by the new rules.” In the afternoon, attendees engaged in a brainstorming exercise in four different breakout sessions, and the Vice-Chair of the European Data Protection Board (EDPB), Irene Loizidou Nikolaidou, gave closing remarks to end the conference.

The following Report outlines some of the most important outcomes from the day’s conversations, highlighting the ways and places in which the EU Data Strategy Package overlaps, interacts, supports, or creates tension with key provisions of the GDPR. The Report is divided into six sections: the above general introduction; the ensuing section which provides a summary of the Opening Remarks; the next three sections which provide insights into the panel discussions; and the sixth and final section which provides a brief summary of the EDPB Vice-Chair’s Closing Remarks.

Editor: Alexander Thompson

FPF and OneTrust Release Collaboration on Conformity Assessments under the proposed EU AI Act: A Step-by-Step Guide & Infographic 

Today, the Future of Privacy Forum (FPF) and OneTrust released a collaboration on Conformity Assessments under the proposed EU AI Act: A Step-by-Step Guide and accompanying Infographic. Conformity Assessments are a key and overarching accountability tool introduced in the proposed EU Artificial Intelligence Act (EU AIA or AIA) for high-risk AI systems.

Conformity Assessments are expected to play a significant role in the governance of AI in the EU. The Guide and Infographic provide a step-by-step explanation of what a Conformity Assessment is, designed for individuals at organizations responsible for the legal obligation to perform one, along with a roadmap outlining the series of steps for conducting a Conformity Assessment.

The Guide and Infographic can serve as an essential resource for organizations that want to prepare for compliance with the EU AIA’s final text, which is expected to be adopted by the end of 2023 and become applicable in late 2025.

Key aspects of the Guide and Infographic include:  

For more information about the EU AIA, Conformity Assessments, and the Guide and Infographic, please contact Katerina Demetzou at [email protected].

How Data Protection Authorities are De Facto Regulating Generative AI

The Istanbul Bar Association IT Law Commission published Dr. Gabriela Zanfir-Fortuna’s article, “How Data Protection Authorities are De Facto Regulating Generative AI,” in its August monthly AI Working Group Bulletin, “Law in the Age of Artificial Intelligence” (Yapay Zekâ Çağında Hukuk).

Generative AI took the world by storm in the past year, with services like ChatGPT becoming “the fastest growing consumer application in history.” For generative AI applications to be trained and to function, immense amounts of data, including personal data, are necessary. It should be no surprise that Data Protection Authorities (‘DPAs’) were the first regulators around the world to take action, from opening investigations to issuing orders imposing suspension of services where they found breaches of data protection law.

Their concerns include: the lack of a justification (a lawful ground) for processing personal data used to train AI models; a lack of transparency about the personal data used for training and about how personal data collected while users interact with the AI service is used; the absence of avenues to exercise data subject rights such as access, erasure, and objection; the impossibility of exercising the right to correct inaccurate personal data in the output generated by such AI services; insufficient data security measures; the unlawful processing of sensitive personal data and children’s data; and the failure to apply data protection by design and by default. 

Global Overview of DPA Investigations into Generative AI 

Defined broadly, DPAs are supervisory authorities vested with the power to enforce comprehensive data protection law in their jurisdictions. In the past six months, as the popularity of generative AI grew among consumers and businesses around the world, DPAs started opening investigations into how the providers of such services are complying with legal obligations related to how personal data are collected and used, as provided in their respective national data protection laws. Their efforts currently focus on OpenAI as the provider of ChatGPT. To date, only two of the investigations, in Italy and South Korea, have resulted in official enforcement action, albeit preliminary. Here is a list of known open investigations, their timeline, and key concerns:

This survey of investigations into how a generative AI service provider is complying with data protection law in jurisdictions around the world reveals significant commonalities among those jurisdictions’ legal obligations and how they apply to the processing of personal data through this new technology. There is also overlap among the concerns that DPAs have about generative AI’s impact on people’s rights in relation to their personal data. This provides good ground for collaboration and coordination among supervisory authorities as regulators of generative AI.

G7 DPAs Issue Statement on Generative AI, Distilling Key Data Protection Concerns Across Jurisdictions

In this spirit, the DPAs of the G7 members adopted in Tokyo, on 21 June 2023, a Statement on generative AI which lays out their key areas of concern related to how the technology processes personal data. The Commissioners started their statement by acknowledging that “there are growing concerns that generative AI may present risks and potential harms to privacy, data protection, and other fundamental human rights if not properly developed and regulated.”

The key areas of concern highlighted in the Statement considered the use of personal data at various stages of developing and deploying AI systems, including a focus on datasets used to train, validate, and test generative AI models, individuals’ interactions with generative AI tools, and the content generated by them. For each of these stages, the issue of a lawful ground for processing was raised. Security safeguards against inverting a generative AI model to extract or reproduce personal data originally processed in the training data sets were also noted as a key area of concern, as was putting in place mitigation and monitoring measures to ensure personal data generated through such tools are accurate, complete, up-to-date, and free from discriminatory, unlawful, or otherwise unjustifiable effects.

Other areas of concern mentioned were transparency to promote openness and explainability; production of technical documentation across the AI development lifecycle; technical and organizational measures in the application of the rights of individuals such as access, erasure, correction, and the right not to be subject to solely automated decision-making that has a significant effect on the individual; accountability measures to ensure appropriate levels of responsibility across the AI supply chain; and limiting collection of personal data to what is necessary to fulfill a specified task. 

A key recommendation spelled out in the Statement, but also emerging from the investigations above, is for developers and providers to embed privacy in the design, conception, operation, and management of new products and services that use generative AI technologies, and to document their choices in a Data Protection Impact Assessment.

EU’s Digital Services Act Just Became Applicable: Outlining Ten Key Areas of Interplay with the GDPR

DSA: What’s in a Name?

The European Union’s (EU) Digital Services Act (DSA) is a first-of-its-kind regulatory framework, with which the bloc hopes to set an international benchmark for regulating online intermediaries and improving online safety. The DSA establishes a range of legal obligations, from content removal requirements and prohibitions on manipulative design and on displaying certain online advertising targeted at users profiled on the basis of sensitive characteristics, to sweeping accountability obligations requiring audits of algorithms and assessments of systemic risks for the largest platforms. 

The DSA is part of the EU’s effort to expand its digital regulatory framework to address the challenges posed by online services. It reflects the EU’s regulatory approach of comprehensive legal frameworks which strive to protect fundamental rights, including in digital environments. The DSA should not be read by itself: it is applicable on top of the EU’s General Data Protection Regulation (GDPR), alongside the Digital Markets Act (DMA), as well as other regulations and directives of the EU’s Data Strategy legislative package.

The Act introduces strong protections against both individual and systemic harms online, and also places digital platforms under a unique new transparency and accountability framework. To address the varying levels of risks and responsibilities associated with different types of digital services, the Act distinguishes online intermediaries depending on the type of business service, size, and impact, setting up different levels of obligations. 

Given the structural and “systemic” significance of certain firms in the digital services ecosystem, the regulation places stricter obligations on Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs). These firms will have to abide by higher transparency standards, provide access to (personal) data to competent authorities and researchers, and identify, analyze, assess, and mitigate systemic risks linked to their services. Such systemic risks have been classified into four different categories (Recitals 80-84): illegal content; fundamental rights (freedom of expression, media pluralism, children’s rights, consumer protection, and non-discrimination, inter alia); public security and electoral/democratic processes; and public health protection, with a specific focus on minors, physical and mental well-being, and gender-based violence. 

The European Commission designated VLOPs and VLOSEs earlier this year (see Table 1), based on criteria laid out in the DSA and a threshold of 45 million monthly users across the EU. The DSA obligations for these designated online platforms became applicable on August 25, 2023, with the exception of a transparency database whose publication was postponed for a month following complaints. The full regulation becomes applicable to all covered entities starting on February 17, 2024.

Table 1 – Mapping the 19 Designated Companies by the European Commission (April 2023) 

| Type of service | Company | Digital service | Type of designation |
| --- | --- | --- | --- |
| Social media | Alphabet | YouTube | VLOP |
| Social media | Meta | Facebook | VLOP |
| Social media | Meta | Instagram | VLOP |
| Social media | Bytedance | TikTok | VLOP |
| Social media | Microsoft | LinkedIn | VLOP |
| Social media | Snap | Snapchat | VLOP |
| Social media | Pinterest | Pinterest | VLOP |
| Social media | X (formerly known as Twitter) | X (formerly known as Twitter) | VLOP |
| App stores | Alphabet | Google App Store | VLOP |
| App stores | Apple | Apple App Store | VLOP |
| Wiki | Wikimedia | Wikimedia | VLOP |
| Marketplaces | Amazon* | Amazon Marketplace | VLOP |
| Marketplaces | Alphabet | Google Shopping | VLOP |
| Marketplaces | Alibaba | AliExpress | VLOP |
| Marketplaces | Booking.com | Booking.com | VLOP |
| Marketplaces | Zalando* | Zalando | VLOP |
| Maps | Alphabet | Google Maps | VLOP |
| Search | Alphabet | Google Search | VLOSE |
| Search | Microsoft | Bing | VLOSE |

* Companies that have challenged their designation as VLOPs; the European General Court will address these challenges and determine whether the European Commission’s designation is upheld. 

However, VLOPs and VLOSEs are not the only regulated entities. All intermediaries that offer their services to users based in the EU, including online platforms such as app stores, collaborative economy platforms, and social media platforms, fall within the scope of the regulation, regardless of their number of users. Notably, micro and small-sized enterprises, as defined by EU law, that do not meet the VLOP/VLOSE criteria are exempted from some of the legal obligations. While “regular” online platforms may have scaled-down requirements compared to VLOPs/VLOSEs, their new legal obligations are nonetheless significant and include, among others, transparency regarding their recommendation systems, setting up internal complaint-handling mechanisms, prohibitions on designing their platforms in a way that deceives or manipulates users, and prohibitions on presenting ads based on profiling using special categories of personal data, including personal data of minors.

All providers of intermediary services, including online platforms, covered by the DSA are also “controllers” under the GDPR to the extent that they process personal data and decide on the means and purposes of such processing. As a consequence, they have to comply with both these legal frameworks at the same time. While the DSA stipulates, pursuant to Recital 10, that the GDPR and the ePrivacy Directive serve as governing rules for personal data protection, some DSA provisions intertwine with GDPR obligations in complex ways, requiring further analysis. For instance, some of the key obligations in the DSA refer to “profiling” as defined by the GDPR, while others create a legal requirement for VLOPs and VLOSEs to give access to personal data to researchers or competent authorities. 

After a brief overview of the scope of application of the DSA and a summary of its key obligations based on the type of covered entity (see Table 2), this blog maps out ten key areas where the DSA and the GDPR interact in consequential ways and reflects on the impact of this interaction on the enforcement of the DSA. The ten interplay areas we are highlighting are:

The DSA Applies to Intermediary Services of Various Types and Sizes and Has Broad Extraterritorial Effect

The DSA puts in place a horizontal framework of layered responsibilities targeted at different types of online intermediary services, including:

(1) Intermediary services offering network infrastructure: including “mere conduit services” (e.g., internet access, content delivery networks, WiFi hotspots); “caching services” (e.g., automatic, intermediate, and temporary storage of information); and “hosting services” (e.g., cloud and web-hosting services).
(2) Online platform services: providers bringing together sellers and consumers, such as online marketplaces, app stores, collaborative economy platforms, and social media platforms, and providers that disseminate information to the public.
(3) Very Large Online Platforms (VLOPs): reaching at least 45 million active recipients in the EU on a monthly basis (10% of the EU population).
(4) Very Large Online Search Engines (VLOSEs): reaching at least 45 million active recipients in the EU on a monthly basis (10% of the EU population).


Recitals 13 and 14 of the DSA highlight the importance of “disseminating information to the public” as a benchmark for which online platforms fall under the scope of the Regulation and the specific category of hosting services. For instance, Recital 14 explains that emails or private messaging services fall outside the definition of online platforms “as they are used for interpersonal communication between a finite number of persons determined by the sender of the communication.” However, the DSA obligations for online platforms may still apply to them if such services “allow the making available of information to a potentially unlimited number of recipients, … such as through public groups or open channels.”

Important carve-outs are made in the DSA for micro and small-sized enterprises, as defined by EU law, that do not meet the VLOP/VLOSE criteria. These firms are exempted from some of the legal obligations: in particular, from making available an annual report on the content moderation they engage in; from the more substantial additional obligations imposed on providers of online platforms in Articles 20 to 28, such as the prohibition on displaying ads based on profiling conducted on special categories of personal data; and from the obligations in Articles 29 to 32 for platforms allowing consumers to conclude distance contracts with traders. 

These carve-outs come in contrast with the broad applicability of the GDPR to entities of all sizes. This means, for instance, that even if micro and small-sized enterprises that are online platforms do not have to comply with the prohibitions related to displaying ads based on profiling using special categories of personal data and profiling of minors, they continue to fall under the scope of the GDPR and its requirements that impact such profiling. 

The DSA has extra-territorial effect and global coverage, similar to the GDPR, since it captures companies regardless of whether they are established in the EU or not, as long as the recipients of their services have their place of establishment or are located in the EU (Article 2). 

The DSA Just Became Applicable to VLOPs and VLOSEs and Will Continue to Roll Out to All Online Platforms

The Act requires that platforms and search engines publish their average monthly number of active users/recipients within the EU-27 (Article 24; see the European Commission’s guidance on the matter). The first round of sharing those numbers was due on February 17, 2023. Based on the information shared through that exercise, the Commission designated the VLOPs and VLOSEs, which carry additional obligations because of the “systemic risks that they pose to consumers and society” (Article 33). The designation announcement was made public on April 25.

Four months after the designation, on August 25, 2023, the DSA provisions became applicable to VLOPs and VLOSEs through Article 92. This means that the designated platforms must already implement their obligations, such as conducting risk assessments, increasing transparency of recommender systems, and offering an alternative feed of content not subject to recommender systems based on profiling (see an overview of their obligations in Table 2).

As of February 17, 2024, all providers of intermediary services must comply with a set of general obligations (Articles 11-32), with certain exceptions for micro and small enterprises as explained above.

Table 2 – List of DSA Obligations as Distributed Among Different Categories of Intermediary Service Providers

| Pillar obligations | Set of obligations | Intermediary services | Hosting services | Online platforms | VLOPs/VLOSEs |
| --- | --- | --- | --- | --- | --- |
| Transparency measures | Transparency reporting (Article 15) | 🚩 | 🚩 | 🚩 | 🚩 |
| | Requirements on terms and conditions wrt fundamental rights (Article 14) | 🚩 | 🚩 | 🚩 | 🚩 |
| | Statement of reasons (Article 17) | | | 🚩 | 🚩 |
| | Notice-and-action and obligation to provide information to users (Article 16) | | 🚩 | 🚩 | 🚩 |
| | Recommender system transparency (Articles 27 and 38) | | | 🚩 | 🚩 |
| | User-facing transparency of online advertising (Article 24) | | | 🚩 | 🚩 |
| | Online advertising transparency (Article 39) | | | | 🚩 |
| | User choice for access to information (Article 42) | | | | 🚩 |
| Oversight structure to address the complexity of the online intermediary services ecosystem | Cooperation with national authorities following orders (Article 11) | 🚩 | 🚩 | 🚩 | 🚩 |
| | Points of contact for recipients of service (Article 12) and, where necessary, legal representatives (Article 13) | 🚩 | 🚩 | 🚩 | 🚩 |
| | Internal complaint-handling system (Article 20), redress mechanism (Article 32), and out-of-court dispute settlement (Article 21) | | | 🚩 | 🚩 |
| | Independent auditing and public accountability (Article 37) | | | | 🚩 |
| | Option for recommender systems not based on profiling (Article 38) | | | | 🚩 |
| | Supervisory fee (Article 43) | | | | 🚩 |
| | Crisis response mechanism and cooperation process (Article 36) | | | | 🚩 |
| Manipulative design | Online interface design and organization (Article 25) | | | 🚩 | 🚩 |
| Measures to counter illegal goods, services, or content online | Trusted flaggers (Article 22) | | | 🚩 | 🚩 |
| | Measures and protection against misuse (Article 23) | | | 🚩 | 🚩 |
| | Targeted advertising based on sensitive data (Article 26) | | | 🚩 | 🚩 |
| | Online protection of minors (Article 28) | | | 🚩 | 🚩 |
| | Traceability of traders (Articles 30-32) | | | 🚩 | 🚩 |
| | Reporting criminal offenses (Article 18) | | | 🚩 | 🚩 |
| | Risk management obligations and compliance officer (Article 41) | | | 🚩 | 🚩 |
| | Risk assessment and mitigation of risks (Articles 34-35) | | | | 🚩 |
| | Codes of conduct (Articles 45-47) | | | | 🚩 |
| | Access to data for researchers: data sharing with authorities and researchers (Article 40) | | | | 🚩 |

From Risk Assessments to Profiling and Transparency Requirements – Key Points of Interplay Between the DSA and GDPR

While the DSA and the GDPR serve different purposes and objectives at face value, ultimately both aim to protect fundamental rights in a data-driven economy and society, on the one hand, and reinforce the European single market, on the other hand. The DSA aims to establish rules for digital services and their responsibilities toward content moderation and combating systemic risks, so as to ensure user safety, safeguard fairness and trust in the digital environment, and enhance a “single market for digital services.” Notably, providing digital services is inextricably linked to processing data, including personal data. The GDPR seeks to protect individuals in relation to how their personal data is processed, ensuring that such processing respects their fundamental rights, while at the same time seeking to promote the free movement of personal data within the EU.  

While the two regulations do not have the same taxonomy of regulated actors, the broad scope of the GDPR’s definitions of “controller” and “processing of personal data” is such that all intermediaries covered by the DSA are also controllers under the GDPR in relation to any processing of personal data they engage in and for which they establish the means and purposes of processing. Some intermediaries might also be “processors” under the GDPR in specific situations, a fact that needs to be assessed on a case-by-case basis. Overall, this overlap triggers the application of both regulations, with the GDPR seemingly taking precedence over most of the DSA (Recital 10 of the DSA), with the exception of the DSA’s intermediary liability rules, carried over from the eCommerce Directive, which take precedence over the GDPR (Article 2(4) of the GDPR). 

The DSA mentions the GDPR 19 times in its text across recitals and articles, with “profiling” as defined by the GDPR playing a prominent role in core obligations for all online platforms. These include the two prohibitions on displaying ads based on profiling that uses sensitive personal data or the data of minors, and the obligation that any VLOPs and VLOSEs that use recommender systems must provide at least one option for their recommender systems that is not based on profiling. The GDPR plays an additional role in setting the definition of sensitive data (“special categories of data”) in its Article 9, which the DSA specifically refers to for the prohibition on displaying ads based on profiling of such data. In addition to these cross-references, where it will be essential to apply the two legal frameworks consistently, there are other areas of overlap that create complexity for compliance, at a minimum, but also risks of inconsistencies (such as the DSA risk assessment processes and the GDPR Data Protection Impact Assessment). Additional overlaps may confuse individuals about which legal framework is best to rely on for removing their personal data from online platforms, as the DSA sets up a framework for takedown requests for illegal content that may also include personal data, while the GDPR provides individuals with the right to obtain erasure of their personal data in specific contexts.  

In this complex web of legal provisions, here are the elements of interaction between the two legal frameworks that stand out. As the applicability of the DSA rolls out on top of GDPR compliance programs and mechanisms, other such areas may surface. 

  1. Manipulative Design (or “Dark Patterns”) in Online Interfaces 

These are practices that “materially distort or impair, either on purpose or in effect, the ability of recipients of the service to make autonomous and informed choices or decisions,” per Recital 67 DSA. Both the GDPR and the DSA address these practices, either directly or indirectly. The GDPR, on the one hand, offers protection against manipulative design in cases that involve processing of personal data. The protections are relevant for complying with provisions detailing lawful grounds for processing, requiring data minimization, setting out how valid consent can be obtained and withdrawn, or how controllers must apply Data Protection by Design and by Default when building their systems and processes. 

Building on this ground, Article 25 of the DSA, read in conjunction with Recital 67, prohibits providers of online platforms from “design[ing], organiz[ing] or operat[ing] their online interfaces in a way that deceives or manipulates the recipients of their service or in a way that otherwise materially distorts or impairs the ability of the recipients of their service to make free and informed decisions.” The ban seems to be applicable only to online platforms as defined in Article 3(i) of the DSA, a subcategory of the wide spectrum of intermediary services. Importantly, the DSA specifies that the ban on dark patterns does not apply to practices covered by the Unfair Commercial Practices Directive (UCPD) or the GDPR. Article 25(3) of the DSA highlights that the Commission is empowered to issue guidelines on how the ban on manipulative design applies to specific practices, so further clarity is expected. And since the protection vested by the GDPR against manipulative design will remain relevant and primarily applicable, it will be essential for consistency that these guidelines are developed in close collaboration with Data Protection Authorities (DPAs).

  2. Targeted Advertising Based on Sensitive Data

Article 26(3) and Recital 68 of the DSA prohibit providers of online platforms from “present[ing]” ads to users based on profiling them, as defined by Article 4(4) of the GDPR, using sensitive personal data, as defined by Article 9 of the GDPR. Such personal data include race, religion, health status, and sexual orientation, among others on a limited list. However, it is important to mention that case law from the Court of Justice of the EU (CJEU) may further complicate the application of this provision. In particular, Case C-184/20 OT, in a judgment published a year ago, expanded “special categories of personal data” under the GDPR to also cover any personal data from which a sensitive characteristic may be inferred. Additionally, the very recent CJEU judgment in Case C-252/21 Meta v. Bundeskartellamt makes important findings regarding how social media services, as a category of online platforms, can lawfully engage in profiling of their users pursuant to the GDPR, including for personalized ads. While the DSA prohibition is concerned with “presenting” ads based on profiling using sensitive data, rather than with the activity of profiling itself, it must be read in conjunction with the obligations in the GDPR for processing personal data for profiling and with the relevant CJEU case law. To this end, the European Data Protection Board has published relevant guidelines on automated decision-making and profiling in general, and also specifically on the targeting of social media users.   

  3. Targeted Advertising and Protection of Minors

Recital 71 of the GDPR already provides that solely automated decision-making, including profiling, with legal or similarly significant effects should not apply to children, a rule that is relevant for any type of context, such as educational services, and not only for online platforms. The DSA enhances this protection when it comes to online platforms, prohibiting the presentation of ads on their interface based on profiling using personal data of users “when they are aware with reasonable certainty that the recipient of the service is a minor” (Article 28 of the DSA). Additionally, in line with the principle of data minimization provided by Article 5(1) of the GDPR, this DSA prohibition should not lead the provider of the online platform to “maintain, acquire or process” more personal data than it already has in order to assess if the recipient of the service is a minor. While this provision addresses all online platforms, VLOPs and VLOSEs are expected to take “targeted measures to protect the rights of the child, including age verification and parental control tools” as part of their obligation in Article 35(1)(j) to put in place mitigation measures tailored to their specific systemic risks identified through the risk assessment process. As highlighted in a recent FPF infographic and report on age assurance technology, age verification measures may require processing of more personal data than the functioning of the online service requires, which could be at odds with the data minimization principle in the absence of additional safeguards. This is an example where the two regulations complement each other. 

In recent years, DPAs have been increasingly regulating the processing of personal data of minors. For instance, in the EU, the Irish Data Protection Commission published Fundamentals for a Child-Oriented Approach to Data Processing, the Italian Garante often includes the protection of children in its high-profile enforcement decisions (see, for instance, the TikTok and ChatGPT cases), and the CNIL in France published recommendations to enhance the protection of children online and launched several initiatives to enhance digital rights of children. This is another area where collaboration with DPAs will be very important for consistent application of the DSA.

  4. Recommender Systems and Advertising Transparency

A significant area of overlap between the DSA and the GDPR relates to transparency. A key purpose of the DSA is to increase overall transparency related to online platforms, manifested through several obligations, while transparency about how one’s personal data are processed is an overarching principle of the GDPR. Relevant areas for this principle in the GDPR are found in Article 5, in the extensive notice obligations in Articles 13 and 14, and in the data access obligations in Article 15, underpinned by modalities on how to communicate with individuals in Article 12. Two of the DSA obligations that increase transparency are laid out in Article 27, which imposes on providers of online platforms transparency related to how recommender systems work, and in Article 26, which imposes transparency related to advertising on online platforms. To implement the latter obligation, the DSA requires, per Recital 68, that the “recipients of a service should have information directly accessible from the online interface where the advertisement is presented, on the main parameters used for determining that a specific advertisement is presented to them, providing meaningful explanations of the logic used to that end, including when this is based on profiling.” 

As for transparency related to recommender systems, Recital 70 of the DSA explains that online platforms should consistently ensure that users are appropriately informed about how recommender systems impact the way information is displayed and can influence how information is presented to them. “They should clearly present the parameters for such recommender systems in an easily comprehensible manner” to ensure that the users “understand how information is prioritized for them,” including where information is prioritized “based on profiling and their online behavior.” Notably, Articles 13(2)(f) and 14(2)(g) of the GDPR require that notices to individuals whose personal data is processed include “meaningful information about the logic involved, as well as the significance and the envisaged consequences” of automated decision-making, including profiling. These provisions should be read and applied together, complementing each other, to ensure consistency. This is another area where collaboration between DPAs and the enforcers of the DSA would be desirable. To understand the way in which DPAs have been applying this requirement so far, this case-law overview on automated decision-making under the GDPR published by the Future of Privacy Forum last year is helpful.    

  5. Recommender Systems Free of Profiling

“Profiling” as defined by the GDPR also plays an important role in one of the key obligations of VLOPs and VLOSEs: to offer users an alternative feed of content not based on profiling. Technically, this stems from an obligation in Article 38 of the DSA for VLOPs and VLOSEs to “provide at least one option for each of their recommender systems which is not based on profiling.” The DSA explains in Recital 70 that a core part of an online platform’s business is the manner in which information is prioritized and presented on its online interface to facilitate and optimize access to information for users: “This is done, for example, by algorithmically suggesting, ranking and prioritizing information, distinguishing through text or other visual representations, or otherwise curating information provided by recipients.” 

The DSA text further explains that “such recommender systems can have a significant impact on the ability of recipients to retrieve and interact with information online, including to facilitate the search of relevant information,” as well as playing an important role “in the amplification of certain messages, the viral dissemination of information and the stimulation of online behavior.” Additionally, as part of their obligations to assess and mitigate risks on their platforms, VLOPs and VLOSEs may need to adjust the design of their recommender systems. Recital 94 of the DSA explains that they could achieve this “by taking measures to prevent or minimize biases that lead to the discrimination of persons in vulnerable situations, in particular where such adjustment is in accordance with Article 9 of the GDPR,” where Article 9 establishes conditions for processing sensitive personal data. 
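
In practical terms, Article 38 amounts to a feed toggle. The following minimal sketch illustrates the obligation under simplifying assumptions; the data model, function names, and the choice of reverse-chronological ordering as the non-profiling option are illustrative, not requirements of the DSA.

```python
# Minimal sketch of a DSA Article 38-style feed toggle (names are illustrative).
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class Item:
    item_id: str
    posted_at: datetime
    predicted_interest: float  # personalized score from a profiling-based model

def profiled_ranking(items: List[Item]) -> List[Item]:
    # Default feed: ordered by a score derived from the user's profile,
    # i.e., "profiling" within the meaning of Article 4(4) GDPR.
    return sorted(items, key=lambda i: i.predicted_interest, reverse=True)

def non_profiled_ranking(items: List[Item]) -> List[Item]:
    # The Article 38 option: an ordering that uses no personal data about
    # the recipient, e.g., reverse chronological.
    return sorted(items, key=lambda i: i.posted_at, reverse=True)

def build_feed(items: List[Item], profiling_opt_out: bool) -> List[Item]:
    # The switch between the two options must be directly accessible
    # from the section of the interface where information is prioritized.
    return non_profiled_ranking(items) if profiling_opt_out else profiled_ranking(items)
```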

  6. Access to Data for Researchers and Competent Authorities

Article 40 of the DSA obliges VLOPs and VLOSEs to provide competent authorities (the Digital Services Coordinator designated at the national level in the EU Member State of their establishment, or the European Commission) with access to the data necessary to monitor their compliance with the regulation. This includes access to data related to algorithms, based on a reasoned request and within a reasonable period specified in the request. Additionally, they also have an obligation to provide access to vetted researchers following a request of their Digital Services Coordinator of establishment “for the sole purpose of conducting research that contributes to the detection, identification, and understanding of systemic risks” in the EU, and “to the assessment of the adequacy, efficiency, and impacts of the risk mitigation measures.” This obligation presupposes that the platforms may be required to explain the design, logic of the functioning, and the testing of their algorithmic systems, in accordance with Article 40 and its corresponding Recital 34. 

Providing access to online platforms’ data entails, in virtually all cases, providing access to personal data as well, which brings this processing under the scope of the GDPR and triggers its obligations. Recital 98 of the DSA highlights that providers and researchers alike should pay particular attention to safeguarding the rights of individuals related to the processing of personal data granted by the GDPR. Recital 98 adds that “providers should anonymize or pseudonymize personal data except in those cases that would render impossible the research purpose pursued.” Notably, the data access obligations in the DSA are subject to further specification through delegated acts, to be adopted by the European Commission. These acts are expected to “lay down the specific conditions under which such sharing of data with researchers can take place” in compliance with the GDPR, as well as “relevant objective indicators, procedures and, where necessary, independent advisory mechanisms in support of sharing of data.” This is another area where the DPAs and the DSA enforcers should closely collaborate. 

  7. Takedown of Illegal Content

Core to the DSA are obligations for hosting services, including online platforms, to remove illegal content: Article 16 of the DSA outlines this obligation based on a notice-and-action mechanism initiated at the notification of any individual or entity. The GDPR confers rights on individuals to request erasure of their personal data (Article 17 of the GDPR) under certain conditions, as well as the right to request rectification of their data (Article 16 of the GDPR). These rights of the “data subject” under the GDPR aim to strengthen individuals’ control over how their personal data is collected, used, and disseminated. Article 3(h) of the DSA defines “illegal content” as “any information that, in itself or in relation to an activity … is not in compliance with Union law or the law of any Member State…, irrespective of the precise subject matter or nature of that law.” As a result, to the extent that “illegal content” as defined by the DSA is also personal data, an individual may potentially use either avenue, depending on how the overlap of the two provisions is further clarified in practice. Notably, one of the grounds for obtaining erasure of personal data is that “the personal data has been unlawfully processed” – that is, processed in breach of the GDPR, which is Union law. 

Article 16 of the DSA highlights an obligation for hosting services, including online platforms, to put mechanisms in place to facilitate the submission of sufficiently precise and adequately substantiated notices. Article 12 of the GDPR, on the other hand, requires controllers to facilitate the exercise of data subject rights, including erasure, and to communicate information on the action taken without undue delay and in any case no later than one month after receiving the request. The DSA does not prescribe a specific timeline for dealing with notices for removal of illegal content, other than “without undue delay.” All hosting services and online platforms whose activity falls under the GDPR have internal processes set up to respond to data subject requests, which could potentially be leveraged in setting up mechanisms to remove illegal content pursuant to notices under the DSA. However, a key differentiator is that under the DSA content removal requests can also come from authorities (see Article 9 of the DSA) and from “trusted flaggers” (Article 22), in addition to any individual or entity – each under its own conditions. In contrast, erasure requests under the GDPR can only be submitted by data subjects (individuals whose personal data is processed), either directly or through intermediaries acting on their behalf. DPAs may also impose the erasure of personal data, but only as a measure pursuant to an enforcement action.  
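
For providers that want to leverage existing data subject request intake in this way, the routing logic might look like the minimal sketch below. The request types, workflow names, and routing rules are hypothetical assumptions for illustration; neither regulation prescribes this structure.

```python
# Hypothetical intake routing for GDPR erasure requests vs. DSA notices.
from dataclasses import dataclass
from enum import Enum, auto

class RequesterType(Enum):
    DATA_SUBJECT = auto()     # GDPR Art. 17, directly or via an intermediary
    ANY_INDIVIDUAL = auto()   # DSA Art. 16 notice-and-action
    TRUSTED_FLAGGER = auto()  # DSA Art. 22, notices handled with priority
    AUTHORITY = auto()        # DSA Art. 9 orders to act against illegal content

@dataclass
class InboundRequest:
    requester: RequesterType
    concerns_personal_data: bool
    claims_illegality: bool

def route(request: InboundRequest) -> str:
    if request.requester is RequesterType.DATA_SUBJECT and request.concerns_personal_data:
        # GDPR track: respond without undue delay, no later than one month (Art. 12(3)).
        return "gdpr_erasure_workflow"
    if request.claims_illegality:
        # DSA track: act "without undue delay"; flagger and authority requests get priority.
        return "dsa_notice_and_action_workflow"
    return "manual_review"
```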

VLOPs/VLOSEs will additionally have to design mitigation measures covering their content moderation processes, including the speed and quality with which they process notices related to specific types of illegal content and remove it expeditiously. 

  8. Risk Assessments

The DSA, pursuant to Article 34, obliges VLOPs/VLOSEs to conduct a risk assessment at least once per year to identify, analyze, and assess “systemic risks stemming from the design or functioning of their service and its related systems,” including algorithmic systems. The same entities are very likely subject to the obligation to conduct a Data Protection Impact Assessment (DPIA) under Article 35 of the GDPR, as at least some of their processing operations, like using personal data for recommender systems or profiling users based on personal data to display online advertising, meet the criteria that trigger the DPIA obligation. A DPIA is required in particular where processing of personal data “using new technologies, and taking into account the nature, scope, context, and purposes of the processing, is likely to result in a high risk to the rights and freedoms of natural persons.” 

There are four systemic risks that the DSA requires to be included in the risk assessment: dissemination of illegal content; any actual or foreseeable negative effects on the exercise of specific fundamental rights, among which the right to respect for private life and the right to the protection of personal data are mentioned; any actual or foreseeable negative effects on civic discourse, electoral processes, and public security; and any actual or foreseeable negative effects in relation to gender-based violence, the protection of public health and minors, and serious negative consequences for the person’s physical and mental well-being. 

Among the elements that a DPIA under the GDPR must include is “an assessment of the risks to the rights and freedoms of data subjects” that may arise from how controllers process personal data through new technologies, such as algorithmic systems. Other elements that must be included are the measures envisaged to address these risks, similar to how Article 35 of the DSA requires VLOPs/VLOSEs to put mitigation measures in place tailored to the identified risks. The EDPB has also published guidelines on how to conduct DPIAs.

When conducting the risk assessments required by the DSA, VLOPs/VLOSEs must take into account whether and how specific factors enumerated in Article 34(2) influence any of the systemic risks mentioned. Most factors to consider are linked to how VLOPs/VLOSEs process personal data, such as the design of their algorithmic systems, the systems for selecting and presenting advertisements, and generally their data-related practices. 

Both DSA risk assessments and DPIAs are ex-ante risk assessment obligations and both involve some level of engagement with supervisory authorities. The scope of the assessments differs: the DSA focuses on systemic risks, including risks that go beyond impacts on fundamental rights, while the GDPR’s DPIA focuses on any risks that novel processing of personal data may pose to fundamental rights and freedoms and on assessments unique to data protection. However, they also have areas of clear overlap where processing of personal data is involved. DPIAs can potentially feed into DSA risk assessments, and the two processes should be implemented consistently. 

  9. Compliance Function and the DSA Legal Representative 

Under the DSA, in accordance with Article 41, the designated VLOPs/VLOSEs will be obliged to establish a “compliance function,” which can be composed of several compliance officers. This function must be (i) independent from their operational functions; (ii) allocated sufficient authority, stature, and resources; and (iii) given access to the management body of the provider to monitor the provider’s compliance with the DSA. On top of that, the compliance function will have to cooperate with the Digital Services Coordinator of establishment, ensure that all risks are identified through the risk assessments and that the mitigation measures are effective, and inform and advise the management and employees of the provider in relation to DSA obligations. 

All providers of services designated as VLOPs and VLOSEs that are also controllers under the GDPR are under an obligation to appoint a Data Protection Officer (DPO), as they very likely meet the criteria in Article 37 of the GDPR due to the nature and scope of their processing activities involving personal data. There are similarities between the compliance function and the DPO, including their independence, their reporting to the highest management level, their key task of monitoring compliance with the whole regulation that creates their role, and their task of cooperating with the competent supervisory authorities. Appointing two independent roles that have a powerful internal position and whose functions may overlap to a certain extent will require consistency and coordination, which can be supported by further guidance from DPAs and DSA supervisory authorities. 

Another role in the application of the two regulations that has many similarities is that of a “representative” in the EU, for situations where the DSA and the GDPR apply extraterritorially to entities that do not have an establishment in the EU. In the DSA, this obligation pertains to all online service providers, pursuant to Article 13. A provider that processes personal data in the context of targeting its services to individual recipients in the EU, or that monitors the recipients’ behavior, triggers the extraterritorial application of the GDPR as well and must then also appoint a GDPR representative, in accordance with Article 27. Under the GDPR, the representative acts as a mere “postal box” or point of correspondence between the non-EU controller or processor on one hand and DPAs or data subjects on the other, with liability that does not go beyond its own statutory obligations. In contrast, Article 13(3) of the DSA suggests that the “legal representative” could be held liable for failures of the intermediary service providers to comply with the DSA. Providers must mandate their legal representatives for the purpose of being addressed “in addition to or instead of” them by competent authorities, per Article 13(2) of the DSA. 

Recital 44 of the DSA clarifies that the obligation to appoint a “sufficiently mandated” legal representative “should allow for the effective oversight and, where necessary, enforcement of this regulation in relation to those providers.” The legal representative must have “the necessary powers and resources to cooperate with the relevant authorities” and the DSA envisages that there may be situations where providers even appoint in this role “a subsidiary undertaking of the same group as the provider, or its parent undertaking, if that subsidiary or parent undertaking is established in the Union.” Recital 44 also clarifies that the legal representative may function as a mere point of contact, “provided the relevant requirements of this regulation are complied with.” This could mean that, where other structures are in place to ensure that an entity acting on behalf of the provider can be held liable for the provider’s non-compliance with the DSA, the representative can function just as a “postal box.”

  10. Intermediary Liability and the Obligation to Provide Information

Finally, the GDPR and the DSA intersect in areas where data protection, privacy, and intermediary liability overlap.

The GDPR, per Article 2, stresses that its provisions shall be read without prejudice to the e-Commerce Directive (2000/31/EC), in particular to “the liability rules of intermediary service providers in Articles 12 to 15 of that Directive”. However, the DSA, pursuant to its Article 89, deletes Articles 12 to 15 of the e-Commerce Directive and stipulates that relevant “references to Articles 12 to 15 of Directive 2000/31/EC shall be construed as references to Articles 4, 5, 6 and 8 of this Regulation, respectively.”

The DSA deals with the liability of intermediary service providers, especially through Articles 4 to 10. With respect to Article 10, which addresses orders to provide information, the DSA envisages strong cooperation between intermediary service providers, national authorities, and the Digital Services Coordinators as enforcers. This could potentially involve the sharing of information, including, in certain cases, personal data that has already been collected, in order to combat illegal content online. The GDPR passes the baton on intermediary liability to the DSA, but where such data sharing and processing occur, intermediary service providers should ensure that they comply with the protections of the GDPR (in particular sections 2 and 3). This is yet another instance where the two Regulations complement each other, this time on intermediary liability and the obligation to provide information. 

The DSA Will Be Enforced Through a Complex Web of Authorities, And The Interplay With The GDPR Complicates It

Enforcement in such a complex space will be challenging. In a departure from the approach promoted by the GDPR, where enforcement is ensured primarily at the national level and through the One Stop Shop mechanism for cross-border cases coordinated through the European Data Protection Board, the DSA centralizes enforcement at the EU level when it comes to VLOPs and VLOSEs, leaving it in the hands of the European Commission. 

However, Member States will also play a role in enforcing the DSA against intermediary service providers that are not VLOPs and VLOSEs. Each Member State must designate one or more competent authorities for the enforcement of the DSA and, if it designates more than one, must appoint one of them as its Digital Services Coordinator (DSC). The deadline to designate DSCs is February 2024. Because the designation of national competent authorities is left to the Member States, there appears to be no consistent approach as to what type of authority is best positioned to enforce the Act. Not all Member States have appointed their DSCs yet, but the broad spectrum of enforcers that Member States plan to rely on is already creating a scattered landscape.

Table 3 – Authorities Designated or Considered for Designation as Digital Services Coordinators Across the EU Member States (Source: Euractiv)

Digital Services Coordinators | Member States
Media Regulator | Belgium, Hungary, Ireland and Slovakia
Consumer Protection Authority | Finland and the Netherlands
Telecoms Regulator | Czech Republic, Germany, Greece, Italy, Poland, Slovenia and Sweden
Competition Authority | Spain

The Digital Services Coordinators will be closely collaborating and coordinating with the European Board for Digital Services, which will serve in an advisory capacity (Articles 61-63 of the DSA), in order to ensure consistent cross-border enforcement. Member States are also tasked with adopting national rules on penalties applicable to infringements of the DSA, including fines that can go up to 6% of the annual worldwide turnover of the provider of intermediary services concerned in the preceding financial year (Article 52 of the DSA). Complaints can be submitted to DSCs by recipients of the services and by any body, organization, or association mandated to exercise rights conferred by the DSA on recipients. With respect to VLOPs and VLOSEs, the European Commission can issue fines not exceeding 6% of the annual worldwide turnover in the preceding year, following non-compliance decisions that can also require platforms to take the measures necessary to remedy the infringements. Moreover, the Commission can also order interim measures before an investigation is completed, where there is urgency due to the risk of serious damage to the recipients of the service.  

The recipients of the service, including users of online platforms, also have a right to seek compensation from providers of intermediary services for damages or loss they suffer due to infringements of the DSA (Article 54 of the DSA). The DSA also provides for out-of-court dispute settlement mechanisms with regard to decisions of online platforms related to illegal content (Article 21 of the DSA), independent audits of how VLOPs/VLOSEs comply with their obligations (Article 37 of the DSA), and voluntary codes of conduct adopted at the Union level to tackle various systemic risks (Article 45), including codes of conduct for online advertising (Article 46) and for accessibility of online services (Article 47).  

The newly established European Centre for Algorithmic Transparency (ECAT) also plays a role in this enforcement equation. The ECAT will support the Commission in its assessment of VLOPs/VLOSEs with regard to risk management and mitigation obligations. Moreover, it will be particularly relevant to issues pertaining to recommender systems, information retrieval, and search engines. The ECAT will use a principles-based approach to assessing fairness, accountability, and transparency. However, the DSA is not the only regulation relevant to the use of algorithms and AI by platforms: the GDPR, the upcoming Digital Markets Act, the EU AI Act, and the European Data Act add to this complicated landscape. 

The various areas of interplay between the DSA and the GDPR outlined above require consistent interpretation and application of the law. However, the enforcement and oversight structure of the DSA recognizes no formal cooperation or coordination role for DPAs, the European Data Protection Board, or the European Data Protection Supervisor. This should not be an impediment to setting up processes for such cooperation and coordination within their respective competencies, as the rollout of the DSA will likely reveal the complexity of the interplay between the two legislative frameworks even beyond the ten areas outlined above. 

Editor: Alexander Thompson

The Digital Personal Data Protection Act of India, Explained

Authors: Raktima Roy, Gabriela Zanfir-Fortuna

Raktima Roy is a Privacy Attorney with several years of experience in India, holds an LLM in Law and Technology from Georgetown University, and is an FPF Global Privacy Intern.

The Digital Personal Data Protection Act of India (DPDP) sprinted through its final stages last week after several years of debates, postponements and negotiations, culminating with its publication in the Official Gazette on Friday, August 11, 2023. In just over a week, the Bill passed the lower and upper Houses of the Parliament and received Presidential assent. India, the most populous country in the world with more than 1.4 billion people, is the largest democracy and the 19th country among the G20 members to pass a comprehensive personal data protection law – which it did during its tenure holding the G20 Presidency.

The adoption of the DPDP Bill in the Parliament comes 6 years after Justice K.S. Puttaswamy v Union of India, a landmark case in which the Supreme Court of India recognized a fundamental right to privacy in India, including informational privacy, within the “right to life” provision of India’s Constitution. In this judgment, a nine-judge bench of the Supreme Court urged the Indian Government to put in place “a carefully structured regime” for the protection of personal data. As part of India’s ongoing efforts to create this regime, there have been several rounds of expert consultations and reports, and two previous versions of the bill were introduced in the Parliament in 2019 and 2022. A brief history of the law is available here.

The law as enacted is transformational. It has a broad scope of application, borrowing from the EU’s General Data Protection Regulation (GDPR) approach when defining “personal data” and extending coverage to all entities who process personal data regardless of size or private status. The law also has significant extraterritorial application. The DPDP creates far-reaching obligations: it imposes narrowly defined lawful grounds for processing any personal data in a digital format; establishes purpose limitation obligations and their corollary, a duty to erase the data once the purpose is met, with seemingly no room left for secondary uses of personal data; and creates a set of rights for individuals whose personal data are collected and used, including rights to notice, access, and erasure. The law also creates a supervisory authority, the Data Protection Board of India (Board), which has the power to investigate complaints and issue fines, but does not have the power to issue guidance or regulations. 

At the same time, the law provides significant exceptions for the central government and other government bodies, with the degree of exemption depending on their function (such as law enforcement). Other exemptions include those for most publicly available personal data, processing for research and statistical purposes, and processing of the personal data of foreigners by companies in India pursuant to a contract with a foreign company (such as outsourcing companies). Some processing by startups may also be exempt, if notified by the government. The Act also empowers the central government to request access to any information from an entity processing personal data, an intermediary (as defined by the Information Technology Act, 2000 – the “IT Act”) or from the Board, as well as, acting on the Board’s advice, to order suspension of public access to specific information. The Central Government is also empowered to adopt a multitude of “rules” (similar to regulations under US state privacy laws) that detail the application of the law. 

It is important to note that the law will not come into effect until the government provides notice of an effective date. The DPDP Act does not contain a mandated transitional period akin to the two-year gap between the GDPR’s enactment in 2016 and its application in May 2018. Rather, it empowers the Government to determine the dates on which different sections of the Act will come into force, including the sections governing the formation of the new Board that will oversee compliance with the law. 

This blog will lay out the most important aspects of the DPDP Act, understanding nonetheless that many of its key provisions will be shaped through subsequent rules issued by the central government, and through practice. 

  1. The DPDP Act Applies to “Data Fiduciaries,” “Significant Data Fiduciaries,” and provides rights for “Data Principals” 

The DPDP Act seeks to establish a comprehensive national framework for processing personal data, replacing a much more limited data protection framework under the IT Act and rules that currently provide basic protections to limited categories of “sensitive” personal data such as sexual orientation, health data, etc. The new law, by contrast, covers all “personal data” (defined as “any data about an individual who is identifiable by or in relation to such data”) and does not contain heightened protection for any special category of data. The definition of “personal data,” thus, relies on the broad “identifiability” criterion, similar to the GDPR. Only “digital” personal data, or personal data collected through non-digital means that has subsequently been digitized, is covered by the law. 

The DPDP Act uses the term “data principal” to refer to the individual that the personal data relates to (the equivalent of “data subject” under the GDPR). A “data fiduciary” is the entity that determines the purposes and means of processing of personal data, alone or in conjunction with others, and is the equivalent of a “data controller” under the GDPR. While the definition of data fiduciaries includes a reference to potential joint fiduciaries, the Act does not provide any other details about this relationship. 

The definition of fiduciaries does not distinguish between private and public entities, or between natural and legal persons, technically extending to any person as long as the other conditions of the law are met. 

Specific Fiduciaries, Public or Private, Are Exempted or May Be Exempted from the Core Obligations of the Act

The law includes some broad exceptions for government entities in general, and others apply to specific processing purposes. For instance, the law allows the government to exempt activities that are in the interests of the sovereignty and integrity of India, the security of the State, friendly relations with foreign States, maintenance of public order, or preventing incitement to commit crimes, if it provides notice of the exemptions. Justice Srikrishna, who led the creation of the first draft of the law in 2017 as head of an expert committee set up to recommend a data protection law for India, has been critical of these government exemptions, as have been several Members of Parliament during the legislative debate. 

Some targeted exceptions also apply to companies, and are either well defined in the law or left to the government for specification. Under what can be called an “outsourcing exception,” the Act exempts companies based in India that process the personal data of people outside India, pursuant to a contract with a company based outside India, from core DPDP obligations, including the rights of access and erasure normally held by data principals. Instead, such companies are largely required to comply only with data security obligations. 

In addition, the government is empowered to exempt any category of data fiduciaries from some or all of the law, with the DPDP itself referring to “startups” in this context. These are fairly broad provisions and do not include any guidance on how they will apply or who could benefit from them. The government will need to make a specific designation for this exception to operate.

Significant Data Fiduciaries Have Significant New Obligations, such as DPOs, DPIAs and Audits 

The DPDP Act empowers the Government to designate any data fiduciary or class of data fiduciaries as a “Significant Data Fiduciary” (SDF), based on a series of criteria that lack quantifiable thresholds. These factors range from characteristics of the processing operations (the volume and sensitivity of personal data processed and the risk posed to the rights of data principals) to broader societal and even national sovereignty concerns (the potential impact of the processing on the sovereignty and integrity of India; risk to electoral democracy; security of the state; and public order).

The designation of companies as SDFs is consequential, because it comes with enhanced obligations. Chief among them, SDFs will need to appoint a Data Protection Officer (DPO), who must be based in India and be the point of contact for a required grievance redressal mechanism. SDFs must also appoint an independent data auditor to carry out data audits and evaluate the SDF’s compliance with the DPDP Act, and must undertake periodic Data Protection Impact Assessments.

It is important to note that appointing a DPO is not an obligation for all data fiduciaries. However, all fiduciaries are under an obligation to establish a “readily available” mechanism for redressing grievances by data principals in a timely manner. Operationalizing such a process will usually call for an internal privacy compliance function or a dedicated privacy officer.

The DPDP Act Recognizes the Role of Data Processors

Data processors are recognized by the DPDP Act, which makes it clear that fiduciaries may engage, appoint or otherwise involve processors to process personal data on their behalf “only under a valid contract” (Section 8(2)). There are no prescribed rules for what a processing contract should entail. However, the DPDP Act places all obligations on data fiduciaries, which remain liable for complying with the law. 

Data fiduciaries remain liable for overall compliance, regardless of any contractual arrangement to the contrary with data processors. The DPDP Act requires data fiduciaries to mandate that a processor delete data when a data principal withdraws consent, and to be able to share information about the processors they have engaged when a data principal so requests.

  2. The DPDP Act Has Broad Extraterritorial Effect and Almost No Restrictions for International Data Transfers

The DPDP Act applies to the processing of “digital personal data” within India. Importantly, the definition of the “data principal” does not include any condition related to residence or citizenship, meaning that it is conceivable that fiduciaries based in India that process the personal data of foreigners within the territory of the country are covered by the Act (outside of the “outsourcing exception” mentioned above). 

The Act also applies extraterritorially to processing of digital personal data outside India, if such processing is in connection with any activity related to offering goods or services to data principals within India. The extraterritorial effect is similar in scope to the GDPR’s, and it may leave room for a broader interpretation through its inclusion of “any activity” connected to the offering of goods or services.

The DPDP Act does not currently restrict the transfer of personal data outside of India. It reverses the typical paradigm of international data transfer provisions in laws like the GDPR, by presuming that transfers may occur without restrictions, unless the Government specifically restricts transfers to certain countries (blacklisting) or enacts any other form of restriction (Section 16). No criteria for such restrictions are mentioned in the law. This is a significant departure from previous versions of the Bill, which at one point contained data localization obligations (2018) and at another point evolved into “whitelisting” of countries (2022). 

It should also be noted that other existing sectoral laws (e.g., those governing specific industries like banking and telecommunications) already contain restrictions on cross-border transfers of particular kinds of data. The DPDP Act clarifies that existing localization mandates will not be affected by the new law. 

  3. Consent Remains Primary Means for Lawful Processing of Personal Data Under the Act

Data fiduciaries are under an obligation to process personal data for a lawful purpose and only if they either obtain consent from the data principal for that purpose, or they identify a “legitimate use” consistent with Section 4. This process is conceptually similar to the approach proposed by the GDPR, requiring a lawful ground before personal data can be collected or otherwise processed. However, in contrast to the GDPR (which provides for six possible lawful grounds), the DPDP Act includes only two: strictly defined “consent” and “legitimate use.” 

Which lawful ground is used for a processing operation is consequential. Based on the wording of the Act and in the absence of further specification, the obligations of fiduciaries to give notice and respond to access, correction and erasure requests (see Section 4 of this blog) are only applicable if the processing is based on consent and on voluntary sharing of personal data by the principal. 

Valid Consent Has Strict Requirements, Is Withdrawable, And Can be Exercised Through Consent Managers

The DPDP Act requires that consent for processing of personal data be “free, specific, informed, unconditional and unambiguous with a clear affirmative action.” These conditions are as strict as those required under the GDPR, highlighting that the people whose personal data are processed must be free to give consent, and that their consent must not be tied to other conditions.  

In order to meet the “informed” criterion, the Act requires that notice be given to principals before or at the time that they are asked to give consent. The notice must include information about the personal data to be collected, the purpose for which it will be processed, the manner in which data principals may exercise their rights under the DPDP Act, and how to make a complaint to the Board. Data principals must be given the option to receive the information in English or in any language specified in the Eighth Schedule to the Constitution.
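
These notice and consent conditions map naturally onto a simple data model. The sketch below is one possible in-house representation of a DPDP-style consent request; the field names are illustrative assumptions, not drawn from the Act.

```python
# Hypothetical data model for a DPDP-style notice and consent request.
from dataclasses import dataclass
from typing import List

@dataclass
class ConsentNotice:
    personal_data_items: List[str]  # the personal data to be collected
    purpose: str                    # the single, specified purpose
    rights_info: str                # how to exercise rights under the Act
    board_complaint_info: str       # how to complain to the Board
    language: str = "en"            # English or an Eighth Schedule language

@dataclass
class ConsentRequest:
    # One request per purpose: bundling several purposes behind a single
    # checkbox would conflict with the "specific" requirement.
    notice: ConsentNotice
    granted: bool = False           # never pre-ticked; set only by a clear affirmative action
```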

The DPDP Act addresses the issue of legacy data for which companies may have received consent prior to the enactment of the law. Fiduciaries should provide the same notice to these data principals as soon as “reasonably practicable.” In that case, however, the data processing may continue until the data principal withdraws consent. 

Data fiduciaries may only process personal data for the specific purpose provided to the data principal and must obtain separate consent to process the data for a new purpose. In practice, this will make it difficult for data fiduciaries to rely on “bundled consent.” Provisions around “secondary uses” of personal data or “compatible purposes” are not addressed in the Act, making the purpose limitation requirements strict. 

Data principals may also withdraw their consent at any time – and data fiduciaries must ensure that the process for withdrawing consent is as straightforward as that for giving consent. Once consent is withdrawn, personal data must be deleted unless a legal obligation to retain data applies. Additionally, data fiduciaries must ask any processors to cease processing any personal data for which consent has been withdrawn, in the absence of legal obligations imposing data retention.
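
The withdrawal flow just described can be summarized in a short sketch. The `ConsentStore` and `Processor` interfaces are hypothetical; the Act does not prescribe any particular implementation.

```python
# Sketch of the consent-withdrawal cascade under the DPDP Act.
from typing import List, Protocol

class ConsentStore(Protocol):
    def mark_consent_withdrawn(self, principal_id: str, purpose: str) -> None: ...
    def delete_personal_data(self, principal_id: str, purpose: str) -> None: ...

class Processor(Protocol):
    def cease_processing_and_erase(self, principal_id: str, purpose: str) -> None: ...

def withdraw_consent(principal_id: str, purpose: str, store: ConsentStore,
                     processors: List[Processor], legal_retention_applies: bool) -> None:
    # Withdrawing must be as easy as giving consent: a single step, no extra friction.
    store.mark_consent_withdrawn(principal_id, purpose)
    if not legal_retention_applies:
        # Absent a legal obligation to retain the data, it must be deleted.
        store.delete_personal_data(principal_id, purpose)
    # The fiduciary must also ask its processors to cease processing (and erase)
    # the data covered by the withdrawn consent.
    for processor in processors:
        processor.cease_processing_and_erase(principal_id, purpose)
```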

The DPDP Act allows principals to give, manage, review, and withdraw their consent through a “Consent Manager,” which will be registered with the Board and must provide an accessible, transparent, and interoperable platform. Consent Managers are part of India’s “Data Empowerment And Protection Architecture” policy, and similar structures have already been functional for some time, such as in the financial sector. Under the DPDP Act, Consent Managers will be accountable to data principals and act on their behalf as per prescribed rules. The Government will notify (in the Gazette) the conditions necessary for a company to register as a Consent Manager, which may include fulfilling minimum technical or financial criteria.
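
One way to picture the Consent Manager’s role is as a single, interoperable ledger of consents. The toy model below makes this concrete under simplifying assumptions; the actual duties of registered Consent Managers will be set by the conditions notified by the Government and by subsequent rules.

```python
# Toy model of a Consent Manager as an interoperable consent ledger.
from typing import Dict, List

class ConsentManager:
    def __init__(self) -> None:
        # principal_id -> consent records (fiduciary, purpose, status)
        self._records: Dict[str, List[dict]] = {}

    def give(self, principal_id: str, fiduciary: str, purpose: str) -> None:
        self._records.setdefault(principal_id, []).append(
            {"fiduciary": fiduciary, "purpose": purpose, "status": "given"}
        )

    def review(self, principal_id: str) -> List[dict]:
        # Transparency: the principal sees every consent in one place.
        return list(self._records.get(principal_id, []))

    def withdraw(self, principal_id: str, fiduciary: str, purpose: str) -> None:
        for record in self._records.get(principal_id, []):
            if record["fiduciary"] == fiduciary and record["purpose"] == purpose:
                record["status"] = "withdrawn"
                # In practice, the manager would notify the fiduciary here.
```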

“Legitimate Uses” Are Narrowly Defined and Do Not Include Legitimate Interests or Contractual Necessity

As an alternative to consent, all other lawful grounds for processing personal data have been amalgamated under the “legitimate uses” section, including some grounds of processing that previously appeared under a “reasonable purposes” category in earlier iterations of the bill. It is notable that the list of “legitimate uses” in Section 7 of the Act does not include provisions similar to the grounds of “contractual necessity” and “legitimate interests” found in GDPR-style data protection laws, leaving private fiduciaries limited options for grounding processing of personal data outside of consent, including for routine or necessary processing operations. 

Among the defined “legitimate uses,” the most relevant for processing personal data outside of a government, emergency, or public health context are the “voluntary sharing” of personal data under Section 7(a) and the “employment purposes” use under Section 7(i). 

The lawful ground most likely to raise interpretation questions is “voluntary sharing.” It allows a fiduciary to process personal data for a specified purpose for which a principal has voluntarily provided their personal data to the data fiduciary (presumably, provided it without the fiduciary seeking to obtain consent), and for which the principal has not indicated to the fiduciary an objection to the use of the personal data. For instance, one of the illustrations included in the law to explain Section 7(a) is the hypothetical of a buyer requesting a receipt of purchase at a store be sent to her phone number, permitting the store to use the number for that purpose. There is a possibility that subsequent rules may expand this “legitimate use” to cover instances of “contractual necessity” or “legitimate interests.”

A fiduciary may also process personal data without consent for purposes of employment or those related to safeguarding the employer from loss or liability, such as prevention of corporate espionage, maintenance of confidentiality of trade secrets, intellectual property, classified information or provision of any service to employees.

  4. Data Principals Have a Limited Set of “Data Subject Rights,” But Also Obligations

The DPDP Act provides data principals a set of enumerated rights, which is limited compared to those offered under modern GDPR-style data protection laws. The DPDP guarantees a right of access and a right to erasure and correction, in addition to a right to receive notice before consent is sought (similar to the right to information in the GDPR). However, a right to data portability, a right to object to processing based on grounds other than consent, and the right not to be subject to solely automated decision-making are missing. 

Instead, the DPDP Act provides for two other rights – a right to “grievance redressal,” which entails the right to have an easily accessible point of contact provided by the fiduciary to respond to complaints from the principal, and a right to “appoint a nominee,” which permits the data principal to nominate someone who can exercise rights on their behalf in the event of death or incapacity.

Notably, the rights of access, erasure and correction are limited to personal data processing based on consent or on the “voluntary sharing” legitimate use, which means that whenever government bodies or other fiduciaries rely on any of the other “legitimate uses” grounds, they will not need to reply to access or erasure/correction requests, unless further rules adopted by the government specify otherwise.

In addition, the right of access is quite limited in scope. It only gives data principals the right to request and obtain a summary of the personal data being processed and of the relevant processing activities (as opposed to obtaining a copy of the personal data), and the identities of all fiduciaries and processors with whom the personal data has been shared by the fiduciary, along with a summary of the data being shared. However, Section 11 of the law leaves space for subsequent rules that may specify additional information to be given access to.
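
To make the narrow scope of this right concrete, a response to a Section 11 access request could be represented roughly as below; the response shape and field names are assumptions for illustration, not prescribed by the Act.

```python
# Hypothetical shape of a DPDP Section 11 access-request response.
from dataclasses import dataclass
from typing import Dict

@dataclass
class AccessResponse:
    data_summary: str            # a summary of the personal data, not a copy
    processing_summary: str      # a summary of the relevant processing activities
    shared_with: Dict[str, str]  # recipient (fiduciary or processor) -> summary of data shared

# Example response for a single data principal.
response = AccessResponse(
    data_summary="Name, phone number, and delivery address on file.",
    processing_summary="Used to fulfil orders and send delivery updates.",
    shared_with={"Acme Logistics (processor)": "Name and delivery address"},
)
```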

Data principals have the right to request erasure of personal data pursuant to Section 12(3), but it is important to highlight that erasure may also be required automatically – after the withdrawal of consent or when the specified purpose is no longer being served (Section 8(7)(a)). Similarly, correction, completion and updating of personal data can be requested by the principal, but must also occur automatically when the personal data is “likely to be used to make a decision that affects” the principal (Section 8(3)).  

Data Principals May Be Fined if They Do Not Comply With Their Obligations

Unlike the majority of international data protection laws, Section 15 of the DPDP Act imposes duties on data principals, similar to Article 10 of Vietnam’s recently adopted Personal Data Protection Decree (titled “Obligations of data subjects”). 

These obligations include, among others, a duty not to impersonate someone else while providing personal data for a specified purpose, not to suppress any material information while providing personal data for any document issued by the Government, and, significantly, not to register a false or frivolous grievance or complaint. Noncompliance may result in a fine (see clause 5 of the Schedule). This may hamper the submission of complaints to the Board, per expert analysis.

  5. Fiduciaries are Bound by a Principle of Accountability and Have Data Breach Notification Obligations

The DPDP Act does not articulate Principles of Processing, or Fair Information Practice Principles, but the content of several of its provisions put emphasis on purpose limitation (as explained in previous sections of the blog) and on the principle of accountability. 

Section 8 of the Act includes multiple obligations for data fiduciaries, all under an umbrella expectation in paragraph 1 that they are “responsible for complying” with the provisions of the Act and any subsequent implementation rules, both regarding processing undertaken by the data fiduciary and by any processor on its behalf. This specification echoes the GDPR accountability principle. In addition, data fiduciaries are under an obligation to implement appropriate technical and organizational measures to ensure the effective implementation of the law.

Data security is of particular importance, considering that data fiduciaries must both take reasonable security safeguards to prevent personal data breaches, and notify the Board and each affected party if such breaches occur. The details related to modalities and timeline of notification will be specified in subsequent implementation rules. 

A final obligation of data fiduciaries to highlight is the requirement that they establish a “readily available” mechanism for redressing “grievances” by data principals in a timely manner. The “grievance redress” mechanism is of utmost importance, considering that data principals cannot address the Board with a complaint until they “exhaust the opportunity of redressing” the grievance through this mechanism (Section 13(3)). The Act leaves determination of the time period for responding to grievances to delegated legislation, and it is possible that there may be different time periods for different categories of companies.

  6. Fiduciaries Have a Mandate to Verify Parental Consent for Processing Personal Data of Minors under 18

The DPDP Act creates significant obligations concerning the processing of children’s personal data, with “children” defined as minors under 18 years of age, without any distinguishing sub-category for older children or teenagers. As a matter of principle, data fiduciaries are forbidden to engage in any processing of children’s data that is “likely to cause any detrimental effect on the well-being of the child.”

Data fiduciaries are under an obligation to obtain verifiable parental consent before processing the personal data of any child. Similarly, consent must be obtained from a lawful guardian before processing the data of a person with disability. This obligation, which is increasingly common to privacy and data protection laws around the world, may create many challenges in practice. A good resource for untangling its complexity and applicability is FPF’s recently published report and accompanying infographic – “The State of Play: Is Verifiable Parental Consent Fit For Purpose?”

Finally, the Act also includes a prohibition on data fiduciaries engaging in tracking or behavioral monitoring of children, or targeted advertising directed at children. Similar to many other provisions of the Act, the government may issue exemptions from these obligations for specific classes of fiduciaries, or may even lower the age of digital consent for children when their personal data is processed by designated data fiduciaries.
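
The combined effect of these provisions can be sketched as a simple pre-processing guard, assuming that age verification and parental-consent verification happen upstream and that no government exemption has been notified; the purpose labels are hypothetical.

```python
# Sketch of a DPDP-style guard for processing children's personal data.
PROHIBITED_FOR_CHILDREN = {"tracking", "behavioral_monitoring", "targeted_advertising"}

def may_process_childs_data(age: int, verified_parental_consent: bool, purpose: str) -> bool:
    if age >= 18:
        return True  # not a "child" under the Act
    if purpose in PROHIBITED_FOR_CHILDREN:
        return False  # prohibited outright for children, regardless of consent
    # Otherwise, processing requires verifiable parental consent and must not be
    # likely to cause a detrimental effect on the child's well-being (a
    # substantive test this simple check cannot capture).
    return verified_parental_consent
```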

  7. The Act Creates a Data Protection Board to Enforce the Law, But Reserves Regulatory Powers For the Government

The DPDP Act empowers the Government to establish the Board as an independent agency that will be responsible for enforcing the new law. The Board will be led by a Chairperson and will have Members appointed by the Government for a renewable two-year mandate.

The Board is vested with the power to receive and investigate complaints from data principals, but only after the principal has exhausted the internal grievance redress mechanism set up by the relevant data fiduciaries. The Board can issue binding orders against those who breach the law, direct urgent measures to remediate or mitigate a data breach, impose financial penalties, and direct parties to mediation. 

While the Board is granted “the same powers as are vested in a civil court” – including summoning any person, receiving evidence, and inspecting any documents (Section 28(7)) – the Act specifically excludes any access to civil courts in the application of its provisions (Section 39), creating a de facto limitation on effective judicial remedies of the kind provided by Article 82 of the GDPR. The Act grants any person affected by a decision of the Board the right to pursue an appeal before an Appellate Tribunal, designated as the Telecom Disputes Settlement and Appellate Tribunal established under other Indian law.

Penalties for breaches of the law are stipulated in a Schedule attached to the DPDP Act and range from the equivalent in rupees of roughly USD 120 to USD 30.2 million. The Board can determine the penalty amount from a preset range based on the offense. 

However, the Board does not have the power to pass regulations to further specify details related to the implementation of the Act. Broad discretion is instead conferred on the Government to adopt delegated legislation specifying the provisions of the Act, including clarifying modalities and timelines for fiduciaries to respond to requests from data principals, the requirements of valid notice for obtaining a data principal’s consent for processing of data, details related to data breach notifications, and more. The list of operational details that may be specified by the Government in subsequent rules is open-ended and detailed in Section 40(2)(a) to (z). Subsection (z) of this provision provides a catch-all permitting the Central Government to prescribe rules on “any other matter” related to the implementation of the Act. 

In practice, it is expected that it will take time for the new Board to be established and for rules to be issued in key areas for compliance. 

Besides rulemaking power, the Central Government has another significant role in the application of the law. Pursuant to Section 36, it can “call for” any information (presumably including personal data) from the Board, data fiduciaries, and “intermediaries” as defined by the IT Act. No further specifications are made in relation to such requests, other than that they must be made “for the purposes of the Act.” This provision is broader and subject to fewer restrictions than the provisions on data access requests in the existing IT Act and its subsidiary rules.

Additionally, the Central Government may also order or direct any governmental agency and any “intermediary” to block information from access by the public “in the interests of the general public.” To issue such an order, the Board must have sanctioned the data fiduciary concerned at least twice in the past and must advise the Central Government to issue the order. An order blocking public access may refer to “any computer resource” that enables data fiduciaries to offer goods or services to data principals within the territory of India. While it is now common among modern comprehensive data protection laws around the world for independent supervisory authorities to order the erasure of personal data that was unlawfully processed, or to order that international data transfers or sharing of personal data cease if the conditions of the law are not met, these provisions of the DPDP Act are atypical: the orders will come directly from the Government, and they more closely resemble online platform regulation than privacy law.

  8. Exceptions for Publicly Available Data And Processing for Research Purposes Are Notable for Training AI

Given that this law comes in the midst of a global conversation about how to regulate artificial intelligence and automated decision-making, it is critical to highlight provisions in the law that seem directed at facilitating development of AI trained on personal data. Specifically, the Act excludes from its application most publicly available personal data, as long as it was made publicly available by the data principal – for example, a blogger or a social media user publishing their personal data directly – or by someone else under a legal obligation to publish the data, such as personal data of company shareholders that regulated companies must publicly disclose by law.

Additionally, the Act exempts the processing of personal data necessary for research or statistical purposes (Section 17(2)(b)). This exemption is extremely broad, with only one limitation in the core text: the Act will still apply to research and statistical processing if the processing activity is used to make “any decision specific to the data principal.”

There is only one other instance in the DPDP Act where processing data to “make decisions” about a data principal is raised. Data fiduciaries are under an obligation to ensure the “completeness, accuracy and consistency” of personal data if it is used to make a decision that affects the data principal. In other words, while the Act does not provide for a GDPR-style right not to be subject to automated decision-making, it does require that when personal data are used for making any individual decisions, presumably including automated or algorithmic decisions, such data must be kept accurate, consistent, and complete. 

Additionally, the DPDP Act remains applicable to any processing of personal data through AI systems, if the other conditions of the law are met, given the broad definitions of “processing” and of “personal data.” Further rules adopted by the Central Government or other notifications may provide more guidance in this regard.

Notably, the Act does not exempt processing of personal data for journalistic purposes, a fact criticized by the Editors’ Guild of India. In previous versions of the Bill, especially the expert version spearheaded by Justice Srikrishna in 2017, this exemption was present. It is still possible that the Central Government will address this issue through delegated legislation. 

Key Takeaways and Further Clarification

India’s data protection Act has been in the works for a significant period of time, and the passage of the law is a welcome step forward after the recognition of privacy as a fundamental right in India by the Supreme Court in its landmark Puttaswamy judgment.

While the basic structure of the law is similar to many other global laws like the GDPR and its contemporaries, India’s approach has its differences: more limited grounds of processing; wide exemptions for government actors; regulatory powers for the government to further specify the law and to exempt specific fiduciaries or classes of fiduciaries from key obligations; no baked-in definition of, or heightened protection for, special categories of data; and the rather unusual inclusion of powers for the Government to request access to information from fiduciaries, the Board, and “intermediaries,” as well as to block access by the public to specific information in “computer resources.”

Finally, we note that many details of the Act are still left to be clarified once the new Data Protection Board of India is set up and further rules for the specification of the law are drafted and officially notified. 

Editors: Lee Matheson, Dominic Paulger, Josh Lee Kok Thong

FPF at Singapore PDP Week 2023: Navigating Governance Frameworks for Generative AI Systems in the Asia-Pacific

Authors: Cheng Kit Pang, Elena Guañuna, Alistair Simmons, and Matthew Rostick

Cheng Kit Pang, Elena Guañuna, Alistair Simmons, and Matthew Rostick are FPF Global Privacy Interns.

From July 18 to July 21, 2023, the Personal Data Protection Commission (PDPC) of Singapore held its annual Personal Data Protection Week (PDP Week), which overlapped with the IAPP’s Asia Privacy Forum 2023.  

The Future of Privacy Forum (FPF)’s flagship event during PDP Week was a roundtable on the governance implications of generative AI systems in the Asia-Pacific (APAC) region. In organizing this event together with the PDPC, FPF brought together over 80 participants from industry, academia, the legal sector, and international organizations, as well as regulators from across the APAC region, Africa, and the Middle East.   

FPF Roundtable on Governance of Generative AI Systems in APAC

On July 21, FPF organized a high-level closed-door roundtable, titled “Navigating Governance Frameworks for Generative AI Systems in the Asia-Pacific.” The roundtable explored the issues raised by applications of existing and emerging AI governance frameworks in APAC to generative AI systems.

Dominic Paulger (Policy Manager, FPF APAC) kicked off the roundtable with a presentation on the existing governance and regulatory frameworks that apply to generative AI systems in the APAC region. The presentation highlighted that to date, most major APAC jurisdictions have opted for “soft law” approaches to AI governance, such as developing ethical frameworks and voluntary governance frameworks, rather than “hard law” approaches, such as enacting binding regulations. However, the presentation also explained that China is an exception to this rule and has been active in enacting regulations targeting specific AI technologies, such as deep synthesis technologies and, most recently, generative AI. In addition, even if they do not specifically target generative AI, the comprehensive data protection laws enacted in most jurisdictions in the region also apply to how these types of computer programs are trained and how they generally process personal data.

The presentation was followed by three hours of discussion, facilitated by Josh Lee Kok Thong (Managing Director, FPF APAC). The discussions were initiated by firestarters from industry, regulators, and academia. 

Turning to the wider roundtable discussion, participants highlighted the fast pace of developments in generative AI technology and hence the importance of adopting an agile and future-proof approach to governance. Participants also identified that, compared with other forms of AI technology, generative AI systems were more likely to raise challenges in addressing unseen bias in very large, unstructured data sets and “hallucinations” (generated output that is grammatically accurate but nonsensical or factually inaccurate). 

To address these issues, participants highlighted the importance of developing standards and metrics for evaluating the safety of generative AI systems and for measuring the effectiveness of achieving desired outcomes. Participants also called for efforts to educate users on generative AI systems, including the capabilities, limits, and risks of these technologies.

Regarding regulation of generative AI, participants were generally in favor of an incremental approach to the development of governance principles for generative AI systems in the region – allowing actors in the AI value chain to explore ways to operationalize existing AI principles and apply existing governance frameworks to the technology – rather than enacting “hard law” regulations. 

Participants also agreed on the need for AI governance principles to account for the three basic layers of the AI technology stack, as different policy considerations apply at each layer.

Several participants also noted that, at the ecosystem level, it would be important for stakeholders to develop a common or standardized set of terminologies or taxonomies for key concepts in generative AI technology, such as “foundation models” and “large language models” (LLMs).

Some participants also called for greater collaboration among stakeholders, a multidisciplinary approach to the governance of generative AI systems, and global alignment in developing best practices.

Photos: Participants at the FPF Roundtable on Navigating Governance Frameworks for Generative AI Systems in the Asia-Pacific, July 21, 2023. Photos courtesy of the PDPC.

Other FPF Activities during PDP Week 

IAPP Asia Privacy Forum 2023

On July 20, FPF organized an IAPP panel discussion titled “Unlocking Legal Bases for Processing Personal Data in APAC: A Practical Guide,” which built on FPF’s year-long research project on consent and other legal bases for processing personal data in the APAC region – the final report of which was released in November 2022.

Moderator Josh Lee Kok Thong led the discussion, in which panelists Deputy Commissioner Denise Wong, Deputy Commissioner Leandro Y. Aguirre, Arianne Jimenez, and David N. Alfred (Co-Head, Data Protection, Privacy & Cybersecurity Practice, Drew & Napier) explained the challenges faced by practitioners and regulators in addressing differing requirements for consent and alternative legal bases like “legitimate interests” across APAC data protection laws.

Photo: FPF Panel on Unlocking Legal Bases for Processing Personal Data in APAC, July 20, 2023.

FPF’s APAC office was also represented at two further panels during IAPP Asia Privacy Forum 2023. 

FPF Training on EU AI Act

On the sidelines of PDP Week, FPF held its inaugural in-person FPF Training session in the APAC region. The closed-door training session, which focused on the forthcoming EU AI Act and its impact on the APAC region, was held on July 20 and was conducted by Katerina Demetzou (Senior Counsel for Global Privacy, FPF), with interventions from Vincenzo Tiani drawing on his experience advising Members of the European Parliament (MEPs) on drafting the EU AI Act. The training provided a detailed analysis of the draft AI Act and explained the lifecycle of AI systems and the law-making process in the EU. The training drew close to 20 attendees, comprising regulators and representatives from industry and the legal sector.

Photo: FPF Training on the EU AI Act, July 20, 2023.

Conclusion

This was the second time that FPF organized events around PDP Week since the launch of FPF’s APAC office in 2021. The week’s events enabled FPF APAC to foster collaborative dialogues among regulators, industry, academia, and civil society from the APAC region and draw links with the EU and the US. FPF is grateful for the support of the PDPC and IAPP in organizing these activities.

Edited by Dominic Paulger and Josh Lee Kok Thong

Insights into Brazil’s AI Bill and its Interaction with Data Protection Law: Key Takeaways from the ANPD’s Webinar

Authors: Júlia Mendonça and Mariana Rielli

The following is a guest post to the FPF blog by Júlia Mendonça, Researcher at Data Privacy Brasil, and Mariana Rielli, Institutional Development Coordinator at Data Privacy Brasil. The guest blog reflects the opinion of the authors only. Guest blog posts do not necessarily reflect the views of FPF.

On July 6, 2023, the Brazilian National Data Protection Authority (ANPD) held a webinar entitled “The interplay between AI regulation and data protection.” The dialogue unfolded in the broader context of developments in AI regulation in Brazil, driven mainly by the bills that propose a Regulatory Framework for Artificial Intelligence in the country. The bills were jointly analyzed by a Commission of 18 jurists appointed by the Federal Senate, which held meetings, seminars, and public hearings before replacing them with a new draft proposal. At the beginning of May, the draft produced by the Commission became a new bill that is currently going through the legislative process: Bill PL nº2338 (AI draft bill). 

The ANPD, noting the need to harmonize any upcoming AI regulation with the existing data protection regime (as well as future enforcement matters), organized this webinar, in addition to having published a preliminary analysis of the AI draft bill. The discussions during the webinar offer a glimpse into AI lawmaking and policymaking in Brazil, one of the largest jurisdictions in the world, and one that is also covered by a general data protection law applicable to personal data processed in the context of an AI system. This brief blog post outlines the main topics discussed during the event, particularly the interplay between the current AI draft bill and Brazil’s General Data Protection Law (LGPD).

The webinar’s opening welcomed Waldemar Gonçalves (President, ANPD, Brazil), Eduardo Gomes (Senator of the Republic, Brazil), and Estela Aranha (Special Advisor, Ministry of Justice and Public Security, Brazil). The panel that followed was formed by representatives of the National Data Protection Council (CNPD) – a multisectoral advisory body, part of the ANPD structure – namely, Ana Paula Bialer (Founding Partner, Bialer Falsetti Associados, Brazil), Bruno Bioni (Director and Founder, Data Privacy Brasil), Fabrício da Mota (Vice President, Conselho Federal da OAB, Brazil), and Laura Schertel (Visiting Researcher, Goethe Universität Frankfurt; and private law professor and lawyer, Brazil/EU).

Key representatives highlight the need for ongoing harmonization between AI regulation and data protection law in Brazil 

As the President of the ANPD, Waldemar Gonçalves highlighted the Authority’s ongoing work on the AI agenda, noting that data protection rules under the LGPD are closely interconnected with those provided for in the AI draft bill, such as with regard to the right to information. With such similarities in mind, Gonçalves noted the need for harmonization between different tools, such as the Data Protection Impact Assessment (DPIA) and the Algorithm Impact Assessment (AIA). 

Another initiative of the ANPD highlighted by Gonçalves as relevant to the AI agenda and the current AI regulatory efforts was the technical agreement between the Authority and the Latin American Development Bank (CAF), which will include a regulatory sandbox pilot program on data protection and AI.

The ANPD’s president closed his remarks by recalling the various recent cases in which data protection authorities around the world have spoken out on issues concerning AI-based systems, thereby reinforcing the importance of the ANPD assuming an active role in this discussion. Eduardo Gomes, rapporteur of the AI draft bill, started from the same premises in supporting the bill’s advancement alongside the president of the Senate, Rodrigo Pacheco. In addition to reinforcing the importance of the work of the Commission of Jurists in laying the groundwork for the debate in Brazil, he also recognized the need to foster other opportunities to “mature the subject.”

Concluding the opening panel, Estela Aranha focused her presentation on algorithmic discrimination in the context of the interplay between AI and existing data protection norms. Aranha cited examples related to data mining and how the resulting massive collection of data can generate a wide variety of risks, including risks of discrimination, going beyond the most obvious examples of sensitive and inferred data. The relevance of this point stems from the fact that the proposed AI draft bill is quite detailed, both in the definitions and the obligations it creates, with regard to direct and indirect discrimination potentially created or enhanced by AI systems in the Brazilian context. Finally, Aranha also reaffirmed the Ministry of Justice’s support for the Bill. 

A deeper dive into the proposed AI draft bill and possible future(s) of AI regulation  

The following panel took a deeper look at the proposed AI draft bill and some of its specific provisions. The first panelist, Ana Paula Bialer, highlighted that there is already a robust framework for data protection that grants data subjects greater control over their data, based on the principle of “informational self-determination.” However, Bialer noted that there may be some difficulty in applying the rationale of data protection to AI: not in the sense that the data used is presumably unprotected, but rather that there should be a thorough exercise of extending and “revalorizing” the principles of the LGPD, combined with a review of the set of rights put in place in the context of AI systems. 

Turning to the current draft bill, Bialer also considered that the meaning of a human-centered approach can differ across applications of AI in varying socio-economic contexts, illustrating her point with the example of recruitment and hiring and the right to full employment in Brazil. Bialer concluded by reaffirming the benefits that AI can bring to social and economic development in the country, as well as to the exercise of fundamental rights. In this context, Bialer welcomed the ANPD’s regulatory sandbox initiative and expressed support for a strongly risk-based approach to AI regulation.

Bruno Bioni began by emphasizing the importance of a dose of skepticism toward the broader debate – both on AI and on AI regulation – especially in a scenario where an almost “apocalyptic” narrative around AI continues to gain notoriety. This is important because, in Bioni’s opinion, such discourse may end up underestimating the regulatory tools that already exist. The field of personal data protection has already provided positive and negative lessons when it comes to an object of regulation that is very plastic and polyvalent, “with a regulatory mission that is transversal and not sectoral.”

Bioni continued by pointing out that the intersection of data protection, AI regulation, and governance is closely related to the idea of a “toolbox” that opens opportunities for more collaborative, collective regulatory production, relying on companies themselves to participate and, to some extent, be rewarded, for example, when they demonstrate a good level of accountability. 

Among the various existing tools and the ways they can support each other, Bioni highlighted Algorithmic Impact Assessments (AIAs) and Data Protection Impact Assessments (DPIAs) as documentation that can feed into each other in a way that optimizes both. The ANPD has already positioned the DPIA prescribed by the LGPD as an instrument to be better regulated and standardized, which, for the expert, will be a significant advancement, even in a hypothetical scenario where it takes a long time for an AI regulation to be passed. 

According to Bioni, it is for this reason that data protection authorities around the world have led enforcement actions in the absence of AI laws or authorities created with this specific mission. Bioni concluded his remarks by pointing out that it is essential to think about a more collective, networked governance approach.

Fabrício da Mota Alves focused on the issue of institutional arrangements and on situating future legislation within a regulatory environment founded on the administrative action of the Brazilian State. Fabrício pondered the possibility that, following other countries, the ANPD will take on some degree of administrative action related to AI (supervisory and sanctioning, in addition to regulation and awareness-raising), and reinforced the call for the ANPD to build a very robust regulatory environment. Above all, there is a call for formal protocols so that companies and experts can understand the limits and scope of the ANPD’s actions in this dynamic scenario.

Praising the webinar as one of the first and most substantive discussions to take place outside the legislative environment, Alves emphasized that it is imperative that, also in the context of regulating and enforcing AI-related cases (regardless of specific frameworks), the Brazilian ANPD maintain the stance it has adopted so far, with broad public participation, hearings, public consultations, and processes that are open to criticism from all affected sectors.

What’s next for the Brazilian AI bill?

Brazil’s AI draft bill is in its early stages, although it is already the result of lengthy discussions by the expert committee assigned to prepare a new draft in 2022. The expectation is that it will now be analyzed by a special committee of parliamentarians designated specifically to debate the Bill, with the prospect of new rounds of public hearings. After the text is approved by the plenary of the Brazilian Senate, the proposal must still pass through the Chamber of Deputies, the reviewing house, until a common text is reached, which will then be sanctioned by the President of the Republic.

The whole webinar, in Portuguese, can be watched here.

The First Japan Privacy Symposium: G7 DPAs discussed their approach to rein in AI, and other regulatory priorities

The Future of Privacy Forum and S&K Brussels hosted the first Japan Privacy Symposium in Tokyo, on June 22, 2023, following the G7 Data Protection and Privacy Commissioners roundtable. The Symposium brought global thought leadership on the interaction of data protection and privacy law with AI, as well as insights into the current regulatory priorities of the G7 Data Protection Authorities (DPAs) to an audience of more than 250 in-house privacy leaders, lawyers, consultants and journalists from Japan and the region.

The program started with a keynote address from Commissioner Shuhei Ohshima (Japan’s Personal Information Protection Commission), who shared details about the results of the G7 DPAs Roundtable from the day before. Two panels followed, featuring Rebecca Kelly Slaughter (Commissioner, U.S. Federal Trade Commission), Wojciech Wiewiórowski (European Data Protection Supervisor, EU), Philippe Dufresne (Federal Privacy Commissioner, Canada), Ginevra Cerrina Feroni (Vice President of the Garante, Italy), John Edwards (Information Commissioner, UK), and Bertrand du Marais (Commissioner, CNIL, France). Jules Polonetsky, FPF CEO, and Takeshige Sugimoto, Managing Partner at S&K Brussels and FPF Senior Fellow, hosted the Symposium. 

The G7 DPA Agenda is built on three pillars: Data Free Flow with Trust, emerging technologies, and enforcement cooperation

The DPAs of the G7 nations started to meet annually in 2020, following an initiative of the UK’s Information Commissioner’s Office during the UK’s G7 Presidency that year. This is a new venue for international cooperation among DPAs, limited to Commissioners from Canada, France, Germany, Italy, Japan, the United Kingdom, the United States, and the European Union. Throughout the year, the DPAs maintain a permanent channel of communication and implement a work plan adopted during their annual Roundtable.

In his keynote at the Japan Privacy Symposium, Commissioner Shuhei Ohshima laid out the results of this year’s Roundtable, held in Tokyo on June 20 and 21. The Commissioner highlighted three pillars guiding the group’s cooperation this year: (I) Data Free Flow with Trust (DFFT), (II) emerging technologies, and (III) enforcement cooperation. 

The G7 Commissioners’ Communique expressed overall support for the DFFT political initiative, welcoming the reference to DPAs as stakeholders in the future Institutional Arrangement for Partnership (IAP), a new structure the G7 Digital Ministers announced earlier in April to operationalize the DFFT. However, in the Communique, the G7 DPAs emphasized that they “must have a key role in contributing on topics that are within their competence in this Arrangement.” It is noteworthy that, among their competencies, most G7 DPAs have the authority to order the cessation of data transfers across borders if legal requirements are not met (see, for instance, this case from the CNIL – the French DPA, this case from the European Data Protection Supervisor, or this case from the Italian Garante). 

The IAP currently seems to reserve a key role for governments themselves, in addition to stakeholders and “the broader multidisciplinary community of data governance experts from different backgrounds,” according to Annex I of the Ministerial Declaration announcing the Partnership. The DPAs are singled out only as an example of such experts. 

In the Action Plan adopted in Tokyo, the G7 DPAs included clues as to how they see the operationalization of DFFT playing out: through interoperability and convergence of existing transfer tools. As such, they endeavor to “share knowledge on tools for secure and trustworthy transfers, notably through the comparison of Global Cross-Border Privacy Rules (CBPR) and EU certification requirements, and through the comparison of existing model contractual clauses.” (In an analysis touching broadly beyond the G7 jurisdictions, the Future of Privacy Forum published a report earlier this year emphasizing many commonalities, but also some divergence, among three sets of model contractual clauses proposed by the EU, the Iberoamerican Network of DPAs, and ASEAN).

Arguably, though, DFFT was not the main point on the G7 DPAs’ agenda. They adopted a separate and detailed Statement on generative AI. In his keynote, Commissioner Shuhei Ohshima remarked that “generative AI adoption has increased significantly.” In order to promote trustworthy deployment and use of the new technology, “the importance of DPAs is increasing also on a daily basis,” the Commissioner added.

Generative AI is not being deployed in a legislative void, and data protection law is the immediately applicable legal framework

Top of mind for G7 data protection and privacy regulators is AI, and generative AI in particular. “AI is not a law-free zone,” said FTC Commissioner Slaughter during her panel at the Symposium, being very clear that “existing laws on the books in the US and other jurisdictions apply to AI, just like they apply to adtech, [and] social media.”  This is apparent across the G7 jurisdictions: in March, the Italian DPA issued an order against OpenAI to stop processing personal data of users in Italy following concerns that ChatGPT breached the General Data Protection Regulation (GDPR); in May, the Canadian Federal Privacy Commissioner opened an investigation into ChatGPT jointly with provincial privacy authorities; and, in June, Japan’s PIPC issued an administrative letter warning OpenAI that it needs to comply with requirements from the Act on the Protection of Personal Information, particularly regarding the processing of sensitive data.

At the Japan Privacy Symposium, Ginevra Cerrina Feroni, VP of the Garante, shared the key concerns guiding the agency’s enforcement action against OpenAI, which was the first such action in the world. She highlighted several risks, including a lack of transparency about how OpenAI collects and processes personal data to deliver the ChatGPT service; uncertainty regarding a lawful ground for processing personal data, as required by the GDPR; a lack of avenues to comply with the rights of data subjects, such as access, erasure, and correction; and, finally, the potential exposure of minors to inappropriate content, due to inadequate age gating. 

After engaging in a constructive dialogue with OpenAI, the Garante suspended the order, seeing improvements in the previously flagged aspects. “OpenAI published a privacy notice to users worldwide to inform them how personal data is used in algorithmic training, and emphasized the right to object to such processing,” the Garante Vice President explained. She continued, noting that OpenAI “provided users with the right to reject their personal data being used for training the algorithms while using the service, in a dedicated way that is more easily accessible. They also enabled the ability of users to request deletion of inaccurate information, because – and this is important – they say they are technically unable to correct errors.” However, Vice President Cerrina Feroni mentioned that the investigation is ongoing and that the European Data Protection Board is currently coordinating actions among EU DPAs on this matter. 

The EDPS added that purpose limitation is among his chief concerns with services like ChatGPT, and generative AI more broadly. “Generative AI is meant to advance communication with human beings, but it does not provide fact-finding or fact-checking. We should not expect this as a top feature of Large Language Models. These programs are not an encyclopedia; they are just meant to be fluent, hence the rise of possibilities for them to hallucinate,” Supervisor Wiewiórowski said. 

Canadian Privacy Commissioner Philippe Dufresne emphasized that how we relate to generative AI from a privacy regulatory perspective “is an international issue.” Commissioner Dufresne also added, “a point worth repeating is that privacy must be treated as a fundamental right.” This is important, as “when we talk about privacy as a fundamental right, we point out how privacy is essential to other fundamental human rights within a democracy, like freedom of expression and all other rights. If we look at privacy like that, we must see that by protecting privacy, we are protecting all these other rights. Insofar as AI touches on these, I do see privacy being at the core of all of it,” Commissioner Dufresne concluded.

The G7 DPAs’ Statement on Generative AI outlines their key concerns, such as a lack of legal authority to process personal data at all stages

In the aforementioned Generative AI Statement, the G7 data protection regulators laid out their main concerns in relation to how personal data is processed through this emerging type of computer program and service. First and foremost, the commissioners are concerned that processing of personal data lacks legal authority during all three relevant stages of developing and deploying generative AI systems: for the data sets used to train, validate and test generative AI models; for processing personal data resulting from the interactions of individuals with generative AI tools during their use; and, for the content that is generated by generative AI tools.

The commissioners also highlighted the need for security safeguards to protect against threats and attacks that seek to invert generative AI models, and for measures that would technically prevent the extraction or reproduction of personal data originally processed in the datasets used to train the models. They also advocated for mitigation and monitoring measures to ensure that personal data created by generative AI is accurate, complete, and up-to-date, as well as free from discriminatory, unlawful, or otherwise unjustifiable effects.

It is clear that data protection and privacy commissioners are proactive about ensuring generative AI systems are compatible with privacy and data protection laws. Only two weeks after their roundtable in Tokyo, it was reported that the US FTC had initiated an investigation against OpenAI. And this proactive approach is intentional. As the UK’s Information Commissioner, John Edwards, made clear, the commissioners are “keen to ensure” that they “do not miss this essential moment in the development of this new technology in a way that [they] missed the moment of building the business models underpinning social media and online advertising.” “We are here and watching,” he said.

Regardless of the adoption of new AI-focused laws, DPAs would remain central to AI governance

The Commissioners also discussed the wave of legislative initiatives targeting AI in their jurisdictions. AI systems are not built and deployed in a legislative void: data protection law is largely and immediately relevant, as are consumer protection law, product liability rules, and intellectual property law. In this environment, what is the added value of specific, targeted legislation addressing AI?

Addressing the EU AI Act proposal, European Data Protection Supervisor Wiewiórowski noted that the EU did not initiate the legislation because the legislator thought there was a vacuum. “We saw that there were topics to be addressed more specifically for AI systems. There was a question whether we approach it as a product, service, or some kind of new phenomenon as far as legislation is concerned,” he added. As for the role of the DPAs once the AI Act is adopted, he brought up the fact that in the EU, data protection is a fundamental right, which means that all legislation or policy solutions governing the processing of personal data in one way or another must be looked at through this lens. As the supervisory authorities tasked with guaranteeing this fundamental right, DPAs will continue to play a role.

The framework ensuring the enforcement of the AI Act is still under debate, as EU Member States are tasked with designating competent national authorities, and the European Parliament hopes to create a supranational collaborative body to play a role in enforcement. However, one thing is certain: in the proposal, the EDPS has been designated the competent authority to ensure that EU agencies and bodies comply with the EU AI Act. 

The CNIL seems to be eyeing designation as an EU AI Act enforcer as well. Commissioner du Marais pointed out that “since 1978, the French Act on IT and Freedom has banned automated decisions. We have a fairly long and established body of case law.” Earlier this year, the CNIL created a dedicated department, staffed with data and computer scientists, to monitor how AI systems comply with legal obligations stemming from data protection law. “To be frank, we don’t know yet what will come out of the legislative process, but we have started to prepare ourselves. We have also been designated by domestic law as supervisory and certification authority for AI during the 2024 Olympic Games.” 

The Garante has a long track record of enforcing data protection law on algorithmic systems and decision-making that impact the rights of individuals. “The role of the Garante in safeguarding digital rights has always been prominent, even when the issue was not yet widely recognized by the public,” said Vice President Cerrina Feroni. Indeed, as shown by extensive research published last year by the Future of Privacy Forum, European DPAs have long enforced data protection law in cases where automated decision-making was central. The Garante has led significant investigations into several gig economy apps and the impact of their algorithms on people.

Canada is also in the midst of legislating on AI, having introduced a bill last year that is currently under debate. “There is similarity with the European proposal, but [the Canadian bill] focuses more on high impact AI systems and on preventing harms and biased outputs and decision-making. It provides significant financial fines,” Commissioner Dufresne explained. Under the bill, enforcement is currently assigned to the relevant ministry in the Canadian government. The Privacy Commissioner explained that regulatory activity would be coordinated with his office, as well as with the competition, media, and human rights regulators in Canada. When contributing recommendations during the legislative process, Commissioner Dufresne noted that he suggested “privacy to be a key principle.” In light of his vision that privacy, as a fundamental right, is essential to the realization of other fundamental rights, the Commissioner had a clear message that “the DPAs need to be front and center” of the future of AI governance.

UK Commissioner Edwards echoed the value of entrenched collaboration among digital regulators, adding that the UK already has an official “Digital Regulators Cooperation Forum,” established with its own staff. The entity “is important to provide a coherent regulatory framework,” he said.

Children’s privacy is a top priority across borders, with new regulatory approaches showing promising results

One of the key concerns that the G7 DPAs have in relation to generative AI is how the new services are dealing with children’s privacy. In fact, the regulators have made it one of their top priorities to broadly pursue the protection of children’s privacy when regulating social media services, targeted advertising, or online gaming, among others. 

Building on a series of recent high-profile cases brought by the FTC in this space, Commissioner Slaughter couldn’t have been clearer: “Kids are a huge priority issue for the FTC.” She reminded the audience that COPPA (Children’s Online Privacy Protection Act) has been around for more than two decades, and it is one of the strongest federal privacy laws in the US: “The FTC is committed to enforcing it aggressively.” Commissioner Slaughter explained that the FTC’s actions, such as their recent case against Epic Games, include considerations related to teenagers as well; even though teens are not technically covered by COPPA protections, they are covered by the FTC’s “unfair practices” doctrine. 

UK Commissioner John Edwards gave a detailed account of the impact of the UK’s Age Appropriate Design Code, launched by his office in 2020, on the design of online services provided to children. “We have seen genuine changes, including privacy settings being automatically set to very high for children. We have seen children and parents and carers being given more control over privacy settings. And we have seen that children are no longer nudged to lower privacy settings, with clearer tools and steps in place for them to exercise their data protection rights. We have also seen ads blocked for children,” Commissioner Edwards said, pointing out that these are significant improvements for the online experience of children. These results have been obtained primarily through a collaborative approach with the service providers, who have implemented changes after their services were subject to audits conducted by the regulator. 

Children’s and teenagers’ privacy is also top of mind for the CNIL. Alongside a series of guidance documents, recommendations, and actions, the French regulator is adding another layer to its approach – digital education. “We have made education a strategic priority. We have a partnership with the Ministry of Education and we have available a platform to certify digital skills for children, as well as with resources for kids and parents,” Commissioner du Marais said. Regarding regulatory priorities, he emphasized attention to age verification tools. Among the principles the French regulator favors for age verification are no direct collection of identity documents, no age estimates based on web browsing history, and no processing of biometric data to recognize an individual. The CNIL has asked websites not to carry out age verification themselves, and instead to rely on third-party solutions. 

The discussions of the G7 DPA Commissioners who participated in the first edition of the Japan Privacy Symposium laid out a vibrant and complex regulatory landscape, centered around new challenges posed to societal values and rights of individuals by AI technology, but also making advancements in perennial topics like cross-border data transfers and children’s privacy. More meaningful and deeper enforcement cooperation is to be expected among the G7 Commissioners, whose Action Plan espoused their commitment to move towards constant exchanges related to enforcement actions and to revitalize existing global enforcement cooperation networks, like GPEN (Global Privacy Enforcement Network). Next year, the G7 DPA Commissioners will meet in Rome. 

Editor: Alexander Thompson