The Digital Personal Data Protection Act of India, Explained

Authors: Raktima Roy, Gabriela Zanfir-Fortuna

Raktima Roy is a Privacy Attorney with several years of experience in India and an FPF Global Privacy Intern; she holds an LLM in Law and Technology from Georgetown University.

The Digital Personal Data Protection Act of India (DPDP) sprinted through its final stages last week after several years of debates, postponements, and negotiations, culminating in its publication in the Official Gazette on Friday, August 11, 2023. In just over a week, the Bill passed the lower and upper Houses of Parliament and received Presidential assent. India, the most populous country in the world with more than 1.4 billion people and the world’s largest democracy, is the 19th G20 member to pass a comprehensive personal data protection law – which it did during its tenure holding the G20 Presidency.

The adoption of the DPDP Bill in the Parliament comes six years after Justice K.S. Puttaswamy v Union of India, a landmark case in which the Supreme Court of India recognized a fundamental right to privacy, including informational privacy, within the “right to life” provision of India’s Constitution. In this judgment, a nine-judge bench of the Supreme Court urged the Indian Government to put in place “a carefully structured regime” for the protection of personal data. As part of India’s ongoing efforts to create this regime, there have been several rounds of expert consultations and reports, and two previous versions of the bill were introduced in the Parliament in 2019 and 2022. A brief history of the law is available here.

The law as enacted is transformational. It has a broad scope of application, borrowing from the EU’s General Data Protection Regulation (GDPR) approach when defining “personal data” and extending coverage to all entities that process personal data, regardless of size or private status. The law also has significant extraterritorial application. The DPDP creates far-reaching obligations: it imposes narrowly defined lawful grounds for processing any personal data in a digital format; establishes purpose limitation obligations and their corollary – a duty to erase the data once the purpose is met – with seemingly no room left for secondary uses of personal data; and creates a set of rights for individuals whose personal data are collected and used, including rights to notice, access, and erasure. The law also creates a supervisory authority, the Data Protection Board of India (Board), which has the power to investigate complaints and issue fines, but not the power to issue guidance or regulations.

At the same time, the law provides significant exceptions for the central government and other government bodies, with the degree of exemption depending on their function (such as law enforcement). Other exemptions include those for most publicly available personal data, processing for research and statistical purposes, and processing of the personal data of foreigners by companies in India pursuant to a contract with a foreign company (such as outsourcing companies). Some processing by startups may also be exempt, if notified by the government. The Act also empowers the central government to act upon a notification by the Board and request access to any information from an entity processing personal data, from an intermediary (as defined by the Information Technology Act, 2000 – the “IT Act”), or from the Board, as well as to order suspension of public access to specific information. The Central Government is also empowered to adopt a multitude of “rules” (similar to regulations under US state privacy laws) that detail the application of the law.

It is important to note that the law will not come into effect until the government provides notice of an effective date. The DPDP Act does not contain a mandated transitional period akin to the two-year gap between the GDPR’s adoption in 2016 and its application in May 2018. Rather, it empowers the Government to determine the dates on which different sections of the Act will come into force, including the sections governing the formation of the new Board that will oversee compliance with the law.

This blog will lay out the most important aspects of the DPDP Act, understanding nonetheless that many of its key provisions will be shaped through subsequent rules issued by the central government, and through practice.

  1. The DPDP Act Applies to “Data Fiduciaries” and “Significant Data Fiduciaries,” and Provides Rights for “Data Principals”

The DPDP Act seeks to establish a comprehensive national framework for processing personal data, replacing a much more limited data protection framework under the IT Act and its rules, which currently provide basic protections to limited categories of “sensitive” personal data such as sexual orientation and health data. The new law, by contrast, covers all “personal data” (defined as “any data about an individual who is identifiable by or in relation to such data”) and does not contain heightened protection for any special category of data. The definition of “personal data” thus relies on the broad “identifiability” criterion, similar to the GDPR. Only “digital” personal data, or personal data collected through non-digital means and subsequently digitized, is covered by the law.

The DPDP Act uses the term “data principal” to refer to the individual to whom the personal data relates (the equivalent of “data subject” under the GDPR). A “data fiduciary” is the entity that determines the purposes and means of processing of personal data, alone or in conjunction with others, and is the equivalent of a “data controller” under the GDPR. While the definition of data fiduciaries includes a reference to potential joint fiduciaries, the Act does not provide any other details about this relationship.

The definition of fiduciaries does not distinguish between private and public, natural and legal persons, technically extending to any person as long as the other conditions of the law are met. 

Specific Fiduciaries, Public or Private, Are Exempted or May Be Exempted from the Core Obligations of the Act

The law includes some broad exceptions for government entities in general, while others apply to specific processing purposes. For instance, the law allows the government to exempt, upon notice, activities that are in the interests of the sovereignty and integrity of India, the security of the State, friendly relations with foreign States, maintenance of public order, or preventing incitement to commit crimes. Justice Srikrishna, who, as head of the expert committee set up to recommend a data protection law in India, led the creation of the first draft of the law in 2017, has been critical of these government exemptions, as have several Members of Parliament during the legislative debate.

Some targeted exceptions also apply to companies, and are either well defined in the law or left to the government for specification. Under what can be called an “outsourcing exception,” the Act exempts companies based in India that process the personal data of people outside of India, pursuant to a contract with a company based outside of India, from core DPDP obligations, including the rights of access and erasure normally held by data principals. Instead, such companies are largely required only to comply with data security obligations.

In addition, the government is empowered to exempt any category of data fiduciaries from some or all of the law, with the DPDP itself referring to “startups” in this context. These are fairly broad provisions and do not include any guidance on how they will apply or who could benefit from them. The government will need to make a specific designation for this exception to operate.

Significant Data Fiduciaries Have Significant New Obligations, such as DPOs, DPIAs and Audits 

The DPDP Act empowers the Government to designate any data fiduciary or class of data fiduciaries as a “Significant Data Fiduciary” (SDF), which is done using a series of criteria that lack quantifiable thresholds. These factors range from assessing characteristics of the processing operations (volume and sensitivity of personal data processed and the risk posed to the rights of data principals), to broader societal and even national sovereignty concerns (potential impact of the processing on the sovereignty and integrity of India; risk to electoral democracy; security of the state; and public order).

The designation of companies as SDFs is consequential because it comes with enhanced obligations. Chief among them, SDFs will need to appoint a Data Protection Officer (DPO), who must be based in India and be the point of contact for a required grievance redressal mechanism. SDFs must also appoint an independent data auditor to carry out data audits and evaluate their compliance with the DPDP Act, and must undertake periodic Data Protection Impact Assessments.

It is important to note that appointing a DPO is not an obligation for all data fiduciaries. However, all fiduciaries are under an obligation to establish a “readily available” mechanism for redressing grievances by data principals in a timely manner. Operationalizing such a process will usually call for an internal privacy compliance function or a dedicated privacy officer.

The DPDP Act Recognizes the Role of Data Processors

Data processors are recognized by the DPDP Act, which makes it clear that fiduciaries may engage, appoint or otherwise involve processors to process personal data on their behalf “only under a valid contract” (Section 8(2)). There are no prescribed rules for what a processing contract should entail. However, the DPDP Act places all obligations on data fiduciaries, which remain liable for complying with the law. 

Data fiduciaries remain liable for overall compliance, regardless of any contractual arrangement to the contrary with data processors. The DPDP Act requires data fiduciaries to mandate that a processor delete data when a data principal withdraws consent, and to be able to share information about the processors they have engaged when requested by a data principal.

  2. The DPDP Act Has Broad Extraterritorial Effect and Almost No Restrictions for International Data Transfers

The DPDP Act applies to the processing of “digital personal data” within India. Importantly, the definition of “data principal” does not include any condition related to residence or citizenship, meaning that fiduciaries based in India who process the personal data of foreigners within the territory of the country may conceivably be covered by the Act (outside of the “outsourcing exception” mentioned above).

The Act also applies extraterritorially to processing of digital personal data outside India, if such processing is in connection with any activity related to the offering of goods or services to data principals within India. The extraterritorial effect is similar in scope to the GDPR’s, and it may leave room for a broader interpretation through its inclusion of “any activity” connected to the offering of goods or services.

The DPDP Act does not currently restrict the transfer of personal data outside of India. It reverses the typical paradigm of international data transfer provisions in laws like the GDPR by presuming that transfers may occur without restrictions, unless the Government specifically restricts transfers to certain countries (blacklisting) or enacts any other form of restriction (Section 16). No criteria for such restrictions are mentioned in the law. This is a significant departure from previous versions of the Bill, which at one point contained data localization obligations (2018), and at another point evolved into “whitelisting” of countries (2022).

It should also be noted that other existing sectoral laws (e.g., those governing specific industries like banking and telecommunications) already contain restrictions on cross-border transfers of particular kinds of data. The DPDP Act clarifies that existing localization mandates will not be affected by the new law. 

  3. Consent Remains the Primary Means for Lawful Processing of Personal Data Under the Act

Data fiduciaries are under an obligation to process personal data for a lawful purpose and only if they either obtain consent from the data principal for that purpose or identify a “legitimate use” consistent with Section 4. This is conceptually similar to the GDPR’s approach of requiring a lawful ground before personal data can be collected or otherwise processed. However, in contrast to the GDPR (which provides for six possible lawful grounds), the DPDP Act includes only two: strictly defined “consent” and “legitimate use.”

Which lawful ground is used for a processing operation is consequential. Based on the wording of the Act and in the absence of further specification, the obligations of fiduciaries to give notice and respond to access, correction, and erasure requests (see Section 4 of this blog) apply only if the processing is based on consent or on voluntary sharing of personal data by the principal.

Valid Consent Has Strict Requirements, Is Withdrawable, and Can Be Exercised Through Consent Managers

The DPDP Act requires that consent for processing of personal data be “free, specific, informed, unconditional and unambiguous with a clear affirmative action.” These conditions are as strict as those required under the GDPR, highlighting that the people whose personal data are processed must be free to give consent, and their consent must not be tied to other conditions.

In order to meet the “informed” criterion, the Act requires that notice be given to principals before or at the time they are asked to give consent. The notice must include information about the personal data to be collected, the purpose for which it will be processed, the manner in which data principals may exercise their rights under the DPDP Act, and how to make a complaint to the Board. Data principals must be given the option to receive this information in English or in one of the local languages specified in the Constitution.

The DPDP Act addresses the issue of legacy data for which companies may have received consent prior to the enactment of the law. Fiduciaries should provide the same notice to these data principals as soon as “reasonably practicable.” In that case, however, the data processing may continue until the data principal withdraws consent. 

Data fiduciaries may only process personal data for the specific purpose provided to the data principal and must obtain separate consent to process the data for a new purpose. In practice, this will make it difficult for data fiduciaries to rely on “bundled consent.” Provisions around “secondary uses” of personal data or “compatible purposes” are not addressed in the Act, making the purpose limitation requirements strict. 

Data principals may also withdraw their consent at any time – and data fiduciaries must ensure that the process for withdrawing consent is as straightforward as that for giving consent. Once consent is withdrawn, personal data must be deleted unless a legal obligation to retain data applies. Additionally, data fiduciaries must ask any processors to cease processing any personal data for which consent has been withdrawn, in the absence of legal obligations imposing data retention.

The DPDP Act allows principals to give, manage, review, and withdraw their consent through a “Consent Manager,” which will be registered with the Board and must provide an accessible, transparent, and interoperable platform. Consent Managers are part of India’s “Data Empowerment And Protection Architecture” policy, and similar structures have already been functional for some time, such as in the financial sector. Under the DPDP Act, Consent Managers will be accountable to data principals and act on their behalf as per prescribed rules. The Government will notify (in the Gazette) the conditions necessary for a company to register as a Consent Manager, which may include fulfilling minimum technical or financial criteria.

“Legitimate Uses” Are Narrowly Defined and Do Not Include Legitimate Interests or Contractual Necessity

As an alternative to consent, all other lawful grounds for processing personal data have been amalgamated under the “legitimate uses” section, including some grounds of processing that appeared under a “reasonable purposes” category in previous iterations of the bill. It is notable that the list of “legitimate uses” in Section 7 of the Act does not include provisions similar to the “contractual necessity” and “legitimate interests” grounds found in GDPR-style data protection laws, leaving private fiduciaries limited options for grounding processing of personal data outside of consent, including for routine or necessary processing operations.

Among the defined “legitimate uses”, the most relevant ones for processing personal data outside of a government, emergency or public health context, are the “voluntary sharing” of personal data under Section 7(a) and the “employment purposes” use under Section 7(i). 

The lawful ground most likely to raise interpretation questions is “voluntary sharing.” It allows a fiduciary to process personal data for a specified purpose for which a principal has voluntarily provided their personal data to the data fiduciary (presumably, provided it without the fiduciary seeking to obtain consent), and for which the principal has not indicated to the fiduciary an objection to the use of the personal data. For instance, one of the illustrations included in the law to explain Section 7(a) is the hypothetical of a buyer requesting that a receipt of purchase at a store be sent to her phone number, permitting the store to use the number for that purpose. There is a possibility that subsequent rules may expand this “legitimate use” to cover instances of “contractual necessity” or “legitimate interests.”

A fiduciary may also process personal data without consent for purposes of employment or those related to safeguarding the employer from loss or liability, such as prevention of corporate espionage, maintenance of confidentiality of trade secrets, intellectual property, classified information or provision of any service to employees.

  4. Data Principals Have a Limited Set of “Data Subject Rights,” But Also Obligations

The DPDP Act provides data principals a set of enumerated rights, which is limited compared to those offered under modern GDPR-style data protection laws. The DPDP guarantees a right of access and rights to erasure and correction, in addition to a right to receive notice before consent is sought (similar to the right to information in the GDPR). Missing, thus, are a right to data portability, a right to object to processing based on grounds other than consent, and the right not to be subject to solely automated decision-making.

Instead, the DPDP Act provides for two other rights – a right to “grievance redressal,” which entails the right to have an easily accessible point of contact provided by the fiduciary to respond to complaints from the principal, and a right to “appoint a nominee,” which permits the data principal to nominate someone who can exercise rights on their behalf in the event of death or incapacity.

Notably, the rights of access, erasure, and correction are limited to personal data processing based on consent or on the “voluntary sharing” legitimate use, which means that whenever government bodies or other fiduciaries rely on any of the other “legitimate uses” grounds, they will not need to reply to access or erasure/correction requests, unless further rules adopted by the government specify otherwise.

In addition, the right of access is quite limited in scope. It only gives data principals the right to request and obtain a summary of the personal data being processed and of the relevant processing activities (as opposed to obtaining a copy of the personal data), and the identities of all fiduciaries and processors with whom the personal data has been shared by the fiduciary, along with a summary of the data shared. However, Section 11 of the law leaves space for subsequent rules that may expand the information to which access must be given.

Data principals have the right to request erasure of personal data pursuant to Section 12(3), but it is important to highlight that erasure may also be required automatically – after the withdrawal of consent or when the specified purpose is no longer being served (Section 8(7)(a)). Similarly, correction, completion and updating of personal data can be requested by the principal, but must also occur automatically when the personal data is “likely to be used to make a decision that affects” the principal (Section 8(3)).  

Data Principals May Be Fined if They Do Not Comply With Their Obligations

Unlike the majority of international data protection laws, the DPDP Act imposes duties on data principals (Section 15), similar to Article 10 of Vietnam’s recently adopted Personal Data Protection Decree (titled “Obligations of data subjects”).

These obligations include, among others, a duty not to impersonate someone else while providing personal data for a specified purpose, not to suppress any material information while providing personal data for any document issued by the Government, and, significantly, not to register a false or frivolous grievance or complaint. Noncompliance may result in a fine (see clause 5 of the Schedule). Per expert analysis, this may hamper the submission of complaints with the Board.

  5. Fiduciaries Are Bound by a Principle of Accountability and Have Data Breach Notification Obligations

The DPDP Act does not articulate Principles of Processing, or Fair Information Practice Principles, but the content of several of its provisions puts emphasis on purpose limitation (as explained in previous sections of the blog) and on the principle of accountability.

Section 8 of the Act includes multiple obligations for data fiduciaries, all under an umbrella expectation in paragraph 1 that they are “responsible for complying” with the provisions of the Act and any subsequent implementation rules, both regarding processing undertaken by the data fiduciary and by any processor on its behalf. This specification echoes the GDPR accountability principle. In addition, data fiduciaries are under an obligation to implement appropriate technical and organizational measures to ensure the effective implementation of the law.

Data security is of particular importance, considering that data fiduciaries must both adopt reasonable security safeguards to prevent personal data breaches and notify the Board and each affected party if such breaches occur. The modalities and timeline of notification will be specified in subsequent implementation rules.

A final obligation of data fiduciaries to highlight is the requirement that they establish a “readily available” mechanism for redressing “grievances” by data principals in a timely manner. The grievance redressal mechanism is of utmost importance, considering that data principals cannot address the Board with a complaint until they “exhaust the opportunity of redressing” the grievance through this mechanism (Section 13(3)). The Act leaves determination of the time period for responding to grievances to delegated legislation, and there may be different time periods for different categories of companies.

  6. Fiduciaries Have a Mandate to Verify Parental Consent for Processing Personal Data of Minors under 18

The DPDP Act creates significant obligations concerning the processing of children’s personal data, with “children” defined as minors under 18 years of age, without any distinguishing sub-category for older children or teenagers. As a matter of principle, data fiduciaries are forbidden to engage in any processing of children’s data that is “likely to cause any detrimental effect on the well-being of the child.”

Data fiduciaries are under an obligation to obtain verifiable parental consent before processing the personal data of any child. Similarly, consent must be obtained from a lawful guardian before processing the data of a person with disability. This obligation, which is increasingly common to privacy and data protection laws around the world, may create many challenges in practice. A good resource for untangling its complexity and applicability is FPF’s recently published report and accompanying infographic, “The State of Play: Is Verifiable Parental Consent Fit For Purpose?”

Finally, the Act also includes a prohibition on data fiduciaries engaging in tracking or behavioral monitoring of children, or targeted advertising directed at children. Similar to many other provisions of the Act, the government may issue exemptions from these obligations for specific classes of fiduciaries, or may even lower the age of digital consent for children when their personal data is processed by designated data fiduciaries.

  7. The Act Creates a Data Protection Board to Enforce the Law, But Reserves Regulatory Powers For the Government

The DPDP Act empowers the Government to establish the Board as an independent agency that will be responsible for enforcing the new law. The Board will be led by a Chairperson and will have Members appointed by the Government for a renewable two-year mandate.

The Board is vested with the power to receive and investigate complaints from data principals, but only after the principal has exhausted the internal grievance redressal mechanism set up by the relevant data fiduciaries. The Board can issue binding orders against those who breach the law, direct urgent measures to remediate or mitigate a data breach, impose financial penalties, and direct parties to mediation.

While the Board is granted “the same powers as are vested in a civil court” – including summoning any person, receiving evidence, and inspecting any documents (Section 28(7)) – the Act specifically excludes any access to civil courts in the application of its provisions (Section 39), creating a de facto limitation on effective judicial remedies of the kind provided by Article 82 GDPR. The Act grants any person affected by a decision of the Board the right to pursue an appeal before an Appellate Tribunal, designated as the Telecom Disputes Settlement and Appellate Tribunal established under other Indian law.

Penalties for breaches of the law are stipulated in a Schedule attached to the DPDP Act and range from the equivalent in rupees of approximately USD 120 to USD 30.2 million. The Board can determine the penalty amount from a preset range based on the offense.

However, the Board does not have the power to pass regulations to further specify details related to the implementation of the Act. The Government is conferred broad discretion in adopting delegated legislation to further specify the provisions of the Act, including clarifying modalities and timelines for fiduciaries to respond to requests from data principals, the requirements of valid notice for obtaining a data principal’s consent for processing of data, details related to data breach notifications, and more. The list of operational details that may be specified by the Government in subsequent rules is open-ended and detailed in Section 40(2)(a) to (z). Subsection (z) of this provision provides a catch-all permitting the Central Government to prescribe rules on “any other matter” related to the implementation of the Act. 

In practice, it is expected that it will take time for the new Board to be established and for rules to be issued in key areas for compliance. 

Besides rulemaking power, the Central Government has another significant role in the application of the law. Pursuant to Section 36, it can “call for” any information it wants (presumably including personal data) from the Board, data fiduciaries, and “intermediaries” as defined by the IT Act. No further specifications are made in relation to such requests, other than that they must be made “for the purposes of the Act.” This provision is broader and subject to fewer restrictions than the provisions on data access requests in the existing IT Act and its subsidiary rules.

Additionally, the Central Government may also order or direct any governmental agency and any “intermediary” to block information for access by the public “in the interests of the general public.” To issue such an order, the Board will need to have sanctioned the data fiduciary concerned at least twice in the past, and the Board must advise the Central Government to issue such an order. An order blocking public access may refer to “any computer resource” that enables data fiduciaries to offer goods or services to data principals within the territory of India. While it is now common among modern comprehensive data protection laws around the world for independent supervisory authorities to order erasure of personal data unlawfully processed, or to order international data transfers or sharing of personal data to cease if conditions of the law are not met, these provisions of the DPDP Act are atypical because the orders will come directly from the Government, and also because they more closely resemble online platform regulation than privacy law.

  8. Exceptions for Publicly Available Data and Processing for Research Purposes Are Notable for Training AI

Given that this law comes in the midst of a global conversation about how to regulate artificial intelligence and automated decision-making, it is critical to highlight provisions in the law that seem directed at facilitating development of AI trained on personal data. Specifically, the Act excludes from its application most publicly available personal data, as long as it was made publicly available by the data principal – for example, a blogger or a social media user publishing their personal data directly – or by someone else under a legal obligation to publish the data, such as personal data of company shareholders that regulated companies must publicly disclose by law.

Additionally, the Act exempts the processing of personal data necessary for research or statistical purposes (Section 17(2)(b)). This exemption is extremely broad, with only one limitation in the core text: the Act will still apply to research and statistical processing if the processing activity is used to make “any decision specific to the data principal.”

There is only one other instance in the DPDP Act where processing data to “make decisions” about a data principal is raised. Data fiduciaries are under an obligation to ensure the “completeness, accuracy and consistency” of personal data if it is used to make a decision that affects the data principal. In other words, while the Act does not provide for a GDPR-style right not to be subject to automated decision-making, it does require that when personal data are used to make any individual decisions, presumably including automated or algorithmic decisions, such data must be kept accurate, consistent, and complete.

Additionally, the DPDP Act remains applicable to any processing of personal data through AI systems, if the other conditions of the law are met, given the broad definitions of “processing” and of “personal data.” Further rules adopted by the Central Government or other notifications may provide more guidance in this regard.

Notably, the Act does not exempt processing of personal data for journalistic purposes, a fact criticized by the Editors’ Guild of India. In previous versions of the Bill, especially the expert version spearheaded by Justice Srikrishna in 2017, this exemption was present. It is still possible that the Central Government will address this issue through delegated legislation. 

Key Takeaways and Further Clarification

India’s data protection Act has been in the works for a significant period of time, and the passage of the law is a welcome step forward after the recognition of privacy as a fundamental right in India by the Supreme Court in its landmark Puttaswamy judgment.

While the basic structure of the law is similar to many other global laws like the GDPR and its contemporaries, India’s approach has its differences: more limited grounds of processing; wide exemptions for government actors; regulatory powers for the government to further specify the law and to exempt specific fiduciaries or classes of fiduciaries from key obligations; no baked-in definition of, or heightened protection for, special categories of data; and the rather unusual inclusion of powers for the Government to request access to information from fiduciaries, the Board, and “intermediaries,” as well as to block access by the public to specific information in “computer resources.”

Finally, we note that many details of the Act are still left to be clarified once the new Data Protection Board of India is set up and further rules for the specification of the law are drafted and officially notified. 

Editors: Lee Matheson, Dominic Paulger, Josh Lee Kok Thong

FPF at Singapore PDP Week 2023: Navigating Governance Frameworks for Generative AI Systems in the Asia-Pacific

Authors: Cheng Kit Pang, Elena Guañuna, Alistair Simmons, and Matthew Rostick

Cheng Kit Pang, Elena Guañuna, Alistair Simmons, and Matthew Rostick are FPF Global Privacy Interns.

From July 18 to July 21, 2023, the Personal Data Protection Commission (PDPC) of Singapore held its annual Personal Data Protection Week (PDP Week), which overlapped with the IAPP’s Asia Privacy Forum 2023.  

The Future of Privacy Forum (FPF)’s flagship event during PDP Week was a roundtable on the governance implications of generative AI systems in the Asia-Pacific (APAC) region. In organizing this event together with the PDPC, FPF brought together over 80 participants from industry, academia, the legal sector, and international organizations, as well as regulators from across the APAC region, Africa, and the Middle East.   

FPF Roundtable on Governance of Generative AI Systems in APAC

On July 21, FPF organized a high-level closed-door roundtable, titled “Navigating Governance Frameworks for Generative AI Systems in the Asia-Pacific.” The roundtable explored the issues raised by applications of existing and emerging AI governance frameworks in APAC to generative AI systems.

Dominic Paulger (Policy Manager, FPF APAC) kicked off the roundtable with a presentation on the existing governance and regulatory frameworks that apply to generative AI systems in the APAC region. The presentation highlighted that, to date, most major APAC jurisdictions have opted for “soft law” approaches to AI governance, such as developing ethical frameworks and voluntary governance frameworks, rather than “hard law” approaches, such as enacting binding regulations. However, the presentation also explained that China is an exception to this rule and has been active in enacting regulations targeting specific AI technologies, such as deep synthesis technologies and, most recently, generative AI. In addition, even where they do not specifically target generative AI, the comprehensive data protection laws enacted in most jurisdictions in the region also apply to how these types of computer programs are trained and how they process personal data more generally.

The presentation was followed by three hours of discussion, facilitated by Josh Lee Kok Thong (Managing Director, FPF APAC). The discussions were initiated by firestarters from industry, regulators, and academia.

Turning to the wider roundtable discussion, participants highlighted the fast pace of developments in generative AI technology and, hence, the importance of adopting an agile and future-proof approach to governance. Participants also identified that, compared with other forms of AI technology, generative AI systems were more likely to raise challenges in addressing unseen bias in very large, unstructured data sets and “hallucinations” (generated output that is grammatically accurate but nonsensical or factually inaccurate).

To address these issues, participants highlighted the importance of developing standards and metrics for evaluating the safety of generative AI systems and for measuring the effectiveness of achieving desired outcomes. Participants also called for efforts to educate users on generative AI systems, including the capabilities, limits, and risks of these technologies.

Regarding regulation of generative AI, participants were generally in favor of an incremental approach to the development of governance principles for generative AI systems in the region – allowing actors in the AI value chain to explore ways to operationalize existing AI principles and apply existing governance frameworks to the technology – rather than enacting “hard law” regulations. 

Participants also agreed on the need for AI governance principles to account for the three basic layers of the AI technology stack, as different policy considerations apply at each of these levels.

Several participants also raised that at the ecosystem level, it would be important for stakeholders to develop a common or standardized set of terminologies or taxonomies for key concepts in generative AI technology, such as “foundation models” or “large language models” (LLMs).

Some participants also called for greater collaboration between stakeholders, a multidisciplinary approach to the governance of generative AI systems, and global alignment when developing best practices.


Photos: Participants from FPF Roundtable on Navigating Governance Frameworks for Generative AI Systems in the Asia-Pacific, July 21, 2023. Photos courtesy of the PDPC.

Other FPF Activities during PDP Week 

IAPP Asia Privacy Forum 2023

On July 20, FPF organized an IAPP panel discussion titled “Unlocking Legal Bases for Processing Personal Data in APAC: A Practical Guide,” which built on FPF’s year-long research project on consent and other legal bases for processing personal data in the APAC region – the final report of which was released in November 2022.

Moderator Josh Lee Kok Thong led the discussion, in which panelists Deputy Commissioner Denise Wong, Deputy Commissioner Leandro Y. Aguirre, Arianne Jimenez, and David N. Alfred (Co-Head, Data Protection, Privacy & Cybersecurity Practice, Drew & Napier) explained the challenges faced by practitioners and regulators in addressing differing data requirements for consent and alternatives like “legitimate interests” in APAC data protection laws.

Photo: FPF Panel on Unlocking Legal Bases for Processing Personal Data in APAC, July 20, 2023.

FPF’s APAC office was also represented at two further panels during IAPP Asia Privacy Forum 2023.

FPF Training on EU AI Act

On the sidelines of PDP Week, FPF held its inaugural in-person FPF Training session in the APAC region. The closed-door training session, which focused on the forthcoming EU AI Act and its impact on the APAC region, was held on July 20 and was conducted by Katerina Demetzou (Senior Counsel for Global Privacy, FPF), with interventions from Vincenzo Tiani drawing on his experience advising Members of the European Parliament (MEPs) on drafting the EU AI Act. The training provided a detailed analysis of the draft AI Act and explained the lifecycle of AI systems and the law-making process in the EU. The training drew close to 20 attendees comprising regulators and representatives from industry and the legal sector.

Photo: FPF Training on the EU AI Act, July 20, 2023.

Conclusion

This was the second time that FPF organized events around PDP Week since the launch of FPF’s APAC office in 2021. The week’s events enabled FPF APAC to foster collaborative dialogues among regulators, industry, academia, and civil society from the APAC region and to draw links with the EU and the US. FPF is grateful for the support of the PDPC and IAPP in organizing these activities.

Edited by Dominic Paulger and Josh Lee Kok Thong

Insights into Brazil’s AI Bill and its Interaction with Data Protection Law: Key Takeaways from the ANPD’s Webinar

Authors: Júlia Mendonça and Mariana Rielli

The following is a guest post to the FPF blog by Júlia Mendonça, Researcher at Data Privacy Brasil, and Mariana Rielli, Institutional Development Coordinator at Data Privacy Brasil. The guest blog reflects the opinion of the authors only. Guest blog posts do not necessarily reflect the views of FPF.

On July 6, 2023, the Brazilian National Data Protection Authority (ANPD) held a webinar entitled “The Interplay between AI Regulation and Data Protection.” The dialogue unfolded in the broader context of developments in AI regulation in Brazil, whose main drivers are the bills that propose a Regulatory Framework for Artificial Intelligence in the country. The bills were jointly analyzed by a Commission of 18 jurists appointed by the Federal Senate, which held meetings, seminars, and public hearings in order to replace them with a new draft proposal. At the beginning of May, the draft produced by the Commission was transformed into a new bill that is currently going through the legislative process: Bill PL nº 2338 (AI draft bill).

The ANPD, noting the need to harmonize any upcoming AI regulation with the existing data protection regime (as well as future enforcement matters), organized this webinar, in addition to publishing a preliminary analysis of the AI draft bill. The discussions during the webinar offer a glimpse into AI lawmaking and policymaking in Brazil, one of the largest jurisdictions in the world – and one already covered by a general data protection law applicable to personal data processed in the context of an AI system. This brief blog post outlines the main topics discussed during the event, particularly in relation to the interplay between the current AI draft bill and Brazil’s General Data Protection Law (LGPD).

The webinar’s opening welcomed Waldemar Gonçalves (President, ANPD, Brazil), Eduardo Gomes (Senator of the Republic, Brazil), and Estela Aranha (Special Advisor, Ministry of Justice and Public Security, Brazil). The panel that followed was formed by representatives of the National Data Protection Council (CNPD) – a multisectoral advisory body, part of the ANPD structure – namely, Ana Paula Bialer (Founding Partner, Bialer Falsetti Associados, Brazil), Bruno Bioni (Director and Founder, Data Privacy Brasil), Fabrício da Mota Alves (Vice President, Conselho Federal da OAB, Brazil), and Laura Schertel (Visiting Researcher, Goethe Universität Frankfurt; private law Professor and lawyer, Brazil/EU).

Key representatives highlight the need for ongoing harmonization between AI regulation and data protection law in Brazil 

As the President of the ANPD, Waldemar Gonçalves highlighted the Authority’s ongoing work on the AI agenda, noting that data protection rules under the LGPD are closely interconnected with those provided for in the AI draft bill, such as with regard to the right to information. With such similarities in mind, Gonçalves noted the need for harmonization between different tools, such as the Data Protection Impact Assessment (DPIA) and the Algorithm Impact Assessment (AIA). 

Another initiative of the ANPD highlighted by Gonçalves as relevant to the AI agenda and the current AI regulatory efforts was the technical agreement between the Authority and the Latin American Development Bank (CAF), which will include a regulatory sandbox pilot program on data protection and AI.

The ANPD’s current president closed his remarks by recalling the various recent cases in which data protection authorities around the world have spoken out on issues concerning AI-based systems, thereby reinforcing the importance of the ANPD assuming an active role in this discussion. Eduardo Gomes, rapporteur of the AI draft bill, started from the same premises in supporting these efforts together with the president of the Senate, Rodrigo Pacheco. In addition to reinforcing the importance of the work of the Commission of Jurists in laying the groundwork for the debate in Brazil, he also recognized the need to foster other opportunities to “mature the subject.”

Concluding the opening panel, Estela Aranha focused her presentation on the topic of algorithmic discrimination in the context of the interplay between AI and existing data protection norms. Aranha mentioned examples regarding data mining and how the resulting massive collection of data can generate varied risks, including risks of discrimination, going beyond the most obvious examples of sensitive and inferred data. The relevance of this specific point in the debate stems from the fact that the proposed AI draft bill is quite detailed, both in terms of definitions and obligations created, with regard to direct and indirect discrimination potentially created or enhanced by AI systems in the Brazilian context. Finally, Aranha also reaffirmed the Ministry of Justice’s support for the Bill.

A deeper dive into the proposed AI draft bill and possible future(s) of AI regulation  

The following panel took a deeper look at the proposed AI draft bill and some of its specific provisions. The first panelist, Ana Paula Bialer, highlighted that there is already a robust framework for data protection that grants the data subject greater control over their data, based on the principle of “informational self-determination.” However, Bialer made the point that there may be a certain difficulty in applying the rationale of data protection to AI – not in the sense that the data used is presumably not protected, but rather that there should be a thorough exercise of extending and “revalorizing” the principles of the LGPD, combined with a review of the set of rights put in place in the context of AI systems.

Turning to the current draft bill, Bialer also considered that the meaning of a human-centered approach can differ when thinking about different applications of AI in varying socio-economic contexts, illustrating her reflection with the topic of recruitment and new-hire selection and the right to full employment in Brazil. Bialer concluded by reaffirming the benefits that AI can bring for social and economic development in the country, as well as for the exercise of fundamental rights. In this context, Bialer welcomed the ANPD’s regulatory sandbox initiative and positioned herself more favorably toward a strongly risk-based approach to AI regulation.

Bruno Bioni began by emphasizing the importance of a dose of skepticism with regard to the broader debate – both on AI and on AI regulation – especially in a scenario where an almost “apocalyptic” narrative around AI continues to gain notoriety. This is important because, in Bioni’s opinion, such discourse may end up underestimating the regulatory tools that already exist. The very field of personal data protection has already provided positive and negative lessons when it comes to an object of regulation that is very plastic and polyvalent, “with a regulatory mission that is transversal and not sectoral.”

Bioni continued by pointing out that the intersection of data protection, AI regulation, and governance is very much related to the idea of a “toolbox” that opens opportunities for a more collaborative, collective regulatory production, relying on companies themselves to participate and, to some extent, be rewarded, for example, if they demonstrate a good level of accountability.

Among the various existing tools and the ways they can support each other, Bioni highlighted Algorithmic Impact Assessments (AIAs) and Data Protection Impact Assessments (DPIAs) as documentation in which each can foster and unfold into the other in such a way as to optimize both. The ANPD has already positioned the DPIA prescribed by the LGPD as an instrument to be better regulated and better standardized, which, for the expert, will be a significant advancement, even in a hypothetical scenario where it takes a long time for an AI regulation to be passed.

According to Bioni, it is for this reason that data protection authorities around the world have led enforcement actions in the absence of AI laws or authorities created with this specific mission. Bioni concluded his remarks by pointing out that it is essential to think about a more collective or networked governance approach.

Fabrício da Mota Alves focused on the issue of institutional arrangements and of situating future legislation in a regulatory environment that is founded on the administrative action of the Brazilian State. Alves pondered the possibility that, following other countries in the world, the ANPD might undertake some degree of administrative action (supervisory and sanctioning, in addition to regulation and awareness) related to AI, reinforcing that there is both a concern to understand and a call for the ANPD to build a very robust regulatory environment. Above all, there is a call for formal protocols so that companies and experts can understand the limits and the scope of the ANPD’s actions in this dynamic scenario.

Celebrating the webinar as one of the first and most qualified spaces for this debate outside of the legislative environment, Alves emphasized that it is imperative that, also in the context of regulating and enforcing AI-related cases (regardless of specific frameworks), the Brazilian ANPD maintain the stance it has adopted so far, with broad public participation, hearings, public consultations, and processes that are open to criticism from all affected sectors.

What’s next for the Brazilian AI bill?

Brazil’s AI draft bill is in its early stages, although it is already the result of lengthy discussions by the expert committee assigned to prepare a new draft in 2022. There is an expectation that it will now be analyzed by a special committee of parliamentarians designated specifically to debate the Bill, with the prospect of new rounds of public hearings. After the text is approved by the plenary of the Brazilian Senate, the proposal still goes through the Chamber of Deputies, the reviewing house, until a common text is reached, which will then be sanctioned by the President of the Republic.

The whole webinar, in Portuguese, can be watched here.

The First Japan Privacy Symposium: G7 DPAs discussed their approach to rein in AI, and other regulatory priorities

The Future of Privacy Forum and S&K Brussels hosted the first Japan Privacy Symposium in Tokyo, on June 22, 2023, following the G7 Data Protection and Privacy Commissioners roundtable. The Symposium brought global thought leadership on the interaction of data protection and privacy law with AI, as well as insights into the current regulatory priorities of the G7 Data Protection Authorities (DPAs) to an audience of more than 250 in-house privacy leaders, lawyers, consultants and journalists from Japan and the region.

The program started with a keynote address from Commissioner Shuhei Ohshima (Japan’s Personal Information Protection Commission), who shared details about the results of the G7 DPAs Roundtable from the day before. Two panels followed, featuring Rebecca Kelly Slaughter (Commissioner, U.S. Federal Trade Commission), Wojciech Wiewiórowski (European Data Protection Supervisor, EU), Philippe Dufresne (Federal Privacy Commissioner, Canada), Ginevra Cerrina Feroni (Vice President of the Garante, Italy), John Edwards (Information Commissioner, UK), and Bertrand du Marais (Commissioner, CNIL, France). Jules Polonetsky, FPF CEO, and Takeshige Sugimoto, Managing Partner at S&K Brussels and FPF Senior Fellow, hosted the Symposium. 

The G7 DPA Agenda, built on three pillars: Data Free Flow with Trust, emerging technologies, and enforcement cooperation

The DPAs of the G7 nations started to meet annually in 2021, following the initiative of the UK’s Information Commissioner’s Office during the UK’s G7 Presidency that year. This is a new venue for international cooperation of DPAs, limited to Commissioners from Canada, France, Germany, Italy, Japan, the United Kingdom, the United States, and the European Union. Throughout the year, the DPAs maintain a permanent channel of communication and implement a work plan adopted during their annual Roundtable.

In his keynote at the Japan Privacy Symposium, Commissioner Shuhei Ohshima laid out the results of this year’s Roundtable, held in Tokyo on June 20 and 21. The Commissioner highlighted three pillars guiding the group’s cooperation this year: (I) Data Free Flow with Trust (DFFT), (II) emerging technologies, and (III) enforcement cooperation.

The G7 Commissioners’ Communique expressed overall support for the DFFT political initiative, welcoming the reference to DPAs as stakeholders in the future Institutional Arrangement for Partnership (IAP), a new structure the G7 Digital Ministers announced earlier in April to operationalize the DFFT. However, in the Communique, the G7 DPAs emphasized that they “must have a key role in contributing on topics that are within their competence in this Arrangement.” It is noteworthy that, among their competencies, most G7 DPAs have the authority to order the cessation of data transfers across borders if legal requirements are not met (see, for instance, this case from the CNIL – the French DPA, this case from the European Data Protection Supervisor, or this case from the Italian Garante). 

The IAP currently seems to provide a key role for governments themselves, in addition to stakeholders and “the broader multidisciplinary community of data governance experts from different backgrounds,” according to Annex I of the Ministerial Declaration announcing the Partnership. The DPAs are singled out only as an example of such experts.

In the Action Plan adopted in Tokyo, the G7 DPAs included clues as to how they see the operationalization of DFFT playing out: through interoperability and convergence of existing transfer tools. As such, they endeavor to “share knowledge on tools for secure and trustworthy transfers, notably through the comparison of Global Cross-Border Privacy Rules (CBPR) and EU certification requirements, and through the comparison of existing model contractual clauses.” (In an analysis touching broadly beyond the G7 jurisdictions, the Future of Privacy Forum published a report earlier this year emphasizing many commonalities, but also some divergence, among three sets of model contractual clauses proposed by the EU, the Iberoamerican Network of DPAs, and ASEAN).

Arguably, though, DFFT was not the main point on the G7 DPAs’ agenda. They adopted a separate and detailed Statement on generative AI. In his keynote, Commissioner Ohshima remarked that “generative AI adoption has increased significantly.” In order to promote trustworthy deployment and use of the new technology, “the importance of DPAs is increasing also on a daily basis,” the Commissioner added.

Generative AI is not being deployed in a legislative void, and data protection law is the immediately applicable legal framework

Top of mind for G7 data protection and privacy regulators is AI, and generative AI in particular. “AI is not a law-free zone,” said FTC Commissioner Slaughter during her panel at the Symposium, being very clear that “existing laws on the books in the US and other jurisdictions apply to AI, just like they apply to adtech, [and] social media.”  This is apparent across the G7 jurisdictions: in March, the Italian DPA issued an order against OpenAI to stop processing personal data of users in Italy following concerns that ChatGPT breached the General Data Protection Regulation (GDPR); in May, the Canadian Federal Privacy Commissioner opened an investigation into ChatGPT jointly with provincial privacy authorities; and, in June, Japan’s PIPC issued an administrative letter warning OpenAI that it needs to comply with requirements from the Act on the Protection of Personal Information, particularly regarding the processing of sensitive data.

At the Japan Privacy Symposium, Ginevra Cerrina Feroni, VP of the Garante, shared the key concerns guiding the agency’s enforcement action against OpenAI, which was the first such action in the world. She highlighted several risks, including a lack of transparency about how OpenAI collects and processes personal data to deliver the ChatGPT service; uncertainty regarding a lawful ground for processing personal data, as required by the GDPR; a lack of avenues to comply with the rights of data subjects, such as access, erasure, and correction; and, finally, the potential exposure of minors to inappropriate content, due to inadequate age gating. 

After engaging in a constructive dialogue with OpenAI, the Garante suspended the order, seeing improvements in the previously flagged aspects. “OpenAI published a privacy notice to users worldwide to inform them how personal data is used in algorithmic training, and emphasized the right to object to such processing,” the Garante Vice President explained. She continued, noting that OpenAI “provided users with the right to reject their personal data being used for training the algorithms while using the service, in a dedicated way that is more easily accessible. They also enabled the ability of users to request deletion of inaccurate information, because – and this is important – they say they are technically unable to correct errors.” However, Vice President Cerrina Feroni mentioned that the investigation is ongoing and that the European Data Protection Board is currently coordinating actions among EU DPAs on this matter.

The EDPS added that purpose limitation is among his chief concerns with services like ChatGPT, and generative AI more broadly. “Generative AI is meant to advance communication with human beings, but it does not provide fact-finding or fact-checking. We should not expect this as a top feature of Large Language Models. These programs are not an encyclopedia; they are just meant to be fluent, hence the rise of possibilities for them to hallucinate,” Supervisor Wiewiórowski said.

Canadian Privacy Commissioner Philippe Dufresne emphasized that how we relate to generative AI from a privacy regulatory perspective “is an international issue.” Commissioner Dufresne also added, “a point worth repeating is that privacy must be treated as a fundamental right.” This is important, as “when we talk about privacy as a fundamental right, we point out how privacy is essential to other fundamental human rights within a democracy, like freedom of expression and all other rights. If we look at privacy like that, we must see that by protecting privacy, we are protecting all these other rights. Insofar as AI touches on these, I do see privacy being at the core of all of it,” Commissioner Dufresne concluded.

The G7 DPAs’ Statement on Generative AI outlines their key concerns, such as lack of legal authority to process personal data at all stages

In the aforementioned Generative AI Statement, the G7 data protection regulators laid out their main concerns in relation to how personal data is processed through this emerging type of computer program and service. First and foremost, the commissioners are concerned that processing of personal data lacks legal authority during all three relevant stages of developing and deploying generative AI systems: for the data sets used to train, validate and test generative AI models; for processing personal data resulting from the interactions of individuals with generative AI tools during their use; and, for the content that is generated by generative AI tools.

The commissioners also highlighted the need for security safeguards to protect against threats and attacks that seek to invert generative AI models, and that would technically prevent extractions or reproductions of personal data originally processed in datasets used to train the models. They also advocated for mitigation and monitoring measures to ensure personal data created by generative AI is accurate, complete, and up-to-date, as well as free from discriminatory, unlawful, or otherwise unjustifiable effects.

It is clear that data protection and privacy commissioners are proactive about ensuring generative AI systems are compatible with privacy and data protection laws. Only two weeks after their roundtable in Tokyo, it was reported that the US FTC had initiated an investigation against OpenAI. And this proactive approach is intentional. As the UK’s Information Commissioner, John Edwards, made clear, the commissioners are “keen to ensure” that they “do not miss this essential moment in the development of this new technology in a way that [they] missed the moment of building the business models underpinning social media and online advertising.” “We are here and watching,” he said.

Regardless of the adoption of new AI-focused laws, DPAs would remain central to AI governance

The Commissioners also discussed the wave of legislative initiatives targeting AI in their jurisdictions. AI systems are not built and deployed in a legislative void: data protection law is largely and immediately relevant, as are consumer protection law, product liability rules, and intellectual property law. In this environment, what is the added value of specific, targeted legislation addressing AI?

Addressing the EU AI Act proposal, European Data Protection Supervisor Wiewiórowski noted that the EU did not initiate the legislation because the legislator thought there was a vacuum. “We saw that there were topics to be addressed more specifically for AI systems. There was a question whether we approach it as a product, service, or some kind of new phenomenon as far as legislation is concerned,” he added. As for the role of the DPAs once the AI Act is adopted, he brought up the fact that in the EU, data protection is a fundamental right, which means that all legislation or policy solutions governing the processing of personal data in one way or another must be looked at through this lens. As supervisory authorities tasked with guaranteeing this fundamental right, DPAs will continue playing a role.

The framework ensuring the enforcement of the AI Act is still under debate, as EU Member States are tasked with designating competent national authorities, and the European Parliament hopes to create a supranational collaborative body to play a role in enforcement. However, one thing is certain: in the proposal, the EDPS has been designated the competent authority to ensure that EU agencies and bodies comply with the EU AI Act. 

The CNIL seems to be eyeing the designation as EU AI Act enforcer as well. Commissioner du Marais pointed out that “since 1978, the French Act on IT and Freedom has banned automated decisions. We have a fairly long and established body of case law.” Earlier this year, the CNIL created a dedicated department including data and computer scientists among staff to monitor how AI systems comply with legal obligations stemming from data protection law. “To be frank, we don’t know yet what will come out of the legislative process, but we have started to prepare ourselves. We have also been designated by domestic law as supervisory and certification authority for AI during the 2024 Olympic Games.” 

The Garante has a long track record of enforcing data protection law on algorithmic systems and decision-making that impacted the rights of individuals. “The role of the Garante in safeguarding digital rights has always been prominent, even when the issue was not yet widely recognized by the public,” said Vice President Cerrina Feroni. Indeed, as shown by extensive research published last year by the Future of Privacy Forum, European DPAs have long been enforcing data protection law in cases where automated decision-making was central. The Garante led impactful investigations against several gig economy apps and their algorithms’ impacts on people.

Canada is also in the midst of legislating AI, having introduced a bill last year that is currently under debate. “There is similarity with the European proposal, but [the Canadian bill] focuses more on high impact AI systems and on preventing harms and biased outputs and decision-making. It provides significant financial fines,” Commissioner Dufresne explained. Under the bill, enforcement is currently assigned to the relevant ministry in the Canadian government. The Privacy Commissioner explained that the regulatory activity would be coordinated with his office, but also with the competition, media, and human rights regulators in Canada. When contributing recommendations during the legislative process, Commissioner Dufresne noted that he suggested “privacy to be a key principle.” In light of his vision that privacy as a fundamental right is essential for the realization of other fundamental rights, the Commissioner had a clear message that “the DPAs need to be front and center” of the future of AI governance.

UK Commissioner Edwards echoed the value of entrenched collaboration among digital regulators, adding that the UK already has an official “Digital Regulators Cooperation Forum,” established with its own staff. The entity “is important to provide a coherent regulatory framework,” he said.

Children’s privacy is a top priority across borders, with new regulatory approaches showing promising results

One of the key concerns that the G7 DPAs have in relation to generative AI is how the new services are dealing with children’s privacy. In fact, the regulators have made it one of their top priorities to broadly pursue the protection of children’s privacy when regulating social media services, targeted advertising, or online gaming, among others. 

Building on a series of recent high-profile cases brought by the FTC in this space, Commissioner Slaughter couldn’t have been clearer: “Kids are a huge priority issue for the FTC.” She reminded the audience that COPPA (the Children’s Online Privacy Protection Act) has been around for more than two decades and is one of the strongest federal privacy laws in the US: “The FTC is committed to enforcing it aggressively.” Commissioner Slaughter explained that the FTC’s actions, such as its recent case against Epic Games, include considerations related to teenagers as well: even though teenagers are not technically covered by COPPA’s protections, they are covered by the FTC’s “unfair practices” doctrine.

UK Commissioner John Edwards gave a detailed account of the impact of the UK’s Age Appropriate Design Code, launched by his office in 2020, on the design of online services provided to children. “We have seen genuine changes, including privacy settings being automatically set to very high for children. We have seen children and parents and carers being given more control over privacy settings. And we have seen that children are no longer nudged to lower privacy settings, with clearer tools and steps in place for them to exercise their data protection rights. We have also seen ads blocked for children,” Commissioner Edwards said, pointing out that these are significant improvements to the online experience of children. These results have been obtained primarily through a collaborative approach with service providers, who have implemented changes after their services were subject to audits conducted by the regulator.

Children’s and teenagers’ privacy is also top of mind for the CNIL. Among a series of guidance, recommendations, and actions, the French regulator is adding another layer to its approach – digital education. “We have made education a strategic priority. We have a partnership with the Ministry of Education and we have available a platform to certify digital skills for children, as well as resources for kids and parents,” Commissioner du Marais said. Regarding regulatory priorities, he emphasized attention to age verification tools. Among the principles the French regulator favors for age verification are no direct collection of identity documents, no age estimates based on web browsing history, and no processing of biometric data to recognize an individual. The CNIL has asked websites not to carry out age verification themselves, and to instead rely on third-party solutions.

The discussions of the G7 DPA Commissioners who participated in the first edition of the Japan Privacy Symposium laid out a vibrant and complex regulatory landscape, centered on new challenges posed to societal values and the rights of individuals by AI technology, but also making advances on perennial topics like cross-border data transfers and children’s privacy. More meaningful and deeper enforcement cooperation is to be expected among the G7 Commissioners, whose Action Plan expressed their commitment to move towards constant exchanges related to enforcement actions and to revitalize existing global enforcement cooperation networks, like GPEN (the Global Privacy Enforcement Network). Next year, the G7 DPA Commissioners will meet in Rome.


Editor: Alexander Thompson  

FPF Paper, “The Thin Red Line …,” Receives the Council of Europe’s 2023 Stefano Rodotà Award

On Friday, June 16th, members of the FPF team joined the 44th Plenary meeting of the Council of Europe’s Committee of Convention 108 in Strasbourg, France to accept a tremendous research honor. On this occasion, Katerina Demetzou, Senior Counsel for Global Privacy, Dr. Gabriela Zanfir-Fortuna, VP for Global Privacy, and Sebastião Barros Vale, former Senior Counsel for Europe at FPF, received the 2023 Stefano Rodotà Data Protection award in the category of ‘best academic article’ for their paper, “The Thin Red Line: Refocusing Data Protection Law on Automated Decision-Making, A Global Perspective with Lessons from Case-Law.” Demetzou and Barros Vale were present in Strasbourg during the Plenary meeting to present the paper and accept the award.

The Council of Europe (CoE), founded in 1949, is an international organization with 46 Member States and 6 Observer States. All Council of Europe Member States have signed up to the European Convention on Human Rights (ECHR), a treaty designed to protect human rights, democracy, and the rule of law. The European Court of Human Rights (ECtHR) oversees the implementation of the ECHR in the Member States.

Demetzou, Barros Vale, and Zanfir-Fortuna are honored to receive recognition at the birthplace of the CoE’s historic 1981 treaty, the Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data. Convention 108 established the first legally binding international instrument in data protection, and the CoE adopted the modernized Convention 108+ in 2018. A year later, the Convention 108+ Committee established the Stefano Rodotà Award to honor the memory and legacy of Stefano Rodotà (1933-2017), a leading Italian professor and politician, and one of the founding fathers of data protection law in Europe. The Stefano Rodotà Award recognizes precedent-setting and innovative research in the field of data protection. The dedicated academic, intellectual, and political influence of President Rodotà lives on through Demetzou, Barros Vale, and Zanfir-Fortuna’s global exploration of data protection instruments safeguarding individuals against harms from ADM in emerging technologies.

“The Thin Red Line” analyzes the legal protection that data protection law provides to individuals who are subjected to automated decision-making (ADM) on the basis of their personal data. To that end, the authors dedicate the first part of their research to an analysis of European Data Protection Authorities’ (DPAs) enforcement of the GDPR in ADM cases. The second section looks to Brazil, Mexico, Argentina, Colombia, China, and South Africa to explore protections against harmful ADM found in non-EU general data protection laws. The article concludes that even in cases where a processing operation does not meet the high threshold set by Article 22 GDPR (‘solely by automated means’), DPAs have nonetheless made use of an array of legal principles, rights, and obligations to protect individuals against ADM practices. With the exception of Colombia, all of the non-EU jurisdictions studied have a specific ADM provision. In all cases studied, the general data protection laws provide a broad material scope, such that any automated processing operation, solely automated or not, is regulated according to the relevant provisions. Additionally, all laws studied include strong transparency and fairness requirements.

While the debate on a European Regulation for AI is ongoing, this paper aims to contribute to the discussion by highlighting that in cases where algorithms and AI systems process personal data, the GDPR is enforceable and protects individuals. Despite extensive legal scholarship on Article 22 GDPR, FPF’s experts identified a gap in previous literature through their global examination of existing enforcements and interpretations from regulators.

After the award ceremony, Demetzou was especially grateful for “the Committee’s warmth, as well as their committed understanding and appreciation for our research.” Dr. Zanfir-Fortuna underscored the importance of the article’s findings while reflecting on emerging AI regulatory trends: “Data protection law has proved to be one of the most relevant existing legal frameworks to deal with the risks posed by the mass deployment of new AI tools. Existing legal obligations related to processing of personal data, on all continents, are stringent and more pressing than possible future AI legislation, as they are immediately applicable to existing AI systems.” The authors hope this intervention, as well as the paper’s global scan, will support researchers and policymakers in understanding how existing data protection law protects against potential harms from algorithms and AI systems.

For more, read “The Thin Red Line: Refocusing Data Protection Law on ADM, A Global Perspective with Lessons from Case-Law.”


Nigeria’s New Data Protection Act, Explained

On June 12, 2023, the President of Nigeria signed the Data Protection Bill into law following a successful third reading at the Senate and the House of Representatives. The Data Protection Act, 2023 (the Act) has had executive and legislative support and marks an important milestone in Nigeria’s nearly two-decade journey towards a comprehensive data protection law. Renewed efforts towards a comprehensive law began in September 2022, when the National Commissioner of the Nigeria Data Protection Bureau (NDPB), now the Nigeria Data Protection Commission (NDPC), announced that the office would seek legal support for a new law as part of the Nigeria Digital Identification for Development Project. The drafting of the law was followed by a validation process conducted in October 2022. After validation, the draft was submitted to the Federal Executive Council for approval, which paved the way for its transmission to the National Assembly. The 2022 Data Protection Bill was introduced in both houses of Nigeria’s bicameral legislature as the Nigeria Data Protection Bill, 2023. The Act commenced upon signature by the President.

The Act provides for data protection principles that are common to many international data protection frameworks. It defines “personal data” broadly, and it includes legal obligations for “data controllers” and “processors,” defined similarly to the majority of data protection laws around the world. While the structure and content of the Act align with other international frameworks for data protection, the Act contains notable unique provisions, discussed in the numbered sections below.

Prior to the introduction of the Act, Nigeria’s data protection landscape was governed by the Nigeria Data Protection Regulation, 2019 (NDPR) and the Nigeria Data Protection Regulation 2019: Implementation Framework (Implementation Framework). However, the need to fill gaps in the NDPR, to create a legal foundation for the existing data protection body, and to satisfy a necessary condition for the rollout of a national digital identification program required the creation of a new legislative framework. The NDPR and its Implementation Framework nonetheless remain in force alongside the Act. Under Section 64(2)(f), all existing regulatory instruments, including regulations, directives, and authorizations issued by the National Information Technology Development Agency (NITDA) or NDPB, remain in force as if they were issued by the Commission until they expire or are repealed, replaced, reassembled, or altered. Per Section 63 of the Act, the new law takes precedence in any instance of conflict with pre-existing provisions.

1. Covered Actors: Novel Categories of Data Controllers and Processors

The Act applies to the processing of personal data by data controllers, data processors, and third parties, which may be individuals, private entities, or public entities that process personal data. A data controller is defined as an individual, private entity, public Commission or agency, or any other body which, alone or jointly with others, determines the purposes and means of the processing of personal data. A data processor is defined as an individual, private entity, public authority, or any other body who or which processes personal data on behalf of or at the direction of a data controller or another data processor. The Act does not define third parties. 

The Act introduces a novel category of “data controllers and processors of major importance,” defined as a “data controller or data processor that is domiciled, resident in, or operating in Nigeria and processes or intends to process personal data of more than such number of data subjects who are within Nigeria, as the Commission may prescribe, or such other class of data controller or data processor that is processing personal data of particular value or significance to the economy, society, or security of Nigeria as the Commission may designate.”

While the practical thresholds of this definition are set to be further clarified by the Commission, they will be based on the number of data subjects whose data are processed and the value or significance of the processed data. This categorization has commonalities with the EU’s Digital Services Act designation of entities as Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs), and may be used to create unique and additional obligations for such controllers and processors. The Act currently requires qualifying entities to meet special registration requirements, appoint a data protection officer, and pay different penalty amounts for violations. Future obligations will relate to the filing of compliance returns (Section 61(2)(g)), as well as any others that may be prescribed through regulations later issued by the Commission.

2. Covered Data: Broad Categories of Sensitive Personal Data 

The Act covers both personal and sensitive personal data. It defines personal data as “any information relating to an individual, who can be identified or is identifiable, directly or indirectly, by reference to an identifier such as a name, an identification number, location data, an online identifier or one or more factors specific to the physical, physiological, genetic, psychological, cultural, social, or economic identity of that individual.” The definition closely tracks Article 4(1) of the GDPR.

It further defines sensitive personal data as personal data relating to an individual’s genetic and biometric data (for the purpose of uniquely identifying a natural person), race or ethnic origin, religious or similar beliefs, health status, sex life, political views or affiliations, and trade union memberships.

Section 30(2) of the Act envisions a broad, flexible definition of sensitive personal data by authorizing the Commission to prescribe further categories of sensitive personal data. The Act also prohibits the processing of sensitive personal data unless specified conditions are met, and it sets out notable allowances for such processing.

These rules closely track the restrictions on processing “special category” data under Article 9 of the GDPR. Unlike the GDPR, which envisions situations where the prohibition on processing sensitive personal data may not be lifted on the basis of consent, the only restriction on the consent exception under Nigeria’s Act concerns situations where a data subject has given and then withdrawn such consent. Additionally, the Act only applies an “explicit consent” requirement to the potential sharing of sensitive personal data by a “foundation, association, or other not-for-profit body with charitable, educational, literary, artistic, philosophical, religious, or trade union purposes,” while the GDPR’s Article 9(2)(a) requires “explicit consent” for all consent-based processing of special category data. However, the Act does permit the Commission to create additional regulations that may apply to the processing of sensitive personal data, including regulations expanding the categories of sensitive personal data, additional grounds for processing such data, and safeguards to be applied.

3. Territorial Application: Broad Extraterritorial Application of the Act

Section 2(2)(c) of the Act contains broad extraterritorial authority, covering any form of processing of the personal data of a data subject in Nigeria by controllers or processors not established in Nigeria. This provision does not consider the nature of the processing being conducted, unlike frameworks such as the GDPR, which include a “targeting” criterion.

Exemptions from Application of the Act: Increased Protections for Exempted Processing Activities 

Section 3 of the Act provides several different exemptions from the broader application of the law and makes room for the Commission to expand the processing activities that may be exempted from the Act. Processing personal information “solely for personal or household purposes” is exempt, as long as such processing does not violate a data subject’s right to privacy. This is a stark difference from laws such as the GDPR, which wholly exempts processing of personal data by a natural person in the course of personal or household activity, regardless of whether it touches on the person’s right to privacy. Therefore, there are instances where personal data processing activities of a non-professional and non-commercial nature may fall under the ambit of the law. The rationale for this condition is not clear. Other exemptions include processing activities by law enforcement for the prevention, investigation, detection, or prosecution of a crime; processing for the prevention or control of a national public health emergency; processing for national security or public interest purposes; and processing necessary for the establishment, exercise, or defense of legal claims, all of which are exempt from most of the obligations under Part V of the Act. Exempt entities must still comply with certain specific provisions of Part V.

While the Act reserves for the Commission the authority to prescribe additional exemptions, it includes a greater number of protections for exempt processing activities than the 2022 Bill. In addition to the above-mentioned provisions that exempt entities must comply with, the Act empowers the Commission to issue a Guidance Note on the legal safeguards and best practices for exempted data controllers and processors where such processing violates or is likely to violate Sections 24 and 25 of the law. Some exemptions have been narrowed relative to the 2022 Bill: entities that were exempt from complying with provisions under the 2022 Bill must now comply with the above-mentioned provisions for exempt entities under the 2023 Act, as well as those relating to data security and cross-border data transfers.

4. Obligations of Data Controllers and Processors: Novel Registration Requirements for Data Controllers and Processors of Major Importance 

Some of the Act’s obligations for data controllers and processors are novel, while others have been maintained from the NDPR. 

Data Controllers and Processors of “Major Importance” 

The designation of “data controllers and processors of major importance” and the Commission’s authority to classify and regulate such entities is a key new development. Section 44 of the Act sets out the process and timelines to which such entities must adhere, including registering with the Commission within six months after the commencement of the Act, or upon meeting the statutory criteria for qualifying as a data controller or processor of major importance. Additionally, the Act empowers the Commission to exempt classes of data controllers and processors of major importance from registration where it considers that such registration is unnecessary or disproportionate. The criteria for exemption may be stipulated through Regulations by the Commission.

Another special obligation for controllers and processors of major importance is the requirement to appoint a Data Protection Officer (DPO), which is imposed by Section 32 on such entities only. This requirement substantially differs from the NDPR and the Implementation Framework; under the NDPR, every data controller must appoint a DPO (Article 4.1.2), while the Implementation Framework stipulates conditions for such an appointment (3.4.1). 

Other important obligations of all data controllers and processors include:

Compliance with Data Protection Principles

Data controllers and processors are responsible for complying with the principles provided in the Act. The principles are similar to the FIPPs-based sets found in many comprehensive data protection regimes, but they also include a duty of care as a principle for controllers and processors (Section 24(3)). Specifically, both controllers and processors owe “a duty of care” with respect to data processing, which is linked to demonstrating accountability in complying with the other principles provided by the Act.

Filing of Audit Reports

As discussed in greater detail below, controllers and processors must seek the services of a data protection compliance organization (DPCO) to perform a data protection audit, among other obligations. As the Act does not create new criteria for entities required to conduct such audits, provisions under the NDPR and Implementation Framework remain in force. While the Implementation Framework provides that the authority may carry out scheduled audits or perform spot checks, the common practice is for controllers and processors that process personal data of more than 2000 data subjects in 12 months to engage a DPCO to conduct annual audits on their behalf. This practice is expected to continue. 

Provision of Information to a Data Subject Prior to Collection of Personal Data

Where a data controller collects personal data directly or indirectly from a data subject, they must supply the data subject, prior to collection, with the information prescribed by Section 27(1).

Where personal data is collected indirectly, a controller may be exempted from providing this information to a data subject if it has already been provided or where doing so would involve a disproportionate effort (Section 27(2)). The transparency obligations imposed by Section 27(1) form part of the content of the privacy policy that a controller is obliged to have under the law, and they must be expressed in a “clear, concise, transparent, intelligible, and easily accessible form.” In providing this information, a controller is obligated to take into account the intended class of data subjects. This implies that a privacy notice may need to be adjusted to cater to, among other issues, the literacy levels and language differences among data subjects.

Conducting a Data Protection Impact Assessment

Section 28(1) requires a data controller to conduct a data protection impact assessment (DPIA) prior to the processing of personal data where such processing is likely to result in a high risk to the rights and freedoms of a data subject. The Act does not specify how long before such processing a DPIA must be conducted. Laws such as Kenya’s Data Protection Act require a DPIA to be conducted 60 days prior to the processing; this obligation may be clarified under future Regulations.

The Commission may designate, by Regulation or Directive, categories of processing or persons that automatically trigger the requirement to conduct a DPIA, and a qualifying DPIA must include the elements prescribed by the Act.

Overseeing the Conduct of Data Processors and Sub-Processors

Controllers engaging processors, or processors engaging sub-processors, must take “reasonable measures” to ensure that the engaged party complies with the requirements of the Act set out in Section 29(1). These measures must take the form of a written agreement binding the engaged party to those requirements.

Data Security and Data Breach Notification Requirements

Controllers and processors are required to implement security measures and safeguards for personal data. The level of such measures shall take into account several contextual factors.

The measures that controllers and processors may implement are further described under Section 39(2), and include pseudonymization, encryption, and periodic assessments of risks to processing systems.

Where a data breach occurs that affects a data processor, the processor will be required to notify the data controller or processor that engaged it as soon as the breached party becomes aware of the incident, and must respond to information requests regarding the breach (Section 40(1)).

Where a data controller suffers a breach that is likely to cause a risk to the rights and freedoms of data subjects as defined by Section 40(7), several steps are required, including notifying the Commission within 72 hours of becoming aware of the breach and, where the breach is likely to result in a high risk to data subjects, communicating it to the affected individuals.

The requirements for communications to the Commission and to affected data subjects also differ. Communication to the Commission should be as detailed as possible and include a description of the nature of the breach, while notice to data subjects should be in plain and clear language and include steps to take to mitigate any adverse effects. Section 40(4) highlights the common information that should be present in both cases, such as the name and contact details of a point of contact for the data controller. Information relating to a breach may be provided in a phased manner, where it is impossible to provide all information in a single communication.
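As a rough illustration only, the decision logic described above can be sketched in a few lines of Python. This is our own simplification: the field names and boolean thresholds stand in for legal judgments under Section 40, and are not statutory terms.

```python
from dataclasses import dataclass

@dataclass
class Breach:
    # Simplified stand-ins for legal assessments under Section 40.
    likely_risk_to_data_subjects: bool       # triggers notice to the Commission
    likely_high_risk_to_data_subjects: bool  # triggers notice to data subjects

def required_notifications(breach: Breach) -> list[str]:
    """Return the notifications a controller would owe for this breach."""
    steps = []
    if breach.likely_risk_to_data_subjects:
        # Detailed notice to the Commission, describing the nature of the breach.
        steps.append("notify the Commission (detailed, within 72 hours)")
    if breach.likely_high_risk_to_data_subjects:
        # Plain-language notice to affected individuals, with mitigation steps.
        steps.append("notify affected data subjects (plain, clear language)")
    return steps

# Example: a high-risk breach requires both notifications.
print(required_notifications(Breach(True, True)))
```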

5. Lawful Grounds for Processing Personal Data, Consent Requirements, and Children’s Personal Data

The Act provides for six lawful grounds for processing personal data, similar to those under the GDPR: consent, performance of a contract, compliance with a legal obligation, protection of vital interests, performance of a task carried out in the public interest or in the exercise of official authority, and legitimate interests.

Consent Requirements

The Act requires that consent be freely given, specific, informed, and unambiguous. This is similar to the consent requirements under the NDPR and Implementation Framework. The Act prohibits implied consent – i.e., the inference of consent from a data subject’s inactivity or from the use of pre-ticked boxes. This corresponds with most of the consent provisions under the Implementation Framework, except that the Framework provides exceptions for consent relating to cookies: the Framework (5.6) provides that consent for cookies may be implied from the continued surfing of a website and does not mandate explicit consent, effectively limiting the extent of the consent to direct marketing required under 5.3.1(a) of the Implementation Framework.

Children’s Privacy

The Act expands the protections accorded to children and persons lacking legal capacity compared to the NDPR and its Implementation Framework. It increases the age threshold under which a data subject is considered a “child” to 18 years, in alignment with the Nigeria Child Rights Act (which, however, not all states have domesticated), and in contrast with the Implementation Framework, which categorizes a child as a person under 13 years of age. The Act also includes specific consent requirements for children and persons lacking the legal capacity to consent. While the NDPR and Implementation Framework are silent on whom to obtain such consent from, under the Act, consent shall be obtained explicitly from parents or legal guardians (Section 31(1)). To effect this, the Act requires controllers and processors to adopt consent verification mechanisms. To guarantee stronger privacy protections for children, the Commission will create Regulations to guide the processing of the personal data of children of 13 years and above in the course of their use of online products and services.

However, there are instances where a controller or processor may process the personal data of children and persons lacking legal capacity without the consent of a parent or legal guardian, in limited circumstances specified by the Act.

Further Protection for Processing of Personal Data Relating to Children and Persons Lacking Legal Capacity 

In addition to the consent requirements, the Act further requires controllers and processors to adopt age verification mechanisms. Age verification is required “where feasible,” taking into consideration available technology. Presentation of any government-approved identification documents will be permitted as a verification mechanism. 

6. Data Subject Rights: Robust Rights with No Implementation Mechanisms for Data Subjects and Narrow Restrictions on Exercise of Rights

The Act provides for data subject rights, with which data controllers and processors must comply prior to and during the processing of personal data, including rights of access, correction, deletion, and data portability, the right to object, and the right to withdraw consent.

The Act does not provide comprehensive mechanisms for implementing these rights, such as parameters and modalities for responding to data subject requests. However, the Implementation Framework (2.3.2(c)) requires controllers to inform data subjects of the method for withdrawing consent before obtaining consent. The Act states that “a controller should ensure that it is as easy for the data subject to withdraw as it is to give consent.”

The Act does not provide general restrictions or limits on the exercise of these rights, except in specific cases.

7. Cross Border Data Transfers: Broad Grounds for Transfers of Personal Data as well as Parliamentary Authorizations to Protect Data Sovereignty

The Act establishes as a rule that personal data should not be transferred outside of Nigeria, allowing for two exceptions. First, personal data can be transferred when the recipient of the personal data (the data importer) is subject either to (1) a law, (2) Binding Corporate Rules (‘BCRs’), (3) contractual clauses, (4) a Code of Conduct, or (5) a certification mechanism that “affords an adequate level of protection” to that provided by the Act. In the absence of such adequate protection through one of the enumerated means, personal data can also be transferred outside of Nigeria in exceptional situations, listed in Section 43 and mapping precisely to the set of derogations under Article 49 GDPR (consent of the individual, or for the performance of a contract, among others).

Controllers are under an obligation to keep a record of the legal basis for transferring personal data outside Nigeria, as well as to record “the adequacy of protection,” according to the criteria described in detail under Section 42 of the Act. This wording suggests that the adequacy of the means of transfers used can be validly assessed by each controller. This is a departure from other existing adequacy regimes, which usually require an official body to declare a specific jurisdiction adequate. 

The Commission is tasked with issuing guidelines on how to assess the adequacy of a particular means of transfer, under the criteria established by Section 42 of the Act. This section explains that an adequate level of protection means “upholding principles that are substantially similar (emphasis added) to the conditions for processing personal data” under the Act. The criteria relevant for adequacy include “access of a public authority to personal data,” potentially complicating such assessments in line with the broader global debate on government access to data held by private companies.

Of note, the Commission is given the possibility under the Act to determine whether “a country, region, or specified sector within a country, or standard contractual clauses, affords an adequate level of protection.” In this sense, it is important to recall that the NDPR and Annex C of the Implementation Framework already provide a white list of 41 countries whose laws are considered adequate. Interestingly, the Act specifically allows the Commission to make an adequacy determination under Nigerian law based on an adequacy decision “made by a competent authority of other jurisdictions,” if such adequacy is based on criteria similar to those listed in Section 42 of the Act. This opens the door for Nigeria to potentially recognize as equivalent adequacy decisions made by foreign bodies, like the European Commission, creating a functional “adequacy network effect.” The Commission is also empowered to approve BCRs, Codes of Conduct, and certification mechanisms for data transfers.

Finally, and particularly interesting in the context of emerging certification frameworks like the Global Cross Border Privacy Rules (CBPR) framework, the Act requires that any specific “international, multinational cross-border data transfer codes, rules, or certification mechanisms” relating to data subject protection or data sovereignty must be approved by the National Assembly of Nigeria. This provision on data sovereignty aligns with the Nigeria National Data Strategy, 2022, which incorporates data sovereignty as one of its enabling pillars. Under the Strategy, data sovereignty will facilitate data residency and ensure that data is treated in accordance with national laws and regulations.

In this sense, the Act also empowers the Commission to “designate categories of personal data that are subject to additional specified restrictions on transfer to another country.” This designation would be based on “the nature” of such personal data and on “risks” to data subjects. This provision opens the door to potential future data localization requirements for specific categories of personal data. 

8. Enforcement: Legal Foundation for the Nigeria Data Protection Bureau, Creation of a Governing Council and Expected Regulations

Establishment of the Commission

Originally created through an Executive Order in February 2022, the NDPB has now been renamed the “Nigeria Data Protection Commission” and will operate as an independent and impartial body to oversee the Act’s implementation and enforcement. Previously, data protection enforcement in Nigeria was conducted under the auspices of the National Information Technology Development Agency (NITDA). However, concerns that NITDA lacked the powers to oversee data protection in the country may have necessitated the creation of a new agency. The Commission will function as a successor agency, and all persons engaged in the activities of the Commission shall, upon enactment of the Act, have the same rights, powers, and remedies held by the NDPB before the commencement of the law (Section 64(1)). All regulatory instruments issued by NITDA, including the NDPR, shall remain in force and shall have the same weight as if they had been issued by the Commission until they expire or are repealed, replaced, reassembled, or altered (Section 64(2)(f)).

Functions and Powers of the Commission  

Some of the key functions and powers of the Commission include:

  1. Accrediting, Licensing, and Registering Suitable Bodies to Provide Data Protection Compliance Services (Section 5(c)).

Section 28 of the Act provides the Commission with the power to delegate the duty to monitor, audit, and report on compliance with the law to licensed data protection compliance organizations (DPCOs). This model was introduced under the NDPR, which allowed the data protection authority to delegate such monitoring, auditing, and reporting functions to third parties. Detailed provisions on the operation of DPCOs can be found under the NDPR and Implementation Framework and shall continue to apply to controllers and processors.

  2. Designating, Registering, and Collecting Fees from Data Controllers and Processors of Major Importance (Section 5(d)).

Following the successful registration of a controller or processor of major importance, the Commission is tasked with publishing a register of duly registered entities on its website. The Commission is also expected to prescribe the fees and levies to be paid by this class of controllers and processors.

  3. Participating in international fora and engaging with national and regional authorities responsible for data protection to develop efficient strategies for the regulation of cross-border transfers of personal data (Section 5(j)).

Currently, the Commission’s predecessor, the NDPB, continues to fulfill this mandate, as seen in its recent participation in initiatives such as the Cross Border Privacy Rules Forum.

  4. Issuing Regulations, Rules, Directives, and Guidance.

The Commission is expected to develop certain regulations as prescribed under the law and as detailed above, including in relation to designating new categories of sensitive data, adequate steps for data breach notification, conducting DPIAs, or issuing data localization regulations for specific categories of personal data.  

Other functions of the Commission include promoting public awareness and understanding of personal data protection, the rights and obligations imposed under the law, and the risks to personal data; receiving complaints alleging violations of the Act or subsidiary legislation; and ensuring compliance with national and international personal data protection obligations and good practice.

In a bid to ensure that the services of the Commission are accessible beyond urban areas, the Commission is allowed to establish offices in other parts of Nigeria (Section 3(b)). This matters for building awareness of data protection across the country.

The Commission will be governed by a “Governing Council” (the Council), whose members will be appointed by the President, on the recommendation of the Minister, on a part-time basis, drawn from the public and private sectors to serve a term of 5 years, renewable once. These conditions do not apply to the National Commissioner, who will serve as the Secretary to the Council.

The Council is tasked with providing overall policy direction for the affairs of the Commission, approving the strategic and action plans and budgets for support programs submitted by the National Commissioner, and providing advice and counsel to the National Commissioner.

9. Offenses, Sanctions, and Compensation: Higher Penalties for Data Controllers and Processors of Major Importance 

The Act provides a data subject who has suffered injury, loss, or harm arising from a violation of the law with a private right of action that allows recovery of damages in a civil proceeding. Where a controller or processor violates the provisions of the Act or subsidiary legislation, the Commission may issue a compliance order requiring them to take specific measures to remedy the situation within a specified period, as well as inform them of their right to a judicial review. The Commission may also impose an enforcement order or a sanction, which may include remedial measures and monetary penalties.

However, it is not clear from the Act what conditions may trigger an enforcement order, a sanction, and thus a penalty or any other such measure. Under laws such as Kenya’s Data Protection Act (Section 62), failure to comply with the requirements of an enforcement order (referred to as a compliance order under the Act) triggers a penalty notice. The Act also does not specify the period within which complaints must be heard and concluded.

The penalty amount depends on whether or not the violator is a data controller or processor of major importance. Penalties against data controllers or processors of major importance shall be the higher of N10,000,000 (approximately 22,000 USD) or 2% of the annual gross revenue of the preceding financial year. Penalties against other data controllers and processors shall be the higher of N2,000,000 (approximately 4,300 USD) or 2% of the annual gross revenue of the preceding financial year.
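Because both ceilings follow the same “higher of” rule, the arithmetic can be captured in a few lines. The following minimal Python sketch is our own illustration; the function name and structure are assumptions, not part of the Act, and actual penalties are determined by the Commission within these ceilings.

```python
# Illustrative only: the naira amounts and the 2% figure come from the Act
# as summarized above; everything else is our own construction.
def max_penalty_ngn(annual_gross_revenue_ngn: float, major_importance: bool) -> float:
    """Return the penalty ceiling: the higher of a flat amount or 2% of revenue."""
    flat = 10_000_000 if major_importance else 2_000_000  # N10m vs N2m
    return max(flat, 0.02 * annual_gross_revenue_ngn)

# Example: a controller of major importance with N5 billion in annual gross
# revenue faces a ceiling of max(N10m, N100m) = N100m.
print(max_penalty_ngn(5_000_000_000, major_importance=True))  # 100000000.0
```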

The Commission is empowered to create regulations that create new offenses and that impose penalties not exceeding those prescribed under the Act (Section 56(3)).

Conclusion 

As Nigeria continues to make its mark in the global digital economy and to rapidly expand its technology ecosystem, the Act represents a continued focus on protecting the personal data of Nigerian citizens, in alignment with internationally accepted principles of data protection.

However, the Act contains unique provisions that should not be overlooked, including a new classification of data controllers and processors “of major importance” and specific obligations attached to them, as well as broader protections for exempt processing activities. Overall, the Act represents a significant step in Nigerian data protection and notably resolves the long-running dispute regarding the identity and institutional authority of Nigeria’s primary data protection regulator. 

Unveiling China’s Generative AI Regulation

Authors: Yirong Sun and Jingxian Zeng

The following is a guest post to the FPF blog by Yirong Sun, research fellow at the Guarini Institute for Global Legal Studies at NYU School of Law: Global Law & Tech, and Jingxian Zeng, research fellow at the University of Hong Kong Philip K. H. Wong Centre for Chinese Law. The guest blog reflects the opinion of the authors only. Guest blog posts do not necessarily reflect the views of FPF.

The Draft Measures for the Management of Generative AI Services (the “Draft Measures”) were released on April 11, 2023, and their comment period closed on May 10. Public statements by industry participants and legal experts provided insight into the likely content of those comments. It is now the turn of China’s cyber super-regulator – the Cyberspace Administration of China (“CAC”) – to consider the comments and, most likely, produce a revised text.

This blog analyzes the provisions and implications of the Draft Measures. It covers the Draft Measures’ scope of application, how they apply to the development and deployment lifecycle of generative AI systems, and how they deal with the ability of generative AI systems to “hallucinate” (that is, produce inaccurate or baseless output). It also highlights potential developments and contextual points about the Draft Measures that industry and observers should pay attention to.

The Draft Measures aim to protect the “collective” interests of “the public” within the territory of the People’s Republic of China (PRC) in relation to the management of generative AI services. The primary risk foreseen by the CAC involves the potential use of the novel technology to manipulate public opinion and fuel social mobilization by spreading sensitive or false information. The Draft Measures also seek to tackle issues arising from high-profile societal events, such as data leaks, fraud, privacy breaches, and intellectual property infringements, as well as overseas incidents widely reported in Chinese media, including defamation and extreme cases of suicide following interactions with AI chatbots. Notably, the Draft Measures set high standards for data authenticity and impose safeguards for personal information and user input. They also mandate the disclosure of information that may impact users’ trust and the provision of guidance for using the service rationally.

Meanwhile, concerns have arisen that the Draft Measures may slow down the development of generative AI-based products and services by Chinese tech giants. Companies providing services based on generative AI, including those provided through application programming interfaces (“APIs”), are all subject to stringent requirements in the Draft Measures. The Draft Measures thus concern not only those who have the means to train their own models, but also smaller businesses that want to leverage open-source pre-trained models to deliver services. In this regard, the Draft Measures are likely to present compliance challenges in the open-source context.

While this blog focuses on the Draft Measures, it is important to note that industrial policies from both central and local governments in China also exert substantial influence over the sector. Critically, the task of promoting AI advancement amid escalating concerns is overseen by authorities other than the CAC, such as the Ministry of Science and Technology (“MST”) and the Ministry of Industry and Information Technology (“MIIT”). Recently, the China Academy of Information and Communications Technology (“CAICT”), a research institute affiliated with the MIIT, introduced China’s first-ever industry standards1 for assessing generative AI products. These agencies, through their competition and coordination with each other and with the CAC, can and will play a significant role in the regulation of generative AI.

1. Notable aspects of the Draft Measures’ scope of application: Definition of “public” and extraterritorial application

Ambiguity in the definition of “public”

The Draft Measures regulate all generative AI-based services offered to “the public within the PRC territory.”2 This scope of application diverges from existing Chinese laws and regulations, where intended service recipients are not usually considered. For instance, the regulations targeting deep synthesis and recommendation algorithms both apply to the provision of services using these technologies regardless of whether the service recipients are individuals, businesses, or “the public.” Looking at the context, Article 6 of the Draft Measures suggests that generative AI-based services have the potential to shape public opinion or stimulate social mobilization, essentially highlighting their impact on “the public.” This new development thus likely signifies the CAC’s goal of prioritizing the protection of wider societal interests over individual ones, such as privacy or intellectual property, which could be protected under previous regulations.

However, the Draft Measures leave “the public (公众)” undefined. This gives rise to ambiguity as to the scope of application of the Draft Measures. For example, would a service licensed exclusively to a Chinese private entity for in-house use fall within scope? What about a service accessible only to certain public institutes but not to the unaffiliated, or one customized for individual clients who each receive a unique product derived from a common foundation model, or simply an open-source model that is ready to download and install?

Extraterritorial application

The new approach also suggests a more extensive extraterritorial reach. Regardless of where the service is provided, as long as the public within PRC territory has access to it, the Draft Measures apply. To avoid being subject to Chinese law, OpenAI, for example, has reportedly begun blocking users based in mainland China. This development could further restrict Chinese users’ access to overseas generative AI services, especially since even before the Draft Measures were released, most Chinese users’ access to such services was already geo-blocked – either by the service providers themselves (e.g., by requiring a foreign telephone number for registration), or by the Chinese government through enforcement measures. At the same time, the scale of China’s user market and its involvement in AI development render it a “vital” jurisdiction in terms of AI regulation. OpenAI’s CEO has recently called for collaboration with China to counter AI risks, a trend we might see more of in the future.

2. The Draft Measures adopt a compliance approach based on the lifecycle of generative AI systems 

The Draft Measures are targeted at “providers” of generative AI-based services

The Draft Measures take the approach of regulating generative AI-based service providers. As per Article 5, “providers (提供者)” are those “using generative AI to offer services such as chat, text, image, audio generation; including providing programmable interface and other means which support others to themselves generate text, images, audio, etc.” The Draft Measures attach obligations to these providers throughout the lifecycle of their services.

Incentivizing providers to allocate risk upstream to developers

By imposing lifecycle compliance obligations on the end-providers, the Draft Measures create incentives for end-providers to allocate risks to upstream developers through mechanisms like contracts. Whether the parties can distribute their rights and obligations fairly and efficiently depends on various factors, such as the resources available to them and the presence of asymmetric information among them. To better direct this “private ordering” with significant social implications, the EU has planned to create non-binding standard contractual clauses based on each party’s level of control in the AI value chain. The CAC’s stance in this new and fast-moving area remains to be seen.

The Draft Measures pose potential challenges for deploying open-source generative AI systems

Open-source models raise a related but distinct issue. Open-source communities are currently developing highly capable large language models (“LLMs”), and businesses have compelling commercial incentives to adopt them, as training a model from scratch is relatively hard. However, many open-source models are released without a full disclosure of their training datasets, due to reasons such as the extensive effort required for data cleaning and privacy issues, especially when user data is involved. Adding to this complexity is the fact that open-source LLMs are not typically trained in isolation. Rather, they form a modification chain where the models build on top of each other with modifications made by different contributors. Consequently, for those using open-source models, several obligations in the Draft Measures become difficult or even impossible to fulfill, including pre-launch assessment, post-launch retraining, and information disclosure.

3. The Draft Measures target the “hallucination” of generative AI systems

The Draft Measures describe generative AI as “technologies generating text, image, audio, video, code, or other such content based on algorithms, models, or rules.” In contrast to the EU’s new compromise text on rules for generative AI, which adopts a technical definition of “foundation models,” the Draft Measures focus on the technology’s function, regardless of its underlying mechanisms. Moreover, according to Article 6 of the Draft Measures, generative AI-based services automatically fall under the scope of the Regulations for the Security Assessment of Internet Information Services Having Public Opinion Properties or Social Mobilization Capacity, which mandate a security assessment. A group of seven Chinese scholars has proposed removing this provision and applying the security assessment only to services that actually possess these properties.

The Draft Measures contain provisions targeted at ensuring accuracy throughout the developmental lifecycle of generative AI systems. These echo the CAC’s primary concern that the technology could be misused to generate and disseminate misinformation. Article 7(4) of the Draft Measures stipulates that providers must guarantee the “veracity, accuracy, objectivity, and diversity” of the training data. Article 4(4) requires that all generated content be “true and accurate,” and that providers of generative AI-based products and services have measures in place to “prevent the generation of false information.” Such providers are responsible for filtering out any non-compliant material and preventing its regeneration within three months (Article 15). However, industry representatives and legal practitioners in China have raised concerns about the baseline for, and technical feasibility of, ensuring data authenticity, given the use of open internet information and synthetic data in the development of generative AI.
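As a rough picture of what the filter-and-prevent-regeneration duty in Article 15 asks of providers, here is a minimal sketch. The keyword filter and all names are our assumptions; production systems would rely on trained classifiers, human review, and model-side retraining rather than a blocklist alone.

```python
# Content found non-compliant once is recorded so it is not released again.
flagged_outputs = set()
BANNED_TERMS = {"example banned phrase"}  # placeholder policy list

def violates_policy(text: str) -> bool:
    """Toy stand-in for a real content classifier."""
    return any(term in text.lower() for term in BANNED_TERMS)

def release_output(generated: str):
    """Return the text if compliant; otherwise suppress and remember it."""
    if generated in flagged_outputs or violates_policy(generated):
        flagged_outputs.add(generated)
        return None  # suppressed; Article 15 also expects model-side fixes
    return generated

print(release_output("hello world"))            # passes the filter
print(release_output("example banned phrase"))  # None: filtered and logged
```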

4. Looking Ahead

The CAC is expected to refine the Draft Measures after gathering public feedback. The final version and subsequent promulgation may be influenced by a broader set of contextual factors. We believe the following aspects also warrant consideration:

1. Major Chinese players in the AI industry are forming interest groups to channel their influence on policymakers. For example, China’s industry standards for generative AI were drafted by over 40 entities, including tech companies such as Baidu, SenseTime, Xiaomi, and NetEase. SenseTime also launched an open platform for AI safety governance to shape practices around AI regulatory issues such as cybersecurity, traceability, and IP protection.

2. A widely circulated translation of Article 2 states: “These Measures apply to the research, development, and use of products with generative AI functions, and to the provision of services to the public within the territory of the People’s Republic of China.” However, we believe this is misleading: a more accurate reading of the original Chinese text and its context suggests that “the provision of services to the public” is a cumulative requirement rather than a separate one.

3. The Draft Measures seem to exhibit technical sophistication in their terminology. In Articles 7 and 17, the data compliance obligation is split into two phases: pre-training and optimization. However, the choice of terminology is peculiar, as the prevailing terms in machine learning are pre-training and fine-tuning; “optimization” typically describes a stage within the training process, used in conjunction with forward and backward propagation (see the sketch below).
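To make the terminology concrete, below is a minimal PyTorch sketch with a toy model and random data: “optimization” names the inner forward/backward/update step inside any training phase, while “pre-training” and “fine-tuning” name whole phases that differ mainly in the data they use.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # toy stand-in for a large language model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def training_phase(batches):
    """One phase (pre-training OR fine-tuning) repeats the optimization step."""
    for inputs, labels in batches:
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), labels)  # forward pass
        loss.backward()                        # backward propagation
        optimizer.step()                       # the "optimization" step

# The phases differ in their data, not in the inner optimization loop:
pretrain_batches = [(torch.randn(8, 10), torch.randint(0, 2, (8,)))]
finetune_batches = [(torch.randn(8, 10), torch.randint(0, 2, (8,)))]
training_phase(pretrain_batches)  # "pre-training" on broad data
training_phase(finetune_batches)  # "fine-tuning" on task-specific data
```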

First Japan Privacy Symposium Convening G7 Regulators Focuses on Global Trends and Enforcement Priorities

The Future of Privacy Forum (FPF), a global non-profit focused on data protection and privacy, and S&K Brussels LPC will jointly present the first edition of the Japan Privacy Symposium on June 22, 2023. The event will convene in Tokyo, bringing together leaders in the Japanese privacy community with data protection and privacy regulators from across the globe.

The event coincides with the G7 Data Protection Authorities and Privacy Commissioners’ Summit, and the Symposium will convene these leaders and regulators to discuss key issues in AI governance and data protection law, the future of adtech, global cooperation, and enforcement trends. The line-up of speakers includes: Ms. Rebecca Kelly Slaughter (Commissioner, U.S. Federal Trade Commission), Dr. Wojciech Wiewiórowski (European Data Protection Supervisor), Mr. Philippe Dufresne (Federal Privacy Commissioner, Canada), Ms. Ginevra Cerrina Feroni (Vice President of the Garante, Italy), and Mr. John Edwards (Information Commissioner, UK), with a keynote address from Mr. Shuhei Ohshima (Commissioner, Japan’s Personal Information Protection Commission).

“We’re excited to co-host this valuable event that will bring together data protection and privacy regulators from around the world alongside the Japanese privacy community,” Gabriela Zanfir-Fortuna, FPF’s Vice President for Global Privacy, said. “Data protection and privacy regulators from the G7 economies are meeting in Tokyo to strategize about coordinated approaches to tackle the challenges raised by the advancement of new technologies fueled by data and their impact on society, people, and economy. This Symposium offers a forum for the regulators and the Japanese data protection and privacy community members to exchange ideas, share an overview of the state of play in global regulation and strategize for the future.”

Takeshige Sugimoto, Managing Director and Partner at S&K Brussels LPC, FPF Senior Fellow for Global Privacy, and Co-Founder and Board Member of the Japan DPO Association, added: “S&K Brussels is delighted to co-host the inaugural Japan Privacy Symposium to bring together esteemed privacy and data protection leaders from G7 countries. Opportunities for collaboration in the global data protection and privacy community are vital, and we hope that the Japan Privacy Symposium will set the stage for important participation and dialogue for years to come.”

FPF is focused on the expansion of its international reach in Asia, with its August 2021 Asia Pacific office opening in Singapore and the announcement of a new FPF APAC Managing Director, Josh Lee Kok Thong, last July.

For more information about the event, the agenda, and speakers, visit the FPF site.

###

About Future of Privacy Forum (FPF)

The Future of Privacy Forum (FPF) is a global non-profit organization that brings together academics, civil society, government officials, and industry to evaluate the societal, policy, and legal implications of data use, identify the risks and develop appropriate protections.

FPF believes technology and data can benefit society and improve lives if the right laws, policies, and rules are in place. FPF has offices in Washington D.C., Brussels, Singapore, and Tel Aviv. Follow FPF on Twitter and LinkedIn.

About S&K Brussels LPC

S&K Brussels LPC is a Japanese law firm, opened in Brussels, Belgium in 2019, composed of Japanese and foreign lawyers whose main practice area is data protection and privacy law in five jurisdictions: the EU, UK, US, China, and Japan. The firm focuses on “Future Proof” efforts to anticipate the shape of future regulation, including AI regulation and other data-related rules closely connected to data protection legislation.


FPF at CPDP 2023: Covering Hot Topics, from Data Protection by Design and by Default, to International Data Transfers and Machine Learning

At this year’s annual Computers, Privacy and Data Protection (CPDP) conference in Brussels, several Future of Privacy Forum (FPF) staff took part in panels organized by FPF as well as by academic, industry, and civil society groups. This blog post provides a brief overview of these events; CPDP will publish recordings of them shortly.

May 24: EU Commission and ASEAN launch Joint Guide to Model Clauses for Data Transfers

On the conference’s first day, FPF Vice President for Global Privacy Gabriela Zanfir-Fortuna joined a panel organized by the Haifa Center for Law and Technology (Faculty of Law, University of Haifa) on the GDPR’s effectiveness, alongside Tal Zarsky, Dean and Professor of Law at the University of Haifa’s Faculty of Law; Raphael Gellert, Assistant Professor in ICT and private law at Radboud University; Sam Jungyun Choi, Associate in the technology regulatory group of Covington & Burling LLP; and Amit Ashkenazi, research student and adjunct lecturer on cyber law and policy at the University of Haifa. The panel contributed to current reflections on the effectiveness of the GDPR, five years after it became applicable, by focusing on challenges arising from the regulatory design it sets forth. Gabriela noted that data protection law is much broader than the GDPR because it has a fundamental element behind it, meaning that the right to the protection of personal data is protected as a fundamental right at a constitutional level in the EU. She also stressed that the ongoing application of the GDPR has catalyzed more societal interest in law, technology, and data protection rights and concepts.


Photo: CPDP Panel on Exploring the Many Faces of the GDPR – in Search of Effective Data Protection Regulation, 5/24/2023

Later that day, Gabriela moderated a panel organized by the EU Commission, which served as a platform to launch a “Joint Guide to ASEAN Model Contractual Clauses and EU Standard Contractual Clauses.” The Guide identifies commonalities between the two sets of model clauses and aims to “assist companies present in both jurisdictions with their compliance efforts under both sets of clauses.” The panel was joined by Denise Wong, Deputy Commissioner-designate, Personal Data Protection Commission Singapore, and Assistant Chief Executive-Designate of the Infocomm Media Development Authority, Alisa Vekeman, European Commission, International Affairs and Data Flow team, and Philipp Raether, Group Chief Privacy Officer, Allianz. The panelists noted that model clauses are the most used mechanisms for international data transfers and that efforts like the Joint Guide are a promising solution for a global regime underpinning flows of personal data across different jurisdictions, while providing safeguards for individuals and their data. Officials in the panel noted that the Guide is just the first step in this EU-ASEAN collaboration on model clauses, noting that a set of best practices from companies who use both is to be expected in the near future.

To wrap up the first day of the conference, FPF’s Policy Counsel for Global Privacy, Katerina Demetzou, joined a panel on the constitutionalization of data rights in the Global South, along with Mariana Marques Rielli, Institutional Development Coordinator at Data Privacy Brazil; Laura Schertel Mendes, Law Professor at the University of Brasilia (UnB) and at the Brazilian Institute for Development, Education and Research (IDP) and Senior Visiting Researcher at the Goethe-Universität Frankfurt am Main with the Capes/Alexander von Humboldt Fellowship; and Risper Onyango, advocate of the High Court of Kenya, currently serving as Digital Policy Lead in the Digital Economy Department at the Lawyers Hub. In her intervention, Katerina explored how Data Protection Authorities in Europe have applied the GDPR to emotion recognition AI systems and to generative AI. Her examples emphasized that discussions about AI governance and regulation should examine how existing data protection law applies to these systems and develop in response to gaps in those legal frameworks.


Photo: Panel on From Theory to Practice: Digital Constitutionalism and Data Justice in Movement in the Global South, 5/24/2023

May 25: High Level Discussion Spurred by FPF’s Data Protection by Design and by Default Case-Law Report

On May 17, FPF launched a comprehensive Report on the enforcement of the EU’s GDPR Data Protection by Design and by Default (DPbD&bD) obligations, which are outlined in GDPR Article 25. The Report is informed by extensive research covering more than 92 decisions from Data Protection Authorities (DPAs) and national courts, along with specific guidance and other policy documents issued by regulators.

On May 25, FPF organized a panel moderated by the Report’s co-author Christina Michelakaki, FPF Policy Fellow for Global Privacy, on the enforcement of Article 25 GDPR and the uptake of Privacy Enhancing Technologies (PETs). Marit Hansen, State Data Protection Commissioner of Land Schleswig-Holstein; Jaap-Henk Hoepman, Professor, Radboud University Nijmegen/University of Groningen; Cameron Russell, Primary Privacy Advisor on Global Payments Matters at eBay; and Stefano Leucci, Legal and Technology Expert at the European Data Protection Supervisor, joined the panel. The speakers offered their perspectives on the enforcement of Article 25 GDPR, delving into topics such as the interrelation between dark patterns and by-default settings, the role of Article 25 GDPR in preventing harms from AI systems, and the maturity of PETs.

Photos: CPDP workshop on State-of-Play of Privacy Preserving Machine Learning (PPML), and CPDP Panel on the Enforcement of Data Protection by Design & Default: Consequences for the Uptake of Privacy-Enhancing Technologies, 5/25/2023



FPF’s Managing Director for Europe, Rob van Eijk, organized and facilitated a workshop exploring how to clear the path toward alternative solutions for processing (personal) data with machine learning. Four data scientists joined the workshop: Lindsay Carignan (Holistic AI), Nigel Kingsman (Holistic AI), Victor Ruehle (Microsoft Research), and Reza Shokri (National University of Singapore). The group introduced an easy-to-understand privacy auditing framework that quantitatively measures privacy risks in ML systems, while also exploring the relationship between bias and regulatory requirements in legislation such as the EU AI Act. You can watch the recording of the workshop here.
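We have not seen the workshop’s framework in detail, but a common quantitative privacy audit in the research literature is the loss-threshold membership-inference test: if a model’s losses on training members are clearly separable from its losses on non-members, the gap indicates memorization risk. The sketch below, with synthetic numbers, is our illustrative reconstruction rather than the panelists’ actual tool.

```python
import numpy as np

def membership_advantage(member_losses, nonmember_losses):
    """Best accuracy of a single loss threshold at separating members from
    non-members, rescaled to [0, 1]; values near 0 suggest little leakage."""
    losses = np.concatenate([member_losses, nonmember_losses])
    labels = np.concatenate([np.ones_like(member_losses),
                             np.zeros_like(nonmember_losses)])
    best = 0.0
    for t in np.unique(losses):
        acc = np.mean((losses <= t) == labels)  # members tend to have low loss
        best = max(best, acc)
    return 2 * best - 1  # advantage over random guessing

# Synthetic example: members' losses are drawn slightly lower on average.
rng = np.random.default_rng(0)
members = rng.normal(0.5, 0.2, 1000)
nonmembers = rng.normal(0.9, 0.2, 1000)
print(f"estimated advantage: {membership_advantage(members, nonmembers):.2f}")
```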

The same day, Rob also joined a panel on PETs, consumer protection, and the online ads ecosystem with Marek Steffen Jansen, Privacy Policy Lead – EMEA/Global at Google; Anthony Chavez, VP of Product Management at Google; Marie-Paule Benassi, lawyer, economist, data scientist, and Head of Enforcement of Consumer Law and Redress at the European Commission; Stefan Hanloser, VP Data Protection Law at ProSiebenSat.1 Media SE; and Christian Reimsbach-Kounatze, Information Economist and Policy Analyst at the OECD Directorate for Science, Technology and Innovation. You can watch the recording of the panel here.

May 26: Reflections on automation, compliance and data protection law

Finally, Gabriela participated in a day-long “philosopher’s seminar” on compliance and automation in data protection law, organized by CPDP, ALTEP-DP, and COHUBICOL under the leadership of Prof. Mireille Hildebrandt; the discussions will flow into a series of published research papers later in 2023.

Celebrating the fifth anniversary of the GDPR becoming applicable, at a pivotal moment of growth and change for emerging technologies, CPDP 2023 in Brussels gave the FPF team an extraordinary opportunity to engage with leading academics, technologists, policy experts, and regulators, and to facilitate collaborative dialogues among them.

New FPF Report: Unlocking Data Protection by Design and by Default: Lessons from the Enforcement of Article 25 GDPR

On May 17, the Future of Privacy Forum launched a new report on enforcement of the EU’s GDPR Data Protection by Design and by Default (DPbD&bD) obligations, which are outlined in GDPR Article 25. The Report draws from more than 92 data protection authority (DPA) cases, court rulings, and guidelines from 16 EEA member states, the UK, and the EDPB to provide an analysis of enforcement trends regarding Article 25. The identified cases cover a spectrum of personal data processing activities, from accessing online services and platforms, to tools for educational and employment contexts, to “emotion recognition” AI systems for customer support, and many more.

The Report aims to explore the effectiveness of the DPbD&bD obligations in practice, informed by how DPAs and courts have enforced Article 25. For instance, we analyze whether DPAs and courts find breaches of Article 25 without linking them to other infringements of the Regulation, and which provisions enforcers most often apply together with Article 25, including the general data protection principles and the data security requirements under Article 32. We also look at which controls and controller behaviors are, and are not, deemed sufficient to comply with Article 25.

The GDPR’s DPbD&bD provisions in Article 25 oblige controllers to: 1) adopt technical and organizational measures (TOMs) that, by design, implement data protection principles into data processing and protect the rights of individuals whose personal data is processed; and 2) ensure that only personal data necessary for each specific purpose is processed. Given the breadth of these obligations, it has been argued that Article 25 makes the GDPR “stick” by bridging the gap between its legal text and practical implementation. GDPR’s DPbD&bD obligations are seen as a tool to enhance accountability for data controllers, implement data protection effectively, and add emphasis to the proactive implementation of data protection safeguards.
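In code terms, the “by default” prong is often pictured as shipping a service with its most protective settings enabled and collecting only the data each purpose requires unless the user opts in. The sketch below uses illustrative field names of our own invention and is not drawn from the Report.

```python
from dataclasses import dataclass, field

@dataclass
class PrivacySettings:
    # Defaults are the protective option ("by default").
    analytics_tracking: bool = False
    ad_personalization: bool = False
    public_profile: bool = False

@dataclass
class SignupForm:
    email: str  # necessary for the account-creation purpose
    settings: PrivacySettings = field(default_factory=PrivacySettings)

    def collected_fields(self) -> dict:
        """Collect only what the enabled purposes require (data minimization)."""
        data = {"email": self.email}
        if self.settings.analytics_tracking:  # opt-in unlocks extra collection
            data["device_id"] = "collected only after opt-in"
        return data

user = SignupForm(email="alice@example.com")
print(user.collected_fields())  # {'email': 'alice@example.com'}
```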

Our analysis of the enforcement, and ultimately the effectiveness, of Article 25 is all the more important given the increasing development and deployment of novel technologies involving very complex personal data processing, like generative AI, and rising data protection concerns. Understanding how Article 25 obligations manifest in practice, and what DPbD&bD requires, may prove essential for the next technological age.

This Report outlines and explores the key elements of GDPR Article 25.

Additionally, we analyze the individual concepts of “by Design” and “by Default,” identify divergent enforcement trends, and explore three common applications of Article 25 (direct marketing, privacy preservation and Privacy Enhancing Technologies (PETs), and EdTech). This Report also includes a number of Annexes that seek to provide more information on the specific cases analyzed and a comparative overview of DPA enforcement actions. 

Our analysis determines that European DPAs diverge in how they interpret the preventive nature of Article 25 GDPR. Some are reluctant to find violations in cases of isolated incidents or where Article 5 GDPR principles are not violated, while others apply Article 25 preventively, before further GDPR breaches occur or even to merely planned data processing. Our research also finds that most DPAs are reluctant to specify appropriate protective measures and to explicitly outline the role of PETs. Ultimately, the Report shows that despite the novelty of Article 25, and the criticism surrounding its vague and abstract wording, it is a frequent source of some of the highest GDPR fines, highlighting the need for organizations to maintain a firm grasp of the concepts of DPbD&bD.