Insights from the Second Japan Privacy Symposium: Global Data Protection Authorities Discuss Their 2025 Priorities, from AI to Cross-Regulatory Collaboration

The Future of Privacy Forum (FPF) hosted the Second Japan Privacy Symposium (Symposium) in Tokyo on November 15, 2024. The Symposium brought together leading data protection authorities (DPAs) from around the world to address pressing issues in privacy and data governance, with in-depth discussions on international collaboration, artificial intelligence (AI) governance, and the evolving landscape of data protection laws.

The Symposium kickstarted the Personal Information Protection Commission of Japan’s (PPC) Japan Privacy Week, and was an official side-event of the 62nd Asia-Pacific Privacy Authorities (APPA) Forum (APPA 62). FPF is grateful for the collaboration and support from the PPC, the Japan DPO Association, and S&K Brussels LPC.

In this blog post, we share some of the key takeaways from the Symposium. 

Japan Privacy Symposium features global privacy regulators in Tokyo

The Symposium welcomed an esteemed line-up of speakers. Commissioner Shuhei Oshima from the PPC delivered the opening keynote, in which he shared the PPC’s regulatory priorities for 2025. These included cross-border data transfers and the Data Free Flow with Trust initiative, as well as further collaboration with the G7 DPAs and bilaterally with various international regulators.

Following the keynote, Gabriela Zanfir-Fortuna, Vice-President for Global Privacy at FPF, moderated a panel on the regulatory strategies of APAC and global DPAs in 2024 and beyond. Gabriela was joined by Philippe Dufresne, Privacy Commissioner of Canada, Office of the Privacy Commissioner of Canada (OPC); Ashkan Soltani, Executive Director of the California Privacy Protection Agency (CPPA); Dr. Nazri Kama, Commissioner, Personal Data Protection Department of Malaysia (PDPD); Thienchai Na Nakorn, Chairman, Personal Data Protection Committee of Thailand (PDPC); and Josh Lee Kok Thong, Managing Director for Asia-Pacific at FPF.

Regulators in APAC have some common priorities, such as cybersecurity and cross-border data transfers

The panel kicked off with highlights from a recent report published by FPF’s APAC office, “Regulatory Strategies of Data Protection Authorities in the Asia-Pacific Region: 2024, and Beyond”, presented by Josh. In line with similar FPF work focusing on the EU, Latin America and Africa, the report provides a comprehensive analysis of strategy documents and key regulatory actions of DPAs in 10 major jurisdictions in Asia-Pacific, as well as an overview of key trends in the region.

There are three top common priorities for APAC’s major DPAs:

  1. First, cybersecurity and data breach responses, with 90% of the DPAs included in the report prioritising this. However, jurisdictions are at various stages of implementing measures in these areas, and enforcement approaches also differ significantly.
  2. Second, cross-border data transfers, which are a priority for 80% of APAC DPAs. Jurisdictions are similarly taking a diversity of approaches, from taking a leading role in international initiatives, such as the Global Cross-Border Privacy Rules (CBPR) System (for instance, Japan and Singapore), to promoting the use of standardized contractual clauses (for instance, China, Japan and Singapore).
  3. Third, AI governance, with 70% of regulators prioritising this. Some have developed comprehensive policy frameworks and regulations for AI, while others have focused on issuing guidelines or addressing AI within existing regulatory structures.

Cross-regulatory and cross-border collaboration is a shared priority for regulators in APAC and beyond

During the panel discussion, one top regulatory priority that surfaced was cross-border collaboration. Commissioner Dufresne emphasized the importance of international cooperation in addressing privacy challenges. “At the OPC, we will continue to be focused on topics such as international collaboration,” he noted. Commissioner Dufresne discussed the OPC’s efforts to collaborate with domestic and international partners, including other regulators in fields such as competition, copyright, broadcasting, telecommunications, cybersecurity, and national security. “Data protection is key to so many of those things,” Commissioner Dufresne said. “It touches other regulators, so working very closely is something we’ve been discussing, including at the G7.”

Expanding regional and international collaboration was similarly a key priority for Malaysia. Commissioner Nazri noted that Malaysia’s PDPD had visited fellow regulators in the UK, EU, Japan, South Korea and Singapore. The PDPD had also just joined the APPA Forum, as well as the APEC Cross-Border Privacy Enforcement Arrangement (CPEA). Going forward, Commissioner Nazri noted that the PDPD would be “moving towards” applying for the Global Cross-Border Privacy Rules (CBPR) certification system. The PDPD is also taking steps towards meeting the EU’s adequacy requirements, with Commissioner Nazri expressing hope that Malaysia would attain EU adequacy “in the next two years.”

Similarly, Chairman Thienchai from Thailand’s PDPC noted that it had sent delegations to attend Global CBPR workshops, and that the PDPC could also be applying to be a member of the Global CBPR system soon. 

Regulators are balancing AI innovation and risk while managing an ever-growing pool of AI-related issues

AI remains a top concern for regulators worldwide. Commissioner Dufresne stated that ensuring the protection of privacy in the context of emerging and changing technology is a key priority for the OPC. “Certainly, generative AI and other emerging technologies like quantum computing and neurorights are changing the landscape,” he said. “We need to use innovation to protect data.”

He emphasized the importance of leveraging technology to protect privacy, noting that AI can be used as a tool against threats like deepfakes. The OPC is also looking to work with cross-regulatory partners to address issues such as synthetic media. “We’re looking to work with cross-regulatory partners in identifying specific areas and seeing what are the common areas or perhaps different areas of privacy and competition with a specific topic like synthetic media,” he explained.

California’s CPPA has also been at the forefront of rule-making and enforcement actions pertaining to AI and automated decision-making challenges. In this regard, Director Soltani observed that “there is no AI without PI (personal information).” The CPPA has thus had to develop deep expertise in AI while acting as California’s privacy regulator. Besides focusing on rule-making, the CPPA has been conducting enforcement sweeps in various sectors, starting with the connected vehicle sector. 

The task of applying data protection laws to AI and issuing relevant industry guidance is also one that Thailand’s PDPC is working on. Chairman Thienchai noted that the PDPC had “established a working group study” on how AI is impacting the protection mechanism under Thailand’s Personal Data Protection Act, with results expected in the first quarter of 2025. Thailand’s PDPC is also working to issue guidelines on the intersection of AI and the PDPA. The guidelines could state, for instance, that in using personal data to train AI systems, developers have to do so on an anonymised basis. 

Regulators continue to work on implementing updates to data protection laws to deal with new and emerging challenges

A third theme that emerged from the panel discussion was how regulators were planning to continue working on updates to their data protection laws – and implementing them – in 2025.

For California’s CPPA, Director Soltani highlighted that his agency was deeply engaged in rule-making, especially in these areas: (a) cybersecurity, where companies in California will be required to perform and submit cybersecurity assessments to the CPPA; (b) data protection impact assessments or risk assessments, where companies will be required to perform such assessments including where they deploy AI tools; and (c) automated decision-making technologies and AI. Director Soltani also highlighted the ongoing work of implementing aspects of the California Consumer Privacy Act (CCPA). For instance, with the CPPA’s Data Broker Registry, the CPPA is working on setting up a one-stop shop by January 2026, where Californians will “have the ability to go to one place and request that all of their data be deleted from all of these companies.”

For Malaysia, Commissioner Nazri provided an update on recent amendments to Malaysia’s Personal Data Protection Act (PDPA) that were passed in late 2024. “The amendment was presented to our national parliament in July this year and was officially approved on July 31,” he noted. Commissioner Nazri highlighted several key changes to the PDPA.

Commissioner Nazri also noted that the PDPD would be issuing 19 new documents in tranches throughout 2025. Specifically, these were nine pieces of subsidiary legislation, two circulars (or Commissioner’s Orders), seven guidelines, and one standard. Commissioner Nazri further shared that work was ongoing to re-formulate the PDPD into an independent Commissioner’s Office. 

For Thailand, Chairman Thienchai noted that while the country passed its PDPA in 2021, the law contains a review requirement to update it if necessary. The PDPC would thus be working in 2025 to introduce a proposal to amend the PDPA “to catch up with the global community.” Further, Chairman Thienchai acknowledged challenges with data breaches, especially in the public sector, and emphasized the need for coordination among agencies. “We have to coordinate with other agencies to improve the enforcement mechanism in the PDPA,” he said.

Finally, the PDPC is prioritizing cross-border data transfers. “We issued some subordinate laws related to cross-border transfers and we adopted ASEAN Model Contractual Clauses (MCCs) and also EU Standard Contractual Clauses (SCCs) in our subordinate laws,” Chairman Thienchai explained, concluding with an update that his office is “promoting ASEAN MCCs with the Thai Chamber of Commerce.”

Conclusion

The second edition of the Japan Privacy Symposium showcased the shared challenges and priorities among global data protection authorities. From AI governance to cross-regulatory collaboration and legal reforms, the Symposium highlighted the need for continued dialogue, cooperation, and information-sharing. 

Following the Symposium, FPF was also honored and privileged to have been invited to participate in speaking opportunities during the closed and public sessions of APPA 62. In particular, Gabriela moderated a session on AI governance and regulation, while Josh spoke on a panel on balancing innovation and data protection.

FPF remains committed to facilitating these important conversations and advancing the discourse on privacy and emerging technologies globally.

Five Big Questions (and Zero Predictions) for the U.S. State Privacy Landscape in 2025

In the enduring absence of a comprehensive national framework governing the collection, use, and transfer of personal data, state-level activity on privacy legislation has been on a consistent upward trend since the enactment of the California Consumer Privacy Act in 2018. With all 50 U.S. states scheduled to be in session in 2025, stakeholders are anticipating yet another year of expansion and divergence across the state privacy landscape. It is still too early to predict which states will adopt or amend their privacy laws, so this article instead explores the five big questions set to shape American privacy law in the coming year.

  1. Will a new consensus emerge on data minimization (and would it change anything)? 

State privacy laws have traditionally incorporated the principle of data minimization by prohibiting data processing beyond what is reasonably necessary to accomplish the purposes that are disclosed to a user. Consumer advocates have long objected to this approach, arguing that it incentivizes companies to bury broad disclosures in dense privacy notices, resulting in little, if any, heightened protection. This year, in a potentially paradigm-shifting move, the Maryland Online Data Privacy Act became the first state comprehensive law to attempt to depart from the typical data minimization standard by placing new limits on the collection and use of data tied to the activities necessary to provide a specific product or service requested by a consumer. My colleague Jordan Francis has called the classic approach “procedural data minimization” and the Maryland approach “substantive data minimization.” 

While Maryland is the only comprehensive state privacy law to adopt a substantive data minimization approach, proposals in Vermont and Maine that came close to enactment this year contained similar language. Heightened data minimization provisions are also elements of recent sectoral laws including the Washington State My Health My Data Act, the New York Child Data Protection Act, and the Virginia Child Data Privacy Amendment. Taken together, these frameworks portend a new trend toward substantive data minimization standards; however, their statutory requirements vary in subtle but consequential ways. Distinctions include whether new minimization standards (1) apply to the collection or the processing of data, (2) limit data processing to what is “reasonably” necessary, “strictly” necessary, or just plain “necessary” to provide a requested product or service, and (3) are subordinate to other bases for using personal data, such as a list of “permissible purposes” (e.g. protecting data security) or if consistent with consumer consent. 

The emergence of substantive data minimization requirements in state privacy laws represents an attempt to depart from the much-maligned “notice and consent” approach to consumer privacy law. However, the ultimate impact of these emerging standards is not yet clear; it will largely be shaped by how these “necessity” data minimization standards are interpreted, implemented, and enforced, and many questions about them remain unanswered.

  2. Will data brokers face renewed scrutiny?

Perhaps the biggest surprise of the 2024 cycle has been the lack of legislative activity directly focused on the information collection and sharing practices of data brokers. The third-party collection and sale of sensitive data, including health and location information, has been the subject of several high-profile media investigations and is increasingly cited as a potential threat to national security. However, this year no new states passed data broker-specific privacy laws, and very few such bills were even introduced.

The scarcity of state-level attention is even more noticeable when considering national efforts. New restrictions and enforcement concerning the brokering of personal information were among the few privacy topics on which federal policymakers were particularly active this year. Examples include the Biden Administration’s Executive Order 13873, the Protecting Americans’ Data from Foreign Adversaries Act, and the Federal Trade Commission’s litigation against Kochava and settlements with Gravy Analytics and Mobilewalla.

However, privacy legislation constraining the activities of data brokers could be set for a comeback in 2025 and lawmakers have a number of options they could pursue. In November, a coalition of data brokers decisively lost a bid to strike down New Jersey’s Daniel’s Law, which empowers certain government employees to request the removal of personal information from public websites. Furthermore, data broker registry laws are now in effect – and increasingly being enforced – in California, Texas, Oregon, and Vermont. California is also attracting attention for its efforts to build a “one stop shop” accessible deletion mechanism intended to allow individuals to request the deletion of their personal information across the entire data broker ecosystem.

On the other hand, it is possible that lawmakers will instead choose to address concerns about the data broker industry through more comprehensive regulatory approaches. There are inherent challenges to singling out a particular industry or practice for regulation that often raise complicated line drawing issues. A possible template for such a broader approach may be the aforementioned Maryland Online Data Privacy Act, which contains a unique standalone restriction on the sale of sensitive personal data.

  3. Which laws will be subject to legal challenges?

Several recent state privacy laws have been met with constitutional challenges, often concerning their intersection with First Amendment-protected activity and their impact on interstate commerce. To date, the most common litigation (and industry success in seeking injunctions) has involved laws requiring social media companies to conduct age verification and limit features or access for certain child users. At the same time, lawmakers have continued to iterate on these proposals in search of a framework that can reliably withstand constitutional scrutiny; notably, industry has only secured a partial injunction of the Texas SCOPE Act.

Looking ahead to 2025, legislative experimentation and industry litigation concerning children’s online safety and privacy laws are likely to continue apace. However, the tenor of these challenges may evolve following the Supreme Court’s decision in Moody v. NetChoice. While that case involved state laws regulating the content moderation practices of social media companies, several Justices expressed disapproval of how the case was brought as a “facial challenge” prior to enforcement, which may shift litigation strategies to focus on “as applied” challenges.

Stakeholders should also pay close attention to privacy laws in California and Maryland. In California, industry groups have already raised concerns that recent California Privacy Protection Agency rulemaking activity – on both data brokers and automated decisionmaking technology opt-outs – exceeds the bounds of the Agency’s statutory authority and violates the California Administrative Procedure Act. Separately, while the Maryland Age Appropriate Design Code was drafted to remove any direct requirements to moderate content, the law’s risk assessment requirements may still contain “proxies for content” that the Ninth Circuit found likely to violate the First Amendment in California’s version of the law.

  4. How will lawmakers approach artificial intelligence?

Opportunities, risks, and hype surrounding advancements in artificial intelligence (AI) technologies have impacted every domain in tech policy, and data privacy is no exception. In fact, privacy rules may emerge as one of the more successful levers for governing AI. For example, existing technology-neutral privacy laws will already apply to AI systems to the extent that they collect, process and output personal information. In particular, transparency, security, risk assessment, and consumer choice requirements under existing laws are poised to have significant influence on the development and use of new AI tools.

It is also important to recognize that AI is not a single technology, but can encompass a range of systems, some of which have been with us for decades (such as facial recognition technology) and some of which are still emerging (such as general purpose ‘foundation’ models). Lawmakers therefore have an array of approaches available to address AI safety, transparency, and fairness. For example, they could comprehensively regulate a broad range of technologies and harms, which is the approach taken by the draft Texas Responsible AI Governance Act. They may also seek to regulate a particular AI technology or use case, such as “deepfakes” in political advertisements. Finally, lawmakers could also bake new AI-specific requirements into comprehensive privacy laws, as Minnesota did this year by creating a new right to contest the result of significant profiling decisions.

President-elect Donald Trump’s promise to repeal the Biden Administration’s AI Executive Order and incoming FTC Chair Ferguson’s leaked agenda to “terminate all initiatives involving so called… AI ‘bias’” could also inspire state lawmakers to focus on the use of AI systems in a manner that results in unlawful discrimination. This was the focus of the Colorado AI Act, which was enacted this year and may serve as a template for similar state-level efforts. However, efforts to establish a harmonized state-level approach to regulating discriminatory outcomes in high-risk systems may be complicated: Colorado’s path-setting law is likely to be further shaped by amendments and rulemaking prior to taking effect.

  5. How will the new administration and Congress impact the state privacy landscape?

Next year, President-elect Trump will enjoy narrow but meaningful majorities in both chambers of Congress. The Republican Party has historically supported the enactment of broadly preemptive privacy legislation, raising the possibility – however faint – that some or all of the emerging state privacy ‘patchwork’ could be superseded by new federal legislation. Business groups may see a window of opportunity to advocate for a broadly preemptive national privacy framework modeled on existing state laws like the Texas Data Privacy and Security Act. However, at present there is little to suggest that preemptive comprehensive privacy legislation will be a top priority for Republican lawmakers during the next Congress, though bipartisan movement on child-specific online safety legislation appears more likely.

The November election results will influence not only the legislative agenda in Washington D.C., but also legislative activity in the states. Democratic governors and attorneys general are already discussing legislative and legal strategies to attempt to minimize or block various priorities of the Trump agenda. Concerns about the incoming administration’s approach to issues like immigration, law enforcement, and health care may be a motivating factor for commercial privacy legislation in Democrat-controlled states. For example, in a potential sign of things to come, Democratic Senators in Michigan rapidly sought to establish new protections for “reproductive health data” during the State’s ‘lame duck’ session immediately following the November election. 

Outside of a few notable examples, recent state privacy laws have typically been enacted on an overwhelmingly bipartisan basis. However, this pattern could shift next year should commercial privacy become increasingly intertwined with other, more polarized issues. Therefore, while 2025 is likely to be as active as ever for legislative activity, this dynamic could ultimately reduce the number of bills that are enacted compared to prior years.

Do you have the answers to these questions or are you brave enough to make your own predictions? Email the author of this post at [email protected] 

In a Landmark Judgment, The Inter-American Court of Human Rights Recognized an Autonomous Right to Informational Self-Determination

The following is a guest post to the FPF blog by Jonathan Mendoza Iserte, Secretary of Personal Data Protection at Mexico’s Instituto Nacional de Transparencia y Acceso a la Información y Protección de Datos Personales (INAI), and Nelson Remolina Angarita, Professor at the Faculty of Law, Universidad de los Andes, (Colombia). The guest blog reflects the opinion of the authors only. Guest blog posts do not necessarily reflect the views of FPF.

The right to “informational self-determination” has recently emerged as an autonomous fundamental right within the Inter-American legal sphere, following a landmark ruling by the Inter-American Court of Human Rights (IACHR) in the case Members of the José Alvear Restrepo Lawyers’ Collective vs. Colombia, issued on October 18th, 2023. Its protection is essential for the exercise of other fundamental rights, such as the right to privacy, reputation, the right to defense, and the right to security within the Inter-American system of fundamental rights. The case was brought to the attention of the IACHR on July 8, 2020, by the Inter-American Commission on Human Rights and it highlights the obligation of States to protect the right to informational self-determination against practices of surveillance, harassment, and the collection of personal information by state agencies. The Court examined the allegations related to the intelligence activities carried out by the Colombian State against members of the José Alvear Restrepo Lawyers’ Collective (CAJAR), an organization dedicated to the defense of human rights in Colombia, which resulted in threats, intimidation, and a climate of insecurity that forced several of its members into exile.

The facts of the case concern events that began in the 1990s. It has been alleged that during intelligence operations, information was collected about members of CAJAR and that this information was misused, including being handed over to illegal armed groups. It was noted that the victims “did not have access to an effective remedy to address their claims related to accessing the intelligence database” of the State.

Although the ruling covers a wide range of human rights issues, in this piece we will focus solely on matters related to data protection or informational self-determination. The purpose of this analysis is to examine the most relevant aspects of the case regarding the right to personal data protection, exploring its development and recognition as an autonomous human right that must be respected and upheld within the Inter-American human rights system. Specifically, it will address how the Inter-American Court has integrated this right into the framework of state obligations, and how its violation affects not only the privacy of individuals but also their ability to exercise other fundamental rights.

1. Importance of the CAJAR Ruling regarding personal data processing in the Inter-American human rights system 

With the landmark CAJAR ruling, the IACHR for the first time expressly recognized informational self-determination as an autonomous human right that must be respected and upheld within the Inter-American human rights system. Indeed, in its judgment Series C No. 506 of October 18, 2023, the IACHR concluded:

“586. In the view of the Inter-American Court, the aforementioned elements give shape to an autonomous human right: the right to informational self-determination, recognized in various legal systems in the region, and which finds its basis in the protective content of the American Convention, particularly in the rights enshrined in Articles 11 and 13, and, in terms of its judicial protection, in the right guaranteed by Article 25. (…)

588. Ultimately, it is an autonomous right that, in turn, serves as a guarantee for other rights, such as those concerning privacy, the protection of honor, the safeguarding of reputation, and, in general, human dignity. It is worth noting that this right extends, with the applicable limitations (see paras. 601 to 608 below), to any personal data held by any public body, and it similarly applies to records or databases managed by private entities, issues that are not addressed in detail due to the scope of this international case.” (Emphasis added)

This is a ruling of great significance within the Inter-American human rights system because it imposes obligations on States and opens the door for the right to be upheld by international courts of justice.

The Inter-American Human Rights System (IAHRS) is based on the American Convention on Human Rights (ACHR), where States voluntarily commit to respecting and guaranteeing the rights established in the treaty, including the right to informational self-determination. This right encompasses the ability to access and control personal data held in public records. In this context, as noted in the CAJAR ruling, the state’s actions constituted a violation of this right, prompting the Court to issue binding rulings that may require reparations, legislative reforms, or other measures to remedy and prevent future violations.

While the IACHR does not have enforcement powers comparable to those of national courts, its rulings are based on the principle of state consent under international law and are reinforced through mechanisms such as diplomatic pressure, reputational accountability, and domestic implementation. States are expected to integrate these rulings into their legal systems, and non-compliance may lead to international scrutiny.

Adopting mechanisms to guarantee this right in practice (not just on paper or in theory) is one of the obligations States must fulfill, as emphasized by the IACHR:

“599. In any case, the Inter-American Court reiterates that the effectiveness of the right to informational self-determination requires States to provide adequate, swift, free, and effective mechanisms or procedures to process and address requests, either by the same authority managing the data or by another competent institution in matters of personal data protection or oversight (see para. 582). (…) This requirement, derived from the obligation established in Article 2 of the American Convention, which encompasses the issuance of regulations and the development of practices conducive to the observance of human rights, including appropriate administrative procedures, constitutes an essential guarantee for asserting and exercising this right.” (Emphasis added)

In the operative part of the ruling, the IACHR decided, among other things, the following:

“13. The State is internationally responsible for the violation of the right to informational self-determination, recognized in Articles 11.2 and 13.1 of the American Convention on Human Rights, in relation to the obligations to respect and guarantee rights, and to adopt domestic legal provisions as established by Articles 1.1 and 2 of the same international instrument.” Specifically, the IACHR declared the violation of the right to informational self-determination because the victims of arbitrary intelligence activities were not guaranteed “access to the data that the intelligence agencies had collected about them. Furthermore, such access was hindered due to the limited progress in purging the archives of the now-defunct DAS” (paragraph 1011).

Given the above, the IACHR ordered a purge of the archives of the defunct Administrative Department of Security (DAS) to ensure that victims can access their information and exercise the eventual correction, cancellation, or deletion of data held in the archives (paragraph 1011). Additionally, the IACHR demands that, during the purging of the archives, “authorities must ensure the protection of sensitive data contained in the archives regarding which public access may eventually be granted” (paragraph 1013).

Moreover, the IACHR ordered that:

“36. The State shall proceed with the approval of the necessary regulations to implement reasonable, swift, simple, free, and effective mechanisms or procedures that allow individuals to access and control the data held on them in intelligence archives, in accordance with the scope of the right to informational self-determination, as detailed in paragraphs 1059 and 1060 of this Judgment.”

This order vindicates an essential aspect of the right to data protection, which not only includes access to the data but also the existence of effective mechanisms to that end. This means that it is not enough to create formal or theoretical tools, but rather useful and timely tools to ensure that rights are realized or guaranteed in practice.

The IACHR’s decision has been compared to the 1983 ruling of the Federal Constitutional Court of Germany on the law regarding the population, profession, and workplace census (Census Law), which highlighted the importance and scope of the right to “informational self-determination” and outlined the factual, legal, and administrative conditions that should govern the collection and processing of personal data through population censuses.

The right to informational self-determination encompasses the trilogy made up of the person, their personal data, and their constitutional rights. It represents an essential right that is gaining increasing relevance in the face of the growing use of information about individuals, and it is realized in the ability of individuals to decide when and within what limits personal matters are made public, as well as in controlling what happens to their personal data. The ruling points out that the current and future conditions of data processing endanger self-determination because technologies make it easier to: 

(1) Archive personal data indefinitely; 

(2) Integrate that information with data from other databases anywhere in the world; 

(3) Review or consult personal data in seconds. 

Added to this is the individual’s inability or difficulty in controlling both the use of their personal data and the quality of the information about them.

As with other rights, informational self-determination is not guaranteed without limits. The ruling clarifies that “the individual does not have unlimited or absolute dominion over their data.” The prevalence of the public interest justifies the imposition of certain restrictions to live in society. For those limitations to be valid and legitimate, they must be based on a legal or constitutional mandate.

2. The right to informational self-determination as a cornerstone of democratic regimes in Latin America

The ruling of the IACHR in the CAJAR case not only represents a milestone in recognizing informational self-determination as an autonomous human right but also presents an urgent challenge for Latin American states regarding the protection of fundamental rights in the digital environment. In a region still facing deep inequalities, conflicts, and institutional fragility, the protection of personal data and privacy is not only essential to safeguarding individual rights, but also to strengthening the democratic regime upon which human rights are based.

A solid democratic regime depends on transparency, accountability, and the unrestricted respect for citizens’ rights, where the right to informational self-determination plays a vital role. Undue state surveillance, mass data collection without control, and information leaks, as evidenced in the CAJAR case, are practices that undermine public trust in institutions and create an environment of insecurity and harassment, especially for those who defend human rights or criticize power. Therefore, protecting personal information becomes a fundamental guarantee for free citizen participation without fear of reprisals.

At the regional level, Latin American countries need to strengthen their legal frameworks to protect personal data and ensure that informational self-determination is respected in practice, not just on paper. In this sense, a key recommendation is that states adopt robust data protection laws aligned with international standards, such as the Council of Europe’s Convention 108 for the Protection of Individuals with regard to Automatic Processing of Personal Data and its additional protocol on supervisory authorities and transborder data flows; the Ibero-American Data Protection Standards of the Ibero-American Data Protection Network, and the updated Principles on Privacy and Personal Data Protection of the Organization of American States (OAS), which can serve as a model. 

These laws must establish clear and effective mechanisms for citizens to access, rectify, and delete their data, and these mechanisms must be agile, free, and accessible to all sectors of the population, particularly the most vulnerable. Additionally, it is essential to have independent data protection authorities equipped with sufficient resources to oversee compliance with regulations and with sanctioning powers.

In addition to strengthening legal frameworks, it is imperative that Latin American countries develop secure technologies and platforms that enable accountable data processing. The use of encryption and other Privacy Enhancing Technologies, regular security audits, and the responsible purging of databases are fundamental steps to ensure that sensitive information is protected from unauthorized access. In the case of the now-defunct DAS in Colombia, the IACHR ruling ordered the purging of intelligence files, highlighting the need for states to implement effective protocols to guarantee the deletion or rectification of obsolete personal data or data collected arbitrarily without specific purposes.

Strengthening the democratic regime in the region means recognizing that the protection of personal data and the right to privacy are not privileges, but fundamental pillars for the defense of all human rights. Respect for informational self-determination not only protects citizens from abuses of power but also fosters trust in democratic institutions, creating a more transparent, secure, and participatory environment.

The construction of a strong democracy in Latin America necessarily involves a robust defense of digital rights, where informational self-determination and data protection are unrestricted guarantees for all citizens. As Yuval Noah Harari points out, “It is not enough for a democratic government to refrain from infringing on human and civil rights. It must take steps to guarantee them.”

  1.  See Inter-American Court of Human Rights Judgment of October 18, 2023. Series C No. 506. The official text of the judgment can be consulted at: https://jurisprudencia.corteidh.or.cr/vid/953775991. ↩︎
  2. See Inter-American Court of Human Rights Judgment of October 18, 2023. Series C No. 506. The official text of the judgment can be consulted at: https://jurisprudencia.corteidh.or.cr/vid/953775991. ↩︎
  3. See Inter-American Court of Human Rights Judgment of October 18, 2023. Series C No. 506. The official text of the judgment can be consulted at: https://jurisprudencia.corteidh.or.cr/vid/953775991. ↩︎
  4. The operative part of the judgment states the following: “23. The State shall proceed with the purification of intelligence files in order to guarantee the victims’ right to informational self-determination regarding the data concerning them in such files, in the terms of paragraphs 1011 to 1014 of this Judgment.” ↩︎

Brussels Privacy Symposium Report 2024

This year’s Brussels Privacy Symposium, held on 8 October 2024, convened global stakeholders from across Europe and beyond for in-depth discussions on the EU AI Act in the context of the broader EU digital ecosystem. Co-organized by the Future of Privacy Forum and the Brussels Privacy Hub of the Vrije Universiteit Brussel, the eighth edition of the Symposium was a melting pot of brilliant minds from across academia, regulatory authorities and policymakers, industry, and civil society. 

In addition to three expert panels exploring notions of risk and impact assessments across the EU digital rulebook, prohibitions and obligations for sensitive data processing, and an increasingly complex enforcement landscape, the organizers also welcomed Mark Scott, Senior Resident Fellow at the Atlantic Council’s Digital Forensic Research Lab, for the Opening Keynote. With previous roles as chief technology correspondent for Politico and more than a decade as a correspondent for the New York Times, Scott provided a thorough and frank analysis of Europe’s “digital challenge” as the focus shifts from rulemaking to enforcement. 

For this year’s program, Professor Adriana Iamnitchi, Chair of Computational Social Sciences at Maastricht University, presented research findings from a cutting-edge project analyzing search trends and patterns on prominent social media platforms to identify mis/disinformation. And finally, European Data Protection Supervisor Wojciech Wiewiórowski and Professor Gloria González-Fuster of the Vrije Universiteit Brussel sat together for a candid closing dialogue on the future of data protection.

In the Report of the Brussels Privacy Symposium 2024, you can read the key takeaways from the highlights mentioned above, along with many more practical and actionable insights on the complex interplay between the different elements of the EU data strategy architecture. 

Future of Privacy Forum Publishes Report Exploring Organizations’ Emerging Practices and Challenges Assessing AI Risks

As AI models and systems become more widespread and powerful, FPF’s report finds many organizations are taking a four-step approach to managing potential risks

With growing focus from policymakers and regulators on the impact of artificial intelligence (AI) systems, and as organizations strive to responsibly use AI systems, organizations are increasingly embracing AI impact assessments to assess risks and take steps to minimize them. In response to the growing use—and uncertainty around—AI impact assessments, the Future of Privacy Forum (FPF) Center for Artificial Intelligence published a new report, “AI Governance Behind the Scenes: Emerging Practices For AI Impact Assessments” to examine the considerations, emerging practices, and challenges that companies are experiencing as they endeavor to harness AI’s potential while mitigating potential harms.

“Companies are embedding AI into their systems for a variety of uses from research to enterprise and entertainment, though questions remain around how to implement AI models in a responsible, ethical manner,” said Daniel Berrick, FPF’s Counsel for Artificial Intelligence and the report’s author. “This report underscores that much more work needs to be done to ensure that companies can operationalize AI impact assessments, identify risks, and implement robust risk management practices. We hope this resource, built from conversations with a range of stakeholders, can serve as a resource for those evaluating how to deploy emerging technologies responsibly.” 

Though recent years have witnessed a growing number of laws and resources on AI governance, many organizations remain uncertain about what AI impact assessments entail or which framework to use. In light of this emerging dynamic, FPF surveyed over 60 private sector stakeholders to gain insight into what common approaches companies are employing and the challenges they face when conducting AI impact assessments. FPF found that companies are converging on several practices for conducting AI impact assessments, such as accounting for both intended and unintended uses of AI models and systems. However, practitioners continue to face several challenges at different points in the assessment process. 

FPF found:

Organizations seeking to enhance their AI Impact Assessments should consider:

“FPF’s Center for Artificial Intelligence was created to act as a collaborative force for shared knowledge between stakeholders and support the responsible development of AI. The Center’s report addresses key knowledge gaps and promotes collaboration,” said John Verdi, Senior Vice President for Policy at FPF. “FPF’s report was created with input from dozens of expert stakeholders, and it is the culmination of six months of convenings, interviews and workshops aimed at describing the state of play.”

The report dives deeper into the trends and challenges companies face at each step when conducting AI impact assessments and the circumstances that trigger them. To learn more, read the new report here.

###

About Future of Privacy Forum (FPF)

The Future of Privacy Forum (FPF) is a global non-profit organization that brings together academics, civil society, government officials, and industry to evaluate the societal, policy, and legal implications of data use, identify the risks, and develop appropriate protections. 

FPF believes technology and data can benefit society and improve lives if the right laws, policies, and rules are in place. FPF has offices in Washington D.C., Brussels, Singapore, and Tel Aviv. Learn more at fpf.org.

Reach out to [email protected] with any questions. 

Technologist Roundtable: Key Issues in AI and Data Protection Post-Event Summary and Takeaways

Co-Authored with Marlene Smith, Research Assistant for AI

On November 27, 2024, the Future of Privacy Forum (FPF) hosted a Technologist Roundtable with the goal of convening an open dialogue on complex technical questions that impact law and policy, and assisting global data protection and privacy policymakers in understanding the relevant technical basics of large language models (LLMs). We invited a wide range of academic technical experts to convene with each other and data protection regulators and policymakers from around the world.

We were joined by the following experts:

As a result of the emergence of LLMs, data protection authorities and lawmakers are exploring a range of novel data protection issues, including how to ensure lawful processing of personal data in LLMs, and how to comply with obligations such as data deletion and correction requests. While LLMs can process personal data at different stages,[1] including in training and in the input and output of models, there is an emerging question of the extent to which personal data exists “within” a model itself.[2] Navigating these complex emerging issues increasingly requires understanding the technical building blocks of LLMs.

This post-event summary contains highlights and key discussion takeaways from three parts of the Roundtable on 27 November, regarding the following:

  1. Basics of Transformer Technology and Tokenization 
  2. Training and Data Minimization 
  3. Memorization, Filters, and “Un-Learning”
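As a rough illustration of the tokenization step covered in the first topic above (our own sketch, not material from the Roundtable), the toy tokenizer below shows how text is mapped to integer token IDs before a model ever processes it. Real LLM tokenizers use sub-word schemes such as byte-pair encoding rather than whitespace splitting, and all names here are hypothetical.

```python
def build_vocab(corpus: list[str]) -> dict[str, int]:
    """Assign an integer ID to every distinct whitespace-separated token."""
    vocab: dict[str, int] = {}
    for text in corpus:
        for token in text.lower().split():
            vocab.setdefault(token, len(vocab))
    return vocab

def encode(text: str, vocab: dict[str, int]) -> list[int]:
    """Map text to token IDs; unseen tokens fall back to a reserved ID (-1)."""
    return [vocab.get(tok, -1) for tok in text.lower().split()]

corpus = ["data protection matters", "protection of personal data"]
vocab = build_vocab(corpus)
ids = encode("personal data protection", vocab)  # a list of integer IDs
```

The point relevant to the policy discussion is that a model operates on these integer sequences, not on names or identifiers as such, which is one reason the question of where personal data resides “within” a model is technically subtle.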

We hope that this document supports ongoing efforts to explore and understand a range of novel issues at the intersection of data protection and artificial intelligence models.

If you have any questions, comments, or wish to discuss any of the topics related to the Roundtable and Post-Event Summary, please do not hesitate to reach out to FPF’s Center for AI at [email protected].

Commissioners Discussed Global Privacy Regulations at the Second Japan Privacy Symposium

November 26, 2024 — This week, the Future of Privacy Forum (FPF), a global non-profit focused on data protection, privacy, and emerging technologies, hosted the second annual Japan Privacy Symposium with support from S&K Brussels LPC and in cooperation with the Personal Information Protection Commission of Japan (PPC) and the Japan DPO Association. 

This event, on the sidelines of the 62nd Asia-Pacific Privacy Authorities (APPA) Forum, brought together leaders in the Japanese privacy community and data protection and privacy regulators from across the globe at the Ritz-Carlton in Tokyo, Japan.

Commissioner OHSHIMA Shuhei opened the event with a keynote speech outlining some of PPC’s key regulatory priorities going forward. Subsequently, Philippe Dufresne, Commissioner, Office of the Privacy Commissioner, Canada; Ashkan Soltani, Executive Director, California Privacy Protection Agency; Nazri Kama, Commissioner, Personal Data Protection Department of Malaysia; Thienchai Na Nakorn, Chairman, Personal Data Protection Committee, Thailand; and Josh Lee Kok Thong, Managing Director for APAC, Future of Privacy Forum, also discussed upcoming regulatory priorities for data protection authorities, and key trends around regulatory priorities in the APAC region.

“We are excited to have had a successful second edition of this valuable event that brings together data protection and privacy regulators from around the world alongside the Japanese privacy community,” Gabriela Zanfir-Fortuna, FPF’s Vice President for Global Privacy, said. “Tokyo is a perfect location to host these important global conversations and it provides a valuable forum for commissioners from around the globe to share their perspectives with privacy leaders and community members. We are grateful to our partners, the Personal Information Protection Commission of Japan, the Japan DPO Association, S&K Brussels LPC and our Senior Fellow Kaori Inui, for their steadfast partnership and support.”


Pictured: Gabriela Zanfir-Fortuna addressing attendees at the second annual Japan Privacy Symposium, November 25, in Tokyo, Japan.

###

About Future of Privacy Forum (FPF)

The Future of Privacy Forum (FPF) is a global non-profit organization that brings together academics, civil society, government officials, and industry to evaluate the societal, policy, and legal implications of data use, identify the risks, and develop appropriate protections.

FPF believes technology and data can benefit society and improve lives if the right laws, policies, and rules are in place. FPF has offices in Washington D.C., Brussels, Singapore, and Tel Aviv. Follow FPF on X and LinkedIn.

Contact [email protected] for any questions.

FPF Launches New Privacy-Enhancing Technologies Repository

For some time now, stakeholders at the intersection of data, privacy, and new technologies have increasingly recognized the potential of a range of technical and computational approaches to mitigate privacy risks. This set of tools and methods is known as privacy-enhancing technologies (PETs): techniques that can help preserve data privacy while maintaining the data’s utility. As novel technological challenges continue to evolve, the conversation around PETs as possible enablers of secure, trusted, and fair data sharing and use has similarly intensified.
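As one concrete, deliberately simplified illustration of a PET, the sketch below adds Laplace noise to a count query in the style of differential privacy. The function and parameter choices are our own hypothetical example, not an FPF or regulator resource: the idea is that noise calibrated to a query's sensitivity lets an analyst learn an approximate answer while limiting what can be inferred about any single individual.

```python
import math
import random

def noisy_count(true_count: int, sensitivity: float = 1.0,
                epsilon: float = 0.5) -> float:
    """Return the count plus Laplace(sensitivity / epsilon) noise.

    One individual can change a count query by at most `sensitivity`,
    so noise at scale sensitivity/epsilon masks any one person's presence.
    """
    scale = sensitivity / epsilon
    u = random.random()
    while u == 0.0:          # avoid log(0) below
        u = random.random()
    u -= 0.5                 # uniform on (-0.5, 0.5)
    # Inverse-CDF sampling of the Laplace distribution.
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise
```

Lower values of epsilon mean more noise and stronger protection; this utility-versus-privacy trade-off is exactly the kind of design decision the regulatory guidance collected in resources like those discussed here aims to inform.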

Recognizing the need for a deeper understanding of the potential and limitations of these technologies, FPF has actively contributed to shaping policymaking around PETs through discussion papers, reports, and stakeholder engagement. In 2023, FPF began facilitating the convenings of the Global PETs Network, an informal forum for global Data Protection Authorities (DPAs) and regulators interested in the development and policy implications of the adoption and implementation of PETs. In July 2024, FPF also launched the Research Coordination Network for Privacy-Preserving Data Sharing and Analytics, bringing together a group of cross-sector and multidisciplinary experts dedicated to exploring the potential of PETs in the context of AI and emerging technologies and to stewarding their adoption and scalability. 

Through these initiatives and other efforts, FPF has identified a growing demand among stakeholders for a better understanding and awareness of the potential and limitations of PETs. Many regulators and organizations have made significant contributions to advancing PETs development and deployment through research, guidance, and testing, much of it publicly available for social use and benefit. Building on these efforts and its own initiatives, FPF recently launched the PETs Repository, a webpage that consolidates publicly available resources on the development and deployment of PETs. 

FPF’s PETs Repository is a centralized, trusted, and up-to-date resource where individuals and organizations interested in these technologies can find practical and useful information in the field. The Repository is currently organized into three segments: 

  1. Regulatory Activity: Including official guidance, statements, blogs, tools, and reports on PETs from DPAs and regulators around the world.
  2. External Reports: Relevant and contemporary publications from global organizations offering insights into PETs.
  3. Sandboxes: Links to official resources detailing use cases, applications, and results from testing environments such as sandboxes and other initiatives led by governments and organizations.

While FPF acknowledges the extensive field of academic research around PETs, the Repository intends to primarily consolidate developments and official resources reflecting the existing regulatory thinking and policymaking around these technologies. Through the Repository, FPF contributes to facilitating a better understanding of PETs and increasing the visibility of global initiatives in the field from governments and organizations to a broader audience. 

FPF Unveils Report on the Anatomy of State Comprehensive Privacy Law

Today, the Future of Privacy Forum (FPF) launched a new report—Anatomy of State Comprehensive Privacy Law: Surveying the State Privacy Law Landscape and Recent Legislative Trends. By distilling this broad landscape to identify the “anatomy” of state comprehensive privacy law, the report highlights the strong commonalities and the nuanced differences between the various laws, showing how they can exist within a common, partially interoperable framework while also creating compliance challenges for companies within their overlapping ambits. Until a federal privacy law materializes, this ever-changing state landscape will continue to evolve as lawmakers iterate upon the existing frameworks and add novel obligations, rights, and exceptions to respond to changing societal, technological, and economic trends.

Between 2018 and 2024, nineteen U.S. states enacted comprehensive consumer privacy laws. This rapid adoption of privacy legislation has caused the legal landscape to explode in depth and complexity as each new law iterates upon those that came before it. This report summarizes the legislative landscape and identifies the “anatomy” of state comprehensive privacy law by comparing and contrasting the two prevailing models for state laws and identifying commonalities and differences in the laws’ core components. These core components of a comprehensive privacy law include:

The report concludes with an overview of five emerging legislative trends: 

The African Union’s Continental AI Strategy: Data Protection and Governance Laws Set to Play a Key Role in AI Regulation

By Chuma Akana, Former FPF Global Privacy Summer Fellow and Mercy King’ori, FPF Policy Analyst, Global Privacy

The African Union (AU) Executive Council, composed of representatives of the 55 African Member States, approved the highly anticipated AU AI Continental Strategy (the Strategy) in July 2024. The adoption of the Strategy follows a period of stakeholder consultations that sought to broaden the understanding of African AI needs and create awareness of the risks of AI use and development in Africa. The adoption of the Strategy marks a significant step in global AI policy and will serve as a guiding light for African countries developing and adopting national AI rules.

The Strategy takes a development-focused and inclusive approach to AI, and places emphasis on five focus areas: harnessing AI’s benefits, building AI capabilities, minimizing risks, stimulating investment, and fostering cooperation. These focus areas give rise to fifteen action points that will help to achieve the strategic objectives of the AU with regard to AI use in Africa, while addressing the societal, ethical, security, and legal challenges associated with AI-driven systems. One of the Strategy’s first action points focuses on establishing appropriate AI governance systems and regulations at regional and national levels. Another focuses on encouraging cross-border data sharing among AU Member States to support the development of AI.

This blog post offers a comprehensive analysis of the Strategy, with a focus on its approach to AI governance as a fundamental building block of AI adoption in the continent. We explore the role of AI governance within the Strategy before delving into its specific components, such as strengthening data governance frameworks, balancing innovation and responsibility, harmonizing data protection laws, establishing AI regulatory bodies, and developing ethical principles to address AI risks. This post also covers some developments in AI policy across the continent and analyzes the tension in African policymaking between pursuing the transformative capabilities of AI and mandating strong safeguards to manage its risks. 

AI Governance as an Essential Building Block of the Strategy

The Strategy centers AI governance as a foundational aspect for the successful development and deployment of AI in the continent. In fact, the Strategy considers AI governance as an essential element for addressing all of the focus areas, including minimizing risks associated with the increasing use of AI and as a catalyst for the realization of all other action areas. To achieve this goal, the Strategy calls on Member States to develop national strategies that would, in addition to providing a roadmap for implementing the priority areas, facilitate the creation of normative governance frameworks that are adapted to local contexts and are transparent and collaborative.  

The Strategy emphasizes the importance of adequate governance to ensure that AI development and use is inclusive, aligned with African priorities, and does not harm African people, societies, or the environment. It calls for robust AI governance based on ethical principles, democratic values, human rights, and the rule of law, consistent with the AU’s Agenda 2063.

As countries and regions around the world have been developing their own AI frameworks, an emerging task for AI policymakers has been to identify context-specific and inclusive components of a governance system. The AU’s AI Strategy attempts to solve this by proposing a multi-tiered governance approach that will ensure responsible AI ecosystems. The multi-tiered governance approach for Africa consists of five core activities: 

  1. Amendment and application of existing laws and frameworks: According to the Strategy, legal frameworks relating to data protection, cybersecurity, consumer protection, and inclusion are essential for responsible AI development in Africa. Enacting and fully implementing these laws will be crucial, and Member States may need to amend existing laws to address AI-related risks effectively. Most African countries have enacted at least some of these laws. Also, some African data protection laws specifically impose restrictions on the automated processing of personal data that produces legal effects or has similar significant effects. The Strategy considers data protection laws crucial to addressing data-related concerns of AI. On the enforcement level, there are few AI-specific actions to examine, though data protection agencies in Senegal and Morocco have previously issued administrative actions against the use of facial recognition technologies. It is likely that as existing data protection laws mature, a more comprehensive picture of how they are applied across the AU will emerge. Other data-related legal frameworks to be considered include open data policies, which are necessary to make data available for AI. 
  2. Identification of regulatory gaps: Governments, with the support of the AU and Regional Economic Communities, will need to consider what regulatory gaps exist to safeguard the development and use of AI and ensure the rule of law in its adoption across the continent. The Strategy recommends reviewing labor protections, AI procurement standards, and healthcare approvals, while aligning social media regulations with international standards. Other regulatory gaps to be filled relate to protection against algorithmic bias and discrimination.
  3. Establishment of enabling policy frameworks: The Strategy stresses the importance of national AI strategies that align with development priorities, focusing on areas like job creation, health, and education. These strategies should be developed through open consultations with a broad range of stakeholders, including the public and private sectors, academia, and civil society. 
  4. Development and roll-out of AI assessment and evaluation tools and institutional mechanisms: The Strategy underscores the importance of independent review mechanisms, including impact assessments like UNESCO’s Ethical Impact Assessment, in mitigating AI-related harms. These tools will help evaluate and measure AI’s impact on individuals and societies, offering a way to understand and address potential risks by drawing on various methodologies, including consultations with affected communities.
  5. Continuous research and evaluation: Ongoing African-led research is needed to assess new risks arising from AI development and use in Africa; evaluate the efficacy of governance tools to promote the development and use of AI systems that are inclusive, fair, sustainable, and just; review best practices in AI governance coming out of similar country contexts worldwide; develop policy innovations with policy-makers and stress-test them in a safe environment; and support regulatory sandboxing initiatives. 

To support these governance measures, the Strategy suggests that Member States should consider global best practices such as the recent EU AI Act while aligning with existing national and continental frameworks to address regulatory gaps and policy needs.

Comparing the AI Strategy with Existing National AI Frameworks on the Continent

Discussions about AI governance in Africa predate the Strategy and have continued following its adoption. Notable efforts include the release of national AI strategies by multiple AU Member States including Algeria, Benin, Egypt, Mauritius, Nigeria, and Senegal. Rwanda is the only country with a national AI policy, while other countries like Ethiopia, Morocco, Ghana, Kenya, South Africa, Mauritania, Tanzania, and Tunisia are taking significant steps to define their AI strategies. As a result, the Strategy is designed to inform an environment of ongoing efforts aimed at ensuring robust AI governance in the continent.

Many countries with existing strategies appear to have considered some of the foundational principles in the AU’s Strategy even if their efforts predate its adoption, demonstrating some convergence in AI governance across Africa. The key similarities between the Strategy and various national AI strategies and policies include an emphasis on:

On the other hand, some notable differences relate to the flagship sectors under consideration in the various national AI strategies. While the AU’s regional AI Strategy marks the agricultural, healthcare, public service delivery, climate change, peace, and security sectors as those that stand to benefit from AI solutions, Rwanda includes these and others such as construction, banking, digital payments, and e-commerce. On AI governance, while the AU proposes a multi-tiered approach as explained above, countries such as Benin view their path to AI governance as mostly consisting of updating existing institutional and regulatory frameworks for AI. 

From Theory to Implementation of Practical AI Governance Frameworks in Africa: Balancing Innovation with Responsibility

The Strategy’s timeline for implementation extends from 2025 to 2030, with a preparatory phase in 2024. The process is set to unfold in two phases:

The Strategy appreciates that the road to establishing normative AI governance frameworks is multi-pronged and will require bringing together a variety of different stakeholders, with the AU playing a pivotal role. For example, private sector actors are expected to play an important role and contribute to responsible AI initiatives by funding such initiatives and developing AI solutions that meet the objectives laid out by the Strategy. Public actors, such as Member State governments, are encouraged to develop policies that provide a conducive environment for AI development and promote the rule of law. As with the AU’s Continental Data Policy Framework (2022), which sets out a common vision for the use of data in Africa, a key tenet of the AI Strategy is to reach a unified level of AI governance despite differing levels of development among countries.

In exploring how harmonized AI rules can be developed across the continent, the Strategy highlights the steps taken by other regions in advancing AI governance, such as the EU’s AI Act, which is part of a broader policy package promoting trustworthy AI; the Association of Southeast Asian Nations’ (ASEAN) Guide on AI Governance and Ethics, which seeks to establish common principles; and the 2024 Santiago Declaration for Latin America and the Caribbean, which aims to strengthen regional cooperation in AI governance. As Africa’s regional body, the AU identifies several considerations for ensuring a harmonized, regional AI governance landscape for Africa, including:

1. Strengthening Data Governance as a Prerequisite for Responsible AI

The AU has consistently sought to develop consultative frameworks, particularly on data governance, for Member States to adopt when shaping their domestic policies. In 2014, the AU adopted the Malabo Convention to establish general rules and principles across the continent in three key areas: personal data protection, electronic commerce, and cybersecurity. The Malabo Convention was designed to provide a holistic, continent-wide framework to harmonize African data protection policies and promote digital rights, including privacy and internet freedom. Although adopted in 2014, the Malabo Convention did not come into force until receiving its 15th national ratification in 2023. With only 15 of the 55 AU Member States having ratified it, the Convention’s impact and influence have been limited. Aside from the Convention, in 2022 the AU released its Data Policy Framework to provide guidance on data governance for Africa’s growing data market.

The AI Strategy emphasizes the critical role of data in AI innovation and development, noting that AI systems rely on identifying patterns in existing data and applying this knowledge to new datasets. To effectively identify these patterns, a large volume of data is required. This data must be high-quality, diverse, inclusive, and locally sourced to effectively address local challenges. While protecting personal data is essential, it is equally important to ensure open and secure access to data to support the development of AI algorithms. This makes the AU Data Policy Framework vital, as it offers the necessary guidance to strike a balance between these priorities.

In line with this, the AI Strategy encourages:

2. Establishment of Regulatory Bodies to Oversee the Implementation of the AI Strategy 

Regulatory bodies are crucial to the implementation of the AI Strategy. In this regard, the Strategy:

3. Encouraging Data Sharing Among Stakeholders

The AI Strategy notes a significant gap in the quality, inclusiveness, and availability of data for AI models across Africa. Much of the data from the public and private sectors remains inaccessible because many organizations lack the necessary infrastructure, resources, and data-management protocols to collect and make this data available, which is crucial for accelerating AI adoption. To address these challenges, the Strategy proposes:

4. Harmonizing Data Protection Laws

The Strategy recognizes that enhancing data privacy and security is a key component of safeguarding human rights in the context of AI. It highlights the significant challenges that arise from AI systems collecting and processing vast amounts of personal data, particularly concerning privacy breaches and the unauthorized use of sensitive information. The Strategy further notes that while privacy concerns have a direct impact on individuals’ rights and freedoms, they disproportionately affect vulnerable groups such as children, women, and girls. 

The Strategy notes that a key privacy concern in Africa is low awareness of privacy rights, and emphasizes the importance of promoting media and information literacy to help people understand how their data is processed, as well as the potential consequences of that processing. Additionally, it calls for strengthening and re-aligning the continental, regional, and national legal and regulatory regimes related to child online safety, so that they account for the risks posed by AI, and for building the AI skills of law enforcement agencies and regulatory bodies dealing with child protection.

The Strategy acknowledges the progress made in addressing data protection issues across Africa, as seen with the growing number of data protection laws and authorities. Furthermore, the Strategy notes that 25 African countries have launched national open data portals, and nearly all of these countries have adopted open data policies, strategies, and plans. Certain African countries, such as Ghana, Nigeria, Rwanda, Sierra Leone, Senegal, and South Africa, have recognized the importance of data in the development of AI and have drafted comprehensive data strategies. These strategies emphasize data literacy, data infrastructure, open government data, data sovereignty, and the responsible use of data.

Beyond data governance and personal data protection, the Strategy also underscores the need for legal protection against algorithmic bias and discrimination. It recognizes that existing legal frameworks may need to be updated to address the new challenges posed by AI, including compensating for bias and discrimination based on race, gender, or other factors, as well as addressing the potential loss of personal privacy through predictive analytics and other AI-driven processes. The Strategy advocates for a comprehensive approach to AI governance that integrates data protection principles with broader ethical considerations to ensure the responsible development and deployment of AI technologies across the continent.

5. Creating Ethical AI Systems to Address AI Risks 

Crucially, the AU’s Strategy recognizes bias, widening inequalities, marginalization of groups who are not ready to embrace AI, loss of culture and identity, and the widening of social and technological gaps as risks to be avoided. 

It therefore emphasizes that AI ethics should be a foundational element in the development and use of AI, ensuring that these systems are deployed in ways that benefit society and avoid harm to individuals or groups. The Strategy urges African countries to prioritize ethical AI practices by establishing unified legal frameworks that define AI ethics and support the ratification and implementation of relevant regional and international conventions and recommendations. It calls for the development and adoption of codes of ethics for AI developers and users, while noting that systems such as Generative AI pose particularly timely ethical concerns. 

Closing Reflections

The Strategy offers African countries a structured approach to AI governance. Presently, many African nations lack comprehensive AI policy frameworks that could support responsible AI implementation, regulate AI-enabled business models, and promote AI-driven socioeconomic growth. The Strategy encourages African nations to develop governance frameworks, including legislation, that facilitate AI adoption, particularly in countries without existing AI strategies or regulatory frameworks. As implementation of the AI Strategy begins, the AU and its Member States will have to address potential regulatory fragmentation across the region, including persistently varying AI governance structures, privacy protection processes, security safeguards, and transparency measures. As African countries explore AI governance frameworks, it is important that these frameworks integrate and harmonize data protection principles and other ethical considerations, to ensure that responsible AI development optimizes socioeconomic benefits.