With contributions from Judy Wang, Communications Intern
2024 was a landmark year for the Future of Privacy Forum, as we continued to grow our privacy leadership through research and analysis, domestic and global meetings, expert testimony, and more – all while commemorating our 15th anniversary.
Expanding our AI Footprint
While 2023 was the year of AI, 2024 was the year of navigating how AI was used in practice and its influence across policy and emerging technologies. FPF further expanded its AI work with the launch of FPF’s Center for Artificial Intelligence.
The FPF Center for AI supports FPF’s role as the leading pragmatic and trusted voice for those who seek impartial, practical analysis of the latest challenges for AI-related regulation, compliance, and responsible use.
Earlier this month, the Center officially launched its first report, “AI Governance Behind the Scenes: Emerging Practices for AI Impact Assessments,” which examines the key considerations, emerging practices, and challenges that arise in the evaluations companies use to identify and address potential risks associated with AI models and systems.
Check out some other highlights of FPF’s AI work this year:
Detailed the complex policy, legal, and technical challenges posed by California’s AB 1008.
Produced a new report on confidential computing and how it differs from other PETs, as well as an in-depth analysis of its sectoral applications and policy considerations.
Presented the Government of Singapore with the inaugural Global Responsible AI Leadership Award for the country’s pragmatic work in establishing frameworks for AI regulation and governance. FPF also presented privacy experts Jim Halpert and Patrice Ettinger with its Career Achievement Award and Excellence in Career Award.
Updated our Generative AI internal compliance document with new content addressing organizations’ ongoing responsibilities, specific concerns (e.g., high-risk uses), and lessons taken from recent regulatory enforcement related to these technologies.
Following the enactment of India’s Digital Personal Data Protection Act (DPDPA), identified five ways the DPDPA could shape the development of AI in India.
Highlighted the African Union AI Continental Strategy and how it centers AI governance as a foundational aspect of the successful development and deployment of AI on the continent.
Published a two-page fact sheet overview of the Council of Europe’s (CoE) Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law (Framework Convention on AI).
Summarized key elements of the Colorado AI Act and identified significant observations about the law.
Bringing Our Expertise Across the Globe
2024 continued to be pivotal for our global experts, as they followed privacy developments across the Asia Pacific, Europe, Latin America, and Africa. We also participated in key events in Brussels, South Korea, France, Tokyo, and Tel Aviv.
Europe
FPF brought together European data protection experts through high-level convenings, blogs, and reports. We developed key takeaways from the Commission’s second Report on the GDPR, with an overview and analysis of the findings from various stakeholders, including DPAs, and created a new key resources page covering all aspects of the EU AI Act. During a multi-stakeholder comparative panel at CPDP.ai, we explored what we can learn from regional and international approaches to AI regulation and how these may facilitate a more global, interoperable approach to AI laws. Finally, we held our 8th Annual Brussels Symposium in collaboration with the Brussels Privacy Hub of Vrije Universiteit Brussel (VUB), where lively in-person discussions took place covering this year’s topic, “Integrating the AI Act in the EU Data Governance Ecosystem: Bridging Regulatory Regimes.”
The Asia-Pacific
FPF’s APAC office entered its fourth year of continued growth and became a core component of our global research. We provided a comprehensive analysis of strategy documents and key regulatory actions of the DPAs in 10 jurisdictions, published or developed in 2023 and 2024, setting out their regulatory priorities for the following years. These jurisdictions are Australia, China, Hong Kong (Special Administrative Region of China), Japan, Malaysia, New Zealand, the Philippines, Singapore, South Korea, and Thailand.
In July, FPF participated in Personal Data Protection Week 2024 (PDP Week), an event organized and hosted by the Personal Data Protection Commission of Singapore, examining emerging technologies, including generative AI, India’s landmark data protection legislation, and PETs. Our second annual Japan Privacy Symposium, held in conjunction with the 62nd Asia-Pacific Privacy Authorities (APPA) Forum, was a big success. Organized in cooperation with the Personal Information Protection Commission of Japan (PPC), the Japan DPO Association, and S&K Brussels LPC, this year’s Symposium featured a keynote speech from Commissioner OHSHIMA Shuhei focused on emerging data protection and privacy trends in Japan.
Latin America
We dissected “neurorights,” a set of proposed rights that specifically protect mental freedom and privacy. These rights have captured the interest of many governments, scholars, and advocates, a trend that is particularly apparent in Latin America. FPF looked into several countries that are actively seeking to enshrine these rights in law, including Chile, Mexico, and Brazil.
The African Continent
We gave an overview of harmonization efforts in regional and continental data protection policies in Africa and the role of Africa’s 8 Regional Economic Communities (RECs) and submitted comments to the Nigeria Data Protection Commission (NDPC) on the proposed General Application and Implementation Directive (GAID).
Federal and State U.S. Legislation
FPF played a critical role in informing both federal and state government entities on protecting data privacy interests.
We provided recommendations and filed comments with the following:
U.S. Department of Transportation in response to its request for information on opportunities and challenges of AI in transportation, and again in response to the National Highway Traffic Safety Administration (NHTSA) and the DOT Advance Notice of Proposed Rulemaking regarding advanced impaired driving prevention technology.
Federal Trade Commission (FTC) in response to its request for comment on the Children’s Online Privacy Protection Act (COPPA) proposed rule, and again in response to the FTC’s Supplemental Notice of Proposed Rulemaking.
Office of Management and Budget (OMB) regarding the agency’s Request for Information on how privacy impact assessments (PIAs) may mitigate privacy risks exacerbated by AI and other advances in technology, and again in response to its Request for Information (RFI) regarding responsible procurement of AI in government.
Department of Justice (DOJ) regarding the Advance Notice of Proposed Rulemaking on Access to Americans’ Bulk Sensitive Personal Data and Government-Related Data by Countries of Concern (ANPRM).
Bureau of Industry and Security (BIS) and the United States Department of Commerce (DOC) in response to their Advance Notice of Proposed Rulemaking (ANPRM) on securing the information and communications technology and services supply chain for connected vehicles.
California Civil Rights Council in response to its proposed modifications to the state Fair Employment and Housing Act (FEHA) regarding automated-decision systems (ADS), and again regarding its Proposed Modifications to the Employment Regulations Regarding Automated-Decision Systems.
Federal Communications Commission (FCC) in response to the FCC’s Notice of Proposed Rulemaking (NPRM) on the use of artificial intelligence (AI) to generate content for political advertisements, and again in response to the Notice of Inquiry (NOI) on technologies that can alert consumers that they may be interacting with an AI-generated call based on real-time phone call content analysis.
New York State Senate to inform forthcoming rulemaking for the implementation of a pair of bills aimed at creating heightened protections for children and teens online.
D.C. Council Committee on Health regarding the role of consent in the Consumer Health Information Privacy Protection Act of 2024 (“CHIPPA”).
This year also marked the 14th annual Privacy Papers for Policymakers Award, which honors research relevant to policymakers in the U.S. Congress, U.S. federal agencies, and international data protection authorities. The event kicked off on Capitol Hill, featuring an opening keynote by U.S. Senator Peter Welch (D-VT). FPF honored the winners of internationally focused papers in a virtual conversation the following week.
Youth & Education
In 2024, federal and state policymakers continued to work on legislation that protects children online, including the Kids Online Safety and Privacy Act (KOSPA) and the California Age-Appropriate Design Code Act (AADC). FPF’s work includes a breakdown of bills related to children’s online safety and a checklist designed for K-12 schools to help vet generative AI tools.
FPF published a blog in August that contextualized the Kids Online Safety and Privacy Act (KOSPA), which includes two bills that gained significant traction in the Senate in recent years: the Kids Online Safety Act (KOSA) and Children and Teens Online Privacy Protection Act (“COPPA 2.0”).
In July, we explored how the California Age-Appropriate Design Code Act (AADC) catalyzed conversations in America around protecting kids and teens online. We also analyzed the implications of the CA AADC and the evolving landscape of children’s online privacy.
As children spend more time online, lawmakers have continued introducing legislation to enhance the privacy and safety of kids’ and teens’ online experiences beyond the Children’s Online Privacy Protection Act (COPPA) framework. FPF analyzed the status quo of knowledge standards under COPPA and provided key observations on the current knowledge standards in various state privacy laws.
We also released a checklist and accompanying policy brief designed specifically for K-12 schools to help them vet generative AI tools for compliance with student privacy laws, outlining key considerations when incorporating generative AI into a school or district’s edtech vetting checklist.
With young people adopting immersive technologies like extended reality (XR) and virtual world applications, companies have expanded their presence in digital spaces, launching brand experiences, advertisements, and digital products. FPF analyzed recent regulatory and self-regulatory actions related to youth privacy in immersive spaces while also pulling out key lessons for organizations building spaces in virtual worlds.
Diving Deeper into Privacy Enhancing Technologies (PETs) Research and Large Language Models (LLMs)
2024 also marked further exploration into Privacy Enhancing Technologies (PETs) with FPF’s establishment of the PETs Research Coordination Network (RCN) and the creation of the PETs Repository. Additionally, we further explored large language models (LLMs) and whether they contain personal information.
In February, the National Science Foundation (NSF) and the Department of Energy (DOE) awarded FPF grants to support its establishment of a Research Coordination Network (RCN) for Privacy-Preserving Data Sharing and Analytics. FPF’s work will support the development and deployment of Privacy Enhancing Technologies (PETs) for socially beneficial data sharing and analytics.
In July, FPF also launched the Privacy-Enhancing Technologies (PETs) Research Coordination Network (RCN), bringing together a group of cross-sector and multidisciplinary experts dedicated to exploring PETs’ potential in AI and emerging technologies and stewarding their adoption and scalability. Building on these initiatives and other efforts, FPF launched the PETs Repository, a webpage that consolidates available resources and tracks developments in the development and deployment of PETs.
FPF further delved into LLMs to explore whether they contain personal data and, if so, what requirements companies must follow when processing personal data to train AI models. Recent analysis focused on a preliminary decision by Brazil’s Autoridade Nacional de Proteção de Dados (ANPD) on the legal basis for processing personal data in LLMs. We also wrote a blog on California’s recently passed Assembly Bill 1008, which applies CCPA privacy rights to LLMs, and on whether personal data exists in an AI model. A LinkedIn Live discussion featuring FPF experts also explored LLMs and personal data.
Facilitating Privacy Thought Leadership Home and Abroad
To celebrate the milestone of 15 years, FPF convened leading data protection regulators and FPF members at our 15th Anniversary Spring Social. The event also marked the transition of FPF Board Chairman Christopher Wolf, recognizing his founding role at FPF and many years of leadership. We welcomed our new Board Chair, Alan Raul.
High-level engagement from the year included:
Our first DC Privacy Forum: AI Forward, accompanied by the launch of FPF’s new Center for Artificial Intelligence.
The Research Coordination Network (RCN) for Privacy-Preserving Data Sharing and Analytics was launched with a virtual kick-off and White House roundtable events. The virtual kick-off featured over 40 global experts who helped shape the RCN’s work for the next three years. At the roundtable, hosted by the White House Office of Science and Technology Policy, we began a collaborative effort to advance PETs and their use in developing more ethical, fair, and representative AI.
The above is only a partial list of FPF initiatives from the year but highlights some of our major achievements. We thank all those who contributed, participated, advised, and supported us. Continue to follow FPF’s work by subscribing to our monthly briefing and following us on LinkedIn, Twitter/X, and Instagram. On behalf of the FPF team, we wish you a very Happy New Year and look forward to what’s to come in 2025!
OAIC’s Dual AI Guidelines Set New Standards for Privacy Protection in Australia
On 21 October 2024, the Office of the Australian Information Commissioner (OAIC) released two sets of guidelines (collectively, “Guidelines”), one for developing and training generative AI systems and the other for deploying commercially available “AI products.” This marks a shift in the OAIC’s regulatory approach from enforcement-focused oversight to proactive guidance.
The Guidelines establish rigorous requirements under the Privacy Act and its 13 Australian Privacy Principles (APPs), particularly emphasizing accuracy, transparency, and heightened scrutiny of data collection and secondary use. Notably, the Guidelines detail conditions that must be met for lawfully collecting personal information publicly available online for purposes of training generative AI, including through a detailed definition of what “fair” collection means.
This regulatory development aligns with Australia’s broader approach to AI governance, which prioritizes technology-neutral existing laws and voluntary frameworks while reserving mandatory regulations for high-risk applications. However, it may signal increased regulatory scrutiny of AI systems processing personal information going forward.
This blog post summarizes the key aspects of these Guidelines, their relationship to Australia’s existing privacy law, and their implications for organizations developing or deploying AI systems in Australia.
Background: AI Regulation in Australia and the Role of OAIC
Australia, like many jurisdictions globally, is currently in the process of developing its approach to AI regulation. Following a public consultation on “Safe and Responsible AI in Australia” in 2023, the Australian Government issued an “Interim Response” outlining an approach that seeks to regulate AI primarily through existing, technology-neutral laws and regulations, prioritizing voluntary frameworks and soft law mechanisms, and potentially reserving future mandatory regulations for high-risk areas. This stands in contrast to the European Union’s AI Act, which introduces a comprehensive regulatory framework covering a broader range of AI systems.
While the Australian Government has been giving shape to the country’s overall approach to AI regulation, several Australian regulators, as part of the Digital Platform Regulators (DP-REG) Forum, have been closely following developments in AI technology, co-authoring working papers on large language models (2023) and more recently, multimodal foundation models (2024).
The OAIC issued its first ever guidance on complying with the Privacy Act in the context of AI in a DP-REG working paper on multimodal foundation models released in September 2024. It followed up the next month with two sets of more detailed guidelines that provide practical advice for organizations on complying with the Privacy Act and the APPs in two important contexts:
The “Guidance on Developing and Training Generative AI Models” (AI Development Guidelines) targets developers and focuses specifically on privacy considerations that may arise from training generative AI models on datasets containing personal information. It identifies obligations regarding the collection and processing of such datasets and highlights specific challenges that may arise from practices like data scraping and obtaining datasets from third parties.
The “Guidance on Privacy and the Use of Commercially Available AI Products” (AI Product Guidelines) is directed at organizations deploying commercially available AI systems that process personal information, in order to offer products or services internally or externally. It also covers the use of freely accessible AI products, such as AI chatbots.
Both Guidelines are complementary, acknowledging and referring to each other, while addressing distinct phases in the AI lifecycle and different stakeholders within the broader AI ecosystem. However, they are not intended to be comprehensive. Instead, they aim to highlight the key privacy considerations that may arise under the Privacy Act when developing or deploying generative AI systems.
The Guidelines Recognize Both AI’s Benefits and Significant Privacy Risks
Both Guidelines acknowledge AI’s potential to benefit the Australian economy through improved efficiency and enhanced services. However, they also emphasize that AI technologies’ data-driven nature creates substantial privacy risks that must be managed carefully. Key risks highlighted include:
Loss of control for individuals over how their personal information is used in AI training datasets.
Bias and discrimination: Inherent biases in training data can be amplified, leading to discriminatory outcomes.
Inaccuracies: Outputs of AI systems may be inaccurate and are not always easily explainable, impacting trust and decision-making.
Re-identification: Aggregation of data from multiple sources increases the risk of individual re-identification.
Potential for misuse: Generative AI in particular can be misused for malicious purposes, including disinformation, fraud, and creation of harmful content.
Data breaches: Vast datasets used in training increase the risk and potential impact of data breaches.
To address these risks, both Guidelines emphasize that it is important for organizations to adopt a “Privacy by Design” approach when developing or deploying AI and to conduct Privacy Impact Assessments to identify and mitigate potential privacy impacts throughout the AI product lifecycle.
The Guidelines Establish Rigorous Accuracy Requirements
Organizations are required under APP 10 to take reasonable steps to ensure personal information is accurate, up-to-date, and complete when collected, and also relevant when used or disclosed.
Both Guidelines emphasize that the accuracy obligation in APP 10 is vital to avoid the risks that may arise when AI systems handle inaccurate personal information, which range from incorrect or unfair decisions, to reputational or even psychological harm.
For AI systems, identifying “reasonable steps” under APP 10 requires organizations to consider:
the sensitivity of the personal information being processed;
the organization’s size, resources, and expertise – factors which affect their capacity to implement accuracy measures; and
the potential consequences of inaccurate processing for individuals, as higher risks of harm necessitate more robust safeguards.
The Guidelines emphasize that generative AI models in particular present distinct challenges under APP 10 because they are trained on massive internet-sourced datasets that may contain inaccuracies, biases, and outdated information which can be perpetuated in their outputs. The probabilistic nature of these models also makes them prone to generating plausible but factually incorrect information, and their accuracy can deteriorate over time as they encounter new data or their training data becomes outdated.
To address these challenges, the Guidelines recommend that organizations should implement comprehensive measures, including thorough testing with diverse datasets, robust data quality management, human oversight of AI outputs, and regular monitoring and auditing. The key theme is that organizations must take proactive steps to ensure accuracy throughout the AI system’s lifecycle, with the stringency of measures proportional to the system’s intended use and potential risks.
The Guidelines Make Transparency a Core Obligation Throughout the AI System Lifecycle
The OAIC’s guidelines also establish transparency as a fundamental obligation throughout the lifecycle of an AI system. Notably, however, the guidelines see transparency as an obligation that operates on multiple levels.
The transparency obligation is rooted in APP 1, which requires organizations to manage personal information openly and transparently (including by publishing a privacy policy), and APP 5, which requires organizations to notify individuals about how their personal information is collected, used, and disclosed.
The Guidelines emphasize that in an AI context, privacy policies must provide clear explanations of how AI systems process personal information and make decisions. When AI systems collect or generate personal information, organizations must give timely and specific notifications that provide individuals genuine insight into how their information is processed and empower them to understand AI-related decisions that affect them.
To support this transparency framework, organizations must invest in comprehensive staff training to ensure employees understand both the technical aspects and privacy implications of their AI systems, enabling them to serve as knowledgeable intermediaries between complex AI technologies and affected individuals. This human oversight is to be complemented by regular audits and monitoring, which help organizations maintain visibility into their AI systems’ performance, address privacy issues proactively, and generate the information needed to maintain meaningful transparency with individuals.
The Guidelines Place Heightened Scrutiny on Data Collection and Secondary Use
The Guidelines underscore the need for heightened scrutiny on data collection practices under APP 3 and the secondary use of personal information under APP 6 in the AI context. The Guidelines also emphasize that organizations may face distinct challenges across different collection methods.
With regard to challenges in data collection methods, the AI Developer Guidelines highlight that the collection of training datasets that may contain personal information through web scraping – defined as “the automated extraction of data from the web” – raises several concerns under APP 3.
Notably, the Guidelines caution that developers should not automatically assume that information posted publicly can be used to train AI models. Rather, developers must ensure that they comply with APP 3 by demonstrating that:
It would be unreasonable or impracticable to collect the personal information directly from the individuals concerned;
The collection of personal information through web scraping is lawful and fair. Noting that collection of personal information via web scraping is often done without the direct knowledge of data subjects, the Guidelines identify 6 factors to consider in determining whether such collection is fair:
Individuals’ reasonable expectations;
The sensitivity of the information;
The intended purpose of the collection, including the intended operation of the AI model;
The risk of harm to individuals;
Whether the individuals concerned intentionally made the information public; and
The steps the developer will take to prevent privacy impacts, including deletion, de-identification, and mechanisms to increase individuals’ control over how their information is processed; and
Insofar as the dataset contains “sensitive information” (as defined under Australia’s Privacy Act), individuals have provided express consent for this information to be used to train an AI model.
The Guidelines therefore do not prohibit the collection of training data through web scraping, but they lay out detailed requirements that must be fulfilled to lawfully do so. Notably, the Guidelines define what “fair” collection of personal data through web scraping requires, bringing forward several dimensions to consider, from individuals’ perception of the collection and attitude when making the information public, to intrinsic characteristics of the information collected, to extrinsic assessments of risks of harm, to technical and organizational measures that are privacy-enhancing. The Guidelines acknowledge that organizations may face significant challenges in meeting many of these requirements.
Further, the Guidelines note that many of the above considerations under APP 3 also apply to third-party datasets. The Guidelines therefore recommend that organizations seeking to rely on such datasets conduct thorough due diligence regarding data provenance and the original circumstances in which the information was collected.
By contrast, when organizations seek to use their existing datasets to train AI models, the main consideration under the Guidelines is complying with APP 6, which governs secondary use of personal information. This principle requires organizations to either obtain informed consent or carefully evaluate whether AI training aligns with individuals’ reasonable expectations based on the original collection purpose.
Throughout all methods, organizations must adhere to the principle of data minimization, limiting collection of personal information to what is strictly necessary, and must also consider techniques like de-identification or the use of synthetic data to further reduce risks to individuals.
The AI Product Guidelines Require Organizations to Pay Attention to Privacy Throughout the Deployment Lifecycle
The AI Product Guidelines advocate for a “privacy by design” approach that integrates privacy considerations throughout the AI product lifecycle.
They specifically call on organizations to conduct thorough due diligence before adopting AI products. Recommended steps include assessing the appropriateness of these products for their intended use, evaluating the quality of training data, understanding security risks, and analyzing data flows to identify parties that can access inputted information.
In the deployment and use phase, organizations must exercise strict caution when inputting personal information into AI systems, particularly systems that are provided to the public for free, such as AI chatbots. The Guidelines emphasize the need to comply with APP 6 for any secondary use of personal information, to minimize data input, and to maintain transparency with individuals about how their information will be used.
While the AI Product Guidelines primarily focus on APPs 1, 3, 5, 6, and 10, they also emphasize that several other APPs may play crucial roles, depending on how the AI product is being used. These APPs include:
APP 8, which governs cross-border data transfers when AI systems process information on overseas servers;
APP 11, which requires reasonable security measures to protect personal information in AI systems from unauthorized access and misuse; and
APPs 12 and 13, which ensure individuals can access and correct their personal information, respectively.
Looking Ahead: The Guidelines Signal Increased Privacy Scrutiny for AI
The OAIC’s guidelines represent a significant step in regulating AI use in Australia that not only aligns with broader Australian government initiatives, such as the Voluntary AI Safety Standard, but also reflects a broader global trend of data protection authorities issuing rules and guidance on AI governance through existing privacy laws.
The OAIC’s guidelines establish a foundation for privacy-protective AI development and deployment, but organizations must remain vigilant as both the technology and regulatory requirements continue to develop. The release of the Guidelines may hint at increased regulatory scrutiny of AI systems that process personal information, meaning that organizations that develop or deploy such systems will need to carefully consider their obligations under the Privacy Act and implement appropriate safeguards.
Insights from the Second Japan Privacy Symposium: Global Data Protection Authorities Discuss Their 2025 Priorities, from AI, to Cross-Regulatory Collaboration
The Future of Privacy Forum (FPF) hosted the Second Japan Privacy Symposium (Symposium) in Tokyo on November 15, 2024. The Symposium brought together leading data protection authorities (DPAs) from around the world to discuss pressing issues in privacy and data governance. The Symposium featured in-depth discussions on international collaboration, artificial intelligence (AI) governance, and the evolving landscape of data protection laws.
The Symposium kickstarted the Personal Information Protection Commission of Japan’s (PPC) Japan Privacy Week, and was an official side-event of the 62nd Asia-Pacific Privacy Authorities (APPA) Forum (APPA 62). FPF is grateful for the collaboration and support from the PPC, the Japan DPO Association, and S&K Brussels LPC.
In this blog post, we share some of the key takeaways from the Symposium.
Japan Privacy Symposium features global privacy regulators in Tokyo
The Symposium welcomed an esteemed line-up of speakers. Commissioner Shuhei Oshima from the PPC delivered the opening keynote, in which he shared the PPC’s regulatory priorities for 2025. These included cross-border data transfers and the Data Free Flow with Trust initiative, as well as further collaboration with the G7 DPAs and bilaterally with various international regulators.
Following the keynote, Gabriela Zanfir-Fortuna, Vice-President for Global Privacy at FPF, moderated a panel on the regulatory strategies of APAC and global DPAs in 2024 and beyond. Gabriela was joined by Philippe Dufresne, Privacy Commissioner of Canada, Office of the Privacy Commissioner of Canada; Ashkan Soltani, Executive Director of the California Privacy Protection Agency (CPPA); Dr. Nazri Kama, Commissioner, Personal Data Protection Commissioner’s Office of Malaysia (PDPD); Thienchai Na Nakorn, Chairman, Personal Data Protection Committee of Thailand (PDPC); and Josh Lee Kok Thong, Managing Director for Asia-Pacific at FPF.
Regulators in APAC have some common priorities, such as cybersecurity and cross-border data transfers
The panel kicked off with highlights from a recent report published by FPF’s APAC office, “Regulatory Strategies of Data Protection Authorities in the Asia-Pacific Region: 2024, and Beyond”, presented by Josh. In line with similar FPF work focusing on the EU, Latin America and Africa, the report provides a comprehensive analysis of strategy documents and key regulatory actions of DPAs in 10 major jurisdictions in Asia-Pacific, as well as an overview of key trends in the region.
There are three top common priorities for APAC’s major DPAs:
First, cybersecurity and data breach responses, with 90% of the DPAs included in the Report prioritizing this. However, jurisdictions are at various stages of implementing measures in these areas, while enforcement approaches also differ significantly.
Second, cross-border data transfers, which are a priority for 80% of APAC DPAs. Jurisdictions are similarly taking a diversity of approaches, from taking a leading role in international initiatives, such as the Global Cross-Border Privacy Rules (CBPR) System (for instance, Japan and Singapore), to promoting the use of standardized contractual clauses (for instance, China, Japan and Singapore).
Third, AI governance, with 70% of regulators prioritizing this. Some have developed comprehensive policy frameworks and regulations for AI, while others have focused on issuing guidelines or addressing AI within existing regulatory structures.
Cross-regulatory and cross-border collaboration is a shared priority for regulators in APAC and beyond
During the panel discussion, one top regulatory priority that surfaced was cross-border collaboration. Commissioner Dufresne emphasized the importance of international cooperation in addressing privacy challenges. “At the OPC, we will continue to be focused on topics such as international collaboration,” he noted. Commissioner Dufresne discussed the OPC’s efforts to collaborate with domestic and international partners, including other regulators in fields such as competition, copyright, broadcasting, telecommunications, cybersecurity, and national security. “Data protection is key to so many of those things,” Commissioner Dufresne said. “It touches other regulators, so working very closely is something we’ve been discussing, including at the G7.”
Expanding regional and international collaboration was similarly a key priority for Malaysia. Commissioner Nazri noted that Malaysia’s PDPD had visited fellow regulators in the UK, EU, Japan, South Korea and Singapore. The PDPD had also just joined the APPA Forum, as well as the APEC Cross-Border Privacy Enforcement Arrangement (CPEA). Going forward, Commissioner Nazri noted that the PDPD would be “moving towards” applying for the Global Cross-Border Privacy Rules (CBPR) certification system. The PDPD is also taking steps towards meeting the EU’s adequacy requirements, with Commissioner Nazri expressing hope that Malaysia would attain EU adequacy “in the next two years.”
Similarly, Chairman Thienchai from Thailand’s PDPC noted that it had sent delegations to attend Global CBPR workshops, and that the PDPC could also be applying to be a member of the Global CBPR system soon.
Regulators are balancing between AI innovation and risk, while managing an ever-growing pool of AI-related issues
AI remains a top concern for regulators worldwide. Commissioner Dufresne stated that ensuring the protection of privacy in the context of emerging and changing technology is a key priority for the OPC. “Certainly, generative AI and other emerging technologies like quantum computing and neurorights are changing the landscape,” he said. “We need to use innovation to protect data.”
He emphasized the importance of leveraging technology to protect privacy, noting that AI can be used as a tool against threats like deepfakes. The OPC is also looking to work with cross-regulatory partners to address issues such as synthetic media. “We’re looking to work with cross-regulatory partners in identifying specific areas and seeing what are the common areas or perhaps different areas of privacy and competition with a specific topic like synthetic media,” he explained.
California’s CPPA has also been at the forefront of rule-making and enforcement actions pertaining to AI and automated decision-making challenges. In this regard, Director Soltani observed that “there is no AI without PI (personal information).” The CPPA has thus had to develop deep expertise in AI while acting as California’s privacy regulator. Besides focusing on rule-making, the CPPA has been conducting enforcement sweeps in various sectors, starting with the connected vehicle sector.
The task of applying data protection laws to AI and issuing relevant industry guidance is also one that Thailand’s PDPC is working on. Chairman Thienchai noted that the PDPC had “established a working group study” on how AI is impacting the protection mechanism under Thailand’s Personal Data Protection Act, with results expected in the first quarter of 2025. Thailand’s PDPC is also working to issue guidelines on the intersection of AI and the PDPA. The guidelines could state, for instance, that in using personal data to train AI systems, developers have to do so on an anonymized basis.
Regulators continue to work on implementing updates to data protection laws to deal with new and emerging challenges
A third theme that emerged from the panel discussion was how regulators were planning to continue working on updates to their data protection laws – and implementing them – in 2025.
For California’s CPPA, Director Soltani highlighted that his agency was deeply engaged in rule-making, especially in these areas: (a) cybersecurity, where companies in California will be required to perform and submit cybersecurity assessments to the CPPA; (b) data protection impact assessments or risk assessments, where companies will be required to perform such assessments including where they deploy AI tools; and (c) automated decision-making technologies and AI. Director Soltani also highlighted the ongoing work of implementing aspects of the California Consumer Privacy Act (CCPA). For instance, with the CPPA’s Data Broker Registry, the CPPA is working on setting up a one-stop shop by January 2026, where Californians will “have the ability to go to one place and request that all of their data be deleted from all of these companies.”
For Malaysia, Commissioner Nazri provided an update on recent amendments to Malaysia’s Personal Data Protection Act (PDPA) that were passed in 2024. “The amendment was presented to our national parliament in July this year and was officially approved on July 31,” he noted. Commissioner Nazri highlighted several key changes to the PDPA, including:
A requirement to appoint a Data Protection Officer (DPO);
A mandatory data breach notification system;
Introducing responsibilities for data processors;
Introducing data portability rights;
Revising conditions for cross-border data transfers; and
Increasing penalties for non-compliance.
Commissioner Nazri also noted that the PDPD would be issuing 19 new documents in tranches throughout 2025. Specifically, these were nine pieces of subsidiary legislation, two circulars (or Commissioner’s Orders), seven guidelines, and one standard. Commissioner Nazri further shared that work was ongoing to re-formulate the PDPD into an independent Commissioner’s Office.
For Thailand, Chairman Thienchai noted that while the country’s PDPA was passed in 2019, it contains a review requirement to update the law if necessary. Chairman Thienchai thus noted that the PDPC would be working in 2025 to introduce a proposal to amend the PDPA “to catch up with the global community.” Further, Chairman Thienchai acknowledged challenges with data breaches, especially in the public sector, and emphasized the need for coordination among agencies. “We have to coordinate with other agencies to improve the enforcement mechanism in the PDPA,” he said.
Finally, the PDPC is prioritizing cross-border data transfers. “We issued some subordinate laws related to cross-border transfers and we adopted ASEAN Model Contractual Clauses (MCCs) and also EU Standard Contractual Clauses (SCCs) in our subordinate laws,” Chairman Thienchai explained, concluding with an update that his office is “promoting ASEAN MCCs with the Thai Chamber of Commerce.”
Conclusion
The second edition of the Japan Privacy Symposium showcased the shared challenges and priorities among global data protection authorities. From AI governance to cross-regulatory collaboration and legal reforms, the Symposium highlighted the need for continued dialogue, cooperation, and information-sharing.
Following the Symposium, FPF was also honored and privileged to have been invited to participate in speaking opportunities during the closed and public sessions of APPA 62. In particular, Gabriela moderated a session on AI governance and regulation, while Josh spoke on a panel on balancing innovation and data protection.
FPF remains committed to facilitating these important conversations and advancing the discourse on privacy and emerging technologies globally.
Five Big Questions (and Zero Predictions) for the U.S. State Privacy Landscape in 2025
In the enduring absence of a comprehensive national framework governing the collection, use, and transfer of personal data, state-level activity on privacy legislation has been on a consistent upward trend since the enactment of the California Consumer Privacy Act in 2018. With all 50 U.S. states scheduled to be in session in 2025, stakeholders are anticipating yet another year of expansion and divergence across the state privacy landscape. It is still too early to predict which states will adopt or amend their privacy laws, so this article instead explores the five big questions set to shape American privacy law in the coming year.
Will a new consensus emerge on data minimization (and would it change anything)?
State privacy laws have traditionally incorporated the principle of data minimization by prohibiting data processing beyond what is reasonably necessary to accomplish the purposes that are disclosed to a user. Consumer advocates have long objected to this approach, arguing that it incentivizes companies to bury broad disclosures in dense privacy notices, resulting in little, if any, heightened protection. This year, in a potentially paradigm-shifting move, the Maryland Online Data Privacy Act became the first state comprehensive law to attempt to depart from the typical data minimization standard by placing new limits on the collection and use of data tied to the activities necessary to provide a specific product or service requested by a consumer. My colleague Jordan Francis has called the classic approach “procedural data minimization” and the Maryland approach “substantive data minimization.”
While Maryland is the only comprehensive state privacy law to adopt a substantive data minimization approach, proposals in Vermont and Maine that came close to enactment this year contained similar language. Heightened data minimization provisions are also elements of recent sectoral laws including the Washington State My Health My Data Act, the New York Child Data Protection Act, and the Virginia Child Data Privacy Amendment. Taken together, these frameworks portend a new trend toward substantive data minimization standards; however, their statutory requirements vary in subtle but consequential ways. Distinctions include whether new minimization standards (1) apply to the collection or the processing of data, (2) limit data processing to what is “reasonably” necessary, “strictly” necessary, or just plain “necessary” to provide a requested product or service, and (3) are subordinate to other bases for using personal data, such as a list of “permissible purposes” (e.g. protecting data security) or if consistent with consumer consent.
The emergence of substantive data minimization requirements in state privacy laws represents an attempt to depart from the much maligned “notice and consent” approach to consumer privacy law. However, the ultimate impact of these emerging standards is not yet clear, and is expected to be largely shaped by future trends in interpretation, implementation, and enforcement. Consider the following, yet unanswered, questions about these “necessity” data minimization standards:
If personal data satisfies a “necessity” standard that focuses solely on collection, can it then be processed for an unnecessary secondary purpose following initial ingestion by a business?
What data collection is “reasonably” necessary to offer a requested product or service but would not be “strictly” necessary to offer that same service? Practically, what is the difference between these two standards?
If a company’s business model is based on the sale of personal information or the use of data for targeted advertising, are those processing activities necessary in order to offer a product or service?
Does providing consent for data processing make that use a “requested” product or service regardless of the processing purpose?
Will companies have leeway to define data collection and processing activities within the scope of their “products and services” that they offer? (e.g.: “Our service is a photo hosting platform that generates revenue from selling data.”)
Will data brokers face renewed scrutiny?
Perhaps the biggest surprise of the 2024 cycle has been the lack of legislative activity directly focused on the information collection and sharing practices of data brokers. The third-party collection and sale of sensitive data, including health and location information, has been the subject of several high-profile media investigations and is increasingly cited as a potential threat to national security. However, this year no new states passed data broker-specific privacy laws, and very few such bills were even introduced.
The scarcity of state-level attention is even more noticeable when considering national efforts. New restrictions and enforcement actions concerning the brokering of personal information were among the few privacy topics on which federal policymakers were particularly active this year. Examples include the Biden Administration’s executive order on preventing countries of concern from accessing Americans’ bulk sensitive personal data, the Protecting Americans’ Data from Foreign Adversaries Act, and the Federal Trade Commission’s litigation against Kochava and settlements with Gravy Analytics and Mobilewalla.
However, privacy legislation constraining the activities of data brokers could be set for a comeback in 2025 and lawmakers have a number of options they could pursue. In November, a coalition of data brokers decisively lost a bid to strike down New Jersey’s Daniel’s Law, which empowers certain government employees to request the removal of personal information from public websites. Furthermore, data broker registry laws are now in effect – and increasingly being enforced – in California, Texas, Oregon, and Vermont. California is also attracting attention for its efforts to build a “one stop shop” accessible deletion mechanism intended to allow individuals to request the deletion of their personal information across the entire data broker ecosystem.
On the other hand, it is possible that lawmakers will instead choose to address concerns about the data broker industry through more comprehensive regulatory approaches. There are inherent challenges to singling out a particular industry or practice for regulation that often raise complicated line drawing issues. A possible template for such a broader approach may be the aforementioned Maryland Online Data Privacy Act, which contains a unique standalone restriction on the sale of sensitive personal data.
Which laws will be subject to legal challenges?
Several recent state privacy laws have been met with constitutional challenges, often concerning their intersection with First Amendment-protected activity and their impact on interstate commerce. To date, the most common litigation (and industry success in seeking injunctions) has involved laws requiring social media companies to conduct age verification and limit features or access for certain child users. At the same time, lawmakers have continued to iterate on these proposals in search of a framework that can reliably withstand constitutional scrutiny – notably, industry has secured only a partial injunction of the Texas SCOPE Act.
Looking ahead to 2025, legislative experimentation and industry litigation concerning children’s online safety and privacy laws are likely to continue apace. However, the tenor of these challenges may evolve following the Supreme Court’s decision in Moody v. NetChoice. While that case involved state laws regulating the content moderation practices of social media companies, several Justices expressed disapproval of how the case was brought as a “facial challenge” prior to enforcement, which may shift litigation strategies to focus on “as-applied” challenges.
Stakeholders should also pay close attention to privacy laws in California and Maryland. In California, industry groups have already raised concerns that recent California Privacy Protection Agency rulemaking activity – on both data brokers and automated decisionmaking technology opt-outs – exceeds the bounds of the Agency’s statutory authority and is in violation of the California Administrative Procedure Act. Separately, while the Maryland Age Appropriate Design Code was drafted to remove any direct requirements to moderate content, the law’s risk assessment requirements may still contain “proxies for content” that the Ninth Circuit found likely to violate the First Amendment in California’s version of the law.
How will lawmakers approach artificial intelligence?
Opportunities, risks, and hype surrounding advancements in artificial intelligence (AI) technologies have impacted every domain in tech policy, and data privacy is no exception. In fact, privacy rules may emerge as one of the more successful levers for governing AI. For example, existing technology-neutral privacy laws will already apply to AI systems to the extent that they collect, process and output personal information. In particular, transparency, security, risk assessment, and consumer choice requirements under existing laws are poised to have significant influence on the development and use of new AI tools.
It is also important to recognize that AI is not a single technology, but encompasses a range of systems, some of which have been with us for decades (such as facial recognition technology) and some of which are still emerging (such as general-purpose ‘foundation’ models). Lawmakers therefore have an array of approaches available for addressing AI safety, transparency, and fairness. For example, they could comprehensively regulate a broad range of technologies and harms, which is the approach taken by the draft Texas Responsible AI Governance Act. They may also seek to regulate a particular AI technology or use case, such as “deepfakes” in political advertisements. Finally, lawmakers could bake new AI-specific requirements into comprehensive privacy laws, as Minnesota did this year by creating a new right to contest the result of significant profiling decisions.
President-elect Donald Trump’s promise to repeal the Biden Administration’s AI Executive Order and incoming FTC Chair Ferguson’s leaked agenda to “terminate all initiatives involving so-called… AI ‘bias’” could also inspire state lawmakers to focus on uses of AI systems that result in unlawful discrimination. This was the focus of the Colorado AI Act, which was enacted this year and may serve as a template for similar state-level efforts. However, efforts to establish a harmonized state-level approach to regulating discriminatory outcomes in high-risk systems may be complicated: Colorado’s path-setting law is likely to be further shaped by amendments and rulemaking prior to taking effect.
How will the new administration and Congress impact the state privacy landscape?
Next year, President-elect Trump will enjoy narrow but meaningful majorities in both chambers of Congress. The Republican Party has historically supported the enactment of broadly preemptive privacy legislation, raising the possibility – however faint – that some or all of the emerging state privacy ‘patchwork’ could be superseded by new federal legislation. Business groups may see a window of opportunity to advocate for a broadly preemptive national privacy framework modeled on existing state laws like the Texas Data Privacy and Security Act. However, at present there is little to suggest that comprehensive preemptive privacy legislation will be a top priority for Republican lawmakers during the next Congress, though bipartisan movement on child-specific online safety legislation appears more likely.
The November election results will influence not only the legislative agenda in Washington D.C., but also legislative activity in the states. Democratic governors and attorneys general are already discussing legislative and legal strategies to attempt to minimize or block various priorities of the Trump agenda. Concerns about the incoming administration’s approach to issues like immigration, law enforcement, and health care may be a motivating factor for commercial privacy legislation in Democrat-controlled states. For example, in a potential sign of things to come, Democratic Senators in Michigan rapidly sought to establish new protections for “reproductive health data” during the State’s ‘lame duck’ session immediately following the November election.
Outside of a few notable examples, recent state privacy laws have typically been enacted on an overwhelmingly bipartisan basis. However, this pattern could shift next year should commercial privacy become increasingly intertwined with other, more polarized issues. Therefore, while 2025 is likely to be as active as ever for legislative activity, this dynamic could ultimately reduce the number of bills that are enacted compared to prior years.
Do you have the answers to these questions or are you brave enough to make your own predictions? Email the author of this post at [email protected]
In a Landmark Judgment, The Inter-American Court of Human Rights Recognized an Autonomous Right to Informational Self-Determination
The following is a guest post to the FPF blog by Jonathan Mendoza Iserte, Secretary of Personal Data Protection at Mexico’s Instituto Nacional de Transparencia y Acceso a la Información y Protección de Datos Personales (INAI), and Nelson Remolina Angarita, Professor at the Faculty of Law, Universidad de los Andes (Colombia). The guest blog reflects the opinion of the authors only. Guest blog posts do not necessarily reflect the views of FPF.
The right to “informational self-determination” has recently emerged as an autonomous fundamental right within the Inter-American legal sphere, following a landmark ruling by the Inter-American Court of Human Rights (IACHR) in the case Members of the José Alvear Restrepo Lawyers’ Collective vs. Colombia, issued on October 18th, 2023. Its protection is essential for the exercise of other fundamental rights within the Inter-American system, such as the rights to privacy, reputation, defense, and security. The case was brought to the attention of the IACHR on July 8, 2020, by the Inter-American Commission on Human Rights, and it highlights the obligation of States to protect the right to informational self-determination against practices of surveillance, harassment, and the collection of personal information by state agencies. The Court examined the allegations related to the intelligence activities carried out by the Colombian State against members of the José Alvear Restrepo Lawyers’ Collective (CAJAR), an organization dedicated to the defense of human rights in Colombia, which resulted in threats, intimidation, and a climate of insecurity that forced several of its members into exile.
The facts of the case concern events that began in the 1990s. It has been alleged that during intelligence operations, information was collected about members of CAJAR and that this information was misused, including being handed over to illegal armed groups. It was noted that the victims “did not have access to an effective remedy to address their claims related to accessing the intelligence database” of the State.
Although the ruling covers a wide range of human rights issues, in this piece we will focus solely on matters related to data protection or informational self-determination. The purpose of this analysis is to examine the most relevant aspects of the case regarding the right to personal data protection, exploring its development and recognition as an autonomous human right that must be respected and upheld within the Inter-American human rights system. Specifically, it will address how the Inter-American Court has integrated this right into the framework of state obligations, and how its violation affects not only the privacy of individuals but also their ability to exercise other fundamental rights.
1. Importance of the CAJAR Ruling regarding personal data processing in the Inter-American human rights system
With the CAJAR landmark ruling, the IACHR expressly recognized informational self-determination as an autonomous human right for the first time, which must be respected and upheld within the Inter-American human rights system. Indeed, in its judgment Series C No. 506 of October 18, 2023, the IACHR concluded:
“586. In the view of the Inter-American Court, the aforementioned elements give shape to an autonomous human right: the right to informational self-determination, recognized in various legal systems in the region, and which finds its basis in the protective content of the American Convention, particularly in the rights enshrined in Articles 11 and 13, and, in terms of its judicial protection, in the right guaranteed by Article 25.”1(…)
“588. Ultimately, it is an autonomous right that, in turn, serves as a guarantee for other rights, such as those concerning privacy, the protection of honor, the safeguarding of reputation, and, in general, human dignity. It is worth noting that this right extends, with the applicable limitations (see paras. 601 to 608 below), to any personal data held by any public body, and it similarly applies to records or databases managed by private entities, issues that are not addressed in detail due to the scope of this international case.” (Emphasis added)
This is a ruling of great significance within the Inter-American human rights system because it imposes obligations on States and opens the door for it to be upheld by international courts of justice.
The Inter-American Human Rights System (IAHRS) is based on the American Convention on Human Rights (ACHR), where States voluntarily commit to respecting and guaranteeing the rights established in the treaty, including the right to informational self-determination. This right encompasses the ability to access and control personal data held in public records. In this context, as noted in the CAJAR ruling, the state’s actions constituted a violation of this right, prompting the Court to issue binding rulings that may require reparations, legislative reforms, or other measures to remedy and prevent future violations.
The IACHR does not have enforcement powers comparable to those of national courts; its rulings are based on the principle of state consent under international law and are reinforced through mechanisms such as diplomatic pressure, reputational accountability, and domestic implementation. States are expected to integrate these rulings into their legal systems, and non-compliance may lead to international scrutiny.
Adopting mechanisms to guarantee this right in practice (not just on paper or in theory) is one of the obligations States must fulfill, as emphasized by the IACHR:
“599. In any case, the Inter-American Court reiterates that the effectiveness of the right to informational self-determination requires States to provide adequate, swift, free, and effective mechanisms or procedures to process and address requests, either by the same authority managing the data or by another competent institution in matters of personal data protection or oversight (see para. 582). (…) This requirement, derived from the obligation established in Article 2 of the American Convention, which encompasses the issuance of regulations and the development of practices conducive to the observance of human rights, including appropriate administrative procedures, constitutes an essential guarantee for asserting and exercising this right.”2 (Emphasis added)3
In the operative part of the ruling, the IACHR decided, among other things, the following:
“13. The State is internationally responsible for the violation of the right to informational self-determination, recognized in Articles 11.2 and 13.1 of the American Convention on Human Rights, in relation to the obligations to respect and guarantee rights, and to adopt domestic legal provisions as established by Articles 1.1 and 2 of the same international instrument.” Specifically, the IACHR declared the violation of the right to informational self-determination because the victims of arbitrary intelligence activities were not guaranteed “access to the data that the intelligence agencies had collected about them. Furthermore, such access was hindered due to the limited progress in purging the archives of the now-defunct DAS” (paragraph 1011).
Given the above, the IACHR ordered a purge of the archives4 of the defunct Administrative Department of Security (DAS) to ensure that victims can access their information and exercise the eventual correction, cancellation, or deletion of data held in the archives (paragraph 1011). Additionally, the IACHR demanded that, during the purging of the archives, “authorities must ensure the protection of sensitive data contained in the archives regarding which public access may eventually be granted” (paragraph 1013).
Moreover, the IACHR ordered that:
“36. The State shall proceed with the approval of the necessary regulations to implement reasonable, swift, simple, free, and effective mechanisms or procedures that allow individuals to access and control the data held on them in intelligence archives, in accordance with the scope of the right to informational self-determination, as detailed in paragraphs 1059 and 1060 of this Judgment.”
This order vindicates an essential aspect of the right to data protection, which includes not only access to the data but also the existence of effective mechanisms to that end. It is not enough to create formal or theoretical tools; States must provide useful and timely tools that ensure rights are realized and guaranteed in practice.
The IACHR’s decision has been compared to the 1983 ruling of the Federal Constitutional Court of Germany on the law regarding the population, profession, and workplace census (Census Law), which highlighted the importance and scope of the right to “informational self-determination” and outlined the factual, legal, and administrative conditions that should govern the collection and processing of personal data through population censuses.
The right to informational self-determination encompasses the triad of the person, their personal data, and their constitutional rights. It is an essential right that is gaining increasing relevance in the face of the growing use of information about individuals, and it is realized in the ability of individuals to decide when and within what limits personal matters are made public, as well as in controlling what happens to their personal data. The ruling points out that the current and future conditions of data processing endanger self-determination because technologies make it easier to:
(1) Archive personal data indefinitely;
(2) Integrate that information with data from other databases anywhere in the world;
(3) Review or consult personal data in seconds.
Added to this is the individual’s inability or difficulty in controlling both the use of their personal data and the quality of the information about them.
As with other rights, informational self-determination is not guaranteed without limits. The ruling clarifies that “the individual does not have unlimited or absolute dominion over their data.” The prevalence of the public interest justifies the imposition of certain restrictions to live in society. For those limitations to be valid and legitimate, they must be based on a legal or constitutional mandate.
2. The right to informational self-determination as a cornerstone of democratic regimes in Latin America
The ruling of the IACHR in the CAJAR case not only represents a milestone in recognizing informational self-determination as an autonomous human right but also presents an urgent challenge for Latin American states regarding the protection of fundamental rights in the digital environment. In a region still facing deep inequalities, conflicts, and institutional fragility, the protection of personal data and privacy is not only essential to safeguarding individual rights, but also to strengthening the democratic regime upon which human rights are based.
A solid democratic regime depends on transparency, accountability, and the unrestricted respect for citizens’ rights, where the right to informational self-determination plays a vital role. Undue state surveillance, mass data collection without control, and information leaks, as evidenced in the CAJAR case, are practices that undermine public trust in institutions and create an environment of insecurity and harassment, especially for those who defend human rights or criticize power. Therefore, protecting personal information becomes a fundamental guarantee for free citizen participation without fear of reprisals.
At the regional level, Latin American countries need to strengthen their legal frameworks to protect personal data and ensure that informational self-determination is respected in practice, not just on paper. In this sense, a key recommendation is that states adopt robust data protection laws aligned with international standards, such as the Council of Europe’s Convention 108 for the Protection of Individuals with regard to Automatic Processing of Personal Data and its additional protocol on supervisory authorities and transborder data flows; the Ibero-American Data Protection Standards of the Ibero-American Data Protection Network, and the updated Principles on Privacy and Personal Data Protection of the Organization of American States (OAS), which can serve as a model.
These laws must establish clear and effective mechanisms for citizens to access, rectify, and delete their data, and these mechanisms must be agile, free, and accessible to all sectors of the population, particularly the most vulnerable. Additionally, it is essential to have independent data protection authorities equipped with sufficient resources to oversee compliance with regulations and with sanctioning powers.
In addition to strengthening legal frameworks, it is imperative that Latin American countries develop secure technologies and platforms that enable accountable data processing. The use of encryption and other Privacy Enhancing Technologies, regular security audits, and the responsible purging of databases are fundamental steps to ensure that sensitive information is protected from unauthorized access. In the case of the now-defunct DAS in Colombia, the IACHR ruling ordered the purging of intelligence files, highlighting the need for states to implement effective protocols to guarantee the deletion or rectification of obsolete personal data or data collected arbitrarily without specific purposes.
Strengthening the democratic regime in the region means recognizing that the protection of personal data and the right to privacy are not privileges, but fundamental pillars for the defense of all human rights. Respect for informational self-determination not only protects citizens from abuses of power but also fosters trust in democratic institutions, creating a more transparent, secure, and participatory environment.
The construction of a strong democracy in Latin America necessarily involves a robust defense of digital rights, where informational self-determination and data protection are unrestricted guarantees for all citizens. As Yuval Noah Harari points out, “It is not enough for a democratic government to refrain from infringing on human and civil rights. It must take steps to guarantee them.”
The operative part of the judgment states the following: ’23. The State shall proceed with the purification of intelligence files in order to guarantee the victims’ right to informational self-determination regarding the data concerning them in such files, in the terms of paragraphs 1011 to 1014 of this Judgment.’
Brussels Privacy Symposium Report 2024
This year’s Brussels Privacy Symposium, held on 8 October 2024, convened global stakeholders from across Europe and beyond for in-depth discussions on the EU AI Act in the context of the broader EU digital ecosystem. Co-organized by the Future of Privacy Forum and the Brussels Privacy Hub of the Vrije Universiteit Brussel, the eighth edition of the Symposium was a melting pot of brilliant minds from across academia, regulatory authorities and policymakers, industry, and civil society.
In addition to three expert panels exploring notions of risk and impact assessments across the EU digital rulebook, prohibitions and obligations for sensitive data processing, and an increasingly complex enforcement landscape, the organizers also welcomed Mark Scott, Senior Resident Fellow at the Atlantic Council’s Digital Forensic Research Lab, for the Opening Keynote. Drawing on previous roles as chief technology correspondent for Politico and more than a decade as a correspondent for The New York Times, Scott provided a thorough and frank analysis of Europe’s “digital challenge” as the focus shifts from rulemaking to enforcement.
For this year’s program, Professor Adriana Iamnitchi, Chair of Computational Social Sciences at Maastricht University, presented research findings from a cutting-edge project analyzing search trends and patterns on prominent social media platforms to identify mis/disinformation. And finally, European Data Protection Supervisor Wojciech Wiewiórowski and Professor Gloria González-Fuster of the Vrije Universiteit Brussel sat together for a candid closing dialogue on the future of data protection.
In the Report of the Brussels Privacy Symposium 2024, you can read the key takeaways from the highlights mentioned above, along with many more practical and actionable insights on the complex interplay between the different elements of the EU data strategy architecture.
Future of Privacy Forum Publishes Report Exploring Organizations’ Emerging Practices and Challenges Assessing AI Risks
As AI models and systems become more widespread and powerful, FPF’s report finds many organizations are taking a four-step approach to managing potential risks
With growing focus from policymakers and regulators on the impact of artificial intelligence (AI) systems, organizations striving to use AI responsibly are increasingly embracing AI impact assessments to identify risks and take steps to minimize them. In response to the growing use of—and uncertainty around—AI impact assessments, the Future of Privacy Forum (FPF) Center for Artificial Intelligence published a new report, “AI Governance Behind the Scenes: Emerging Practices For AI Impact Assessments,” examining the considerations, emerging practices, and challenges that companies are experiencing as they endeavor to harness AI’s potential while mitigating potential harms.
“Companies are embedding AI into their systems for a variety of uses from research to enterprise and entertainment, though questions remain around how to implement AI models in a responsible, ethical manner,” said Daniel Berrick, FPF’s Counsel for Artificial Intelligence and the report’s author. “This report underscores that much more work needs to be done to ensure that companies can operationalize AI impact assessments, identify risks, and implement robust risk management practices. We hope this resource, built from conversations with a range of stakeholders, can serve as a resource for those evaluating how to deploy emerging technologies responsibly.”
Though recent years have witnessed a growing number of laws and resources on AI governance, many organizations remain uncertain about what AI impact assessments entail or which framework to use. In light of this emerging dynamic, FPF surveyed over 60 private sector stakeholders to gain insight into what common approaches companies are employing and the challenges they face when conducting AI impact assessments. FPF found that companies are converging on several practices for conducting AI impact assessments, such as accounting for both intended and unintended uses of AI models and systems. However, practitioners continue to face several challenges at different points in the assessment process:
Many organizations are struggling to obtain the full extent of relevant information from model developers and system providers;
Organizations have different levels of sophistication in their abilities to assess the levels of AI risks across varied contexts;
There is a lack of clarity regarding how best to measure risk management strategies’ effectiveness; and
Novel uses of AI can create uncertainty about when risk has been brought within acceptable levels.
Other insights include: 1) when gathering model-system information, organizations typically seek a variety of information, such as details about an AI model’s training, use cases, capabilities, and more; 2) a growing number of organizations have sought to integrate AI impact assessments into existing enterprise risk management processes, including those around privacy; and 3) when identifying and testing for AI-related risk, organizations may use both qualitative and quantitative approaches.
Organizations seeking to enhance their AI Impact Assessments should consider:
Bolstering processes for gathering information from third-party model developers and system vendors;
Improving internal education about AI risks; and
Enhancing techniques that measure risk management strategies’ effectiveness.
“FPF’s Center for Artificial Intelligence was created to act as a collaborative force for shared knowledge between stakeholders and support the responsible development of AI. The Center’s report addresses key knowledge gaps and promotes collaboration,” said John Verdi, Senior Vice President for Policy at FPF. “FPF’s report was created with input from dozens of expert stakeholders, and it is the culmination of six months of convenings, interviews and workshops aimed at describing the state of play.”
The report dives deeper into the trends and challenges companies encounter at each step of conducting AI impact assessments, as well as the circumstances that trigger them. To learn more, read the new report here.
###
About Future of Privacy Forum (FPF)
The Future of Privacy Forum (FPF) is a global non-profit organization that brings together academics, civil society, government officials, and industry to evaluate the societal, policy, and legal implications of data use, identify the risks, and develop appropriate protections.
FPF believes technology and data can benefit society and improve lives if the right laws, policies, and rules are in place. FPF has offices in Washington D.C., Brussels, Singapore, and Tel Aviv. Learn more at fpf.org.
Technologist Roundtable: Key Issues in AI and Data Protection Post-Event Summary and Takeaways
Co-Authored with Marlene Smith, Research Assistant for AI
On November 27, 2024, the Future of Privacy Forum (FPF) hosted a Technologist Roundtable with the goal of convening an open dialogue on complex technical questions that impact law and policy, and assisting global data protection and privacy policymakers in understanding the relevant technical basics of large language models (LLMs). We invited a wide range of academic technical experts to convene with each other and with data protection regulators and policymakers from around the world. Participating experts included:
Dr. Yves-Alexandre de Montjoye, Associate Professor of Applied Mathematics and Computer Science, Imperial College London; Lead, Computational Privacy Group, Imperial College London; Special Adviser on AI and Data Protection to EC Justice Commissioner Reynders; Parliament-appointed expert to the Belgian Data Protection Agency
Dr. David Weinkauf, Senior IT Research Analyst, Office of the Privacy Commissioner of Canada; Member, Berlin Group
Dr. Norman Sadeh, Professor in the School of Computer Science, Carnegie Mellon University; Co-Director, Privacy Engineering Program, Carnegie Mellon University
Dr. Niloofar Mireshghallah, Post-doctoral scholar at the Paul G. Allen Center for Computer Science and Engineering, University of Washington
Dr. Rachel Cummings, Associate Professor of Industrial Engineering and Operations Research, Columbia University; Co-chair of the Cybersecurity Research Center at the Data Science Institute, Columbia University
Dr. Damien Desfontaines, Staff Scientist, Tumult Labs; Expert for the EDPB Support Pool of Experts
As a result of the emergence of LLMs, data protection authorities and lawmakers are exploring a range of novel data protection issues, including how to ensure lawful processing of personal data in LLMs, and how to comply with obligations such as data deletion and correction requests. While LLMs can process personal data at different stages,1 including in training and in the input and output of models, there is an emerging question of the extent to which personal data exists “within” a model itself.2 Navigating these complex emerging issues increasingly requires understanding the technical building blocks of LLMs.
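For readers less familiar with these building blocks, the short Python sketch below is a minimal, purely illustrative example of tokenization, one of the topics covered in the Roundtable summary. It assumes the open-source tiktoken package (an assumption for illustration; it was not used or endorsed at the Roundtable) and shows how text, including personal data, is converted into a sequence of integer token IDs before a model ever processes it.

# Minimal, illustrative tokenization sketch (assumes the open-source
# `tiktoken` package is installed: pip install tiktoken).
import tiktoken

# Load a byte-pair-encoding tokenizer of the kind used by many LLMs.
enc = tiktoken.get_encoding("cl100k_base")

text = "Maria Garcia lives at 12 Example Street."  # hypothetical personal data
token_ids = enc.encode(text)

print(token_ids)              # a list of integers, one per token fragment
print(len(token_ids))         # how many tokens the model would actually process
print(enc.decode(token_ids))  # decoding the IDs reproduces the original string

During training it is statistical patterns over such integer sequences, not stored records about named individuals, that shape a model’s weights, which is part of why the question of whether personal data exists “within” a model is technically subtle.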
This post-event summary contains highlights and key discussion takeaways from the three parts of the November 27 Roundtable:
Basics of Transformer Technology and Tokenization
Training and Data Minimization
Memorization, Filters, and “Un-Learning”
We hope that this document supports ongoing efforts to explore and understand a range of novel issues at the intersection of data protection and artificial intelligence models.
If you have any questions, comments, or wish to discuss any of the topics related to the Roundtable and Post-Event Summary, please do not hesitate to reach out to FPF’s Center for AI at [email protected].
Commissioners Discussed Global Privacy Regulations at the Second Japan Privacy Symposium
November 26, 2024 — This week, the Future of Privacy Forum (FPF), a global non-profit focused on data protection, privacy, and emerging technologies, hosted the second annual Japan Privacy Symposium with support from S&K Brussels LPC and in cooperation with the Personal Information Protection Commission of Japan (PPC) and the Japan DPO Association.
This event, on the sidelines of the 62nd Asia-Pacific Privacy Authorities (APPA) Forum, brought together leaders in the Japanese privacy community and data protection and privacy regulators from across the globe at the Ritz-Carlton in Tokyo, Japan.
Commissioner OHSHIMA Shuhei opened the event with a keynote speech outlining some of PPC’s key regulatory priorities going forward. Subsequently, Philippe Dufresne, Commissioner, Office of the Privacy Commissioner, Canada; Ashkan Soltani, Executive Director, California Privacy Protection Agency; Nazri Kama, Commissioner, Personal Data Protection Department of Malaysia; Thienchai Na Nakorn, Chairman, Personal Data Protection Committee, Thailand; and Josh Lee Kok Thong, Managing Director for APAC, Future of Privacy Forum, also discussed upcoming regulatory priorities for data protection authorities, and key trends around regulatory priorities in the APAC region.
“We are excited to have had a successful second edition of this valuable event that brings together data protection and privacy regulators from around the world alongside the Japanese privacy community,” Gabriela Zanfir-Fortuna, FPF’s Vice President for Global Privacy, said. “Tokyo is a perfect location to host these important global conversations and it provides a valuable forum for commissioners from around the globe to share their perspectives with privacy leaders and community members. We are grateful to our partners, the Personal Information Protection Commission of Japan, the Japan DPO Association, S&K Brussels LPC and our Senior Fellow Kaori Inui, for their steadfast partnership and support.”
Pictured: Gabriela Zanfir-Fortuna addressing attendees at the second annual Japan Privacy Symposium on November 25 in Tokyo, Japan.
###
About Future of Privacy Forum (FPF)
The Future of Privacy Forum (FPF) is a global non-profit organization that brings together academics, civil society, government officials, and industry to evaluate the societal, policy, and legal implications of data use, identify the risks, and develop appropriate protections. FPF believes technology and data can benefit society and improve lives if the right laws, policies, and rules are in place. FPF has offices in Washington D.C., Brussels, Singapore, and Tel Aviv. Follow FPF on X and LinkedIn.
FPF Launches New Privacy-Enhancing Technologies Repository
For some time now, stakeholders at the intersection of data, privacy, and new technologies have increasingly recognized the potential of a range of technical and computational approaches to mitigate privacy risks. These tools and methods, known as privacy-enhancing technologies (PETs), are a group of techniques that can help preserve data privacy while maintaining data utility. As novel technological challenges continue to evolve, the conversation around PETs as possible enablers of secure, trusted, and fair data sharing and use has similarly intensified.
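To make the idea of preserving privacy while maintaining utility concrete, here is a minimal, purely illustrative Python sketch of one widely discussed PET, differential privacy, using the Laplace mechanism to release a noisy count. The function names and dataset are hypothetical, and the sketch is not drawn from any resource in the Repository.

import math
import random

def laplace_noise(scale: float) -> float:
    # Draw one sample from a Laplace(0, scale) distribution via inverse transform sampling.
    u = random.random() - 0.5  # uniform on [-0.5, 0.5); boundary edge case ignored for brevity
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    # Add Laplace noise scaled to sensitivity/epsilon so the released count
    # satisfies epsilon-differential privacy for a counting query.
    return true_count + laplace_noise(sensitivity / epsilon)

# Example: release how many records in a hypothetical dataset match some condition.
exact_count = 1342
print(dp_count(exact_count, epsilon=1.0))  # noisy count, usually within a few units of the exact value

Production systems rely on audited libraries and careful privacy accounting rather than a hand-rolled sampler, but the sketch captures the core trade-off: a small loss of accuracy in exchange for a formal limit on what the output reveals about any one individual.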
Recognizing the need for a deeper understanding of the potential and limitations of these technologies, FPF has actively contributed to shaping policymaking around PETs through discussion papers, reports, and stakeholder engagement. In 2023, FPF began facilitating the convenings of the Global PETs Network, an informal forum for global Data Protection Authorities (DPAs) and regulators interested in the development and policy implications of the adoption and implementation of PETs. In July 2024, FPF also launched the Research Coordination Network for Privacy-Preserving Data Sharing and Analytics, bringing together a group of cross-sector and multidisciplinary experts dedicated to exploring the potential of PETs in the context of AI and emerging technologies and to stewarding their adoption and scalability.
Through these initiatives and other efforts, FPF has identified a growing demand among stakeholders for a better understanding of the potential and limitations of PETs. Many regulators and organizations have made significant contributions to advancing PETs development and deployment through research, guidance, and testing, much of which is publicly available. Building on these efforts and its own initiatives, FPF recently launched the PETs Repository, a webpage that consolidates available resources on the development and deployment of PETs.
FPF’s PETs Repository is a centralized, trusted, and up-to-date resource where individuals and organizations interested in these technologies can find practical and useful information in the field. The Repository is currently organized into three segments:
Regulatory Activity: Including official guidance, statements, blogs, tools, and reports on PETs from DPAs and regulators around the world.
External Reports: Relevant and contemporary publications from global organizations offering insights into PETs.
Sandboxes: Links to official resources detailing use cases, applications, and results from testing environments such as sandboxes and other initiatives led by governments and organizations.
While FPF acknowledges the extensive field of academic research around PETs, the Repository intends to primarily consolidate developments and official resources reflecting the existing regulatory thinking and policymaking around these technologies. Through the Repository, FPF contributes to facilitating a better understanding of PETs and increasing the visibility of global initiatives in the field from governments and organizations to a broader audience.