GPA 2025: AI development and human oversight of decisions involving AI systems were this year’s focus for global privacy regulators
The 47th Global Privacy Assembly (GPA), an annual gathering of the world’s privacy and data protection authorities, took place between September 15 and 19, 2025, hosted by South Korea’s Personal Information Protection Commission in Seoul. Over 140 authorities from more than 90 countries are members of the GPA, and its annual conferences serve as an excellent bellwether for the priorities of the global data protection and privacy regulatory community, providing the gathered authorities an opportunity to share policy updates and priorities, collaborate on global standards, and adopt joint resolutions on the most critical issues in data protection.
This year, the GPA adopted three resolutions after completing its five-day agenda, including two closed-session days for members and observers only:
- Resolution on the collection, use and disclosure of personal data to pre-train, train and fine-tune AI models
- Resolution on meaningful human oversight of decisions involving AI systems
- Resolution on Digital Education, Privacy and Personal Data Protection for Responsible Inclusive Digital Citizenship
The first key takeaway from the results of GPA’s Closed Session is a substantial difference in the scope of the resolutions relative to prior years. In contrast to the five resolutions adopted in 2024 or the seven adopted in 2023, which covered a wide variety of data protection topics from surveillance to the use of health data for scientific research, the 2025 resolutions are much more narrowly tailored, focused primarily on AI with a pinch of digital literacy. Taken together with the meeting’s content and agenda, these resolutions provide insight into the current priorities of the global privacy regulatory community and, perhaps unsurprisingly, reflect a much narrower focus on AI issues than in previous years.
Across all three resolutions adopted in 2025, a few core issues become apparent:
- First, regulators are continuing to promote shared conceptual frameworks for data protection regulation, with a particular focus on raising awareness of privacy and data protection issues throughout the world.
- Second, regulators are starting to zoom in on specific issues related to AI and personal data processing, departing from the general, broad approach taken so far: training and fine-tuning of AI models and meaningful human oversight of individual decisions involving AI were the two concrete topics on which regulatory perspectives converged this year.
- Third, a risk-based consensus for evaluating AI seems to be holding, with all three resolutions framing discussions of AI policy in the context of risk, and discussing the specific problem of bias in the context of AI-related data processing.
- Fourth, there remains great interest in mutual cooperation through the GPA or other international fora; all three of the 2025 resolutions explicitly promote this goal.
Finally, it is also worth exploring which topics the Assembly did not address. A deeper dive into each resolution illustrates some of the shared goals of the global privacy regulatory community, particularly in an age where major tech policymakers in the U.S., the European Union, and around the world are overwhelmingly focused on AI. Notably, the three resolutions passed quasi-unanimously, with only one abstention among GPA members noted in the public documents (the US Federal Trade Commission).
- Resolution on the collection, use and disclosure of personal data to pre-train, train and fine-tune AI models
The first resolution, covering the collection, use and disclosure of personal data to pre-train, train, and fine-tune AI models, was sponsored by the Office of the Australian Information Commissioner and co-sponsored by 15 other GPA member authorities. The GPA resolved four specific points after articulating a longer list of underlying concerns – specifically, that:
- The collection, use and disclosure of personal data for the pre-training, training, and fine-tuning of AI models is within the scope of data protection and privacy principles.
- The members of the GPA will promote these privacy principles and engage with other policy makers and international bodies (specifically naming the OECD, Council of Europe, and the UN) to raise awareness and educate AI developers and deployers.
- The members of the GPA will coordinate enforcement efforts on generative AI technologies in particular to ensure a “consistent standard of data protection and privacy” is applied.
- The members of the GPA will commit to sharing developments on education, compliance and enforcement on generative AI technologies to foster the coherence of regulatory proposals.
The specific resolved steps indicate a particular focus on generative AI technologies, and a recognition that, to be effective, regulatory standards will likely need to be consistent across international boundaries. Three of the four steps also emphasize cooperation among international privacy enforcement authorities, although notably this resolution does not include any specific proposals for directly adopting shared terminology.
The broader document relies on a rights-based understanding of data protection rights and notes several times that the untrammeled collection and use of personal data in the development of AI technologies may imperil the fundamental right to privacy, but casts the development of AI technologies in a rights-consistent manner as “ensur[ing] their trustworthiness and facilitat[ing] their adoption.” The resolution repeatedly emphasizes that all stages of the algorithmic lifecycle are important in the context of processing personal data.
The resolution also provides eight familiar data protection principles that are reminiscent of the OECD’s data protection principles and the Fair Information Practice Principles that preceded them – under this resolution personal data should only be used throughout the AI lifecycle when its use comports with: a lawful and fair basis for processing; purpose specification and use limitation; data minimization; transparency; accuracy; data security; accountability and privacy by design; and the rights of data subjects.
The resolution does characterize some of these principles in ways specific to the training of AI models – critically noting that:
- Related to the first principle of lawfulness, “the public availability of [personal] data does not automatically imply a lawful basis for its processing, which must always be assessed in light of the data subject’s reasonable expectation of privacy.”
- Regarding the third principle of data minimisation, “consideration should be given to whether the AI model can be trained without the collection or use of personal data.”
- Concerning the fifth principle, accuracy, that developers should “undertake appropriate testing to ensure a high degree of accuracy in [a] model’s outputs.”
- A component of the sixth principle, data security, is an obligation on entities developing or deploying AI systems to put in place “effective safeguards to prevent and detect attempts to extract or reconstruct personal data from trained AI models.”
This articulation of traditional data protection principles demonstrates how the global data protection community is considering how the existing principles-based data privacy frameworks will specifically apply to AI and other emerging technologies.
- Resolution on meaningful human oversight of decisions involving AI systems
The second resolution of 2025 was submitted by the Office of the Privacy Commissioner of Canada, joined by 13 co-sponsors, and focused on how the members could synchronize their approaches to “meaningful human oversight” of AI decision-making. After explanatory text, the Assembly resolved four specific points:
- GPA Members should promote a common understanding of the notion of meaningful human oversight of decisions, which includes the considerations set out in [the second] resolution.
- GPA Members should encourage the designation of overseers with “necessary competence, training, resources, and awareness of contextual information and specific information regarding AI systems as a means of meaningful oversight.”
- The Assembly should use the GPA Ethics and Data Protection in Artificial Intelligence Working group to share knowledge and best practices to support practical implementation of “meaningful human oversight” in their respective jurisdictions.
- The Assembly should continue to promote the development of technologies or processes that advance explainability for AI systems.
This resolution, topically much more narrowly focused than the first one analyzed above, is based on the contention that AI systems’ decision-making processes may have “significant adverse effects on individuals’ rights and freedoms” if there is no “meaningful human oversight” of system decision-making and thus no effective recourse for an impacted individual to challenge such a decision. This is a notable premise, as only this resolution (of the three) also acknowledges that “some privacy and data protection laws” establish a right not to be subject to automated decision-making along the lines of Article 22 GDPR.
Ahead of the specifically resolved points, the second resolution identifies the potential for “timely human review” of automated decisions that “may significantly affect individuals’ fundamental rights and freedoms” as the critical threshold for ensuring that automated decision-making and AI technologies do not erode data protection rights. Another critical piece is the distinction the Assembly draws between “human oversight,” which may occur throughout the decision-making process, and “human review,” which may occur exclusively after the fact; the GPA explicitly identifies “human review” as only one activity within the broader concept of “oversight.”
Most critically, the GPA identifies specific considerations in evaluating whether a human oversight system is “meaningful”:
- Agency – essentially, whether the overseer has effective control to make decisions and act independently.
- Clarity of [overseer] role – preemptively setting forth what the overseer does with AI decisions (whether they are to accept, reject, or modify them) and how they are to consider AI system outputs.
- Knowledge and expertise – ensuring that overseers have appropriate knowledge and training to evaluate an AI system’s decision, including awareness of specific circumstances where a system’s outputs may require additional scrutiny.
- Resources – ensuring overseers have sufficient resources to oversee a decision.
- Timing and effectiveness – ensuring oversight is appropriately integrated into decision-making processes such that overseers may “agree with, contest, or mitigate the potential impacts of the AI system’s decision.”
- Evaluation and Accountability – ensuring overseers are evaluated on the basis of whether oversight was performed, rather than the outcome of the oversight decision.
The resolution also considers tools that organizations possess in order to ensure that “meaningful oversight” is actually occurring, including:
- Clarifying the “intention” and value of oversight
- Training
- Designing the oversight process
- Escalation
- Documentation
- Assessments
- Evaluation and testing of the process
- Evaluation of outcomes
Overall, the resolution notes that human oversight mechanisms are the responsibility of developers and deployers, and are critical in mitigating the risk to fundamental rights and freedoms posed by potential bias in algorithmic decision-making, specifically noting the risks of self-reinforcing bias based on training data or the improper weighting of past decisions as threats that meaningful oversight processes can counteract.
- Resolution on Digital Education, Privacy and Personal Data Protection for Responsible Inclusive Digital Citizenship
The third and final resolution of 2025 was submitted by the Institute for Transparency, Access to Public Information and Protection of Personal Data of the State of Mexico and Municipalities (Infoem), a new body that has replaced Mexico’s former GPA representative, the National Institute for Transparency, Access to Information and Personal Data Protection (INAI). This resolution was joined by only seven co-sponsors, and reflected the GPA’s commitment to developing privacy in the digital education space and promoting “inclusive digital citizenship.” Here, the GPA resolved five particular points, each accompanied by a number of recommendations for GPA Members:
- GPA Members should promote privacy and technology ethics as cross-cutting issues across the full spectrum of education, from early childhood to university.
- States and authorities should ensure education related to digital privacy promotes lawfulness and diversity for all, particularly children and vulnerable communities.
- GPA Members should promote the “understanding, exercise, and defense of personal data rights” as well as consideration of ongoing issues around the use of emerging technologies.
- GPA Members should work to strengthen regulatory frameworks, align strategies with international human rights and data protection instruments, and actively engage in international cooperation networks alongside other international bodies related to data protection and education.
- Promote a “culture of privacy” relying on awareness-raising, continuous training, and capacity building.
The resolution also evidences the 2025 Assembly’s specific concerns relating to generative AI, including a statement “reaffirming that … generative artificial intelligence, pose[s] specific risks to vulnerable groups and must be addressed using an approach based on ethics and privacy by design” and recommending under the resolved points that GPA members “[p]romote the creation and inclusion of educational content that allows for understanding and exercising rights related to personal data — such as access, rectification, erasure, objection, and portability, among others — as well as critical reflection on the responsible use of emerging technologies.”
Among its generalized resolved points, the Assembly critically recommends that GPA Members:
- Promote the creation of a base or certification on data protection for educational institutions that integrate best practices in data protection and digital citizenship, in collaboration with networks such as the GPA or the Ibero-American Data Protection Network (RIPD).
- Promote the implementation of age assurance safeguards, with specific reference to the Joint Statement on A Common International Approach to Age Assurance.
- Promote participation in international networks that foster cooperation on data protection in education, with the aim of sharing experiences, methodologies, and common frameworks for action – again referencing the GPA working group on Digital Education and the Ibero-American Data Protection Network specifically.
Finally, the third resolution also includes an optional “Glossary” that offers definitions for some of the terminology that it uses. Although the glossary does not seek to define “artificial intelligence”, “personal data,” or, indeed, “children,” the glossary does offer definitions for both “digital citizenship” – “the ability to participate actively, ethically, and responsibly in digital environments, exercising rights and fulfilling duties, with special attention to the protection of privacy and personal data” and “age assurance” – “a mechanism or procedure for verifying or estimating the age of users in digital environments, in order to protect children from online risks.” Glossaries such as this one are useful in evaluating where areas of conceptual agreement in terminology (and thus, regulatory scope) are emerging among the global regulatory community.
- Sandboxes and Simplification: not yet in focus
It is also worth noting a few specific areas that the GPA did not address in this year’s resolutions. As previously noted, the topical range of the resolutions was more targeted than in prior years. Even within the narrowed focus on AI, the Assembly made no mention of regulatory sandboxes for AI governance, nor did it challenge or refer to the ongoing push for regulatory simplification, both increasingly common topics in discussions of AI regulation around the globe. How privacy regulators engage with these trends will be something to watch at next year’s GPA.
- Concluding remarks
The resolutions adopted by the GPA in 2025 indicate an increasing focus and specialization of the world’s privacy regulators on AI issues, at least for the immediate future. In contrast to the multi-subject resolutions of previous years (some of which were, admittedly, AI-related), this year’s GPA produced resolutions that were essentially concerned only with AI, although still approaching the new technology in the context of its impact on pre-existing data protection rights. Moving into 2026, it will be worth observing whether the GPA (or other internationally cooperative bodies) pursue mutually consistent conceptual and enforcement frameworks, particularly concerning the definitions of AI systems and associated oversight mechanisms.