What to Expect in Global Privacy in 2025
Next year, in 2026, we will mark a decade since the adoption of the GDPR, a law with an unprecedented regulatory impact around the world, from California to Brazil, across the African continent, to India, to China, and everywhere in between. The field of data protection and privacy has become undeniably global, with GDPR-inspired laws (to a greater or lesser degree) adopted or updated in many jurisdictions around the world over the past years. This could not have happened in a more transformative decade for technologies relying on data, with AI decidedly emerging from its winter, and “connected-everything,” from cars to eyewear, increasingly shaping our surroundings.
While jurisdictions around the world were catching up with the GDPR or shaping their own approaches to data protection legislation, the EU leaped in the past five years towards comprehensive (and sometimes incomprehensible) regulation of multiple dimensions of the digital economy: AI itself, online platforms through intermediary liability, content moderation and managing systemic risks on very large online platforms and search engines, online advertising in electoral campaigns, digital gatekeepers and competition, data sharing and connected devices, data altruism, and even algorithms used in the gig economy.
Against this backdrop, I asked my colleagues in FPF’s offices around the world, who passionately monitor, understand, and explain legislative, regulatory, and enforcement developments across regions, what we should expect in 2025 in Global Privacy. From data-powered technological shifts and their impact on human autonomy, to enforcement and sectoral implementation of general data protection laws adopted in the past years, to AI regulation, cross-border data transfers, and the clash of online safety and children’s privacy, this is what we think you should keep on your radar:
1. AI becoming ubiquitous will put self-determination and control at the center of global privacy debates
“Expect AI to become ubiquitous in everything we do online,” signals Dr. Rob van Eijk, FPF Managing Director for Europe. This will not only bring excitement for tech enthusiasts but also a host of challenges, heightened by the expected increase in consumers using AI agents. “The first challenge is maintaining personal autonomy in the face of technological development, particularly regarding AI,” weighs in Rivki Dvash, Senior Fellow with ITPI – FPF Israel.
Rivki foresees two prominent dimensions of this topic: first, at the ethical level, and second, at the regulatory level, particularly concerned “with the limits of the legitimacy of the use of AI while trying to contour the uniqueness of a person over a machine and the desire to preserve personal autonomy in a space of choice.” “What does it mean to be a human in an Agentic AI future?” is a question that Rob says will ignite a lot of thinking in the policy world in 2025. This makes me think of an older paper from Prof. Mireille Hildebrandt, “Privacy as Protection of the Incomputable Self: From Agnostic to Agonistic Machine Learning” (2019), where she described a framework that could “provide the best means to achieve effective protection against overdetermination of individuals by machine inferences.”
I expect the idea of “control” over one’s persona and personal information in the world of Generative and Agentic AI to increasingly permeate and fuel regulatory debates. In its much-expected Opinion on AI systems and data protection law published over the Holidays, the European Data Protection Board (EDPB) identified “the interest in self-determination and retaining control over one’s own personal data” as chief among individuals’ interests that must be taken into account and balanced, both when personal data is gathered for the development of AI models and with regards to personal data processed once the model is deployed.
Putting self-determination and control at the center of AI governance will not be just academic. For instance, the EDPB asked for an “unconditional opt-out from the outset,” “a discretionary right to object before the processing takes place” for developing and deploying AI systems, “beyond the conditions of Article 21 GDPR,” in order for legitimate interests to be considered as a valid lawful ground legitimizing consentless processing of personal data for AI models.
Rob adds that in 2025, we will see users “becoming increasingly reliant on AI companions for decision-making, from small choices like what to watch on streaming services to larger life decisions.” He highlights what will be one of the key privacy and data protection implications of all this: “AI companions will get unprecedented access to sensitive personal data, from financial transactions to private conversations and daily routines.” Protecting sensitive data in this context, especially with inferences broadly recognized as being covered by such enhanced safeguards under data protection law regimes, will be a key challenge that will keep privacy experts busy this year.
But the ideas of “control,” “self-determination,” and “autonomy” in relation to one’s own personal data are particularly fragile when it comes to non-users or bystanders whose data is collected through another person’s use of a service or device. This is one of the big issues that Lee Matheson, FPF Deputy Director for Global Privacy, sees as defining an enforcement push from Data Protection Authorities (DPAs) from Canada to Europe this year, particularly as it relates to Augmented Reality and connected devices: “It’s a cross-cutting technology that implicates lawful bases for collection/processing, AI and automated decision-making (particularly facial recognition), secondary uses, and data transfers (as unlike smartphones, activity is less likely to be kept on-device). I think a particular focus could be on how to vindicate the rights of non-user data subjects whose information is captured by these kinds of devices.”
2. Three different speeds for AI legislation: Moderation in APAC, Implementation in Europe, Acceleration in Latin America
AI governance and data protection are closely linked, as shown above, which makes AI legislation a particularly poignant topic to follow. “Whether through hard or soft law approaches, preventing significant fragmentation of AI rules globally will be high on the agenda,” observes Bianca-Ioana Marcu, FPF Deputy Director for Global Privacy. Bianca has been closely following initiatives of international organizations and networks in the AI governance space throughout the last year, like the efforts of the UN, the OECD, or the G7 in this space, and she believes that in 2025, “international fora and the principles and guidelines agreed upon within such groups will act as the driving force behind AI standard-setting.” Bianca adds that we might see efforts towards “harmonizing regional data protection rules in the interests of supporting the governance and availability of AI training data.” I can see this happening, for instance, across economic regions in Africa, or even at the ASEAN level.
As for legislative efforts around the world targeting AI, the team identifies three different speeds. In the Asia-Pacific (APAC) region, Josh Lee Kok Thong, FPF Managing Director for APAC, foresees a “possible cooling down” of the race to adopt AI laws and other regulatory efforts. “There will be signs of slight regulatory fatigue in AI governance and regulatory initiatives in APAC. This is especially so among the more mature jurisdictions, such as Japan, Singapore, China, and Australia. Rather than developing new headline regulatory or governance initiatives, efforts are likely to focus on the development of tools for evaluation and content provenance,” he says. Josh notes that jurisdictions across APAC will be closely watching how the implementation of the EU AI Act unfolds, as well as the US regulatory stance towards AI under President Trump’s administration before deciding what steps to take.
In contrast, Latin America will likely move full speed ahead toward AI legislation. Maria Badillo, Policy Counsel for Global Privacy, explains that “this year will mark significant progress on initiatives to govern and regulate AI across multiple countries in Latin America. Brazil has taken a leading role and is getting closer to adopting a comprehensive law in 2025 after the Senate’s recent approval of the AI bill. Other countries like Chile, Colombia, and Argentina have introduced similar frameworks.” Maria says that this will happen mainly under the influence of the EU AI Act, but also from Brazil’s AI bill.
When it comes to AI legislation, the EU is catching its breath this year, focusing on the implementation of the EU AI Act, which was adopted last year and whose application starts rolling out in a month. Necessary codes of practice (like the one dedicated to general-purpose AI), implementing acts, and specific standards are expected to follow within the next 18 months or so. But this year, we will certainly see the first signs of whether this new law will successfully achieve its goals. A good indicator will be observing in practice the intricate web of authorities tasked by the EU AI Act with oversight, implementation, and enforcement of the law. “The lack of a one-stop-shop mechanism and the presence of several authorities in the same jurisdiction will be a first test of the efficiency of the AI Act and the authorities’ ability to coordinate,” highlights Vincenzo Tiani, Senior Policy Counsel in FPF’s Brussels office.
Meanwhile, it is expected that DPAs will gain a more prominent role in enforcing the law on matters at the intersection of the GDPR with the various new EU acts regulating the digital space, including the EU AI Act. “DPAs will be increasingly called to step up and drive enforcement actions on a broad number of issues also falling under other EU regulatory acts, but which involve the processing of personal data and the GDPR,” says Andreea Serban, FPF Policy Analyst in Brussels. This will be particularly evident regarding AI systems, after a first infringement decision in a series of complaints surrounding ChatGPT was issued by the Italian DPA, the Garante, at the end of 2024.
The space in AI governance that the GDPR occupies will visibly expand this year, including into issues where copyright is considered central. Vincenzo explains that “the licenses provided by newspapers to providers of LLMs, at least so far, do not cover the protection of personal data contained therein.” The Italian DPA has already raised the flag on this issue.
Countervailing some of the biggest risks of Generative AI beyond the processing of personal data will keep regulators across Europe busy, be they DPAs, the European Commission’s AI Office, or other national EU AI Act implementers. Dr. Desara Dushi, Policy Counsel in our Brussels office, anticipates “a sharp focus on controlling the use of synthetic data that fuels harmful content, with the rise of advanced emotional chatbots and the proliferation of deepfakes.” This could happen through “more robust and specific guidelines targeting generative AI’s risks.”
3. International Data Transfers will return to the top of the Global Privacy agenda
As I anticipated last year in my 2024 predictions, international data transfers started intertwining with the broader geopolitical goals of countries caught in the AI race. This trend will become even more visible in 2025, when we expect that issues related to international data transfers will come back to the top of the Global Privacy agenda, fueled this time not only by the geopolitics of AI development, but also by the broader dynamic between a new European Commission in Brussels and a new administration in Washington DC.
“I think transatlantic data transfer issues will be brought back to center stage in the dynamics of the EU’s implementation of digital regulations like the DSA and the DMA on one side, and the priorities of the new administration in the US on the other side,” foresees Lee Matheson, who is based in our Washington DC office and who closely follows international data transfers. But, this time around, the pressure on the continuity of data flows between the US and the EU might first come from the US side.
Lee thinks we should follow closely what happens with Executive Order (E.O.) 14117 “Preventing Access to Americans’ Bulk Sensitive Personal Data and United States Government-Related Data by Countries of Concern,” an instrument adopted last year which bans transfers of bulk sensitive data of Americans outside of the U.S. in specific circumstances and only towards designated countries of concern (currently China, Iran, Russia, Venezuela, Cuba and North Korea). The Executive Order could be left as is, amended, repealed, or replaced by the new administration in Washington. But an interesting point Lee raises is that “E.O. 14117 and its associated DOJ Rules, in particular, provide a framework that could be extended to additional jurisdictions.”
On the other hand, the General Court of the CJEU started early this year with a decision that recognized plaintiffs can obtain compensation for non-material damage if their personal data have been transferred unlawfully, in a case involving transfers made by the European Commission to the U.S. before the Data Privacy Framework became effective. This clarification made by the Court could increase the appetite for challenging the lawfulness of international data transfers. In part due to pressure on more traditional data transfer mechanisms, Lee thinks “the world will see alternative systems for international data transfers, such as the Global Cross Border Privacy Rules system, become substantially more prominent.”
Indeed, transatlantic data flows will only be one of many cross-border data flow stories to follow. “We may well see continuing fragmentation of the cross-border data transfer landscape globally and in APAC into clusters of likeminded jurisdictions, ranging from those like Singapore and Japan that are working to promote trusted data flows (especially through initiatives like the Global CBPRs) to those like Indonesia, India, and Vietnam that have recently renewed their interest in adopting data localization measures,” adds Dominic Paulger, FPF Deputy Director for APAC, from our Singapore office. He also thinks that geopolitical and regulatory trends in the US and the EU will affect dynamics in APAC. “While there will be tension between data localization requirements in some jurisdictions, navigating the right balance will be crucial in shaping both regulatory strategies and business practices across the region in 2025,” concludes Sakshi Shivhare, Policy Associate for FPF APAC.
4. Convergence of youth privacy and online safety will take the spotlight around the world
Convergence of children’s and teens’ privacy and online safety issues into new legislative action, regulatory initiatives, or public policy measures is being emphatically highlighted as a top issue to watch in 2025 by my colleagues across APAC, India, the EU, and, to some extent, Latin America.
Dominic explains that jurisdictions in APAC are increasingly incorporating online safety provisions into data protection laws, with some focusing on age verification or age-appropriate design requirements. This highlights tensions between real concerns about young people’s online safety and the substantial privacy risks that are posed by age assurance technologies and related mandates. Experts have raised the need for more cross-cutting conversations to identify and address privacy and security risks created by regulatory efforts. He expects the focus on youth safety to continue throughout 2025, “especially following Australia’s recent ban on social media use for under-16s.” This approach has been criticized by some youth safety and privacy experts while being lauded by others. Several jurisdictions, including Singapore, are considering emulating this model, and many more will be watching to see how it plays out.
“The dialogue around online youth safety will likely intensify in the EU as well, with a notable focus on children’s overall well-being and how that intersects with youth privacy rights,” foresees Desara, who comes to FPF’s Brussels office with extensive research and policy work in this space. “The narrative may broaden to encompass a more holistic approach to child protection, leading toward ‘child rights by design’ requirements,” she adds.
The Child Sexual Abuse Regulation (CSAR) proposal in the EU will continue to be the subject of fierce debate in 2025. The CSAR debate has been characterized by proponents noting the measure’s noble goals and critics characterizing the proposal as technically unworkable and certain to undermine core privacy and security measures. Desara concludes: “With early insights emerging from the UK’s Online Safety Act, the ongoing intersection of privacy and youth safety promises to be a defining issue in the year ahead.”
5. We have a new law, now what? Implementation and groundwork for enforcement will be central in APAC, LatAm, Africa, and EU
Several jurisdictions across all regions will focus on starting the implementation of recently adopted data protection laws. Perhaps this is most visible in the APAC region, which “is seeing a significant maturation of data protection frameworks,” as Sakshi Shivhare notes. Examples include “the promulgation of India’s DPDPA Rules, the phased implementation of Malaysia’s PDPA amendments, the much-awaited finalization of implementing regulations for Indonesia’s PDP Law, and the implementation of Australia’s first tranche of Privacy Act amendments,” explains Josh Lee.
This year, significant attention will be paid to India’s DPDPA Implementing Rules. “With the draft rules now released, attention will shift to public consultations and how the government addresses feedback,” notes Bilal Mohamed, FPF Policy Analyst based in New Delhi. He points out that some of the key concerns discussed so far relate to “the possible reintroduction of data localization norms (Rules 12(4) and 14) and the practical concerns with the implementation of Verifiable Parental Consent,” which also ties into two of the trends identified above: international data transfers, and children’s privacy and online safety. “Together, these shifts suggest that 2025 will be pivotal for creating a more cohesive, though not necessarily uniform, privacy landscape across APAC,” concludes Sakshi.
Jurisdictions across Africa will face similar challenges this year. Mercy King’ori, FPF Policy Manager for Africa, based in Nairobi, thinks we should expect “more sectoral regulations as controllers and processors continue to seek clarity on the practical implementation of legal provisions in most data protection laws across the continent. This is the continuation of a trend from 2024 where DPAs have been identifying gaps in the implementation of the laws and proposing regulations and guidelines in data-intensive sectors such as education, marketing, and finance.”
She adds that, in parallel, DPAs are dealing with an increasing number of complaints: “The rise of complaints has been due to heightened awareness of data subject rights and DPAs eager to push for compliance with national data protection regimes. The move towards enforcing compliance has even seen DPAs initiate assessments of their own volition, such as South Africa’s Information Regulator, leading to enforcement notices and penalties.”
Secondary or implementing regulations are also expected to drive the agenda in Latin America, with a priority on “protecting children’s data, data subject rights, and processing of personal data in the context of AI,” points out Maria Badillo. She specifically notes that “active DPAs in the region, such as those from Brazil and Argentina, have identified AI regulation, exercise of data subject rights, and processing of children’s data among the priority areas for developing secondary regulations and guidance in 2025.”
Even the EU will have implementation fever this year, which is to be expected after intense lawmaking on everything digital and data during the first von der Leyen Commission. “In 2025, we should see a policy shift, prioritizing the application and implementation of existing frameworks, like the EU AI Act, the DSA, the DMA, and so forth, rather than proposals of new legislation,” points out Andreea Serban, who also notes recent messaging in Brussels signaling a decreased focus on regulation, especially in the aftermath of the Draghi report.
This is indeed how the Brussels agenda reads, but it shouldn’t be a surprise if new legislation, like the Digital Fairness Act, makes its way into an official proposal as soon as this year. And with other files like the CSAR still on the legislative train, and the constant “hide and seek” with the ePrivacy Regulation, the Brussels legislative machine might slow down, but it will not halt.
6. Bigger public policy debates will end up shaping global privacy: from “Innovation v. Regulation,” to checks and balances over government access to data
The “Innovation v. Regulation” dichotomy has been omnipresent in the European public debate since the publication of the Draghi report last year, even as some are positing this is a false choice (see Anu Bradford or Max von Thun).
“With a new European Commission taking the reins in Brussels, and with political tides changing across the EU, the innovation versus regulation debate will continue to polarize the digital policy community. Repercussions will be felt in discussions regarding not only the application and enforcement of the DSA and the DMA but also for data protection law as we await new GDPR enforcement rules,” explains Bianca-Ioana Marcu. However, she suggests that this debate may prove louder in rhetoric than in practical effect, as Brussels moves ahead with the new Commission’s regulatory agenda. It is clear, though, that the “shift towards promoting EU competitiveness,” as Andreea framed it, will impact, even if incrementally, all the “digital agenda” files.
While most of the attention in India might be focused on the DPDPA Implementing Rules, promoting the country’s competitiveness is a bigger goal for many, which could result in regulatory changes supporting it. Bilal signals that there are interesting data-sharing initiatives coming up at a sectoral level. “For instance, MeitY plans to launch an IndiaAI datasets platform to provide high-quality datasets (non-personal data) for AI developers and researchers. Similar initiatives are underway in sectors such as healthcare, e-commerce, and agriculture,” he says. These initiatives are quite similar to the EU Data Spaces, which are also expected to advance. “It will be fascinating to see how these initiatives align with the DPDPA, and how this shapes the definition of ‘non-personal data’ in India,” adds Bilal.
One last bigger public policy debate that may have a concrete impact on data protection this year remains the checks and balances over government access to personal data. For instance, Rivki, based in our Tel Aviv office, highlights that this year she expects the privacy community to confront the long-term privacy consequences of the exceptional measures taken by the government during the war, such as storage of fingerprints in databases or authorization of intrusion into security cameras without consent. The privacy community will likely work to “ensure that any measures implemented during this period do not persist or become the new standard for privacy,” she says.
Government access to data also shapes up to be top of mind in policy debates in India, with Bilal noting that “on a broader scale, constitutional challenges related to government exemptions under the DPDPA may surface in the Supreme Court once the implementing rules are officially notified.”
7. A dark horse prediction and further reading
Before ending the round-up of issues to follow in 2025 in Global Privacy, I will make my dark horse prediction: the reopening of the GDPR might appear more convincingly on the regulatory agenda this year, once the procedural reform is done. What seemed almost sacrilegious a couple of years ago will now look more likely, especially in light of DPAs becoming active in enforcing the GDPR on AI systems, and possible hiccups of non-DPA enforcers applying the digital strategy package at the intersection with GDPR provisions.
Finally, for a good understanding of what the year might bring to US policymaking, check out this analysis by Jules Polonetsky, FPF CEO, for TechPolicy Press, “2025 May be the Year of AI Legislation: Will we see Consensus Rules or a Patchwork?,” as well as FPF Senior Director for U.S. Legislation Keir Lamont’s blog, “Five Big Questions (and Zero Predictions) for the US State Privacy Landscape in 2025.”
For media inquiries, reach out to [email protected].