2026: A Year at the Crossroads for Global Data Protection and Privacy
Three forces are converging to create a perfect storm for global data protection and privacy this year: the surprise reopening of the General Data Protection Regulation (GDPR), which will largely play out in Brussels over the coming months; the complexity and velocity of AI developments; and the push and pull exerted on the field by an increasingly substantial body of adjacent digital and tech regulation.
All of this will play out with geopolitics taking center stage. At the confluence of some of these developments, two major areas of focus will be the protection of children online and cross-border data transfers – together with the other side of that coin, data localization, in the broader context of digital sovereignty.
1. The GDPR reform, with an eye on global ripple effects
The gradual reopening of the GDPR last year came as a surprise, without much debate or public consultation, if any. The GDPR had passed its periodic evaluation in the summer of 2024 with a recommendation for more guidance, better implementation suited to SMEs, and harmonization across the EU, as opposed to re-opening or amending the law. Moreover, exactly one year ago, in January 2025, at the CPDP-Data Protection Day Conference in Brussels, not one but two representatives of the European Commission, in two different panels (one of which I moderated), were very clear that the Commission had no intention to re-open the GDPR.
Despite this, a minor intervention was first proposed in May: a tweak, through one of the Commission's simplification Omnibus packages, to the size threshold for entities obliged to keep a register of processing activities. But this proved merely to crack the door open for more significant amendments to the GDPR proposed later on, under the broad umbrella of competitiveness and regulatory simplification that the Commission had begun to pursue emphatically. Towards the end of the year, in November 2025, major interventions were introduced within another simplification Omnibus dedicated to digital regulation.
The GDPR Omnibus proposes two significant policy shifts that should be expected to reverberate through data protection laws around the world in the next few years. First, it entertains the end of technology-neutral data protection law. AI – the technology – is imprinted all over the proposed amendments, from the inconspicuous ones, like the newly proposed definition of “scientific research”, to the express mention of “AI systems” in new rules created to facilitate their “training and operations” – including rules allowing the use of sensitive data and recognizing a specific legitimate interest in processing personal data for this purpose.
The second policy shift – and perhaps the most consequential for the rest of the data protection world – is the narrowing of what constitutes “personal data”: several sentences added to the existing definition transpose what resembles the relative approach to identifiability confirmed by the Court of Justice of the EU (CJEU) in the SRB case this September. To a certain degree, the proposed changes bring the definition back to pre-GDPR days, when some data protection authorities were indeed applying a relative approach in their regulatory activity.
In technical terms, the new definition adds that a holder of key-coded data or other information about an identifiable person, who does not have means reasonably likely to be used to identify that person, does not process personal data, even if “potential subsequent recipients” can identify the person to whom the data relates. Processing of such data, including publishing it or sharing it with those recipients, would thus fall outside the scope of the GDPR and of any accountability obligations that follow from it.
If the proposed language ends up in the GDPR, it would likely mark a narrowing of the law’s scope of application, leaving little room for supervisory authorities to apply the relative approach on a case-by-case basis following the test the CJEU proposed in SRB. This is particularly notable considering that the GDPR has successfully exported the current philosophy and much of the wording of its broad definition of personal data (particularly its “identifiability” component) to most data protection laws adopted or updated around the world since 2016, from California to Brazil, China, and India.
The global ripple effects of such significant modifications to the GDPR would not be felt immediately, but in the years to come. Hence, the legislative process unfolding in Brussels this year on the GDPR Omnibus should be followed closely.
2. The complexity and velocity of AI developments: shifting from regulating data to regulating models?
There is a lot to unpack here, almost too much. And that is at the core of why AI developments have an outsized impact on data protection. There is a lot of complexity in understanding the data flows and processes underpinning the lifecycle of the various AI technologies, making it very difficult to untangle the ways in which data protection applies to them. On top of that, the speed with which AI evolves is staggering. That being said, there are a couple of particularly interesting issues at the intersection of AI and data protection that must be followed this year, with an eye towards the years that follow.
One of them is the intriguing question of whether AI models are the new “data” in data protection. Some of you certainly remember the big debate of 2024: do Large Language Models (LLMs) process personal data within the model? While it was largely accepted that personal data is processed during the training of LLMs and may be processed in the output of queries, it was not at all clear whether any of the informational elements of AI models post-training – tokens, vectors, embeddings or weights – can amount, by themselves or in some combination, to personal data. The question was supposed to be settled by an Opinion of the European Data Protection Board (EDPB) requested by the Irish Data Protection Commission, which was published in December 2024.
Instead, the Opinion painted a convoluted regulatory picture by offering that “AI models trained on personal data cannot, in all cases, be considered anonymous”. The EDPB then dedicated most of the Opinion to laying out criteria that can help assess whether AI models are anonymous or not. While most, if not all, of the commentary around the Opinion focuses on the merits of these criteria, one should perhaps stop and first reflect on the framework of the analysis – namely, assessing the nature of the model itself rather than the nature of the bits and pieces of information within it.
The EDPB did not offer any exploration of what non-anonymous (so, then, personal?) AI models might mean for the broader application of data protection law, such as data subject rights. But with it, the EDPB may have – intentionally or not – started a paradigm shift for data protection in the context of AI, signaling a possible move from the regulation of personal data items to the regulation of “personal” AI models. However, the Opinion was seemingly shelved throughout last year, as it has not yet appeared in any regulatory action. I would have forgotten about it myself if not for a judgment of a court in Munich in November 2025, in an IP case related to LLMs.
The German court found that song lyrics in a training dataset for an LLM were “reproducibly contained and fixed in the model weights”, with the judgment specifically referring to how models themselves are “copies” of those lyrics within the meaning of the relevant copyright law. This is because of the model’s “memorization” of the lyrics in the training data, where weights and vectors are “physical fixations” of the lyrics. The judgment is not final, with an appeal pending. But it will be interesting to see whether this perspective – focusing on the models themselves as opposed to bits of data within them – will gain more ground this year and in those immediately following, pushing for legal reform, or will fizzle out due to the over-complexity of fitting it within current legal frameworks.
Key AI developments that might push existing data protection and privacy frameworks to a breaking point as they descend from research to market will be:
- hyper-personalization – think of the decades-old debate around targeting and individual profiling, but on steroids;
- AI agents, by themselves or acting together – for one thing, a person’s “control” over their information is at the core of data protection, while the fundamental proposition of AI agents is to take over that control in certain contexts;
- world models and AI wearables – perhaps a good comparison would be a hyperbolized Internet of Things, with all of its implications for bystander privacy, consent and informational self-determination; though that is perhaps a naive comparison, particularly if the previous two developments are layered on top of this one, which would also integrate LLMs.
3. A concert of laws adjacent to data protection and privacy steadily becoming the digital regulation establishment
A third force pressing on data protection for the foreseeable future is the array of novel data- and digital-adjacent regulatory efforts solidifying into a new establishment of digital regulation, with their own bureaucracies, vocabulary and compliance infrastructure: online safety laws, including their branch of children’s online safety laws; digital markets laws; data laws focusing on data sharing or on data strategies covering both personal and non-personal data; and the proliferation of AI laws, from baseline acts to sectoral or issue-specific laws (focusing on single issues, like transparency).
It may have started in the EU five years ago, but this is now a global phenomenon. Look, for instance, at Japan’s Mobile Software Competition Act, a law regulating competition in digital markets with a focus on mobile environments, which became effective in December 2025 and draws strong comparisons with the EU Digital Markets Act. Or at Vietnam’s Data Law, which became effective in July 2025 and is a comprehensive framework for the governance of digital data, both personal and non-personal, applying in parallel with the country’s new Data Protection Law.
Children’s online safety is taking up ever more space in the world of digital regulatory frameworks, and its overlap and interaction with data protection law could not be clearer than in Brazil. A comprehensive children’s online safety law, the Digital ECA, was passed at the end of last year and is slated to be enforced by the Brazilian Data Protection Authority starting this spring.
It brings interesting innovations, like a novel standard for triggering such laws – the “likelihood of access” to a technology service or product by minors – and “age rating” for digital services, requiring providers to maintain age rating policies and continuously assess their content against them. It also provides for “online safety by design and by default” as an obligation for digital service providers. From state-level legislation in the US on “age appropriate design” to an executive decree in the UAE on “child digital safety”, the pace of adoption of children’s online safety laws is ramping up. What makes these laws more impactful is also the fact that the age limits for minors falling under these rules are growing to capture teenagers up to 16 and even 18 years old in some places, bringing vastly more service providers in scope than first-generation children’s online safety regulations.
The overlap, intersection and even tension of all these laws with data protection are becoming increasingly visible. See, for instance, the recent Russmedia judgment of the CJEU, which established that an online marketplace is a joint controller under the GDPR, with obligations in relation to sensitive personal data published by a user – with consequences for intermediary liability that are expected to reverberate in practice at the intersection of the GDPR and the Digital Services Act.
The compliance infrastructure of this new generation of digital laws, with its need for resources (people, budget), is breaking its way into an already stretched field of “privacy programs”, “privacy professionals” and regulators, with the visible risk of diverting attention from, and diluting, the meaningful measures and controls stemming from privacy and data protection laws.
4. Breaking the fourth wall: Geopolitics
While all these developments play out, it is particularly important to be aware that they unfold on a geopolitical stage that is unpredictable and constantly shifting, with various notions of “digital sovereignty” taking root from Europe to Africa and elsewhere around the world. From a data protection perspective, and in the absence of a comprehensive understanding of what “digital sovereignty” might mean, this could translate into a realignment of international data transfer rules: more data localization measures, more data transfer arrangements following trade agreements, or more regional free-data-flow arrangements among aligned countries.
Ten years after the GDPR was adopted as a modern upgrade of 1980s-style data protection laws for the online era – successfully promoting fair information practice principles, data subject rights and the “privacy profession” around the world – data protection and privacy are at an inflection point: either hold the line and evolve to meet these challenges, or melt away into a sea of new digital laws and technological developments.