New FPF Resource Takes the Guesswork Out of Buying Privacy Tech
A new FPF resource helps buyers determine which privacy tools are the most appropriate for their business needs. The Privacy Tech Buyer Framework is a step-by-step tool that provides guidance on buying the best privacy technology through three phases that include simplified steps and case studies.
Navigating the privacy tech acquisition process can be tricky given the growing number of privacy tools and services available to businesses.
Phase 1: Achievable Business Outcomes
Understanding the outcomes your business can realistically achieve is the crucial first step in the privacy tech acquisition process. Outcomes vary from business to business and differ between stakeholders within an organization. Phase 1 of the Framework takes you through several categories of business outcomes that can be achieved with privacy technologies.
Phase 2: Privacy Tech Product Categories
Once you identify the business outcomes you want to achieve, the next step in the process is matching those outcomes to categories of privacy technology products. In Phase 2, the Framework outlines several categories of privacy tech products, drawing from the U.S. National Institute of Standards and Technology (NIST) Privacy Framework.
Phase 3: Business-Level Tools and Services
In Phase 3 of the Framework, business-level tools and services are separated into categories based on their functionality. The five main categories are Identification, Governance, Control, Protection, and Communication. Each category contains tools and services organized by function; Protection, for example, includes data security, protective technology, and more. Identifying which categories of functions these privacy technologies fall into will help you select the business-level tools and services that best suit your business needs.
Below is just one of several Framework case studies illustrating hypothetical scenarios in which you might use the Framework to move from a general business outcome toward a specific buy decision for your business.
CASE STUDY: Company Product Development
A company is developing a product and wants to have the right privacy safeguards in place.
In Phase 1, led by the CIO, the company identified one business outcome: data availability and movability. In Phase 2, the Buyer chose the category of Protection to ensure that personal data is excluded from product development while maintaining the ability to provide useful data to users. The Buyer then selected disassociated processing as their business-level tool in Phase 3, with the intention of removing directly identifying information and using technical privacy modes to ensure transformations minimize exposure.
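To give a concrete flavor of what disassociated processing can involve, below is a minimal Python sketch of one common building block: stripping direct identifiers from a record and substituting a keyed pseudonym. The field names and key handling here are illustrative assumptions, not part of the Framework, and a real deployment would layer further technical and organizational safeguards on top.

```python
import hashlib
import hmac

# Illustrative key; in practice it would live in a secrets manager,
# never alongside the data it pseudonymizes.
PSEUDONYMIZATION_KEY = b"example-secret-key"

# Hypothetical direct identifiers for this sketch.
DIRECT_IDENTIFIERS = {"name", "email", "phone"}

def disassociate(record: dict) -> dict:
    """Drop direct identifiers and substitute a keyed pseudonym so that
    rows belonging to the same user can still be joined in product
    development without revealing who that user is."""
    pseudonym = hmac.new(
        PSEUDONYMIZATION_KEY,
        record["email"].lower().encode(),
        hashlib.sha256,
    ).hexdigest()
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    cleaned["user_pseudonym"] = pseudonym
    return cleaned

print(disassociate({
    "name": "Ada Example",
    "email": "ada@example.com",
    "phone": "555-0100",
    "feature_usage": 42,
}))
```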
The Privacy Tech Buyer Framework is a gap-filling document meant to aid buyers in identifying what tools and services are available to help their businesses responsibly and legally use personal information to meet business needs and achieve business outcomes. The ultimate aim of the Framework is to simplify and clarify the buying process, making it easier for users to determine which privacy tools and services they should purchase from vendors.
What the Biden Executive Order on Digital Assets Means for Privacy
Author: Dale Rappaneau
Dale Rappaneau is a policy intern at the Future of Privacy Forum and a 3L at the University of Maine School of Law.
On March 9, the Biden Administration issued an Executive Order on “Ensuring Responsible Development of Digital Assets” (“the Order”), published together with an explanatory Fact Sheet. The Order states that the growing adoption of digital assets throughout the economy and the inconsistent controls to mitigate their risks necessitate a new governmental approach to regulating digital assets.
The Order outlines a whole-of-government approach to address a wide range of technological frameworks, including blockchain protocols and centralized systems. The Order frames this approach as an important step toward safeguarding consumers and businesses from illicit activities and potential privacy harms involving digital assets. In particular, it calls for a list of federal agencies and regulators to assess digital assets, consider future action, and ultimately provide reports recommending how to achieve the Order’s numerous policy goals. The Order recognizes the importance of incorporating data and privacy protections into this approach, which indicates that the Administration is actively considering the privacy risks associated with digital assets.
1. Covered Technologies
Definitions
Digital Assets – The Order defines digital assets broadly, including cryptocurrencies, stablecoins, and all central bank digital currencies (CBDCs), regardless of the technology used. The term also refers to any other representations of value or financial instrument issued or represented in a digital form through the use of distributed ledger technology relying on cryptography, such as a blockchain protocol.
CBDC – The Order defines a Central Bank Digital Currency (“CBDC”) as digital money that is a direct liability of the central bank, not of a commercial bank. This definition aligns with the recent Federal Reserve Board CBDC report. A U.S. CBDC could support a faster and more modernized financial system, but it would also raise important policy questions including how it would affect the current rules and regulations of the U.S. financial sector.
Cryptocurrencies – These are digital assets that may operate as a medium of exchange and are recorded through distributed ledger technologies that rely on cryptography. This definition is notable because blockchain is often mistaken as the only form of distributed ledger technology, leading some to believe that all cryptocurrencies require a blockchain. However, the Order defines cryptocurrencies by reference to distributed ledger technology – not blockchain – and seems to cover both mainstream cryptocurrencies utilizing a blockchain (e.g., bitcoin or Ether) and alternative cryptocurrencies built on distributed ledger technology without a blockchain (e.g., IOTA).
Stablecoins – The Order recognizes stablecoins as a category of cryptocurrencies featuring mechanisms aimed at maintaining a stable value. As reported by relevant agencies, stablecoin arrangements may utilize distributed or centralized ledger technology.
Implications of Covered Technologies
From a technical perspective, distributed ledger technologies such as blockchain stand in stark contrast to centralized systems. Blockchain protocols, for example, allow users to conduct financial transactions on a peer-to-peer level, without requiring oversight from the private sector or government. Centralized ledger technology, as used by most credit cards and banks, typically requires a private sector or government actor to facilitate financial transactions. In this environment, the data flows through the actor, who carries obligations to monitor and protect the data.
Despite the technical differences between these approaches, the Order appears to group these very different financial transaction systems into the single umbrella term of digital assets. It does this by including within the scope of the definition of digital assets all CBDCs, even ones utilizing centralized ledger technology, and other assets using distributed ledger technology. This homogenization of technological concepts may indicate that the Administration is seeking a uniform regulatory approach to these technologies.
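To make the distributed-ledger side of this contrast concrete, the following toy Python sketch (a drastic simplification; real protocols add digital signatures, consensus, and replication across many peers) shows how a blockchain-style ledger chains records together cryptographically, so that no central actor is needed to vouch for the transaction history:

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    """Append a block that commits to the previous block's hash,
    so altering any earlier entry invalidates every later hash."""
    previous = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({
        "timestamp": time.time(),
        "transactions": transactions,
        "previous_hash": previous,
    })

ledger: list = []
append_block(ledger, [{"from": "a1", "to": "b2", "amount": 5}])
append_block(ledger, [{"from": "b2", "to": "c3", "amount": 2}])
print(block_hash(ledger[-1]))
```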
2. Privacy Considerations of the EO
Section 2 of the Order states the principal policy objectives with respect to digital assets, which include: exploring a U.S. CBDC; ensuring responsible development and use of digital assets and their underlying ledger technologies; and mitigating the illicit finance and national security risks posed by misuse of digital assets.
Notably, the Administration uses the word “privacy” five times in this section, declaring that digital assets should maintain privacy, shield against arbitrary or unlawful surveillance, and incorporate privacy protections into their architecture. The need to ensure that digital assets preserve privacy raises notable, albeit different, implications for both centralized and decentralized technologies.
Privacy Implications of a United States CBDC
The Order places the highest urgency on research into the potential design and deployment of a U.S. CBDC, which must be designed to include privacy protections. The Order states that a United States CBDC would be the liability of the Federal Reserve, which is currently experimenting with a number of CBDC system designs, including centralized and decentralized ledger technologies, as well as alternative technologies. Although the Federal Reserve has not chosen a particular system, the monetary authority has listed numerous privacy-related characteristics that should be incorporated into a United States CBDC regardless of the technology used.
First, the Federal Reserve recognizes that a CBDC would generate data about users’ financial transactions in the same ways that commercial banks and nonbanks do today. This may include a user’s name, email address, physical address, know-your-customer (KYC) data, and more. Depending on the design chosen for the CBDC, this data may be centralized under the control of a single entity or distributed across ledgers held by multiple entities or users.
Second, because of the robust rules designed to combat money laundering and the financing of terrorism, a CBDC would need to allow intermediaries to verify the identity of the person accessing the CBDC, just as banks and financial institutions currently do. For this reason, the Federal Reserve states that a CBDC would need to safeguard an individual’s privacy while deterring criminal activity.
This intersection between consumer privacy and the transparency needed to monitor criminal activity gets to the heart of the Order. On one hand, a United States CBDC would provide certain data security and privacy protections for consumers under the current rules and regulations imposed on financial institutions. The Gramm-Leach-Bliley Act (GLBA), for example, includes privacy and data security provisions that regulate the collection, use, protection, and disclosure of nonpublic personal information by financial institutions (15 U.S.C.A. §§ 6801 to 6809). But on the other hand, the CBDC would likely require the Federal Reserve, or entrusted intermediaries, to monitor and verify the identity of users to reduce the likelihood of illicit transactions.
It is unclear whether current rules and regulations would apply if the CBDC utilizes distributed ledger technology, given that they typically establish scope via definitions of applicable entities using particular data. Because users (and not financial institutions) hold copies of the data ledger under distributed ledger technology systems, pre-existing privacy laws may fail to cover large amounts of data processing and provide adequate safeguards to consumers. In addition, as the next section suggests, it is unclear how monitoring and verification would occur under a CBDC that uses distributed ledger technology. This raises further questions about how policymakers can navigate the intersection of privacy and transaction monitoring.
Privacy Implications of Distributed Ledger Technologies
Distributed ledger technologies often attempt to create an environment where users do not have to reveal their personal information. Transactions under these systems typically do not filter through a singular entity such as the Federal Reserve, but instead happen on a peer-to-peer level, with users directly exchanging digital assets without third-party oversight. In this environment, users can complete transactions using hashed identifiers rather than their own information, and these transactions usually occur without the supervision of a private or government entity. Together, the use of hashed identifiers and the lack of supervision create a digital environment rich in identity-shielding protections.
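For readers unfamiliar with hashed identifiers, here is a simplified Python sketch of the idea (an illustration only; real protocols differ in detail, e.g., Bitcoin derives addresses with SHA-256 followed by RIPEMD-160 plus a checksum): a transaction references a digest derived from a public key rather than any personal information.

```python
import hashlib
import secrets

def pseudonymous_address(public_key: bytes) -> str:
    """Derive a short pseudonymous address by hashing a public key.
    A truncated double SHA-256 is used here purely for illustration."""
    digest = hashlib.sha256(hashlib.sha256(public_key).digest()).digest()
    return digest[:20].hex()

# Stand-ins for real elliptic-curve public keys.
sender = pseudonymous_address(secrets.token_bytes(33))
recipient = pseudonymous_address(secrets.token_bytes(33))

# A ledger entry references only hashed identifiers, never a name,
# email address, or other personal information.
print({"from": sender, "to": recipient, "amount": 1.5})
```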
However, experts recognize that distributed ledger technologies also create a multitude of financial risks. If users can conduct transactions on a peer-to-peer level without supervision or revealing their identity, they can more easily conduct illicit activities, including money laundering, terror funding, and human and drug trafficking.
The Order acknowledges these benefits and risks. The Fact Sheet prioritizes privacy protections and efforts to combat criminal activities, which indicates that the Order seeks to emphasize the privacy-preserving aspects of new distributed ledger technologies while finding ways to restrict illicit financial activity. Such an emphasis may represent an enhanced governmental effort to address criminal activities in the digital asset landscape while avoiding measures that would create risks to privacy and data protection.
3. Future Action: Privacy and Law Enforcement Equities
The Order’s repeated emphasis on privacy seems to align with the Biden Administration’s current focus on prioritizing privacy and data protection rulemaking. The Order acknowledges both necessary safeguards to combat illicit activities and the need to embed privacy protections in the regulation of digital assets.
The U.S. Department of the Treasury and the Federal Reserve have articulated concerns regarding how bad actors exploit distributed ledger technologies for illicit purposes, and those agencies will likely make recommendations to strengthen government oversight and supervision capabilities. However, the Order’s emphasis on privacy seems to indicate that the Administration wants to ensure privacy protections while also enabling traceability to monitor users, verify identities, and investigate illicit activities.
The question is whether the Administration will find a way to preserve the privacy protections of centralized and distributed ledger technology while also promoting the effective monitoring of illicit activities. That answer will likely come once agencies and regulators start providing reports that recommend steps to achieve the Order’s goals. Until then, the answer remains unknown, and entities utilizing cryptocurrencies or other digital assets should stay aware of a possible shift in how the Federal Government regulates the digital asset landscape.
FPF Launches Infographics in Chinese
As FPF’s work expands to include an international audience, we are pleased to relaunch FPF’s popular infographics in various languages. Because conversations around data protection have become more global, the need for high-quality information and new forms of communication in different languages continues to increase.
The infographics translation project aims to help FPF provide a more inclusive and diverse platform for stakeholders to engage with colleagues and experts across the world and to better navigate the most complex issues and developments in data protection. Previously, the Israel Tech Policy Institute (ITPI) translated some FPF infographics into Hebrew, an effort that provided the groundwork for FPF to target other languages and regions around the world.
The Chinese translation of these infographics was led by Nanda Min Htin and Hunter Dorwart. FPF APAC has translated three infographics into Chinese: Data and the Connected Car, Data De-identification, and Smart Cities.
The infographic, “Data and the Connected Car,” describes the core data-generating devices and data flows in today’s connected vehicles. The infographic aims to help consumers, businesses, and regulators understand the emerging data ecosystems that underlie this incredibly dynamic and complex industry.
The Data De-Identification infographic highlights complex issues around what constitutes personal information, the scope and possibility of de-identification, and divergences in approaches to the categories of de-identification. This infographic aims to provide stakeholders with an overview of various concepts, methods, and practical examples of de-identification.
The “Shedding Light on Smart City Privacy” infographic is meant to help citizens, companies, and other stakeholders understand the technologies that create and facilitate smart city and smart community projects. It highlights the wide range of connected technologies and services in today’s cities and provides important context for these new technologies and services, particularly the privacy concerns emanating from this complex ecosystem.
Comparative Look at Models of Data Protection – Series of Webinars Led by the Israel Tech Policy Institute (ITPI)
Authors: Kavisha Patel and Lee Matheson
Kavisha Patel is a current student at Georgetown Law and an FPF Global Privacy Intern.
As a result of Israel’s recently proposed comprehensive privacy law update, the Protection of Privacy Bill, the Israel Tech Policy Institute led a series of three webinars in February 2022 discussing comparative models of data protection around the world. Organized by ITPI Senior Fellow Adv. Rivki Dvash, the webinars aimed to bring together practitioners and experts in the data protection field to explain their perspectives on existing arrangements in various countries with the goal of enriching the ongoing discussion in Israel.
The first webinar, “Legal Bases for Data Processing”, took place on February 9, 2022. The panel for this webinar included Dr. Yaacov Lozowick, Historian, Israel’s former Chief Archivist, and Lecturer at Bar-Ilan University; Dr. Gabriela Zanfir-Fortuna, Vice President for Global Privacy at the Future of Privacy Forum; Dr. Clarisse Girot, Managing Director for Asia Pacific at the Future of Privacy Forum; and Dr. Bruno Bioni, Director-Founder of Data Privacy Brazil.
The second webinar, “Data Protection Authorities’ Powers of Enforcement and Sanctions”, took place on February 16, 2022. Panelists for this webinar included Adv. Reuven Eidelman, Head of Legal Department at the Israeli Privacy Protection Authority at the Ministry of Justice; Adv. Florence Raynal, Head of the International and European Affairs Department of the CNIL; J.D. Stacey Gray, Director of Legislative Research and Analysis at the Future of Privacy Forum; and Adv. Lore Leitner, Partner at Goodwin in London, UK.
The third webinar, “Civil and Class Actions Under Privacy and Data Protection Law Frameworks in Israel, the EU and the US”, took place on February 23, 2022. Panelists for this webinar included Professor Peter Swire, Professor of Law and Expert on Privacy and Cybersecurity, Georgia Tech University; Sebastião Barros Vale, EU Policy Fellow at the Future of Privacy Forum; and Professor Assaf Hamdani, Professor of Law at Tel Aviv University.
The webinars were moderated by Adv. Limor Shmerling Magazanik, Managing Director of the Israel Tech Policy Institute. Below is a summary of the main points and insights from the series. The recordings of the three sessions are available here.
First Webinar: Legal Bases for Data Processing
Key Insights:
The purpose of personal data protection law is not to completely prohibit the processing of personal information, but to establish a robust means of protection for the processing of personal data so that it will be used in a manner that respects the rights and freedoms of individuals.
There is no hierarchy between the various legal bases for processing under the GDPR, which include consent, contract, legal obligations, vital interests, public interests, and legitimate interests.
Under the GDPR, the legitimate interest basis for lawful processing of data is complex and nuanced. Legitimate interests may only serve as a lawful ground for data processing if those interests are not overridden by the data subjects’ rights, interests and freedoms, requiring a balancing test between the two that must be conducted by the data controller on a case-by-case basis for each processing activity.
Under the Brazilian LGPD, the legitimate interest basis for lawful processing of data is similarly subject to principles of necessity, the balancing of rights and freedoms, and sufficient safeguards.
In APAC jurisdictions, the landscape of lawful grounds for data processing is very fragmented, which makes it difficult for cross-border businesses to comply on a systemic basis.
Some controllers interpret the laws and regulations incorrectly, making it very important for regulators to clarify their interpretations.
In practice, due to divergences between countries, many businesses have been building their compliance programs focused on consent, as consent is often a common denominator between data protection regimes as a legal basis for data processing.
Recent recognition of legitimate interests as a lawful ground for data processing in the Singaporean PDPA could influence other APAC countries to do the same. This can be a substitute for consent-based lawful processing.
In the Israeli context, the same laws dictate how contemporary data and archival data are protected. Dr. Lozowick argues that this should change, as legitimate interests for scientific and historical research arise when data is dated enough that it no longer reflects current reality.
Meaningful consent (i.e., whether consent is given voluntarily, the extent to which consent is based on understanding, etc.) continues to be a core issue in data protection. As a result, strengthening this basis for data processing is important. This could include increasing transparency through UX/UI design and consideration of users’ vulnerabilities and degrees of literacy.
Any legal system dealing with the crucial issues of data protection today should consider the impact of the passage of time on data privacy interests. Some propose that contemporary data protection arrangements should consider the passage of time as a criterion that reduces the interest in the protection of personal information and strengthens the interest in making information accessible to researchers for the benefit of the public.
Second Webinar: Data Protection Authorities’ Power of Enforcement and Sanctions
Key Insights:
While the GDPR itself does not impose criminal sanctions, some states have added criminal provisions to local law. Even in these countries, criminal enforcement is not carried out by the national Data Protection Authority (DPA); the DPA must pass the matter to a prosecutor, and prosecutors rarely take such cases due to limited resources. Nevertheless, there is criminal enforcement by the authorities responsible for related areas, such as wiretapping and computer hacking.
The GDPR allows for a variety of administrative sanctions, in addition to financial sanctions. The enforcement tool used is tailored to the specific violation.
Each EU DPA determines financial sanctions differently, resulting in a high variance between the fines imposed. Some may not even have the power to issue them on their own: in France, there is a separate body authorized to impose financial sanctions, which is not subordinate to the national DPA (CNIL).
The United States has a wide range of different federal, state, and local laws that apply to privacy and data security both directly and indirectly. The new state laws passed regarding privacy and consumer protection, such as the CCPA and the CPA, differ in some respects.
The Federal Trade Commission (FTC) is the lead data privacy enforcer in the United States. It has no formal complaint mechanism in that it is not required to respond to individual or group complaints. Rather, investigations often come directly from the staff at the FTC or from something raised by Congress.
The FTC often settles cases with companies through agreed arrangements called “consent decrees,” which require the companies to affirmatively take certain actions with respect to privacy and data security (e.g., privacy audits, impact assessments, hiring privacy officers). Many major technology companies, including Facebook and Google, are currently under such consent decrees. Although the FTC does not have the authority to impose fines initially under Section 5 of the FTC Act, violation of a consent decree allows for the imposition of financial sanctions.
The main justification for personal capacity criminal enforcement in Israel is the need for a deterrent against offenders that operate as individuals. The mechanism does not apply personal liability to corporate officers.
Third Webinar: Civil and Class Actions
Key Insights:
In Israel, there is a possibility of class actions arising from a privacy cause of action but only if the plaintiffs are in a special relationship covered by the relationship permissions for class actions. This does not include relationships with government entities, NGOs, or other businesses that do not contract directly with plaintiffs.
Professor Assaf Hamdani suggests that privacy class actions should only be used as a tool if public enforcement is insufficient and where the specific violation merits the use of such a tool. Generally, public enforcement is not driven by fee considerations and public enforcers examine broader issues like the impact of a case.
Under the GDPR, data subjects’ right to compensation extends to material and non-material damages suffered as a result of a breach. The concept of “non-material damage” is contentious, as national courts across the EU have given it different interpretations. The Court of Justice of the European Union (CJEU) will soon clarify whether a data subject’s sense of discomfort with an unauthorized data disclosure, or their worries, fears, and anxieties, count as non-material damages for the purposes of the GDPR. This underlines the importance of having a clear definition of non-material damages in the law when it comes to privacy breaches.
In the EU, representative actions under data protection law may be brought against both private and public entities, just like regular civil actions brought by affected data subjects. In some EU Member States, national laws allow non-profits to go to court even without a data subject’s mandate, although in those cases they cannot seek compensation on behalf of data subjects. Barros Vale explained that the recently passed EU Collective Redress Directive is more flexible in this regard, as data subjects may explicitly or tacitly express their wish to be bound by a collective compensation claim even after the action is brought.
In order to combat the possibility of underenforcement or overenforcement, Professor Swire suggests looking at three variables: 1) how likely it is that the company has a privacy violation, 2) how likely it is that an enforcement action will be taken against the company, and 3) how much the company would have to pay in damages, including attorney fees (a toy calculation follows below).
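One way to read these three variables together (a hedged illustration on our part, not a formula Professor Swire presented) is as a rough expected-liability estimate:

```python
def expected_privacy_liability(
    p_violation: float,      # 1) likelihood the company has a privacy violation
    p_enforcement: float,    # 2) likelihood an enforcement action is brought
    damages: float,          # 3) damages the company would pay
    attorney_fees: float,    #    plus attorney fees
) -> float:
    """Rough expected-liability estimate from the three variables;
    underenforcement means this figure is too low to deter violations,
    overenforcement means it is disproportionately high."""
    return p_violation * p_enforcement * (damages + attorney_fees)

# Hypothetical numbers purely for illustration.
print(expected_privacy_liability(0.10, 0.25, 2_000_000, 500_000))
```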
FPF at the 2022 IAPP Global Privacy Summit
Last week, IAPP held its first in-person annual Global Privacy Summit since 2019 in Washington, DC! Through expert panels and our expo booth, FPF remained active throughout this two-day conference: our CEO Jules Polonetsky held a conversation with FTC Commissioner Noah Phillips, and our data privacy experts spoke on panels and provided analysis at the FPF booth.
New Chair, New Agenda: The FTC’s Priorities and Identity Under New Leadership
In a riveting conversation with Commissioner Phillips, Jules discussed various topics that are relevant to the current data protection space. They discussed the relationship between privacy and competition, the FTC’s authority when health data breaches occur, and whether the FTC has the ability to modernize and govern civil rights. But, perhaps the topic that attendees were most interested in was federal privacy legislation and the FTC’s new privacy agenda.
“There is a need for a federal privacy law. We have a proliferation of state laws, making it hard for business, and consumers too. The right place for this debate is not within the FTC, but Congress.”
– Noah Phillips, Commissioner, FTC
On the topic of federal privacy legislation, Commissioner Phillips noted there is a lack of agreement on what privacy means, with people having different ideas on what the problem is, and therefore, varying ideas of what those solutions should be, leading to huge trade-offs.
On the Commission’s new privacy agenda, the two discussed competition, limits on FTC’s authority, and more. Ending the session, Commissioner Phillips discussed the future of the FTC.
Privacy Metrics: Best Practices to Drive Improvement in Performance and Trust
In a standing-room panel of 250+ attendees, FPF Senior Fellow and Goodwin Procter LLP Partner Omer Tene, along with an expert panel of speakers including Barbara Cosgrove, VP, CPO of Workday; Harvey Jang, VP, CPO of Cisco; and Amanda Weare, VP, Deputy General Counsel – Product and Privacy, DPO of Collibra, discussed privacy metrics and their impact on businesses.
In conversation, the panel discussed how privacy metrics build trust among consumers and businesses, the auditing process for businesses, and the importance of building metrics to understand how to integrate systems into projects that collect data.
“What we try to achieve is trust–whether it’s consumer trust or business trust–and privacy metrics can be a tool to demonstrate that.”
Evaluating Algorithms: Incorporating Privacy and Ethics into AI/ML Projects
FPF’s Managing Director for Europe, Dr. Rob van Eijk, participated in another riveting standing-room session alongside an expert panel consisting of Bret Cohen, Partner, Privacy and Cybersecurity Group at Hogan Lovells; Britanie Hall, Product Counsel, Speech and Assistant, at Google; and Alexandra Ross, Senior Director, Senior Data Protection, Use and Ethics Counsel at Autodesk.
During the one-hour session, the panel discussed how automated decision-making is an upcoming trend in global regulation, current U.S. regulation of algorithms, pending EU AI regulation, examples of fair AI principles, and more.
“In terms of trying to get an answer, it’s up to scientists to use tools like the LIME model.”
– Dr. Rob van Eijk, EU Managing Director, FPF
To view the slides from this discussion, visit this post on Rob’s LinkedIn profile.
We hope you enjoyed this year’s IAPP Global Privacy Summit as much as we did! If you missed us at our booth, visit FPF.org for all our reports, publications, and infographics. Follow us on Twitter, LinkedIn, and subscribe to our newsletter for the latest.
The ebb and flow of trans-Atlantic data transfers: It’s the geopolitics, stupid!*
The following is a guest post to the FPF blog from Lokke Moerel, Professor of Global ICT Law at Tilburg University and a Dutch Cyber Security Council member.
Guest blog posts do not necessarily reflect the views of FPF.
1. Introduction
There is a call for a rational debate on trans-Atlantic data transfers. Frustrations increase as companies work toward Schrems II compliance by implementing mitigating measures to ensure U.S. government entities cannot access their data, yet EU data protection authorities (DPAs) continue to block their way. The DPAs increasingly adopt an absolutist approach, whereby mitigating measures are disregarded irrespective of the actual risk to data protection after transfer. Industry organizations are frantically advocating for a new EU-U.S. Privacy Shield to continue trans-Atlantic transfers, arguing that EU data protection laws have always been about enabling personal data to flow while protecting the rights and freedoms of individuals. If only we could have a rational discourse to find the right way forward, as the GDPR may well be interpreted in ways that are not in conflict with the information economy.
Data protection experts focus on the merits of the state surveillance aspect of transfers. Emotions run high, as criticism of the DPAs regarding U.S. state surveillance powers is like the pot calling the kettle black: the state authorities of some Member States may well have similar powers (or use them in a similar way). Discussions further focus on the risk-based approach of the GDPR, highlighting how theoretical the risks of access are once mitigating measures are implemented.
These discussions are no longer on point. Data transfers are by now a geopolitical issue. A case in point is the announcement last week by U.S. President Biden and EU Commission President Ursula von der Leyen that they reached a new agreement in principle on trans-Atlantic data transfers.[1] A week earlier, a renewed Privacy Shield seemed unattainable, but everything became fluid under the pressure of the Russian invasion of Ukraine. The U.S. and EU are strengthening ties at an unprecedented scale, most notably by creating an energy task force to help the EU avoid using Russian oil. In light of the geopolitical threats to the EU, the U.S. is the EU’s main ally, and U.S. government access powers seem a relatively minor worry; vice versa, when the EU is your main ally, also protecting the personal data of EU citizens seems a minor concession.
The details of the renewed EU-U.S. transfer agreement will take time to develop, and, for sure, we will see a lengthy third round of challenges by Mr. Max Schrems before the EU courts. In the meantime, compliance with Schrems II for data transfers to the U.S. is mission impossible.
The bigger geopolitical picture is that, even with a renewed trans-Atlantic transfer agreement, companies are currently caught between the European Commission’s push for European digital sovereignty and the global business models of the large digital services providers. Companies are well advised to apply the serenity prayer: accept what you cannot change now (individual companies will not be able to force fundamental changes in the current ecosystem of these global players) and concentrate on what you can influence.
I predict that intercompany transfers required to run your business will be able to continue, as the GDPR, as rightly advocated, facilitates data flows. The renewed trans-Atlantic agreement will facilitate that. Companies should focus on implementing mitigating measures for these transfers. Transfers that are not inherently required by the services provided by the large digital services providers will be addressed by the EU’s digital policy. This article discusses the threats to EU digital sovereignty to help companies better understand EU digital policy and its disruptive impacts, especially on data transfers.
2. Threats to EU digital sovereignty
The European Union (EU) feels the threat of what has been coined digital colonialism,[2] where EU member states are increasingly dependent on digital infrastructures that are in the hands of a handful of dominant foreign market players.
The digital identity of most European citizens depends on foreign email addresses, and 92% of European data resides in the clouds of U.S. technology companies, of which 80% are with only five suppliers.[3] With the EU having no large digital platform companies, data transfers are a one-way street. Besides supply chain dependencies, these companies operate proprietary ecosystems, which offer limited interoperability and portability of data and applications, resulting in EU data being locked in and having little value for artificial intelligence-driven innovation.[4]
The realization has set in that Europe’s digital dependencies are so great that its digital sovereignty[5] is under pressure. The fears are justified; EU sovereignty (as the sovereignty of any state around the world for that matter) is under pressure due to a toxic combination of disruptive digital transformation, the exponential growth of cyberattacks (in which smaller countries and non-state actors now also enter the global battlefield), and rising geopolitical tensions, leading to a sovereignty gap.[6]
The sovereignty concerns have led to a U-turn in EU policy. Until recently, Europe favored the open, liberal market economy, and EU research had to be open to the world. Restoring Europe’s digital sovereignty is now a core ambition of the European Commission (EC). Whereas at first digital sovereignty was discussed in the context of cybersecurity and defense, the discussion now extends to concerns about the economy and society at large.
The ultimate challenge is how Europe and its member states can retain control over their economies (control over essential economic ecosystems) and their democracies, and the rule of law (trust in their legal system and quality of democratic decision-making) in the digital world.[7] Due to the multifaceted nature of the causes that pressure our digital sovereignty and rapid geopolitical developments, there is no one-size-fits-all solution. To understand the series of EU policy initiatives to restore Europe’s digital sovereignty, it is essential to know why Europe’s ability to make decisions autonomously is threatened.
3. What are the threats?
3.1 Disruptive digital transformation
Friend and foe agree that our society is undergoing a digital revolution (in official terms: the fourth industrial revolution) that will lead to a transformation of our society as we know it.[8] In addition to all economic and social progress and prosperity, every technological revolution also brings with it disruption and friction. The first law of technology is that it is not good, not bad, but also not neutral.[9] The new digital technologies (and, in particular, artificial intelligence (AI) and quantum computing) are in and of themselves already disrupting societies and creating new vulnerabilities: weakening control over innovation and knowledge can jeopardize sovereignty. For example, AI and encryption will play an increasingly crucial role in cyber resilience.[10] If there is not enough innovation, there will be new dependencies.
Example – Encryption
Without proper encryption, we will not be able to protect the valuable and sensitive information of our governments, companies, and citizens. Current encryption will not hold against the computing power of future quantum computers. We will, therefore, have to innovate now to protect our critical information in the future. This is relevant not only for future information but also for current information: do not forget that hostile states already systematically intercept and preserve encrypted communications in anticipation that these may be decrypted at a later stage and analyzed by deploying AI. We therefore have to invest in post-quantum encryption now in order to be able to protect strategic information that requires long-term protection.
Current EU research investments in quantum computing and AI are dwarfed by the billions invested by the Chinese and U.S. governments,[11] combined with the investments from large U.S. and Chinese tech companies, such as Google[12] and Tencent.[13] Where foreign companies are at the forefront of the (further) development and implementation of new technologies, such as AI and quantum computing, but also satellite and 5G networks, new dependencies potentially arise. These dependencies go beyond the specific technological applications themselves. For example, making large-scale use of data analysis by means of AI requires enormous computing power. It is expected that the cloud infrastructure required for this will become the foundation for the European innovation and knowledge infrastructure. Maintaining control over it is an essential part of the EU’s digital sovereignty.[14]
3.2 Increasing cybersecurity threats
An important dimension of digital sovereignty is the cyber resilience of our critical sectors, processes, and data. The ever-increasing cybersecurity threats–in which smaller countries and non-state actors are also entering the global battlefield[15]–undermine our digital sovereignty. These concern the entire spectrum of direct threats to our vital infrastructure (sabotage), systematic theft by foreign states of intellectual property from our knowledge-intensive industries (economic espionage), digital extortion (ransomware attacks), targeted misinformation (fake news), and systematic infiltration of social media to influence elections and democratic processes.
As far as cyber threats are concerned, digital sovereignty cannot be separated from the three basic principles of information security, also known as the CIA of cyber security: confidentiality, integrity, and availability. In these three domains, autonomy must be safeguarded, not only at the level of a specific system in a specific sector (such as an information communications and technology (ICT) system in the criminal justice chain) but also in the larger framework of the economy and democracy.
Examples: Control over essential economic ecosystems
Economic espionage: the systemic theft by hostile states of intellectual property and know-how of our high-tech companies and universities undermines Europe’s future earning capacity.
Cloud infrastructure: the EU is mainly dependent on digital infrastructures owned by a number of major foreign market players, which offer limited portability and interoperability of data and applications. For innovation with AI, you need large quantities of harmonized data and computing power to process these data. Individual companies do not have sufficient data to innovate, and, therefore, the data of companies in a specific industry sector will have to be combined. This is currently difficult as companies’ data are stored in silos in the clouds of foreign tech providers. As a result, these have limited availability for European innovation. Access to harmonized data and cloud infrastructure will become the foundation for the European innovation and knowledge infrastructure. Maintaining control over this is an essential part of digital sovereignty.
Examples: Control over democratic processes and the rule of law
Manipulation of the election process: when our governments are not in control of critical democratic processes like elections, it mainly affects the internal legitimacy of the state (the trust of citizens in the state). However, when a state is not in control of the election process because it has been infiltrated and manipulated by foreign powers, its external legitimacy may also be compromised. For example, during the pandemic, both China and Russia blatantly pushed “fake news” to undermine our governments’ COVID-19 responses. This undermined not only the internal legitimacy of our governments but also their external legitimacy. Whereas before COVID-19, China and Russia at least tried to hide their involvement in cyberattacks, they now do so blatantly. This shows Europe’s weakness; these states do not fear retaliation, undermining the EU’s external legitimacy.
Infiltration of a vital government process can also undermine trust in the rule of law. An incident in Germany is illustrative: in January 2020, Der Spiegel reported that the Berlin High Court (responsible for terrorism cases) had been systematically infiltrated by a Russian hacker group, identified as APT 28 (Advanced Persistent Threat 28) and probably sponsored by the Russian government. This hacker group had previously been held responsible for the infiltration of the German Bundestag. The attack focused on data exfiltration, accessing the entire database of identities of suspects, victims, witnesses, undercover agents, and informants.[16] These types of infiltration undermine both a government’s internal and external legitimacy.
3.3 Increasing geopolitical tensions
EU policy options are seriously hampered by increasing geopolitical tensions. The EU increasingly finds itself the piggy-in-the-middle in a bipolar world. Digital technologies have become the battleground in the race for global leadership between the U.S. and China (aka the tech cold war).[17] The battle is mainly about leadership in the fields of 5G/6G, quantum computing, computer chip technology, and AI. Both the U.S. and China have chosen the route of tech protectionism, regularly drawing the national security card to justify addressing critical supply chain issues (exposed by the pandemic) by bringing manufacturing back to their countries,[18] imposing stricter export controls on critical technology, and stepping up controls of foreign direct investment (FDI).[19] Recent U.S. executive orders ensure that almost any ICT-related activity in the U.S. connected to China is now subject to regulatory review by the U.S. government.[20] Not surprisingly, China is retaliating.[21]
The restrictions imposed by the U.S. and China play a role throughout Europe in, for example, the choice of suppliers for 5G equipment, for which Huawei was initially an important potential candidate. Over time, restrictions will likely extend to other equipment, such as Huawei servers that support cloud services, the presence of Chinese suppliers in the Internet of Things (IoT), cameras, airport scanners, and other surveillance equipment, and drones of Chinese origin. Giving in to U.S. pressure will potentially in turn lead to further Chinese pressure on European governments, including threats of Chinese import restrictions on European equipment and products. This ultimately affects European digital sovereignty and makes it more urgent for the EU to develop its own offerings as well.
3.4 Data as a weapon
Concerns of the superpowers go beyond ICT-supply chain dependencies and extend to concerns about what their adversary can do with the data of their companies and citizens (they consider data as a weapon), resulting in bans on the export of important data outside their territories.
President Trump kicked off tensions by banning popular Chinese apps – such as TikTok and WeChat – from the U.S. app stores because these would undermine its “national security, foreign policy and economy.”[22] Trump’s ban was met with severe skepticism; it was considered part of the trade war with China, more than based on true concerns about the privacy of U.S. citizens. However, subsequent reports about the massive mining by China of Western social media data to equip its government agencies, military, and police with information on foreign targets, should give anyone pause.[23] President Biden dropped President Trump’s ban, only to replace it with an executive order that provides powers to protect sensitive data of U.S. citizens from foreign adversaries.[24]
In response, in November 2021, China issued two pieces of sweeping privacy legislation, banning all exports outside China of “important data,” meaning any data that may endanger national security or public interests. Reviewing the categories of data caught by this definition shows that it is difficult to envisage what data could still be exported (e.g., personal data relating to more than 100,000 citizens are already covered). More telling is that China is willing to crack down on its own tech companies to prevent data of Chinese citizens from ending up in the U.S. In June 2021, when Didi, the Chinese equivalent of Uber, listed on the New York Stock Exchange, Chinese regulators retaliated by banning the Didi app from the Chinese app stores, alleging that Didi was illegally collecting users’ personal data. China subsequently announced stricter control over foreign listings of Chinese companies.
Example – concern about China harvesting biological data
In January 2021, it was widely reported in the U.S. media that at the outbreak of the pandemic, the world’s largest biotech firm (based in China and with strong ties to the Chinese government) offered the governors of six U.S. states help building and running state-of-the-art COVID-19 testing labs on very favorable terms.[25] So favorable that it seemed like an offer the states could not refuse. However, when the governors compared notes, they concluded that some offers are indeed too good to be true. The ulterior motive of the offer was likely to obtain biometric information on large parts of the American population to be used for Chinese DNA science, to develop vaccines, and for precision medicine. The offer led U.S. officials to issue public warnings to hospitals and governmental agencies that “foreign powers can collect, store and exploit biometric information from Covid tests.”[26] The Chinese quest to control biodata and control health care’s future has also been called the new space race.
The concerns about large-scale harvesting of social media data extend beyond the individual privacy of citizens; they also concern the protection of our collective data. Analysis of data from a large enough portion of a population will be predictive for the entire population. The General Data Protection Regulation (GDPR) does not provide protection here. For example, if enough EU citizens consent to analysis of their DNA by a Chinese company, this will potentially impact us all. Concerns about Chinese harvesting of social media data (via apps like TikTok) become more understandable when one considers that hereditary data (from DNA) can now be combined with socioeconomic data (information about how we live, what we eat, when we exercise and sleep). With information about heredity and environment, suddenly precision medicine becomes possible, potentially bypassing doctors. China itself is well aware of the risks and has clamped down on any access to its biological data and samples.[27]
Note that where both the U.S. and China have large digital service providers importing EU data while limiting exports of their own data, data transfers by the EU are increasingly a one-way street. In response, we see the EU also reconsidering its policy options, resulting in data localization requirements creeping in at, for example, the EU standard-setting level for cloud services,[28] and in data export restrictions on non-personal data under the draft Data Act, stricter even than those under the GDPR for personal data.[29]
4. Europe’s push for digital sovereignty
The European Commission (EC) acknowledges that Europe’s sovereignty will have to be supported by a “smart” combination of measures, as becoming self-sufficient is neither realistic for Europe nor desirable.[30] With its policy measures, the EC aims to pave a third way and avoid falling into the trap of tech protectionism. The policy is, for example, not to exclude foreign digital providers, nor for Europe to build its own hyperscalers. And rightly so: if you have concerns about vendor and data lock-in with the current big tech companies, you will have similar concerns with their EU equivalents. Rather than blocking foreign suppliers, the EU digital strategy is about breaking through vendor/data lock-in via a policy based on open data, open infrastructure, and open source.
Note that concerns about vendor and data lock-in are not limited to the EU. Governments around the world (including the U.S. and China) are currently considering their policy responses, and antitrust investigations are underway on all continents.[31] The dominant positions (winner takes all) are a sign of the times and should not be taken as a given. As said, our society is undergoing a technological revolution, which brings along disruption and friction. History shows that whenever new technologies disrupt society, it needs time to adjust, and regulators always play catch-up. At this time, the digital society is still driven by the possibilities of technology rather than by social and legal norms.[32] These frictions will ultimately be addressed. For example, the first industrial revolution brought child labor and abuse of workers, and the skies of London were so full of soot that people fell ill. The barons of the new industries (steel, oil, copper, and coal) reigned supreme, with worsening inequalities due to their monopolist positions. Ultimately, many new laws were introduced, most notably the first antitrust regulation, which broke up the monopolies. Illustrative here is that President Biden, when introducing his Executive Order on Promoting Competition in the American Economy,[33] made several references to the importance of abiding by the original principles of antitrust regulation in the new digital economy as well:
“It is the policy of my Administration to enforce the antitrust laws to meet the challenges posed by new industries and technologies, including the rise of the dominant Internet platforms, especially as they stem from serial mergers, the acquisition of nascent competitors, the aggregation of data, unfair competition in attention markets, the surveillance of users, and the presence of network effects.”
My point here is that governments around the world (including the U.S., China, and the EU) are currently considering their policy responses, and antitrust investigations are underway on all continents.[34] Once these have done their work, the world will look very different indeed, and that includes international data transfers. A sign of the times is Microsoft’s announcement in May 2022 that it will create an EU boundary for the Microsoft Cloud, promising all EU customers that it will process and store all their data in the EU by the end of 2022.[35] A similar commitment has been made by Zoom.[36]
4.1 Open data
The cornerstone of EU digital policy is the EU Strategy for Data,[37] which aims to democratize access to data assets and drive data sharing in open digital ecosystems across the whole EU economy. It also aims to create a single market in which data can be exchanged across sectors efficiently and securely within the EU, in a way that fits European values of self-determination, privacy, transparency, security, and fair competition. The centerpiece of the European Data Strategy is the concept of European data spaces, bringing together EU data in nine defined sectors (including finance, health, and government) so that the scale of data required for AI-related innovation can be achieved. The design of the data spaces will be based on full interoperability and data sovereignty, whereby users will be provided tools to decide about data sharing and access.[38] With the parties that actually generate the data regaining control, large hyperscalers will no longer be able to achieve vendor/data lock-in in their proprietary ecosystems. Also in this context fit the Data Governance Act,[39] which opens up public data for innovation through independent intermediaries, and the draft Data Act, which provides a harmonized framework for all data sharing, conditions for access by public bodies, portability and interoperability requirements for cloud services, and data export restrictions for non-personal data even stricter than those prescribed by Schrems II.[40]
4.2 Open infrastructure
Another flagship initiative is the GAIA-X project,[41] which is aimed at achieving interoperability between cloud offerings so as to reach the scalability of cloud infrastructure required for AI-related innovation: not by creating Europe’s own vertical hyperscalers, but by networking (making interoperable) the current European offering of cloud infrastructure, enabling clients to scale up within that network (i.e., scaling up horizontally). This is achieved by setting common technical standards and legal frameworks for the digital infrastructure and by standardizing contract conditions. This form of interoperability goes beyond portability of data and applications from one vendor to another to prevent vendor lock-in; it concerns the creation of open APIs, interoperability of key management for encryption, unambiguous identity and access management, and so on. Cloud providers will be expected to offer a choice as to where personal data are stored and processed.
The GAIA-X project is not a comprehensive European policy, but it is a concrete realization of the open interfaces, standards, and interconnection needed for the European policy and is explicitly based on principles of sovereignty-by-design. The project is open to foreign suppliers as long as they embrace the principles. From a digital sovereignty perspective, the GAIA-X project is a logical and promising initiative that is gaining more and more traction.[42] The expectation is that once the design principles are agreed upon, these may well become mandatory for all cloud services in Europe. Some of the elements (portability and interoperability requirements and data export restrictions for non-personal data) are already included in the draft Data Act.
4.3 Open source
The EC has an active open source software strategy under which open source solutions are preferred when equivalent in functionality, total cost, and cybersecurity.[43] Open source facilitates decentralized and federated services that can be independently audited, contributing to public trust. Open source technologies can further be worked on collectively, which provides benefits of scale (combining EU R&D to potentially match the R&D budgets of the big tech companies) but also ensures self-sovereignty, as open source can always be forked individually for specific solutions.[44]
In conclusion
History shows that whenever new technologies disrupt society, society needs time to adjust and regulators play catch-up. At this time, the digital society is still driven by the possibilities of technology rather than by social and legal norms. This inevitably leads to social unrest and calls for new rules. Threats to EU digital sovereignty have led to a flurry of EU digital policy measures that will disrupt the digital landscape as we know it by working towards open infrastructure, open data, and the application of open source technology.
The data transfer debate is no longer a culture war about differences in what are acceptable state powers to access data; it is about being in control of the digital infrastructure and data required for EU digital innovation. Russia’s invasion of Ukraine will only strengthen the EU’s resolve to become more independent. Once EU digital policy has done its work, the world will look very different indeed. The EC well recognizes the value of data transfers where they are required for running a cross-border business, and companies are advised to implement Schrems II compliance for those transfers. Such transfers will ultimately be facilitated by the renewed trans-Atlantic transfer agreement, once it materializes and is upheld before the EU courts. For the rest, companies will have to wait to see how EU policy settles and how this impacts the global service models of the large technology providers.
[2] Kwet, M., “Digital colonialism: US empire and the new imperialism in the global south,” Race & Class 60:4, 3-26.
[3] Amiot, E., I. Palencia, A. Baena, and C. de Pommerol, 2020, “European digital sovereignty: syncing values and value,” Oliver Wyman, https://owy.mn/3LOpGf7.
[5] For definitions see: Timmers, P., 2019, “Strategic autonomy and cybersecurity,” E.U. Cyber Direct, May 10, https://bit.ly/3v67gAu.
[6] Timmers, P., 2019, “Challenged by ‘digital sovereignty,’” Journal of Internet Law 23:6, 1, 18.
[7] For an in-depth discussion, see Timmers, P., and L. Moerel, 2020, “Reflections on digital sovereignty,” E.U. Cyber Direct, January 15, https://bit.ly/3s7sz2K.
[8] For an accessible book, see Brynjolfsson, E., and A. McAfee, 2014, Second machine age: work, progress, and prosperity in a time of brilliant new technologies, W.W. Norton & Company, which gives a good overview of the friction and disruption that arose from the industrial revolution, how society ultimately responded by regulating its negative excesses, and the friction and disruption now caused by the digital revolution. A less accessible, but very instructive, book on the risks of digitization and big tech for society is Zuboff, S., 2019, The age of surveillance capitalism, Public Affairs [hereinafter: Zuboff (2019)].
[9] Kranzberg, M., 1986, “Technology and history: ‘Kranzberg’s laws,’” Technology and Culture 27:3, 544-560.
[10] Van Boheemen, P., L. Kool, and J. Hamer, 2019, “Cyber resilience with new technology – opportunity and need for digital innovation,” Rathenau Instituut, July 20, https://bit.ly/3LN7YsB. See also the Dutch Cyber Security Council Recommendation, 2020, “Towards structural deployment of innovative applications of new technologies for cyber resilience in the Netherlands,” CSR Opinion 2020, no. 5, p. 3.
[11] See for an overview of U.S. and Chinese research investments, Smith-Goodson, P., 2019, “Quantum USA vs. quantum China: the world’s most important technology race,” Forbes, October 10, https://bit.ly/3sWJowv.
[12] In October 2019, Google claimed to have reached quantum supremacy with its Google quantum computer called Sycamore (https://go.nature.com/3JIJ9vL). On December 3, 2020, Chinese quantum computing researchers also claimed quantum supremacy (https://bit.ly/3vckY4W).
[13] Keen not to fall behind major U.S. tech firms in quantum computing, the Chinese company Tencent announced that it plans to invest U.S.$70 bln in infrastructure and quantum computing (https://bit.ly/3s7RkMc).
[14] Timmers, P., 2020, “There will be no global 6G unless we resolve sovereignty concerns in 5G governance,” Nature Electronics 3, 10-12. See also the German “Industrial strategy 2030. Guidelines for a German and European industrial policy,” (https://bit.ly/3t1c7Am) in which it is recognized that insufficient grip on new technologies poses a direct risk to the preservation of the technological sovereignty of the German economy.
[15] Sanger, D. A., 2018, The perfect weapon: war, sabotage, and fear in the cyber age, Scribe U.K.; Kello, L., 2017, The virtual weapon and international order, Yale University Press. Corien Prins also points out that the new digital weaponry is changing the (geopolitical) order: “The balance of power is shifting, now that smaller countries can also enter the global battlefield. Without having to engage in a large-scale military confrontation or actually enter the territory of another state. In short, it is relatively easy to develop great clout,” https://bit.ly/3JOI8Td.
[16] Kiesel, R., A. Fröhlich, S. Christ, and F. Jansen, 2020, “Russische Hacker könnten Justizdaten gestohlen haben,” Der Tagesspiegel, January 28, https://bit.ly/3v8I1xB.
[18] “FACT SHEET: Biden-Harris Administration bringing semiconductor manufacturing back to America,” The White House, January 21, 2022, https://bit.ly/3h7Da7G.
[19] Congressional Research Service, 2021, “U.S. export control reforms and China: issues for Congress,” January 15, https://bit.ly/3s7pe3D.
[20] “FACT SHEET: Executive Order addressing the threat from securities investments that finance certain companies of the People’s Republic of China,” The White House, June 3, 2021, https://bit.ly/33GprBz.
[22] “Executive Order on addressing the threat posed by TikTok,” The White House (archives.gov), August 6, 2020, https://bit.ly/3LRNzlZ; New York Times, 2020, “Trump’s attacks on TikTok and WeChat could further fracture the internet,” September 18, https://nyti.ms/3sUMtxj.
[28] See Position Paper of the Dutch Online Trust Coalition on regulatory developments at ENISA originating from the Cyber Security Act, https://bit.ly/33IyB0y.
[29] Proposal for a Regulation on harmonised rules on fair access to and use of data (Data Act), COM(2022) 68 final.
[30] See Timmers and Moerel (2020) for three approaches to achieve digital sovereignty: risk management, strategic partnerships, or working together on a global level to find solutions in the common interest (global common goods).
[32] Moerel, L., 2014, Big data protection: how to make the draft EU Regulation on Data Protection future proof (oration), Tilburg University, p. 21.
[37] European Commission, 2020, “A European data strategy,” COM(2020)66, February 19.
[38] See for overview of the data space design principles: “Design principles for data spaces,” position paper, https://bit.ly/3p79v2O.
[39] Proposal for a Regulation of the European Parliament and of the Council on European data governance (Data Governance Act), COM/2020/767 final.
[40] Proposal for a Regulation on harmonised rules on fair access to and use of data (Data Act), COM(2022) 68 final.
[41] “A Federated data infrastructure as the cradle of a vibrant European ecosystem,” the GAIA-X project initiated by the German and French governments, October 2019, based on principles of sovereignty-by-design.
[42] In the Netherlands, a coalition of TNO and a number of industry associations are actively contributing to the GAIA-X project, https://bit.ly/3p7hbSx.
[43] Communication to the Commission Open Source Software Strategy 2020 – 2023 Think Open, C(2020)7149 final, https://bit.ly/3BNhozx.
Reading the Signs: The Political Agreement on the New Transatlantic Data Privacy Framework
The President of the United States, Joe Biden, and the President of the European Commission, Ursula von der Leyen, announced last Friday in Brussels a political agreement on a new Transatlantic framework to replace the Privacy Shield.
The level at which this announcement was made marks a significant elevation of the topic within Transatlantic affairs compared to the 2016 announcement of a new deal to replace the Safe Harbor framework. Back then, it was Commission Vice-President Andrus Ansip and Commissioner Vera Jourova who announced, at the beginning of February 2016, that a deal had been reached.
The draft adequacy decision was only published a month after that announcement, and the adequacy decision itself was adopted six months later, in July 2016. It should therefore not be at all surprising if another six months (or more!) pass before the adequacy decision for the new Framework produces legal effects and can actually support transfers from the EU to the US, especially since the US side still has to issue at least one Executive Order to provide for the agreed-upon new safeguards.
This means that transfers of personal data from the EU to the US may still be blocked in the coming months – possibly without a lawful alternative to continue them – as a consequence of Data Protection Authorities (DPAs) enforcing Chapter V of the General Data Protection Regulation in light of the Schrems II judgment of the Court of Justice of the EU, whether as part of the 101 noyb complaints submitted in August 2020, which are slowly starting to be resolved, or as part of other individual complaints and court cases.
If you are curious about what the legal process will look like on both the US and EU sides after the agreement “in principle”, check out this blog post by Laila Abdelaziz of the “Privacy across borders project” at American University.
After the agreement “in principle” was announced at the highest possible political level, EU Justice Commissioner Didier Reynders doubled down on the point that the agreement is “on the principles” for a new framework, rather than on its details. Later, he also credited Commerce Secretary Gina Raimondo and US Attorney General Merrick Garland for their hands-on involvement in working towards the agreement.
In fact, “in principle” became the leitmotif of the announcement: the first EU Data Protection Authority to react was the European Data Protection Supervisor, who wrote that he “welcomes, in principle” the announcement of a new EU-US transfers deal – “The details of the new agreement remain to be seen. However, EDPS stresses that a new framework for transatlantic data flows must be sustainable in light of requirements identified by the Court of Justice of the EU”.
Of note, there is no catchy name for the new transfers agreement, which was referred to as the “Trans-Atlantic Data Privacy Framework”. Nonetheless, FPF’s CEO Jules Polonetsky submits the “TA DA!” Agreement, and he has my vote. For his full statement on the political agreement being reached, see our release here.
Some details of the “principles” agreed on were published hours after the announcement, both by the White House and by the European Commission. Below are a couple of things that caught my attention from the two brief Factsheets.
The US has committed to “implement new safeguards” to ensure that SIGINT activities are “necessary and proportionate” (a legal standard drawn from EU law – see Article 52 of the EU Charter on how the exercise of fundamental rights can be limited) in the pursuit of defined national security objectives. The new agreement is therefore expected to address the lack of safeguards for government access to personal data that the CJEU specifically outlined in the Schrems II judgment.
The US also committed to creating a “new mechanism for the EU individuals to seek redress if they believe they are unlawfully targeted by signals intelligence activities”. This new mechanism was characterized by the White House as having “independent and binding authority”. Per the White House, this redress mechanism includes “a new multi-layer redress mechanism that includes an independent Data Protection Review Court that would consist of individuals chosen from outside the US Government who would have full authority to adjudicate claims and direct remedial measures as needed”. The EU Commission mentioned in its own Factsheet that this would be a “two-tier redress system”.
Importantly, the White House mentioned in the Factsheet that oversight of intelligence activities will also be boosted – “intelligence agencies will adopt procedures to ensure effective oversight of new privacy and civil liberties standards”. Oversight and redress are different issues and both are equally important – for details, see this piece by Christopher Docksey. However, they tend to be thought of as one and the same, so the fact that they are addressed separately in this announcement is significant.
One of the remarkable things about the White House announcement is that it includes several EU law-specific concepts: “necessary and proportionate”, “privacy, data protection” mentioned separately, “legal basis” for data flows. In another nod to the European approach to data protection, the entire issue of ensuring safeguards for data flows is framed as more than a trade or commerce issue – with references to a “shared commitment to privacy, data protection, the rule of law, and our collective security as well as our mutual recognition of the importance of trans-Atlantic data flows to our respective citizens, economies, and societies”.
Last, but not least, Europeans have always framed their concerns related to surveillance and data protection as fundamental rights concerns. The US gives a nod to this approach as well, by referring a couple of times to “privacy and civil liberties” safeguards (thus adding the “civil liberties” dimension) that will be “strengthened”. All of these are positive signs of a “rapprochement” of the two legal systems and are certainly an improvement on the “commerce”-focused approach taken in the past on the US side.
It should also be noted that the new framework will continue to be a self-certification scheme managed by the US Department of Commerce.
What does all of this mean in practice? As the White House details, this means that the Biden Administration will have to adopt (at least) an Executive Order (EO) that includes all these commitments and on the basis of which the European Commission will draft an adequacy decision.
Thus, expectations are high following the White House and European Commission Factsheets, and the entire privacy and data protection community is waiting to see further details.
In the meantime, I’ll leave you with an observation made by my colleague Amie Stepanovich, VP for US Policy at FPF, who highlighted that Section 702 of the Foreign Intelligence Surveillance Act (FISA) is set to expire on December 31, 2023. This presents Congress with an opportunity to act, building on the extensive work done by the US Government in the context of the Transatlantic data transfers debate.
Measuring Privacy Programs
The risks of falling short on privacy compliance are greater than they have ever been. New laws are going into effect around the world and in U.S. states, enforcement agencies are exercising their authority, and media organizations have teams devoted to identifying data protection failures. Legal judgments can run into the billions. Most important of all, consumers are increasingly empowered and active in responding when they believe their rights are trampled. Companies are hiring compliance staff, investing in privacy management tools, and trying to become more sophisticated about measuring performance.
Businesses are increasingly monitoring quantitative and qualitative metrics to track, measure, and improve existing privacy programs. According to a Privacy Benchmark Study by Cisco, 93% of organizations currently track and provide analysis on at least one privacy metric, and 14% use five or more. These privacy metrics provide businesses and other organizations with key information that allows them to enhance trust and relationships with customers, ensure that personal data remains safe in data transfers, and confirm legal and regulatory privacy compliance.
FPF recently convened policy, academic, and industry privacy experts to discuss privacy metrics and their benefits, and published a report based on their discussions. Through these discussions, we learned that beyond demonstrating compliance, privacy metrics have emerged as a key measure to improve privacy program performance and maturity in terms of customer trust, risk mitigation, and business enablement. Privacy leaders can use these metrics to benchmark the maturity of their organization’s privacy program against its strategy and goals and demonstrate how privacy contributes to its strategy and bottom line.
Privacy metrics can measure a variety of data points. Simple operational and compliance metrics track activities such as the number of data subject requests, which privacy executives can use to monitor and improve the efficiency of existing organizational processes. More advanced, customer- and business-enablement-focused metrics measure things like the amount of time needed to respond to those requests.
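As a purely illustrative example of how such metrics might be computed, consider the short Python sketch below; the request log and the 30-day deadline are hypothetical stand-ins for whatever a ticketing or privacy-management system would actually provide.

```python
from datetime import date

# Hypothetical log of data subject access requests (DSARs); in practice this
# would come from a ticketing or privacy-management system.
dsars = [
    {"received": date(2022, 3, 1), "closed": date(2022, 3, 12), "type": "access"},
    {"received": date(2022, 3, 3), "closed": date(2022, 3, 28), "type": "deletion"},
    {"received": date(2022, 3, 10), "closed": date(2022, 4, 2), "type": "access"},
]

# Simple operational metric: total volume of requests.
total_requests = len(dsars)

# Enablement-oriented metric: average days needed to respond to a request.
avg_response_days = sum((r["closed"] - r["received"]).days for r in dsars) / total_requests

# Compliance check against a (hypothetical) 30-day statutory deadline.
overdue = [r for r in dsars if (r["closed"] - r["received"]).days > 30]

print(f"DSARs received: {total_requests}")
print(f"Average response time: {avg_response_days:.1f} days")
print(f"Requests over the 30-day deadline: {len(overdue)}")
```

Even a sketch this small illustrates the difference drawn above: the request count is an operational metric, while the response time speaks to the customer experience the program enables.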
Privacy metrics can be grouped into six categories:
Individual rights: Individual rights metrics measure consent rates for data sharing and email marketing, the volume of data subject requests, customer satisfaction rates, and more. This information is useful in determining the trust customers place in the privacy program and how well the program protects customer data.
Training & awareness: Training & awareness metrics compile the number of privacy trainings offered to staff, the number of staff trained, and staff engagement with the privacy program. When staff are engaged with privacy-related issues, businesses and organizations can better ensure legal compliance. This information can reveal gaps in organizational privacy knowledge, improve an organization’s public image, and create operational excellence in privacy.
Commercial: Commercial metrics measure how many customers have signed data processing agreements, how many external vendor reviews of an organization’s privacy program have been conducted, and how many privacy attestations have been completed. This information focuses on customer and business engagement, tracking a privacy program’s ability to support an organization as new technology is adopted. These metrics can drive additional investment from stakeholders, increasing the value of an organization.
Accountability: Accountability metrics draw on privacy, data protection, and transfer impact assessments, enabling organizations to track which projects have received privacy advice and to ensure that privacy policies and procedures are current. This allows organizations to demonstrate their ability to comply with relevant laws while keeping the organization competitive and reputable.
Privacy stewards: Privacy stewards are responsible for turning data policies into common organizational practices. These metrics measure the scope of an organization’s privacy products, including the number of personal information management systems, data privacy impact assessments, and data FAQs created.
Policy: Policy metrics measure an organization’s readiness for potential privacy legislation and support efforts to improve the organization’s environmental, social, and governance ratings. This allows organizations to increase public trust, as people know the organization will use and handle their data ethically.
Evaluating the effectiveness and value of privacy initiatives has become a core aspect of many organizations’ strategies, and ignoring privacy issues creates unnecessary risk. Privacy metrics can help organizations accomplish many objectives, including benchmarking against industry standards, ensuring compliance with privacy laws and regulations, increasing customer trust, and demonstrating the value of existing privacy programs.
If you are interested in learning more, sign up for our monthly briefing, join us at one of our upcoming events, or follow us on Twitter and LinkedIn.
FPF Statement on the EU/US Transatlantic Data Agreement
March 25, 2022 — This morning the European Union and the United States reached a breakthrough agreement in principle that would allow Europeans’ personal data to flow to the United States.
Future of Privacy Forum’s CEO Jules Polonetsky said:
We are encouraged to see progress in the important effort to ensure that cross-border EU-U.S. research, communication, and commerce can continue without disruption. Both the European Commission and U.S. negotiators understand that any deal needs to meet the standard set by the European Court of Justice. Recent U.S. proposals have included significant oversight and extensive redress structures, beyond the Privacy Shield agreement that the European Court of Justice invalidated. We look forward to the details of the latest proposals, including those related to ensuring proportionality of government access to Europeans’ data. We appreciate that the Biden Administration has supported new models of redress and hope that Congress will build on these efforts as it addresses reforms of surveillance legislation in the near future.
We also encourage both the U.S. and EU to recognize the need to ensure surveillance oversight and trusted data flows among democratic allies globally and support the ongoing work of the OECD in this regard.
Read the White House Fact Sheet: the United States and European Commission Announce Trans-Atlantic Data Privacy Framework here. You can also read VP of Global Privacy Dr. Gabriela Zanfir-Fortuna’s analysis here.
ITPI: New Report on the January 2021 OECD-Israel Workshop
The report was drafted by Limor Shmerling Magazanik, ITPI Managing Director, based on inputs from workshop experts, the OECD Working Party on Health Care Quality and Outcomes, and the OECD Working Party on Data Governance.
The objective of the workshop was to further international dialogue on issues critical for the successful use of health data for the benefit of the public, focusing on the implementation of privacy protection principles and the challenges that arise in the process.