FPF at IAPP’s Europe Data Protection Congress 2022: Global State of Play, Automated Decision-Making, and US Privacy Developments

Authored by Christina Michelakaki, FPF Intern for Global Policy

On November 16 and 17, 2022, the IAPP hosted the Europe Data Protection Congress 2022 – Europe’s largest annual gathering of data protection experts. During the Congress, members of the Future of Privacy Forum (FPF) team moderated and spoke at three different panels. Additionally, on November 14, FPF hosted the first Women@Privacy awards ceremony at its Brussels office, and on November 15, FPF co-hosted the sixth edition of its annual Brussels Privacy Symposium with the Vrije Universiteit Brussels (VUB)’s Brussels Privacy Hub on the issue of “Vulnerable People, Marginalization, and Data Protection” (event report forthcoming in 2023).

In the first panel for IAPP’s Europe Data Protection Congress, Global Privacy State of Play, Gabriela Zanfir-Fortuna (VP for Global Privacy, Future of Privacy Forum) moderated a conversation on key global trends in data protection and privacy regulation in jurisdictions from Latin America, Asia, and Africa. Linda Bonyo (CEO, Lawyers Hub Africa), Annabel Lee (Director, Digital Policy (APJ) and ASEAN Affairs, Amazon Web Services), and Rafael Zanatta (Director, Data Privacy Brasil Research Association) participated. 

In the second panel, Automated Decision-making and Profiling: Lessons from Court and DPA Decisions, Sebastião Barros Vale (EU Privacy Counsel, Future of Privacy Forum) led a discussion on FPF’s ADM case-law report and impactful cases and relevant concepts for automated decision-making regulation under the GDPR. Ruth Boardman (Partner, Co-head, International Data Protection Practice, Bird & Bird), Simon Hania (DPO, Uber), and Gintare Pazereckaite (Legal Officer, EDPB) participated.

Finally, in the third panel, Perspectives on the Latest US Privacy Developments, Keir Lamont (Senior Counsel, Future of Privacy Forum) participated in a conversation focused on data protection developments at the federal and state level in the United States. Cobun Zweifel-Keegan (Managing Director, D.C., IAPP) moderated it, and Maneesha Mithal (Partner, Privacy and Cybersecurity, Wilson Sonsini Goodrich & Rosati) and Dominique Shelton Leipzig (Partner, Cybersecurity & Data Privacy; Leader, Global Data Innovation & AdTech, Mayer Brown) also participated.

Below is a summary of the discussions in each of the three panels:

1. Global trends and legislative initiatives around the world

In the first panel, Global Privacy State of Play, Gabriela Zanfir-Fortuna stressed that although EU and US developments in privacy and data protection are in the spotlight, the explosion of regulatory action in other regions of the world is very interesting and deserves more attention.

Linda Bonyo touched upon the current movement in Africa, where countries are adopting their own data protection laws, primarily inspired by the European model of data protection regulation, since they trust that the GDPR is a global standard and lack the resources to draft policies from scratch. Bonyo also added that the lack of resources and limited expertise are the main reasons why African countries struggle to establish independent Data Protection Authorities (DPAs). She then stressed that the Covid-19 pandemic revived discussions about a continental legal framework to address data flows. Regarding enforcement, she noted that in Africa the approach is more “preventative” than “punitive.” Bonyo also underlined that it is common for big tech companies to operate outside of the continent and only have a small subsidiary in the African region, rendering local and regional regulatory action less impactful than in other regions.

Annabel Lee offered her view on the very dynamic Asia-Pacific region, noting that the latest trends, especially post-GDPR, include not only the introduction of new GDPR-like laws but also the revision of existing ones. Lee noted, however, that the GDPR is a very complex piece of legislation to “copy,” especially if a country is building its first data protection regime. She then focused on specific jurisdictions, noting that South Korea has overhauled its originally fragmented framework with a more comprehensive one and that Australia will implement a broad extraterritorial element in its revised law. Lee then stated that when it comes to implementation and interpretation, data protection regimes in the region differ significantly, and countries try to promote harmonization through mutual recognition. With regard to enforcement, she stressed that it is common to see occasional audits and that certain countries, such as Japan, have a very strong culture of compliance. She also added that education can play a key role in working towards harmonized rules and enforcement. Lee offered Singapore as an example, where the Personal Data Protection Commission gives companies explanations not only of why they are in breach but also of why they are not in breach.

Rafael Zanatta explained that after years of strenuous discussions, Brazil has an approved data protection law (LGPD) that has already been in place for a couple of years. The new DPA created by the LGPD will likely ramp up its enforcement duties next year and has, so far, focused on building experimental techniques (to help incentivize associations and private actors to cooperate) and publishing guidelines, namely non-binding rules that will inform the future interpretation of cases. Zanatta stressed that Brazil has been experiencing the formalization of autonomous data protection rights, with supreme court rulings stating that data protection is a fundamental right distinct from privacy. He underscored that it will be interesting to see how the private sector applies data protection rights given their horizontal effect and the development of concepts like positive obligations and the collective dimension of rights. He explained that the extraterritorial applicability of Brazil’s law is very similar to the GDPR’s, since companies do not need to operate in Brazil for the law to apply. He also touched upon the influence of Mercosur, a South American trade bloc, on discussions around data protection, as well as the collective rights of the indigenous people of Bolivia in light of the processing of their biometric data. With regard to enforcement, he explained that in Brazil it happens primarily through the courts, due to Brazil’s unique system in which federal prosecutors and public defenders can file class actions.


2. Looking beyond case law on automated decision-making

In the second panel, Automated Decision-making and Profiling: Lessons from Court and DPA Decisions, Sebastião Barros Vale offered an overview of FPF’s ADM Report, noting that it contains analyses of more than 70 DPA decisions and court rulings concerning the application of Article 22 and other related GDPR provisions. He also briefly summarized the Report’s main conclusions. One of the main points he highlighted is that the GDPR covers automated decision-making (ADM) comprehensively beyond Article 22, including through the application of overarching principles like fairness and transparency, rules on lawful grounds for processing, and requirements to carry out Data Protection Impact Assessments (DPIAs).

Ruth Boardman underlined that the FPF Report reveals the areas of the law that are still “foggy” regarding ADM. Boardman also offered her view on the Portuguese DPA decision concerning a university using proctoring software to monitor students’ behavior during exams and detect fraudulent acts. The Portuguese DPA ruled that the Article 22 prohibition applied, given that the human involvement of professors in the decisions to investigate instances of fraud and invalidate exams was not meaningful. Boardman further explained that this case, along with the Italian DPA’s Foodinho case, shows that the human in the loop must have meaningful involvement in the decision-making process for Article 22 GDPR to be inapplicable. She added that internal guidelines and training provided by the controller may not be definitive factors but can serve as strong indicators of meaningful human involvement. Regarding the concept of “legal or similarly significant effects,” another condition for the application of Article 22 GDPR, Boardman noted the link between such effects and contract law. For example, under national laws transposing the e-Commerce Directive in which adding a product to a virtual basket counts as an offer to the merchant and not as a binding contract, no legal effects are triggered. She also added that meaningful information about the logic behind ADM should include the consequences that data subjects can suffer, and referred to an enforcement notice from the UK’s Information Commissioner’s Office concerning the creation of profiles for direct marketing purposes.

Simon Hania argued that the FPF Report showed the robustness of the EDPB guidelines on ADM and that ADM triggers GDPR provisions relevant to fairness and transparency. With regard to the “human in the loop” concept, Hania claimed that it is important to involve multiple humans and ensure that they are properly trained to avoid biased decisions. He then elaborated on a case concerning Uber’s algorithms that match drivers with clients, where Uber drivers requested access to data to assess whether the matching process was fair. For the Amsterdam District Court, the drivers did not demonstrate how the matching process could have legal or similarly significant effects on them, which meant that the drivers did not have the enhanced access rights that would only apply if ADM covered by Article 22 GDPR was at stake. However, when ruling on an algorithm used by another ride-hailing company (Ola) to calculate fare deductions based on drivers’ performance, the same court found that the ADM at issue had significant effects on drivers. For Hania, a closer inspection of the two cases reveals that both ADM schemes affect drivers’ ability to earn or lose remuneration, which highlights the importance of financial impacts when assessing the effects of ADM under Article 22. He also touched on a decision from the Austrian DPA concerning a company that scored individuals on the likelihood they would belong to certain demographic groups, in which the DPA mandated the company to inform individuals about how it calculated their individual scores. For Hania, the case shows that controllers need to explain the reasons behind their automated decisions – regardless of whether they are covered by Article 22 GDPR – to comply with the fairness and transparency principles of Article 5 GDPR.

Gintare Pazereckaite noted that the FPF Report is particularly helpful in understanding inconsistencies in how DPAs apply Article 22 GDPR. She then stressed that the interpretation of “solely automated processing” should be done in light of protecting and safeguarding data subjects’ fundamental rights. Pazereckaite also referred to the criteria set out by the EDPB guidelines that clarify the concept of “legal and similarly significant effects.” She added that data protection principles such as accountability and data protection by design play an important role in allowing data subjects to understand how ADM works and what consequences it may bring about. Lastly, Pazereckaite commented on Article 5 of the proposed AI Act – which contains a list of prohibited AI practices – and its importance when an algorithm does not trigger Article 22 GDPR.


3. ADPPA and state laws reshaping the US data protection regime

In the last panel, Perspectives on the Latest US Privacy Developments, Keir Lamont offered an overview of recent US Congressional efforts to enact the American Data Privacy and Protection Act (ADPPA) and outstanding areas of disagreement. For him, the bill would introduce stronger rights and protections than those set forth in existing state-level laws, including a broad scope; strong data minimization provisions; limitations on advertising practices; enhanced privacy-by-design requirements; algorithmic impact assessments; and a private right of action. In contrast, existing state laws typically adhere to the outdated opt-in/opt-out paradigm for establishing individual privacy rights.

Maneesha Mithal explained that in the absence of comprehensive federal privacy legislation, the Federal Trade Commission (FTC) has largely taken on the role of a DPA by virtue of having jurisdiction over a broad range of sectors in the economy and acting as both an enforcement and a rulemaking agency. Mithal explained that the FTC enforces four existing privacy laws in the US and can also take action against both unfair and deceptive trade practices. For example, the FTC can enforce against deceptive statements (irrespective of whether they appear in a privacy policy or in user interfaces), material omissions (in one case, the FTC concluded that a company had failed to inform its customers that it was collecting second-by-second television viewing data and sharing it further), and unfair practices in the data security area. Mithal pointed out that since the FTC does not have the authority to seek civil penalties for first-time violations, it is trying to introduce additional deterrents by naming individuals (for example, in the case of an alcohol provider, the FTC named the CEO for failing to prioritize security) and is using its power to obtain injunctive relief. For example, in a case where a company was unlawfully using facial recognition systems, the FTC ordered the company to delete any models or algorithms it had developed using that data, thereby applying the fruit of the poisonous tree doctrine. Mithal also noted that although the FTC has historically not been active as a rulemaking authority due to procedural issues, along with a lack of resources and time considerations, it is initiating a major rulemaking on “Commercial Surveillance and Lax Data Security Practices.”

Finally, Dominique Shelton Leipzig offered remarks on state-level legislation, focusing on the California Consumer Privacy Act (CCPA) as amended by the California Privacy Rights Act (CPRA), adding that Colorado, Connecticut, Utah, and Virginia have similar laws. She elaborated on the CPRA’s contractual language, comparing California’s categorization of “Businesses,” “Contractors,” “Third Parties,” and “Service Providers” to the GDPR’s distinction between controllers and processors. Shelton Leipzig also explained that the CPRA introduced a highly disruptive model for the ad tech industry, since consumers can opt out of both the sale and the sharing of data. The CPRA also created a new independent rulemaking and enforcement agency, the first in the US focused solely on data protection and privacy. Finally, she addressed the recently enacted California Age-Appropriate Design Code Act, which focuses on the design of internet tools, and stressed that companies are struggling to implement it.


Further reading:

Five Big Questions (and Zero Predictions) for the U.S. State Privacy Landscape in 2023

Entering 2023, the United States remains one of the only global economic powers that lacks a comprehensive, national framework governing the collection and use of consumer data throughout the economy. Congress made unprecedented progress toward enacting baseline privacy legislation in 2022. However, the apparent impasse in the efforts to move H.R. 8152, the American Data Privacy and Protection Act (“ADPPA”), over the finish line is likely to re-center the states as the locus of continued legislative activity on consumer privacy. Stakeholders are eager to learn which (if any) states will establish new privacy rights and protections in the coming year, but it remains too early in the legislative cycle to make predictions with any confidence. So instead, this post explores five big questions about the state privacy landscape that will determine whether 2023 emerges as a pivotal year for the protection of consumer data in the United States.

1. Will any state raise the bar for comprehensive privacy protections?

In four of the past five years, a new high-water mark for American privacy protections has been set through the enactment of comprehensive legislation at the state level. In 2018, the California Consumer Privacy Act (CCPA) emerged as the nation’s first comprehensive consumer privacy law. The 2020 California Privacy Rights Act (CPRA) ballot initiative expanded California’s privacy regime, establishing heightened protections for certain sensitive personal information and providing a right to correct inaccurate data. In 2021, Virginia (VCDPA) and Colorado (CPA) enacted laws that are notable for creating ‘opt-in’ affirmative consent requirements in addition to California-style ‘opt-out’ privacy rights. Finally, in 2022, Connecticut (CTDPA) adopted a privacy law that improved upon prior models by creating clear protections for facial recognition data and an explicit right to revoke consent.

Will any state continue this trend by enacting a privacy law that establishes new or stronger privacy rights and protections for its citizens in the coming year? As industry groups become increasingly insistent about the dangers of a ‘patchwork’ of divergent state privacy laws raising compliance costs for businesses, it is possible that policymakers will be reluctant to explore new approaches to privacy protection and will instead advance legislation that ‘paints inside the lines’ of the five established laws. 

In considering the forthcoming state privacy landscape, one of the best places to start is with the jurisdictions that came closest to adopting new privacy laws over the past year. In 2022, five states saw privacy legislation clear one chamber of their legislature: Florida (HB 9), Indiana (SB 358), Iowa (HF 2506), Oklahoma (HB 2969), and Wisconsin (AB 957). Of these, the Midwestern proposals (Indiana, Iowa, and Wisconsin) would not have meaningfully expanded privacy rights, protections, or compliance obligations beyond what is already on the books in other states (though they would have established some important privacy rights and protections for their residents).

Alternatively, last year’s bills from Oklahoma and Florida would have significantly reshaped privacy compliance programs for covered entities. Oklahoma’s Computer Data Privacy Act included more rigorous consent requirements than any comparable state or national law, while Florida’s proposal would have required companies to adhere to strict data retention schedules and provided for enforcement mechanisms that are absent in other state laws, including a private right of action. However, there are reasons to suspect the window of opportunity for each bill may have closed. In Oklahoma, the bill’s most prominent backer, Democratic Rep. Collin Walke, has retired. As for Florida, reports indicate that the bill’s sponsor believes that leadership changes make it unlikely that the state will prioritize privacy legislation in the coming years.

Although no state appears to have an existing privacy framework with a demonstrated record of support at the ready to move next year, history has also shown that under the right political conditions novel privacy legislation can rapidly advance in a single legislative session. Potential candidates for similar progress in 2023 include Oregon, where an Attorney General-led multi-stakeholder task force has spent months gearing up to advance a comprehensive consumer privacy bill next legislative cycle. In New York, a set of end-of-session amendments to the 2022 version of the New York Privacy Act (S6701) brought the proposal structurally closer to existing privacy laws, suggesting this legislation could see renewed momentum in the coming year. It is also worth remembering that despite privacy’s emergence as a bipartisan issue, the only states to enact comprehensive privacy legislation to date have had the same party in power in both legislative chambers and the governor’s mansion. It will therefore be worth watching four states that have previously considered privacy legislation and emerged from the November elections with newly formed Democratic Party trifectas in government: Maryland (SB 11), Massachusetts (H 4514), Michigan (SB 1182 & HB 5989), and Minnesota (HF 1492 (2021)).

2. Will there be an ‘ADPPA Effect’?

In 2022, the American Data Privacy and Protection Act (ADPPA) advanced through the House Energy and Commerce Committee by an overwhelmingly bipartisan 53-2 vote. ADPPA appears unlikely to be enacted this Congress as the bill’s backers were unable to secure the support of either Senate Commerce Chair Cantwell or outgoing Speaker Pelosi. Nevertheless, the introduction, enthusiasm, and momentum behind ADPPA represented a seismic event for the U.S. privacy landscape and may exert significant influence on state lawmakers in the coming years.

There are two (potentially competing) theories for how ADPPA’s emergence may impact state governments considering privacy legislation. First, in introducing state privacy legislation, lawmakers have routinely asserted that they are acting in the absence of Congressional action and that they would prefer to see a unified, federal approach to the protection of consumer privacy. As a result, demonstrated bipartisan cooperation on ADPPA and the potential for further progress in the next Congress may make consumer privacy a less salient issue in state legislatures.

On the other hand, it is also possible that ADPPA will substantially drive the content of privacy bills that will be considered in 2023. The majority of state privacy proposals considered in recent years have been modeled on either the California or Washington Privacy Act legislative frameworks, both of which are rooted in the traditional, narrow privacy paradigm of ‘notice and choice.’ However, ADPPA’s framework is significantly stronger and broader than any enacted state law in rights and protections, scope, and enforcement mechanisms. For example, ADPPA would broadly cover businesses and nonprofits, establish strict data minimization requirements, create new civil rights protections, and provide for enforcement by a private right of action. The prominence of ADPPA and its record of bipartisan support make it a potential third model for state privacy legislation. There is already legislation in Michigan (SB 1182) that contains shades of ADPPA in its formulation of a private right of action. What, if any, additional language or concepts from ADPPA will gain traction at the state level?

3. Have we entered the Age of the Age-Appropriate Design Code?

While ‘comprehensive’ privacy laws and proposals continue to capture the bulk of the privacy commentariat’s attention, it is likely that the most significant U.S. consumer privacy development in 2022 was not ‘comprehensive,’ but ‘sectoral’ in nature. On September 15, California Governor Newsom signed the Age-Appropriate Design Code (AB 2273) into law. The AADC is a far-reaching children’s online safety, design, and privacy statute that is loosely modeled on an existing UK code of practice. Come 2024, the AADC will govern online services likely to be accessed by Californian users under 18 years of age and create significant new obligations. Notably, the law could also run contrary to traditional privacy interests and priorities, as it contains age-estimation requirements that will likely cause many companies to collect additional personal information on all their users. California’s AADC has been divisive – lauded by some and criticized as unworkable or unconstitutional by others. But most careful readers agree that the statute leaves many key terms undefined or vague; future rulemaking or other work to bring clarity is likely.

The ‘California effect,’ whereby legislative activity in California catalyzes similar action in other jurisdictions, is well documented in the privacy context. This means that a key question for consumer privacy in the coming years is whether other states will follow California’s lead and begin to enact their own age-appropriate design laws. Supporters of the AADC certainly intend for it to serve as a model for adoption in additional jurisdictions. However, as with breach notification statutes and comprehensive privacy laws, should other states consider and enact age-appropriate design legislation, there is no guarantee that they will follow neatly in the footsteps of California.

One AADC-style proposal that has already been introduced, the New York Child Data Privacy and Protection Act (S9563), would impose significant new obligations beyond California’s AADC. Perhaps most notably, S9563 would severely limit product development by requiring a risk assessment to be completed for any new online feature of a service targeted toward children, to be reviewed and approved by the Attorney General’s Office before such feature can be made available to the public. The California AADC also contains a broad grant of rulemaking authority, meaning that even if other states adopt identical laws, the contours of the AADC’s rights and responsibilities may continue to shift over the coming years. In sum, age-appropriate design legislation has the potential to dramatically alter online experiences for all users in the coming years; however, the ultimate impact of such frameworks is likely to come into greater focus over the coming months.

4. Will state legislatures prioritize protections for health and location data?

In June, the Supreme Court’s decision in Dobbs v. Jackson Women’s Health overturned decades of precedent to hold that the U.S. Constitution does not confer a right to receive an abortion. Following this decision, dozens of states took rapid action to either criminalize or shore up protections for receiving or providing reproductive health services. For example, California enacted AB-1242, which seeks to prohibit electronic communications providers from complying with out-of-state law enforcement inquiries relating to the investigation or enforcement of laws prohibiting abortion. However, there are indications that come 2023, some Democratic state lawmakers will pursue a new legislative response by regulating the collection, processing, and transfer of health and location data by businesses.

In New York, SB 9599 would impose strict consent requirements on companies that collect or sell personal health information for data processing, geofencing, or data brokering. The Washington State Attorney General’s office has announced that it will support similar legislation, the Consumer Health Data Privacy Act, in the coming year. Stakeholders will be watching closely to learn whether these legislative efforts converge around a shared approach to key definitions, rights, and business obligations, or move forward with diverging health privacy frameworks.

5. How effective will the laws taking effect be?

No matter what happens in state legislatures this year, 2023 will hold the distinction of being the year in which a new era of state privacy laws takes effect. On January 1st, California’s revised regime and Virginia’s law will become operational, followed by the Colorado and Connecticut statutes on July 1st, with Utah’s statute bringing up the rear with a December 31st effective date. In the impending shift from theory to practice, how will both public and policymaker perceptions of these various laws change?

While privacy professionals have spent years debating and preparing for these impending state laws, 2023 will mark the first time that many U.S. consumers will be legally entitled to exercise new privacy rights over the businesses that collect and share their personal information. Depending on the public perception (both immediate and over time) of these new state privacy laws, legislative efforts in other jurisdictions could be impacted in a variety of ways. For example, successful rollouts of the new state laws could prompt lawmakers in other jurisdictions to move forward on similar bills, seizing upon a popular issue. On the other hand, if the new laws kick off with a whimper, lawmaker appetite to take up consumer privacy issues might wane. If these laws take effect and consumers face difficulty in exercising their rights (as Consumer Reports argues occurred following the enactment of the CCPA), perhaps lawmakers will consider statutes with stronger enforcement mechanisms and larger penalties in order to compel compliance. Alternatively, lawmakers may also consider establishing longer ‘on-ramps’ to compliance (particularly for small businesses) or seek to draft more explicit, self-executing statutory obligations.

Conclusion

This commentary has noted several privacy proposals already under serious consideration for the 2023 legislative calendar (particularly in New York, where many bills have already been introduced). These bills and efforts should be regarded as only the narrow, visible tip of the iceberg; lawmakers and stakeholders across the country are likely already at work on new proposals that will not be officially introduced until legislative sessions formally convene. This article has posed many questions but can offer only one clear forecast: a turbulent and exciting year in the efforts to advance and secure new consumer data privacy rights and protections is on the horizon. Be sure to follow the Future of Privacy Forum for help tracking emerging trends and key developments throughout the year.

FPF Releases Comparative Analysis of California and U.K. Age-Appropriate Design Codes

The Future of Privacy Forum (FPF) today released a new policy brief comparing the California Age-Appropriate Design Code Act (AADC), a first-of-its-kind privacy-by-design law in the United States, and the United Kingdom’s Age-Appropriate Design Code. While there are distinctions between the two codes, the California AADC, which is set to become enforceable on July 1, 2024, was modeled after the UK’s version and represents a significant change in the regulation of the technology industry and how children will experience online products and services. 

Download the POLICY BRIEF: Comparing the UK and California Age-Appropriate Design Codes.

“Understanding the requirements of both the UK and California codes, and in particular where they differ, is critical for companies in the US and abroad who may soon be covered under one – or both – codes,” said Chloe Altieri, Youth & Education Privacy policy counsel for FPF and an author of the report. “The explanations and examples in the UK code, many of which are not yet defined in California’s version, may provide helpful compliance insights.”

The report builds on FPF’s in-depth analysis of the California AADC, published in October, and contains a side-by-side comparison of the 15 standards laid out in the UK AADC to the corresponding text of the California AADC, including the “best interests of the child” standard, age assurance, default settings, parental controls, enforcement, and data protection impact assessments.

The report also outlines several broader distinctions between the California and UK codes, including, crucially, how the underlying regulatory frameworks differ. While both codes build on the aims of their respective consumer privacy laws (the UK’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act), the California AADC is standalone legislation that will be independently enforced, while the UK AADC and GDPR are linked, and enforcement falls to the UK Information Commissioner’s Office (ICO). The UK AADC is also “rooted” in Article 3 of the United Nations Convention on the Rights of the Child (UNCRC), an international treaty ratified by 195 countries, including the UK, but not the US. While the California AADC uses a similar “best interests of children” standard, without the foundation of the UNCRC, there is much less certainty in how businesses should make that determination. 

“As policymakers in other states start to consider similar age-appropriate design code legislation, understanding how and why the California and UK codes differ will be critical,” said Bailey Sanchez, Youth & Education Privacy policy counsel at FPF and an author of the report. “It is not as simple as copying and pasting the UK code in California or anywhere else. California’s version was adapted to fit the legal landscape of the US and of a state with its own unique consumer privacy regime. Other states will need to make their own adjustments.”

FPF’s youth and education privacy team is closely monitoring the implementation of the California AADC. Catch up on previous blog posts tracking the bill’s progress and our formal analysis once the bill was signed into law. To access the Youth & Ed team’s child and student privacy resources, visit www.StudentPrivacyCompass.org and follow the team on Twitter at @SPrivacyCompass.

New Report Promotes Accountability-Based Approach to Data Protection in the APAC Region

In recent years, there has been an uptick in new comprehensive data protection laws across the Asia-Pacific (APAC) region. This trend introduces cross-border compliance challenges, particularly for industry, legal practitioners, and data protection regulators. These stakeholders increasingly acknowledge the need for greater consistency in regional data protection frameworks.

Today, the Future of Privacy Forum (FPF) — a global non-profit focused on privacy and data protection — and experts from the Asian Business Law Institute (ABLI) — a Singapore-based non-profit think tank dedicated to providing practical guidance in the field of Asian legal development — released a report providing a detailed comparison of the requirements for processing personal data in 14 Asia-Pacific (APAC) jurisdictions: Australia, China, India, Indonesia, Hong Kong SAR, Japan, Macau SAR, Malaysia, New Zealand, the Philippines, Singapore, South Korea, Thailand, and Vietnam. The comparative analysis follows a months-long dissemination of individual reports on each jurisdiction.

Amid the movement for greater consistency between different jurisdictions’ data protection laws, FPF and ABLI found that many APAC jurisdictions have engaged in conversations about moving away from consent-centric privacy practices, which are often shaped by individual jurisdictions and their legal systems. FPF and ABLI’s new report aims to elevate this discussion by promoting alternatives to consent, thereby increasing the accountability of organizations that process personal data.

By relying on legal bases other than consent, with an eye on supporting greater accountability, organizations can balance their interest in using personal data against broader societal concerns, such as developing a vibrant digital economy or protecting individuals from the harms of crime and fraud.

“The APAC region is presently undergoing a period of intensive law reform. This APAC comparative report provides an opportunity for lawmakers, governments, and regulators who draft, review or implement data protection laws to have a comprehensive overview and analysis of notice, consent, and related requirements that operate in the data protection frameworks of their respective jurisdictions, regional partners and neighbors,” said Josh Lee Kok Thong, Managing Director of FPF APAC in Singapore. “We hope the report serves as a catalyst for initiating a regional dialogue focused on clarifying existing uncertainties and enhancing the compatibility of regional data protection laws.”

Understanding that compliance-based approaches to data privacy, such as consent, notice, and choice mechanisms, have serious limitations, FPF and ABLI propose an accountability-based approach to data protection. This approach shifts the responsibility for protection from individuals to the organizations that use and control personal data, for instance through data protection impact assessments (DPIAs). Assessing risk and enhancing accountability provides organizations with legal certainty and ensures better privacy protection across the region.

Moreover, FPF and ABLI found that this accountability approach is necessary and important as data protection developments continue advancing across the APAC region. These developments present a rare opportunity to clarify existing uncertainties and enhance the compatibility of Asian data protection laws on these crucial privacy issues. The report’s comparative legal analysis and input from industry, privacy professionals, and regional practitioners have demonstrated that despite divergences, commonalities exist and can be leveraged to drive convergence between jurisdictions’ different legal systems.

“ABLI is delighted to see the fruit of its partnership with FPF culminating in the release of this comparative review at a time when digitalization is driving an increase in cross-border data flows,” said Mr. Rama Tiwari, Chief Executive of the Singapore Academy of Law, ABLI’s parent organization. “With the largest economies in APAC estimated to be able to reap economic benefits of approximately US$2 trillion if they fully capture their digital potential, the release of the report provides a timely and enabling reference for policymakers and practitioners alike as they design, reform and refine policies and practices for personal data processing both domestically and across borders.”

The GDPR and the AI Act Interplay: Highlights from FPF and Ada Lovelace Institute’s Joint Event

Authored by Christina Michelakaki, FPF Intern for Global Policy

On November 9, 2022, FPF, along with the Ada Lovelace Institute (Ada), organized a closed roundtable in Brussels where experts met to discuss the lessons that can be drawn from General Data Protection Regulation (GDPR) enforcement precedents when deciding on the scope and obligations of the European Commission (EC)’s Artificial Intelligence (AI) Act Proposal. The event hosted representatives from the European Parliament, civil society organizations, Data Protection Authorities (DPAs), and industry representatives.

The roundtable discussion was based on a comprehensive Report launched by FPF in May 2022 analyzing case law under the GDPR applied to real-life cases involving Automated Decision-Making (ADM). This blog outlines the main conclusions drawn by the speakers on four main topics:

  1. the complementarity between the AI Act and the GDPR;
  2. the needed clarity on transparency requirements and AI training;
  3. the difference between the risk-based approach and an exhaustive high-risk AI list; and
  4. the need for effective enforcement and redress avenues for affected persons.    

FPF’s Managing Director for Europe, Dr. Rob van Eijk, gave opening remarks, followed by a short introduction from FPF EU Policy Counsel Sebastião Barros Vale and Ada’s European Adviser, Alexandru Circiumaru.

As outlined in a recent analysis on the matter published by FPF, Barros Vale highlighted the main points of interplay between the two frameworks.

This was followed by short interventions from the participants and a discussion. 

  1. The AI Act and the GDPR have different but complementary scopes and obligations.

Speakers seemed to agree that the AI Act’s departure from the GDPR’s distinction between data controllers and processors stems from the EC’s aim of regulating AI systems entering the EU market from a product safety perspective.

Industry representatives further explained that trying to understand the requirements of the AI Act from a data protection point of view does not make practical sense. They added that having specific types of AI systems in the Annex III list would not make them lawful per se, notably if they breach GDPR requirements. 

Other panelists noted that the AI Act has a broader scope than the GDPR regarding AI systems, as it covers cases where no personal data processing is involved but that may negatively affect individuals.

Moreover, participants questioned whether the AI Act’s human oversight requirements would render Article 22 GDPR on ADM useless – notably, the rights for individuals to obtain human review of automated decisions. Some participants argued that this does not seem to be the case because users are not required by the AI Act Proposal to implement meaningful human oversight when they deploy AI systems. With regard to human oversight, the current draft of the AI Act only requires providers to embed measures into their AI systems that enable users to include human oversight in their deployment scheme. 

Additionally, some experts claimed that the interplay between the AI Act and the GDPR was neglected in the original Proposal, which may raise further issues. As an example, the initial text seems to overlook the fact that users (likely controllers under the GDPR) are often very dependent on AI providers when it comes to practical GDPR compliance. Moreover, speakers worried about issues arising in the AI system’s deployment phase – where AI users are in control – and pointed to the approach proposed at the Council of Europe’s Convention on AI as more tailored to the responsibilities of each party.

For another speaker, incorporating broader fundamental rights impact assessments into the AI Act could complement Data Protection Impact Assessments (DPIAs) under the GDPR. 

  2. Transparency requirements and AI training need further clarity

With regard to certain requirements set forth by the AI Act Proposal, speakers called for further clarity on transparency obligations and on the rules applicable to AI training.

  3. The difference between the risk-based approach and an exhaustive high-risk AI list

As a pushback to having a specific set of strictly-regulated high-risk AI use cases in Annex III and prohibited practices in Article 5 of the AI Act, some voices suggested mimicking the GDPR’s open clauses and risk-based approach. More specifically, a few speakers agreed that not having a closed list but rather a set of overarching principles and risk assessment requirements would increase providers’ accountability and enable enforcers to verify compliance in a more flexible manner. 

Speakers that advocated for such a solution agreed that the concept of ‘risk’ and its underlying assessment criteria should be read in line with the GDPR, which could also provide less prescriptive indications of how to mitigate detected risks.

For other experts in the room, a clear definition of AI along with the list of Annex III is preferable, as it avoids enshrining subjective risk assessment criteria. According to this point of view, relying on providers’ self-assessments to decide what is considered high-risk could benefit larger players with the financial incentives and legal resources to take a less cautious approach to AI development.

Among the participants that defended keeping a high-risk AI list, some called for the ability to easily update the list since novel risky AI use cases are constantly surfacing. One of the experts disagreed that it should fall on the EC to update the list, given the institution’s political driving factors. Instead, the speaker called for a bottom-up approach involving regulators and the public’s participation. Other voices also advocated for incorporating emotion recognition systems and the analysis of biometrics-based data in the list of high-risk AI use cases. 

  4. A need for effective enforcement and redress avenues for affected persons

Lastly, participants touched upon the AI Act’s governance mechanisms and redress avenues. 

Further reading:

FPF Urges Federal Trade Commission to Craft Practical Privacy Rules

FPF Comments Regarding FTC ANPR Urge the Commission to Provide Individuals with Strong, Enforceable Rights and Companies with Greater Clarity about their Obligations under Section 5 of the FTC Act.

The Future of Privacy Forum filed comments regarding the Federal Trade Commission’s Advance Notice of Proposed Rulemaking, recommending that the Commission prioritize practical rules that clearly define individuals’ rights and companies’ responsibilities. 

The Commission has spent decades enforcing prohibitions against unfair and deceptive data practices regarding a wide range of established and emerging technologies. Those privacy and security enforcement actions have been based on the FTC’s statutory authority, which provides flexibility to address consumer harms arising from novel technologies and business practices, but which does not articulate granular rights for consumers or requirements for businesses. Clear, practical rules can more specifically define what data practices the Commission considers unfair or deceptive. The current FTC rulemaking is an opportunity to provide individuals with strong, enforceable rights and companies with greater clarity about their obligations under Section 5 of the FTC Act.

FPF’s comments urge the Commission to adopt clear, practical rules that define individuals’ rights and companies’ responsibilities.

As a practical matter, the FTC acts as the primary U.S. privacy enforcement agency. Although FPF views a new, pragmatic, comprehensive federal privacy law as the ideal mechanism for grappling with complex technologies and data flows, clear and practical FTC rules defining unfair and deceptive practices would benefit individuals and businesses.

Understanding Extended Reality Technology & Data Flows: Privacy and Data Protection Risks and Mitigation Strategies

This post is the second in a two-part series. Click here for FPF’s XR infographic. The first post in this series focuses on the key functions that XR devices may feature, and analyzes the kinds of sensors, data types, data processing, and transfers to other parties that power these functions. 

I. Introduction

Today’s virtual (VR), mixed (MR), and augmented (AR) reality environments, collectively known as extended reality (XR), are powered by the interplay of multiple sensors, large volumes and varieties of data, and various algorithms and automated systems, such as machine learning (ML). These complex relationships enable functions like gesture-based controls and eye tracking, without which XR experiences would be less immersive or unable to function at all. However, these technologies often depend on sensitive personal information, and the collection, processing, and transfer of this data to other parties may pose privacy and data protection risks to both users and bystanders. 

This post examines the XR data flows that are featured in FPF’s infographic, and analyzes some of the data protection, privacy, and equity issues raised by the data that is processed by these devices, as well as strategies for mitigating these risks.

The key risks raised by XR data flows, and strategies for mitigating them, are discussed in the sections below.

II. Processing Large Volumes and Varieties of Sensitive Personal Data

XR technologies raise traditional privacy and data protection risks, but also implicate larger questions around surveillance, social engineering, and freedom of expression. As noted in the first blog post in this series, XR technologies require large volumes and varieties of data about the user’s body and their environment. Certain collection and use limitations may therefore be challenging or impossible to implement, since some of XR’s core functions require extensive data collection and processing. Now and in the future, XR technologies may also transfer data to other users and third parties, such as software companies, hardware manufacturers, and advertisers. While devices generally process raw sensor data on device, they may transmit raw or processed sensor data to an application and other parties for further processing to improve representations of virtual content or enable shared experiences. Although these transmissions of data may improve a user’s XR experiences, they can also create new privacy and data protection risks for users and bystanders.

Eye tracking underpins many current and future-facing use cases, such as enhanced graphics, expressive avatars, and personalized content, but it may pose privacy and data protection risks to users. This is due to eye tracking data’s sensitive nature, its potential role in significant decisions affecting users, and the unconscious nature of the behaviors from which some of this data is derived. Organizations could use data related to pupil dilation and gaze to potentially infer information—whether accurate or not—about the user, such as their sexual orientation, age, gender, race, and more. Organizations may also use this data to attempt to diagnose medical conditions, such as ADHD, autism, and schizophrenia. Despite the sensitive nature of this data, users often lack the capacity to meaningfully control its collection or use. Without proper controls, this information may be further shared with third parties. This raises the likelihood of organizations using this data to inform major decisions about a user, which could have real-world impacts on XR users.

Sensors that track a user’s bodily motions may also cause harm due to their potential to undermine anonymity. The first post in this blog series analyzed how tracking a user’s position can enable functions like mapping the user’s space to help place virtual content. But this tracking could also be a means to digitally fingerprint users and individuals, including bystanders, especially given the volume and variety of data that XR devices gather and process. At the same time, this tracking data raises the same de-identification and anonymization concerns that exist regarding similarly granular non-XR data types, such as behavioral biometrics, historical geolocation, and genetic information. Digital fingerprinting may therefore undermine individuals’ ability to maintain anonymity in XR environments. This may discourage users from fully expressing themselves or participating in certain activities due to worries about retaliation.
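To make the fingerprinting risk concrete, the toy sketch below matches an unlabeled motion trace to the closest previously enrolled user using crude per-axis statistics. All names and data here are hypothetical, and real re-identification attacks use far richer features, but the underlying principle is the same: granular tracking data can act as a behavioral signature.

```python
import math

def motion_features(trace):
    # Summarize a 3-axis motion trace as per-axis mean and variance --
    # a deliberately crude stand-in for real gait/posture features.
    n = len(trace)
    means = [sum(s[i] for s in trace) / n for i in range(3)]
    variances = [sum((s[i] - means[i]) ** 2 for s in trace) / n for i in range(3)]
    return means + variances

def identify(trace, enrolled):
    # Return the enrolled user whose feature vector is closest
    # (in Euclidean distance) to the unlabeled trace's features.
    features = motion_features(trace)
    return min(enrolled, key=lambda uid: math.dist(features, enrolled[uid]))

# Hypothetical enrollment: feature vectors computed from earlier sessions.
alice_session = [(0.1, 0.0, 0.2), (0.0, 0.1, 0.1), (0.2, 0.0, 0.0)]
bob_session = [(2.0, 1.8, 2.2), (2.1, 2.0, 1.9), (1.9, 2.2, 2.0)]
enrolled = {"alice": motion_features(alice_session),
            "bob": motion_features(bob_session)}

# A new, unlabeled trace with Alice-like motion is matched back to her profile.
new_trace = [(0.1, 0.1, 0.1), (0.0, 0.0, 0.2), (0.2, 0.1, 0.0)]
```

Even this simplistic matcher links the new trace to the right profile, which is why de-identification guarantees for motion data are so hard to sustain.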

III. Statutory Obligations 

It is unclear how well current legal protections mitigate the privacy risks posed by certain processing activities in the XR context. Whether or not bodily information like gaze and gait is covered by existing biometric regulations may depend on these laws’ definitions of biometric data. For example, under the EU’s comprehensive privacy law, the General Data Protection Regulation (GDPR), this type of data qualifies as “personal data” if it relates to an identified or identifiable person, such as a user or bystander. Thus, an organization that records, collects, assesses or uses this data in any other manner would be subject to GDPR obligations such as transparency, fairness, data minimization or storage limitation.

Pursuant to the GDPR, “biometric data” includes personal data resulting from the specific technical processing of a person’s physical, physiological, or behavioral characteristics, and which allows for identification. Organizations are subject to heightened obligations under the Regulation depending on the purpose for which they process biometric data. Specifically, the GDPR prohibits organizations from processing such personal data, unless one of the permissible grounds strictly defined by the law applies. The Regulation defines biometric data to include only that which an organization could use for identification purposes. As described in FPF’s prior blog post, however, an organization may process eye and other bodily information for non-identification purposes, such as to debug applications or improve products. This raises questions as to whether the GDPR’s protections for sensitive data categories would always apply to these XR functions. Notably, even if this eye and other bodily information does not meet the “sensitive data” criteria, the rest of the Regulation would still apply to this data. Furthermore, European ePrivacy rules may apply to a user’s system that connects to or pairs with XR equipment.

Similar lack of certainty exists in U.S. law. For example, the Illinois Biometric Information Privacy Act (BIPA) applies to information based on “scans” of hand or face geometry, retinas or irises, and voiceprints. This definition of “biometric identifiers” does not explicitly cover the collection of behavioral characteristics or eye tracking. Whereas the GDPR may still apply to an organization that processes eye and other bodily information if it is personal data or qualifies as other sensitive data categories, BIPA may not apply at all. This highlights how existing laws’ protections for biometric data may not extend to every situation involving XR technologies. However, protections may apply to other special categories of data, given XR data’s potential to draw sensitive inferences about individuals.

IV. Bystander and Environmental Data

Bystanders’ privacy can also be impacted when XR devices and third parties collect and process sensor data. Some of the privacy and data protection issues affecting bystanders mirror the privacy risks to XR users. However, unique notice challenges arise with respect to bystanders. Non-users in proximity to an XR user may be unaware that the device is collecting and processing data about them, as well as for what purposes and with whom the device is sharing this information. Like users, bystanders also cannot control the unconscious behaviors that provide the sensor data inputs for XR experiences. Even if a bystander generally understands that a device is collecting data about them, the unconscious nature of some behaviors means that bystanders may neither be aware of the behaviors nor specifically understand that a device is processing data about these behaviors.

Bystander data could facilitate both use cases that are detrimental to a non-user’s privacy and decisions that negatively affect them. Future XR technologies will likely incorporate facial characterization or analysis technologies that can allegedly sense cognitive states or infer emotions—whether accurate or not—based on sensor data. Insights from these technologies could help organizations construct a portrait of the locations a non-user frequents, their interests, and medical conditions. 

V. Strategies for Mitigating Risks

Organizations that provide XR technologies can implement a number of strategies to address the risks raised by XR data collection, use, and sharing. While no single intervention by itself mitigates all of these risks, some combination of strategies is likely to decrease risks and help minimize harms that may result. For instance, processing and storing data on a user’s device, as opposed to remotely on a processor’s server, helps ensure that the data remains in the user’s hands and not accessible to others. Organizations can also work to limit data collection, storage, and/or usage, including third-party use, to particular, specified purposes, and provide notice to or obtain consent from users if they plan to use this data for a different purpose. Companies should set policies and guidelines for third-party developers’ data practices, and monitor to ensure compliance with said policies.

Certain privacy-enhancing technologies (PETs) are useful tools for managing privacy risks. For example, advances in encryption and differential privacy can enable privacy-preserving data analysis and sharing, and the use of synthetic data sets can address concerns about data sharing or secondary data use. Another option is to provide greater user controls, allowing users to control the kinds of data collected about them—particularly sensitive data like eye tracking and facial expressions data—and with whom this data is shared. 
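As one illustration of how a PET can permit analysis while limiting disclosure, the sketch below adds Laplace noise to a simple count query, the classic epsilon-differential-privacy mechanism. The session records and the eye-tracking example are hypothetical, and a production system would use a vetted library rather than hand-rolled noise sampling.

```python
import math
import random

def laplace_noise(scale):
    # Inverse-CDF sampling from the Laplace(0, scale) distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_count(records, predicate, epsilon):
    # A counting query changes by at most 1 when one record is added
    # or removed (sensitivity 1), so adding Laplace(1/epsilon) noise
    # yields epsilon-differential privacy for the released count.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical session records: privately estimate how many sessions
# used eye tracking, without exposing any individual session.
sessions = [{"eye_tracking": True}, {"eye_tracking": False},
            {"eye_tracking": True}, {"eye_tracking": True}]
noisy_count = dp_count(sessions, lambda s: s["eye_tracking"], epsilon=1.0)
```

Smaller epsilon values add more noise and thus stronger privacy at the cost of accuracy, which is the trade-off an organization would tune for each release.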

Some organizations have designed XR devices to ensure that bystanders’ data is not unduly collected, for instance by automatically blurring bystanders’ faces or by using a system of lights on a head-mounted display to signal to non-users that the device is on and potentially collecting data.

Organizations using XR should be transparent about how they use and plan to use XR data, and should publicly commit to guidelines and/or ethical principles. This could also include something akin to an institutional review board (IRB) to ensure compliance with these principles. Finally, organizations can build privacy into their culture and processes and create bodies like oversight boards to ensure privacy protections endure beyond changes in mission and values.

VI. Conclusion

The complex web of data, sensors, algorithms and automated systems, and parties that enable important and sometimes central XR functions also can raise privacy and data protection concerns. Devices and ML models may collect and process large volumes and varieties of sensitive personal data, over which users and bystanders may lack meaningful controls, and that other parties could use to make important decisions affecting these individuals. The disclosure of this data may also undermine user anonymity, which could discourage users from freely expressing themselves due to fears of retaliation. Providing bystanders with notice that communicates that a device is collecting data about them, let alone for what purpose and to whom the data is transmitted, is challenging and may not be possible. This creates difficulties related to obtaining affirmative express consent to data processing activities in XR, where consent is predicated on the individual being informed. There is also uncertainty about how existing laws interact with XR technologies, such as how body-based data fits within existing legal definitions of biometrics. The risks to users and bystanders outlined in this post underscore the importance—and, sometimes, challenge—of ensuring appropriate safeguards exist at the technical, policy, and legal level to mitigate against harms that may arise in this space.

Brussels Privacy Convening Focuses on Empowering Vulnerable and Marginalized People, Launches New Project

The Future of Privacy Forum (FPF), a global non-profit focused on data protection and privacy, and the Brussels Privacy Hub of Vrije Universiteit Brussel (VUB) will jointly present the sixth edition of the Brussels Privacy Symposium on November 15, 2022. The in-person event will convene in Brussels, bringing together policymakers, academic researchers, civil society, and industry representatives to discuss privacy research and scholarship. 

In line with this year’s topic, “Vulnerable People, Marginalization, and Data Protection,” participants will explore the extent to which data protection and privacy law — including GDPR and other modern data protection laws like Brazil’s LGPD — safeguard and empower vulnerable and marginalized people. They will also debate how to balance the right to privacy with the need to process sensitive personal information to uncover and prevent bias and marginalization. Stakeholders will discuss whether prohibiting the processing of personal data related to vulnerable people serves as a protection mechanism. 

The event marks the launch of “VULNERA,” the International Observatory on Vulnerable People in Data Protection, led by the Brussels Privacy Hub and supported by the Future of Privacy Forum. The observatory aims to promote a mature debate on the multifaceted connotations surrounding the notions of human “vulnerability” and “marginalization” existing in the data protection and privacy domains.

“I’m excited to begin the groundbreaking and much-needed work we have ahead of us,” Gabriela Zanfir-Fortuna, FPF’s Vice President for Global Privacy, said. Zanfir-Fortuna also serves as a Scientific Coordinator on VULNERA’s executive team, which is joined by a broader scientific network of more than 30 members. “This initiative will focus on understanding how data protection and privacy law puts safeguards in place to protect the rights of vulnerable and marginalized people in societies increasingly underpinned by digital data flows.”

Professor Gianclaudio Malgieri, Co-Director of the Brussels Privacy Hub of Vrije Universiteit Brussel, added: “The VULNERA International Observatory will explore theories of vulnerability, marginalization, and intersectionality, examining how data protection law and policy apply to people in certain contexts that may be vulnerable or marginalized, such as women, children, people on a low or zero income, racialized communities, and people of color, ethnic and religious groups, migrants, LGBTQIA+ and non-binary people, the elderly, and persons with disabilities.”

Representatives from the European Network Against Racism, Dutch Human Rights Council, European Commission, Irish DPC, European Digital Rights (EDRi), European Data Protection Supervisor, and other relevant organizations will share their expertise during the Brussels Privacy Symposium. 

“As we think about the next iteration of the digital age, it’s important that we have a more global consensus on how to protect those who have been historically marginalized,” said Rob van Eijk, FPF’s Managing Director for Europe. “The timing for the launch of VULNERA, and this symposium at-large, could not have been at a more critical juncture.” 

For more information about the event, the agenda, and speakers, visit the FPF site. To learn more about VULNERA, visit the Brussels Privacy Hub site.

Event Report: FPF Side Event and Workshop on Privacy Enhancing Technologies (PETs) at the 2022 Global Privacy Assembly (GPA)

The 2022 Global Privacy Assembly (GPA) – which has brought together most global data protection authorities (DPAs) every year since 1979 to share knowledge and establish common priorities among regulators – took place between October 25 and 28 in Istanbul (Türkiye). The Future of Privacy Forum (FPF) was invited by the organizers of the GPA (the Turkish DPA) to host a two-part side event during the GPA’s Open Session (on October 25 and 26), in addition to a capacity-building workshop for regulators during the Closed Session (on October 28).

These sessions approached the topic of Privacy Enhancing Technologies from three different angles.

Below we summarize the discussions in the two FPF Side Events with regulators and privacy leaders and highlight key takeaways.

The regulators’ take: PETs are promising, but no silver bullet

Moderator Limor Schmerling Magazanik opened the first discussion by observing that regulators have a dual role regarding PETs: issuing guidance to clarify when and how PETs should be deployed in different scenarios to ensure compliance with privacy laws; and providing tailored advice to lawmakers that wish to promote the use of PETs for the pursuit of public interest tasks and the responsible use of data. 

On this note, Gilad Semama noted that PETs seem to present solutions for combining innovation in the tech sector with the protection of privacy as a Constitutional right in Israel. Semama highlighted that companies have expressed their need for certainty on how they can use PETs to achieve compliance with the privacy framework. The speaker added that it is challenging to find a one-size-fits-all solution in this respect, but that the Privacy Commissioner is working to issue flexible guidance and answer the public’s queries on PETs for the benefit of businesses and DPOs, by referring to accountability and helping them choose the most appropriate PET for specific use cases. According to Semama, PETs should be complemented with other data security solutions to provide meaningful protections. On the other hand, he noted that companies that are developing PETs in Israel need access to funding and that a recent joint project from the regulator and the Innovation Authority of Israel may be of help.

Next up, Rebecca Kelly Slaughter stressed the potential that PETs might offer in promoting competition and consumer protection, as they can represent innovation and a positive metric of competition. However, some applications of PETs can be misleading and competition-inhibiting. This means that, according to Slaughter, the value of PETs should be assessed against their concrete effects. The Commissioner stated that the FTC should mainly focus on providing guidance to assist businesses developing and implementing PETs through FTC rulemaking, instead of strict enforcement. However, the FTC will not approve broad safe harbor provisions for the use of specific PETs, as their effectiveness is generally context-specific.

Slaughter suggested PETs could enable the implementation of privacy-preserving age verification systems, although the FTC has yet to see such a solution. This would enable businesses to move away from notice- and consent-based standards for the processing of children’s data, which is one of the FTC’s current aims. According to Slaughter, consent does not adequately protect children’s online privacy, and providers should instead focus on data minimization, purpose limitation, and storage limitation.

The FTC is currently receiving comments on its proposed Commercial Surveillance and Data Security Rulemaking, which also touches on PETs. The contributions to the public consultation promise to offer a compendium of perspectives for stakeholders to tap into when developing and implementing PETs. In addition, Slaughter acknowledged that the FTC needs to collaborate with and draw inspiration from regulators in other jurisdictions, including when issuing enforcement orders. As companies roll out PETs across borders, consistent regulatory approaches will increase the likelihood of broad uptake of PETs by small and large players.

Tobias Judin followed up on Slaughter’s comments by saying that, when it comes to greenlighting PETs, DPAs should explain that companies do not need to choose between data collection and privacy, or between innovation and data protection. Judin used health research as an example, outlining that researchers often need to collect data about rare diseases across jurisdictions to make the dataset more representative, even knowing that the level of data protection is not equivalent in all targeted countries. In that context, PETs such as homomorphic encryption or differential privacy may provide reassurance to research subjects. Judin also stressed that confidential computing can mitigate security vulnerabilities that often exist when research data is stored on premises rather than in the cloud.

Judin also elaborated extensively on federated learning, which allows controllers to train models on, and check their data processing systems for bias against, larger distributed datasets without centralizing the underlying data. He stated that, with federated learning, the training of an AI model can take place within users’ devices. He gave the example of Google’s Gboard, which enabled the company to make predictions about what individuals wanted to type without the data leaving their devices.
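The idea behind examples like Gboard can be illustrated with a minimal sketch of federated averaging (FedAvg), the basic federated learning algorithm: each device computes a model update on its own data, and a central server only averages the resulting model weights. This is an illustrative toy under assumed data and parameters, not Google’s implementation.

```python
import random

def local_update(w, local_data, lr=0.1):
    # One gradient-descent step for a toy 1-D linear model y = w * x,
    # computed entirely on the device's own data.
    grad = sum(2 * (w * x - y) * x for x, y in local_data) / len(local_data)
    return w - lr * grad

def federated_round(global_w, devices):
    # Each device trains locally; the server only averages the resulting
    # model weights. Raw (x, y) pairs never leave the device.
    local_weights = [local_update(global_w, data) for data in devices]
    return sum(local_weights) / len(local_weights)

# Illustrative data: three 'devices', each holding noisy samples of y = 3x.
random.seed(0)
devices = [[(x, 3 * x + random.uniform(-0.1, 0.1)) for x in (1.0, 2.0)]
           for _ in range(3)]

w = 0.0
for _ in range(50):
    w = federated_round(w, devices)
# w now approximates the true slope (3), learned without pooling raw data
```

In a real deployment the averaged updates are themselves often protected further (e.g., with secure aggregation or added noise), since model updates can still leak information about the training data.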

Another example is how the Norwegian DPA advised banks within its regulatory sandbox for responsible AI to cooperate when training their money-laundering detection algorithms. As banks do not normally have enough ‘suspicious’ customers to train their detection algorithms, the algorithms tend to be overzealous, which leads to false positives and data protection issues. However, the DPA noted that banks could cooperate in developing a more effective algorithm without sharing raw customer data by using differential privacy, as long as they prevented model inversion attacks. The DPA also conceded that banks needed to tweak the model and the underlying training and input data as they went along to ensure the algorithm’s effectiveness, which should reassure diligent AI developers worried about the risk of fines.

Lastly, Vitelio Ruiz Bernal stressed the importance of helping businesses achieve security standards that support compliance with data protection law. In this respect, he mentioned the INAI’s data protection laboratory, which is dedicated to analyzing mobile apps and web-based applications that operate as black boxes. The INAI has found that processors which assist controllers in those contexts are often under-resourced and reluctant to use PETs due to their perceived high costs. Bernal revealed that the INAI is currently looking for public-private collaborations to develop accessible PETs and to issue guidelines on specific PETs (e.g., encryption), drawing inspiration from the work of the Berlin Group on the matter. Given Mexico’s specific legal requirements for cloud service security, Bernal mentioned that PETs could boost the uptake of cloud services by increasing trust among stakeholders.

The view of practitioners: a call for regulatory clarity and predictability

To frame the second panel of the Side Event, Jules Polonetsky reflected on the privacy community’s eagerness to learn how industry privacy leaders are integrating PETs into their compliance strategies, including both their success stories and their less successful ones. Jules also asked the panelists what actions they would like to see from regulators and policymakers in this space to promote the uptake of PETs.

Anna Zeiter revealed that eBay has had meetings with its lead DPA in Germany about how PETs could help them comply with the Court of Justice of the European Union (CJEU)’s Schrems II ruling on international data transfers, in particular on the implementation of supplemental measures in accordance with the European Data Protection Board (EDPB)’s guidance. In that context, the DPA focused on measures such as tokenization and encryption (in transit and at rest).

Zeiter highlighted the UK Information Commissioner’s Office (ICO)’s PETs guidance, and said it constituted an opportunity for other regulators to evaluate where they stand on the matter. The speaker also called for global alignment among DPAs, because companies will implement PETs across very different jurisdictions. Zeiter argued that, for companies to know whether they should invest in PETs, regulators need to give them reassurance, for example in the form of some sort of PET ‘whitelist’ for particular contexts of application. Additionally, Zeiter underlined that companies that develop and use PETs, and their DPOs, have a role in educating regulators, a point echoed by a DPA official in the room.

Emerald De Leeuw-Goggin described how Logitech offers PETs as a service to its internal teams of software developers. According to the speaker, this involved making PETs more accessible and scalable within the wider decentralised organisation, developing privacy engineering capabilities, and securing buy-in from Chief Technology Officers. De Leeuw-Goggin noted that PETs are still not mainstream enough for an SME owner to feel confident investing in and implementing them, partly due to the existing skills gap in the field. As PETs become mainstream, they will also become more understandable and usable across sectors and company sizes.

Barbara Cosgrove stated that B2B companies like Workday tend to receive questions from their customers on how best to implement PETs in their software solutions. This includes masking or pseudonymizing data, or limiting employee access to data. Sometimes, more sophisticated measures – like differential privacy – could be adequate, but companies are reluctant to invest resources in the absence of regulatory clarity, particularly on de-identification. Cosgrove agreed that businesses and regulators need to work together to develop use cases and standards that would increase legal certainty around the effective use of PETs. Co-regulatory solutions like Codes of Conduct could help companies demonstrate that PETs are used in a compliant manner.

Finally, Geff Brown highlighted how differential privacy has become usable in multiple applications, allowing providers to process aggregated telemetry data at scale for analytics. Microsoft is using the technique to improve its Natural Language Processing models, including text and speech prediction. In that context, differential privacy allows companies to demonstrate the accuracy of a model without compromising individuals’ privacy. Brown argued that tech-savvy companies need to better explain PETs to consumers and corporate customers, but that standardization efforts and favorable DPA positions can also help. In this context, Geff wished for an EDPB update to the 2014 guidance on anonymization, and for regulators to carry out PET testing and share the results with the public, thereby increasing knowledge of and trust in the technologies.
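The kind of differentially private aggregate statistic discussed on the panel can be illustrated with a minimal sketch of the Laplace mechanism, the basic building block for publishing noisy counts. This is an illustrative toy under assumed parameters, not any company’s production telemetry pipeline.

```python
import math
import random

def laplace_noise(scale):
    # Inverse-CDF sampling from a Laplace(0, scale) distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_count(true_count, epsilon):
    # A counting query has sensitivity 1 (adding or removing one user
    # changes the count by at most 1), so Laplace noise with scale
    # 1/epsilon makes the published count epsilon-differentially private.
    return true_count + laplace_noise(1.0 / epsilon)

random.seed(42)
# e.g. number of devices reporting some telemetry event, published with
# an illustrative privacy budget of epsilon = 0.5 (noise scale 2)
noisy = dp_count(10_000, epsilon=0.5)
```

The published value stays close to the true count, but the calibrated noise bounds how much any single individual’s presence or absence can shift the result, which is what allows aggregate analytics without exposing individual records.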

Call for Nominations Open: FPF’s Award for Research Data Stewardship

When companies share data with researchers in a way that protects data, the collaboration can unlock new scientific insights and drive progress in medicine, public health, education, social science, and many other fields. 

FPF is thrilled to announce the open nomination period for FPF’s 3rd Annual Award for Research Data Stewardship. The Award recognizes partnerships between companies and research institutions where a company shares data it holds in a privacy-protective manner with a researcher or research team for scholarly publication. 

One example of an extraordinary award-winning partnership that advanced scientific and medical progress through privacy-protective research data sharing is the collaboration between Stanford Medicine researchers and Empatica, a medical wearable and digital biomarker company. This award-winning collaboration studied whether data collected by Empatica’s researcher-friendly E4 device, which measures skin temperature, heart rate, and other biomarkers, could detect COVID-19 infections before the onset of symptoms.

The award is presented to the company and its academic partner based on several factors, including adherence to privacy protections in the sharing process, the quality of the data handling process, and the company’s commitment to supporting academic research.

Learn more about past award winners.

Save the Date: FPF’s Award for Data Stewardship Virtual Award Presentation (May 10, 2023)

The Award is a part of FPF’s “Corporate Data Sharing for Research: Next Steps in a Changing Legal and Policy Landscape” project to accelerate the safe and responsible sharing of administrative data between companies and academic researchers. This project is supported by the Alfred P. Sloan Foundation, a not-for-profit grantmaking institution whose mission is to enhance the welfare of all through the advancement of scientific knowledge.
