Record Set: Assessing Points of Emphasis from Public Input on the FTC’s Privacy Rulemaking

More than 1,200 law firms, advocacy organizations, trade associations, companies, researchers, and others responded to the Federal Trade Commission’s Advance Notice of Proposed Rulemaking (ANPR) on “Commercial Surveillance and Data Security.” Significantly, the ANPR initiates a process that may result in comprehensive regulation of data privacy and security in the United States, and it marks a notable change from the Commission’s historical case-by-case approach to addressing consumer data misuse. Comments received in response to the ANPR will generate a public record that informs the Commission’s decision on whether to pursue one or more draft rules, and that record will be generally available for any policymaker to draw on in future legislative proposals. The Future of Privacy Forum’s comment is available here.

The Future of Privacy Forum analyzed a sample of 70 comments (excluding our own) from stakeholders representing a range of sectors and perspectives, identifying common themes and areas of divergence. Below is a summary of key takeaways.

1. Areas of Agreement

a. Data Minimization

Many submissions encouraged the Commission to create a rule or standard requiring that companies engage in some form of data minimization. Data minimization is a foundational data protection principle, appearing in the Fair Information Practice Principles (FIPPs) and required by the European Union’s General Data Protection Regulation (GDPR) and other international regulations. The European Data Protection Supervisor (EDPS) emphasized that an FTC data minimization rule would help harmonize data protection standards between the European Union (EU) and the United States (U.S.) and would codify the data protection best practices established by the Commission’s history of enforcement. Several comments focused on the ability of data minimization to create “market wide” incentives that could disrupt an environment that may provide competitive advantages to organizations that are not responsible data stewards, while Palantir noted that, unlike the exercise of data subject rights, data minimization requires no extra action from users.

A small group of responses noted that data minimization has implications for machine learning (ML) and the development of artificial intelligence (AI) systems. While such systems must be trained on vast quantities of data, commenters noted that it is equally important that such data be high quality. Palantir emphasized that data minimization, insofar as it requires the deletion of out-of-date or otherwise flawed data, would support this goal. EPIC noted that data minimization requirements would help ensure that businesses’ use of personal data is aligned with consumer expectations, observing that “[c]onsumers reasonably expect that when they interact with a business online, that business will collect and use their personal data for the limited purpose and duration necessary to provide the goods or services they have requested,” and not retain their data beyond that duration or for other uses. Finally, Google, the Wikimedia Foundation, and other commenters emphasized that a data minimization rule would support data security objectives as well: if companies retain less personal data, data breaches, when they do occur, will be less harmful to consumers.

b. Data Security

There was also broad, though not uniform, consensus around support for a data security rule requiring businesses to implement reasonable data security programs. Many commenters noted that data security incidents are a common occurrence, are not reasonably avoidable by consumers, and pose grave risks to individuals, including identity theft. The EDPS underscored the role of data security in protecting core rights and freedoms under EU law and the GDPR, and recommended that the Commission require organizations to conduct data protection impact assessments, implement data protection by design and default, and use encryption and pseudonymization to protect personal data.

The Computer & Communications Industry Association (CCIA) observed that any data security rulemaking should be harmonized with standards established by the Cybersecurity and Infrastructure Security Agency (CISA) and the National Institute of Standards and Technology (NIST). In the same vein, the Software Alliance (BSA) “encourage[d] the agency…to recognize that the best way to strengthen security practices across industry sectors is by connecting any rule on data security to existing and proposed regulations, standards, and frameworks.” The BSA also asked that the Commission recognize the “shared responsibilities of companies and their service providers” in protecting consumer data and create rules that reflect this dual-responsibility framework, as well as the different relationships that companies and service providers have with consumers. Some commenters discussed how the Commission’s rulemaking should interact with other agencies. For example, the Center for Democracy and Technology (CDT) emphasized that reasonable data security requirements should extend to the government services context, including to government-run educational settings, as well as to non-financial identity verification services.

2. Areas of Contention

a. The Commission’s Authority

By far, the most common disagreement among commenters was whether, and to what extent, the Commission could promulgate data privacy and security regulations under its statutory authority. Most commenters took a moderate approach, believing that the Commission has some limited rulemaking authority in this space. Commenters provided a variety of bases for where the Commission could and could not create rules, and why. Some entities believed that the Commission could only address practices that clearly demonstrate consumer harm, while others encouraged the Commission to focus on FTC enforcement actions that have already survived judicial scrutiny. Google noted that the “FTC rests on solid ground” for a data security rule given the Third Circuit’s decision in FTC v. Wyndham Worldwide Corp., which affirmed the agency’s authority to regulate data security as an unfair trade practice. While many commenters also argued that the Commission could only create rules that would not overlap with other regulatory jurisdictions, some advocates believed that the FTC can use its authority to regulate unfair and deceptive practices like discrimination even when other agencies have concurrent jurisdiction. The Lawyers’ Committee for Civil Rights Under Law noted the FTC’s extensive experience sharing concurrent jurisdiction with other agencies, where “[i]t works on ECOA with the Consumer Financial Protection Bureau, on antitrust with the Department of Justice, and on robocalls with the Federal Communications Commission.”

However, comments also existed at both ends of the spectrum with regard to the FTC’s authority to act. Some commenters, largely from civil society organizations, civil rights groups, and academia, argued that the Commission has substantial authority to address data privacy and security where data practices meet the statutory requirements for being “unfair or deceptive.” These commenters reasoned that Congress intended the Commission’s Section 5 authority to retain the flexibility to address evolving commercial practices, and that it can therefore readily be applied to data-driven activities that cause unavoidable and substantial injury to consumers.

Other commenters, largely comprising trade associations, business communities, and some policymakers, questioned the Commission’s authority to conduct this rulemaking under the FTC Act. While some, like the Developers Alliance, argued that most data collection practices do not constitute “unfair or deceptive” trade practices, the majority of commenters in opposition argued that the Commission lacks the authority to conduct this type of rulemaking because the regulation of data privacy and security is a “major question” best addressed by Congress. These comments focused on the Supreme Court’s 2022 ruling in West Virginia v. EPA, holding that regulatory agencies, absent clear congressional authorization, cannot issue rules on major questions that affect a large portion of the American economy. Several Republican U.S. Senators noted that “simply stating within the ANPR that within Section 18 of the FTC Act, Congress authorized the Commission to propose a rule defining unfair or deceptive acts or practices…is hardly the clear Congressional authorization necessary to contemplate an agency rule that could regulate more than 10% of U.S. GDP ($2.1 trillion) and impact millions of U.S. consumers (if not the entire world).” The lawmakers further argued that even if the Commission could show clear Congressional authorization, the rulemaking would likely violate the FTC Act because the ANPR failed to describe the area of inquiry under consideration with the mandated level of specificity.

b. “Commercial Surveillance”

The Commission’s framing of the ANPR around “commercial surveillance” was another area that generated controversy. The ANPR defines “commercial surveillance” as “the collection, aggregation, analysis, retention, transfer, or monetization of consumer data and the direct derivatives of that information.”

Several comments supported the Commission’s framing and detailed the multitude of ways in which businesses track private individuals over time and space. The Electronic Privacy Information Center (EPIC) stated, “[t]he ability to monitor, profile, and target consumers at a mass scale have created a persistent power imbalance that robs individuals of their autonomy and privacy, stifles competition, and undermines democratic systems.” The most common examples of practices considered “commercial surveillance” by commenters included: targeted advertising, facial recognition, pervasive tracking of people across services and websites, unlimited sharing and sale of consumer information, and secondary uses of consumer information. 

On the other side, commenters argued that the term “commercial surveillance” was both unfair and overly broad. Trade associations like the Information Technology Industry Council argued that the term attaches a negative connotation to any commercial activity that collects or processes data, even the many legitimate, necessary, and beneficial uses of data that make products and services work for users. Many comments emphasized the crucial role of consumer data in our society and how it has been used to fuel social research and innovation, including telehealth, studies of COVID-19 vaccine efficacy, the development of assistive AI for disabled individuals, identification of bias and discrimination in school programs, and ad-supported services like newspapers, magazines, and television.

3. Other Notable Focuses

a. Automated Decision-Making and Civil Rights

A large contingent of advocacy organizations documented how automated decision-making systems can exacerbate discrimination against marginalized groups. Organizations including the National Urban League, Next Century Cities, CDT, Upturn, the Lawyers’ Committee for Civil Rights Under Law, and EPIC provided illustrative examples of discriminatory outcomes in housing, credit, employment, insurance, healthcare, and other areas brought about by algorithmic decision-making.

Industry groups including the National Association of Mutual Insurance Companies argued that discrimination concerns are best addressed through Congressional action, given that the FTC Act does not mention discrimination and does not answer the foundational legal question of “whether it is a regime of disparate treatment or disparate impact.” Many advocacy groups disputed this assertion and argued for the necessity of addressing algorithmic discrimination in a rule because of gaps in existing civil rights law and because of the Commission’s history of exercising concurrent jurisdiction. For example, Upturn highlighted three major gaps, noting that current law leaves large categories of companies, such as hiring screening technology firms, uncovered; fails to address modern-day harms such as discrimination by voice assistants; and does not require affirmative steps to measure and address algorithmic discrimination.

Commenters made a variety of suggestions about how the Commission could address these problems in a rule, including through data minimization (National Urban League), greater transparency (CDT), declaring the use of facial recognition technology an unfair practice in certain settings (Lawyers’ Committee, EPIC), and implementing principles in the Biden administration’s Blueprint for an AI Bill of Rights (Upturn, Lawyers’ Committee). Google emphasized that any rulemaking on AI should be risk-based and process-based and promote transparency, adding that “a process-based approach with possible safe harbor provisions could encourage companies to continually audit their systems for fairness without fear that looking too closely at their systems could expose them to legal liability.”

b. Health Data and Other Considerations in Light of Dobbs

Another strong thread throughout the comments was concern about the privacy and integrity of health data, particularly in light of the Supreme Court’s 2022 decision in Dobbs v. Jackson Women’s Health Organization. Comments from Planned Parenthood, CDT, the American College of Obstetricians and Gynecologists (ACOG), and California Attorney General (AG) Rob Bonta all emphasized the impact of the Dobbs decision, which allows states to criminalize the acts of seeking and providing abortion services. For example, ACOG cited a Brookings Institution article demonstrating the extent to which user data such as geolocation data, app data, web search data, and communications and payments data can be used to make sensitive health inferences.

Responding to concerns about the risk of misuse of geolocation data specifically, Planned Parenthood called upon the Commission to write tailored regulations requiring that the retention of location data be time-bound and linked to a direct consumer request. The Duke/Stanford Cyber Policy Program emphasized that the Commission should seek to establish comprehensive regulations to govern data brokers, and that, “[i]n some cases, the policy response should include restrictions or outright bans on the sale of certain categories of information, such as GPS, location, and health data.” AG Bonta recommended that, “[t]he Commission…prohibit [the] collection, retention or use of particularly sensitive geolocation data, including…data showing that a user has visited reproductive health and fertility clinics.”

Many comments addressed questions around sensitive health-related data that is not otherwise protected by the Health Insurance Portability and Accountability Act (HIPAA). The College of Healthcare Information Management Executives (CHIME) emphasized that many consumers do not understand the scope or scale of the use of their sensitive health data, including data collected by fitness and “femtech” apps. The American Clinical Laboratory Association (ACLA), meanwhile, emphasized that the Commission should not subject entities already subject to HIPAA to new requirements, and argued that de-identified data should be exempt from privacy and security protections. Finally, algorithmic discrimination in the healthcare context was a focus area for several commenters.

c. Children’s Data

Finally, many commenters also weighed in on the particular vulnerability of children online. The Software & Information Industry Association (SIIA), for example, recognized that children deserve unique consideration, but argued that FTC rulemaking on child and student privacy would be duplicative of existing Commission efforts to update COPPA rules, as well as existing education privacy statutory provisions at the federal and state levels. Others suggested that a Commission rule could and should address child safety. Some of their most pressing concerns included:

What’s Next?

The ANPR is merely one step in a lengthy and arduous rulemaking process. Should the Commission decide to move forward with rulemaking after reviewing the public record, it will need to notify Congress, facilitate another public comment process on a proposed rule, conduct informal hearings, and survive judicial review. Regardless of the outcome, the ANPR comment period has provided an ample public record to inform any policymaker about the current digital landscape, the most pressing concerns faced by consumers, and the frameworks used by companies and other jurisdictions to mitigate privacy and security risks.

The authors would like to acknowledge FPF intern Mercedes Subhani for her significant contributions to this analysis.

FPF Provides Input on Draft Colorado Privacy Act Regulations

On September 30th, the Colorado Department of Law released draft regulations to implement the Colorado Privacy Act. The Future of Privacy Forum (FPF) filed written comments in response to the proposed rules on November 7th. In addition, FPF’s Keir Lamont and Felicity Slater participated in public stakeholder sessions hosted by the Colorado Attorney General’s Office on November 10th and November 17th as part of its regulatory process.

FPF’s comments identified and provided recommendations to address areas of potential ambiguity in the draft regulations. Specifically, FPF’s contributions encouraged the Department of Law to:

  1. Provide additional clarity for the exercise of consumer rights through Universal Opt-Out Mechanisms (UOOMs), including standards for residency authentication and the procedures by which “opt-out lists” may function as UOOMs.
  2. Resolve apparent inconsistencies in the draft regulations for the use of on-by-default UOOMs.
  3. Clarify the intended scope of restrictions on “Dark Patterns” in consumer interfaces.
  4. Assess the scope and intended effect of the regulations’ novel definitions for “biometric data” and “biometric identifiers.”
  5. Align the protections of children’s privacy and relevant definitions with the Children’s Online Privacy Protection Act (COPPA).

The comment docket for the Colorado Privacy Act draft regulations remains open. Interested parties also have an opportunity to participate in a rulemaking hearing scheduled for February 1, 2023.

FPF at IAPP’s Europe Data Protection Congress 2022: Global State of Play, Automated Decision-Making, and US Privacy Developments

Authored by Christina Michelakaki, FPF Intern for Global Policy

On November 16 and 17, 2022, the IAPP hosted the Europe Data Protection Congress 2022 – Europe’s largest annual gathering of data protection experts. During the Congress, members of the Future of Privacy Forum (FPF) team moderated and spoke at three different panels. Additionally, on November 14, FPF hosted the first Women@Privacy awards ceremony at its Brussels office, and on November 15, FPF co-hosted the sixth edition of its annual Brussels Privacy Symposium with the Vrije Universiteit Brussel (VUB)’s Brussels Privacy Hub on the issue of “Vulnerable People, Marginalization, and Data Protection” (event report forthcoming in 2023).

In the first panel for IAPP’s Europe Data Protection Congress, Global Privacy State of Play, Gabriela Zanfir-Fortuna (VP for Global Privacy, Future of Privacy Forum) moderated a conversation on key global trends in data protection and privacy regulation in jurisdictions from Latin America, Asia, and Africa. Linda Bonyo (CEO, Lawyers Hub Africa), Annabel Lee (Director, Digital Policy (APJ) and ASEAN Affairs, Amazon Web Services), and Rafael Zanatta (Director, Data Privacy Brasil Research Association) participated. 

In the second panel, Automated Decision-making and Profiling: Lessons from Court and DPA Decisions, Sebastião Barros Vale (EU Privacy Counsel, Future of Privacy Forum) led a discussion on FPF’s ADM case-law report and impactful cases and relevant concepts for automated decision-making regulation under the GDPR. Ruth Boardman (Partner, Co-head, International Data Protection Practice, Bird & Bird), Simon Hania (DPO, Uber), and Gintare Pazereckaite (Legal Officer, EDPB) participated.

Finally, in the third panel, Perspectives on the Latest US Privacy Developments, Keir Lamont (Senior Counsel, Future of Privacy Forum) participated in a conversation focused on data protection developments at the federal and state level in the United States. Cobun Zweifel-Keegan (Managing Director, D.C., IAPP) moderated the panel, and Maneesha Mithal (Partner, Privacy and Cybersecurity, Wilson Sonsini Goodrich & Rosati) and Dominique Shelton Leipzig (Partner, Cybersecurity & Data Privacy; Leader, Global Data Innovation & AdTech, Mayer Brown) also participated.

Below is a summary of the discussions in each of the three panels:

1. Global trends and legislative initiatives around the world

In the first panel, Global Privacy State of Play, Gabriela Zanfir-Fortuna stressed that although EU and US developments in privacy and data protection are in the spotlight, the explosion of regulatory action in other regions of the world is very interesting and deserves more attention.

Linda Bonyo touched upon the current movement in Africa, where countries are adopting their own data protection laws, primarily inspired by the European model of data protection regulation, because they trust that the GDPR is a global standard and lack the resources to draft policies from scratch. Bonyo added that the lack of resources and limited expertise are the main reasons why African countries struggle to establish independent Data Protection Authorities (DPAs). She then stressed that the Covid-19 pandemic revived discussions about a continental legal framework to address data flows. Regarding enforcement, she noted that the African approach is more “preventative” than “punitive.” Bonyo also underlined that it is common for big tech companies to operate outside of the continent and have only a small subsidiary in the African region, rendering local and regional regulatory action less impactful than in other regions.

Annabel Lee offered her view on the very dynamic Asia-Pacific region, noting that the latest trends, especially post-GDPR, include not only the introduction of new GDPR-like laws but also the revision of existing ones. Lee noted, however, that the GDPR is a very complex piece of legislation to “copy,” especially if a country is building its first data protection regime. She then focused on specific jurisdictions, noting that South Korea has overhauled its originally fragmented framework with a more comprehensive one and that Australia will implement a broad extraterritorial element in its revised law. Lee then stated that when it comes to implementation and interpretation, data protection regimes in the region differ significantly, and countries try to promote harmonization through mutual recognition. With regard to enforcement, she stressed that it is common to see occasional audits and that in certain countries, such as Japan, there is a very strong culture of compliance. She added that education can play a key role in working towards harmonized rules and enforcement. Lee offered Singapore as an example, where the Personal Data Protection Commission gives companies explanations not only on why they are in breach but also on why they are not in breach.

Rafael Zanatta explained that after years of strenuous discussions, Brazil has an approved data protection law (the LGPD) that has already been in place for a couple of years. The new DPA created by the LGPD will likely ramp up its enforcement duties next year and has so far focused on building experimental techniques (to help incentivize associations and private actors to cooperate) and publishing guidelines, namely non-binding rules that will guide future interpretation of cases. Zanatta stressed that Brazil has been experiencing the formalization of autonomous data protection rights, with supreme court rulings stating that data protection is a fundamental right distinct from privacy. He underscored that it will be interesting to see how the private sector applies data protection rights given their horizontal effect and the development of concepts like positive obligations and the collective dimension of rights. He explained that the extraterritorial applicability of Brazil’s law is very similar to the GDPR’s, since companies do not need to operate in Brazil for the law to apply. He also touched upon the influence of Mercosur, a South American trade bloc, in discussions around data protection, and the collective rights of the indigenous people of Bolivia in light of the processing of their biometric data. With regard to enforcement, he explained that in Brazil it is happening primarily through the courts due to Brazil’s unique system in which federal prosecutors and public defenders can file class actions.


2. Looking beyond case law on automated decision-making

In the second panel, Automated Decision-making and Profiling: Lessons from Court and DPA Decisions, Sebastião Barros Vale offered an overview of FPF’s ADM Report, noting that it contains analyses of more than 70 DPA decisions and court rulings concerning the application of Article 22 and other related GDPR provisions. He also briefly summarized the Report’s main conclusions. One of the main points he highlighted is that the GDPR covers automated decision-making (ADM) comprehensively beyond Article 22, including through the application of overarching principles like fairness and transparency, rules on lawful grounds for processing, and requirements to carry out Data Protection Impact Assessments (DPIAs).

Ruth Boardman underlined that the FPF Report reveals the areas of the law that are still “foggy” regarding ADM. Boardman also offered her view on the Portuguese DPA decision concerning a university using proctoring software to monitor students’ behavior during exams and detect fraudulent acts. The Portuguese DPA ruled that the Article 22 prohibition applied, given that the human involvement of professors in the decisions to investigate instances of fraud and invalidate exams was not meaningful. Boardman further explained that this case, along with the Italian DPA’s Foodinho case, shows that the human in the loop must have meaningful involvement in the decision-making process for Article 22 GDPR to be inapplicable. She added that internal guidelines and training provided by the controller may not be definitive factors but can serve as strong indicators of meaningful human involvement. Regarding the concept of “legal or similarly significant effects,” another condition for the application of Article 22 GDPR, Boardman noted the link between such effects and contract law: for example, under national laws transposing the e-Commerce Directive in which adding a product to a virtual basket counts as an offer to the merchant and not as a binding contract, no legal effects are triggered. She also added that meaningful information about the logic behind ADM should include the consequences that data subjects can suffer, and referred to an enforcement notice from the UK’s Information Commissioner’s Office concerning the creation of profiles for direct marketing purposes.

Simon Hania argued that the FPF Report showed the robustness of the EDPB guidelines on ADM and that ADM triggers GDPR provisions relevant to fairness and transparency. With regard to the “human in the loop” concept, Hania claimed that it is important to involve multiple humans and ensure that they are properly trained to avoid biased decisions. He then elaborated on a case concerning Uber’s algorithms that match drivers with clients, where Uber drivers requested access to data to assess whether the matching process was fair. For the Amsterdam District Court, the drivers did not demonstrate how the matching process could have legal or similarly significant effects on them, which meant that the drivers did not have the enhanced access rights that would only apply if ADM covered by Article 22 GDPR was at stake. However, when ruling on an algorithm used by another ride-hailing company (Ola) to calculate fare deductions based on drivers’ performance, the same Court found that the ADM at issue had significant effects on drivers. For Hania, a closer inspection of the two cases reveals that both ADM schemes affect drivers’ ability to earn or lose remuneration, which highlights the importance of financial impacts when assessing the effects of ADM under Article 22. He also touched on a decision from the Austrian DPA concerning a company that scored individuals on the likelihood that they belonged to certain demographic groups, in which the DPA required the company to inform individuals about how it calculated their individual scores. For Hania, the case shows that controllers need to explain the reasons behind their automated decisions, regardless of whether they are covered by Article 22 GDPR, to comply with the fairness and transparency principles of Article 5 GDPR.

Gintare Pazereckaite noted that the FPF Report is particularly helpful in understanding inconsistencies in how DPAs apply Article 22 GDPR. She then stressed that the interpretation of “solely automated processing” should be done in light of protecting and safeguarding data subjects’ fundamental rights. Pazereckaite also referred to the criteria set out by the EDPB guidelines that clarify the concept of “legal and similarly significant effects.” She added that data protection principles such as accountability and data protection by design play an important role in allowing data subjects to understand how ADM works and what consequences it may bring about. Lastly, Pazereckaite commented on Article 5 of the proposed AI Act – which contains a list of prohibited AI practices – and its importance when an algorithm does not trigger Article 22 GDPR.


3. ADPPA and state laws reshaping the US data protection regime

In the last panel, Perspectives on the Latest US Privacy Developments, Keir Lamont offered an overview of recent US Congressional efforts to enact the American Data Privacy and Protection Act (ADPPA) and outstanding areas of disagreement. In his view, the bill would introduce stronger rights and protections than those set forth in existing state-level laws, including a broad scope, strong data minimization provisions, limitations on advertising practices, enhanced privacy-by-design requirements, algorithmic impact assessments, and a private right of action. In contrast, existing state laws typically adhere to the outdated opt-in/opt-out paradigm for establishing individual privacy rights.

Maneesha Mithal explained that in the absence of comprehensive federal privacy legislation, the Federal Trade Commission (FTC) has largely taken on the role of a DPA by virtue of having jurisdiction over a broad range of sectors in the economy and acting as both an enforcement and a rulemaking agency. Mithal explained that the FTC enforces four existing privacy laws in the US and can also take action against both unfair and deceptive trade practices. For example, the FTC can act against deceptive statements (irrespective of whether they appear in a privacy policy or in user interfaces), material omissions (for example, the FTC concluded that a company did not inform its clients that it was collecting second-by-second television data and sharing it further), and unfair practices in the data security area. Mithal pointed out that because the FTC does not have the authority to seek civil penalties for first-time violations, it is trying to introduce additional deterrents by naming individuals (for example, in the case of an alcohol provider, the FTC named the CEO for failing to prioritize security) and is using its power to obtain injunctive relief. For example, in a case where a company was unlawfully using facial recognition systems, the FTC ordered the company to delete any models or algorithms developed using that data, thus applying a “fruit of the poisonous tree” theory. Mithal also noted that although the FTC has historically not been active as a rulemaking authority due to procedural issues, along with a lack of resources and time considerations, it is initiating a major rulemaking on “Commercial Surveillance and Lax Data Security Practices.”

Finally, Dominique Shelton Leipzig offered remarks on state-level legislation, focusing on the California Consumer Privacy Act (CCPA) as amended by the California Privacy Rights Act (CPRA), and adding that Colorado, Connecticut, Utah, and Virginia have similar laws. She elaborated on the CPRA’s contractual language, comparing California’s categorization of “Businesses,” “Contractors,” “Third Parties,” and “Service Providers” to the GDPR’s distinction between controllers and processors. Shelton Leipzig also explained that the CPRA introduced a highly disruptive model for the ad tech industry, since consumers can opt out of both the sale and the sharing of data. The CPRA also created a new independent rulemaking and enforcement agency, the first in the US focused solely on data protection and privacy. Finally, she addressed the recently enacted California Age-Appropriate Design Code Act, which focuses on the design of internet tools, and stressed that companies are struggling to implement it.



Five Big Questions (and Zero Predictions) for the U.S. State Privacy Landscape in 2023

Entering 2023, the United States remains one of the only global economic powers that lacks a comprehensive national framework governing the collection and use of consumer data throughout the economy. Congress made unprecedented progress toward enacting baseline privacy legislation in 2022. However, the apparent impasse in the efforts to move H.R. 8152, the American Data Privacy and Protection Act (“ADPPA”), over the finish line is likely to re-center the states as the locus of continued legislative activity on consumer privacy. Stakeholders are eager to learn which (if any) states will establish new privacy rights and protections in the coming year, but it remains too early in the legislative cycle to make predictions with any confidence. Instead, this post explores five big questions about the state privacy landscape that will determine whether 2023 emerges as a pivotal year for the protection of consumer data in the United States.

1. Will any state raise the bar for comprehensive privacy protections?

In four of the past five years, a new high-water mark for American privacy protections has been set through the enactment of comprehensive legislation at the state level. In 2018, the California Consumer Privacy Act (CCPA) emerged as the nation’s first comprehensive consumer privacy law. The 2020 California Privacy Rights Act (CPRA) ballot initiative expanded California’s privacy regime, establishing heightened protections for certain sensitive personal information and providing a right to correct inaccurate data. In 2021, Virginia (VCDPA) and Colorado (CPA) enacted laws that are notable for creating ‘opt-in’ affirmative consent requirements in addition to California-style ‘opt-out’ privacy rights. Finally, in 2022, Connecticut (CTDPA) adopted a privacy law that improved upon prior models by creating clear protections for facial recognition data and an explicit right to revoke consent.

Will any state continue this trend by enacting a privacy law that establishes new or stronger privacy rights and protections for its citizens in the coming year? As industry groups become increasingly insistent about the dangers of a ‘patchwork’ of divergent state privacy laws raising compliance costs for businesses, it is possible that policymakers will be reluctant to explore new approaches to privacy protection and will instead advance legislation that ‘paints inside the lines’ of the five established laws. 

In considering the forthcoming state privacy landscape, one of the best places to start is with the jurisdictions that came closest to adopting new privacy laws over the past year. In 2022, five states saw privacy legislation clear one chamber of their legislature: Florida (HB 9), Indiana (SB 358), Iowa (HF 2506), Oklahoma (HB 2969), and Wisconsin (AB 957). Of these, the Midwestern proposals (Indiana, Iowa, and Wisconsin) would not have meaningfully expanded privacy rights, protections, or compliance obligations beyond what is already on the books in other states (though they would have established some important privacy rights and protections for their residents).

Alternatively, last year’s bills from Oklahoma and Florida would have significantly reshaped privacy compliance programs for covered entities. Oklahoma’s Computer Data Privacy Act included more rigorous consent requirements than any comparable state or national law, while Florida’s proposal would have required companies to adhere to strict data retention schedules and provided for enforcement mechanisms that are absent in other state laws, including a private right of action. However, there are reasons to suspect the window of opportunity for each bill may have closed. In Oklahoma, the bill’s most prominent backer, Democratic Rep. Collin Walke, has retired. As for Florida, reports indicate that the bill’s sponsor believes that leadership changes make it unlikely that the state will prioritize privacy legislation in the coming years.

Although no state appears to have an existing privacy framework with a demonstrated record of support at the ready to move next year, history has also shown that, under the right political conditions, novel privacy legislation can rapidly advance in a single legislative session. Potential candidates for similar progress in 2023 include Oregon, where an Attorney General-led multi-stakeholder task force has spent months gearing up to advance a comprehensive consumer privacy bill next legislative cycle. In New York, a set of end-of-session amendments to the 2022 version of the New York Privacy Act (S6701) brought the proposal structurally closer to existing privacy laws, suggesting this legislation could see renewed momentum in the coming year. It is also worth remembering that despite privacy’s emergence as a bipartisan issue, the only states to enact comprehensive privacy legislation to date have had the same party in power in both legislative chambers and the governor’s mansion. It will therefore be worth watching four states that have previously considered privacy legislation and emerged from the November elections with newly formed Democratic Party trifectas in government: Maryland (SB 11), Massachusetts (H 4514), Michigan (SB 1182 & HB 5989), and Minnesota (HF 1492 (2021)).

2. Will there be an ‘ADPPA Effect’?

In 2022, the American Data Privacy and Protection Act (ADPPA) advanced through the House Energy and Commerce Committee by an overwhelmingly bipartisan 53-2 vote. ADPPA appears unlikely to be enacted this Congress as the bill’s backers were unable to secure the support of either Senate Commerce Chair Cantwell or outgoing Speaker Pelosi. Nevertheless, the introduction, enthusiasm, and momentum behind ADPPA represented a seismic event for the U.S. privacy landscape and may exert significant influence on state lawmakers in the coming years.

There are two (potentially competing) theories for how ADPPA’s emergence may impact state governments considering privacy legislation. First, in introducing state privacy legislation, lawmakers have routinely asserted that they are acting in the absence of Congressional action and that they would prefer to see a unified, federal approach to the protection of consumer privacy. As a result, demonstrated bipartisan cooperation on ADPPA and the potential for further progress in the next Congress may make consumer privacy a less salient issue in state legislatures.

On the other hand, it is also possible that ADPPA will substantially drive the content of privacy bills that will be considered in 2023. The majority of state privacy proposals considered in recent years have been modeled on either the California or Washington Privacy Act legislative frameworks, both of which are rooted in the traditional, narrow privacy paradigm of ‘notice and choice.’ However, ADPPA’s framework is significantly stronger and broader than any enacted state law in rights and protections, scope, and enforcement mechanisms. For example, ADPPA would broadly cover businesses and nonprofits, establish strict data minimization requirements, create new civil rights protections, and provide for enforcement by a private right of action. The prominence of ADPPA and its record of bipartisan support make it a potential third model for state privacy legislation. There is already legislation in Michigan (SB 1182) that contains shades of ADPPA in its formulation of a private right of action. What, if any, additional language or concepts from ADPPA will gain traction at the state level?

3. Have we entered the Age of the Age-Appropriate Design Code?

While ‘comprehensive’ privacy laws and proposals continue to capture the bulk of the privacy commentariat’s attention, it is likely that the most significant U.S. consumer privacy development in 2022 was not ‘comprehensive,’ but ‘sectoral’ in nature. On September 15, California Governor Newsom signed the Age-Appropriate Design Code (AB 2273) into law. The AADC is a far-reaching children’s online safety, design, and privacy statute that is loosely modeled on an existing UK code of practice. Come 2024, the AADC will govern online services likely to be accessed by Californian users under 18 years of age and create significant new obligations. Notably, the law could also run contrary to traditional privacy interests and priorities, as it contains age-estimation requirements that will likely cause many companies to collect additional personal information on all their users. California’s AADC has been divisive – lauded by some and criticized as unworkable or unconstitutional by others. But most careful readers agree that the statute leaves many key terms undefined or vague; future rulemaking or other work to bring clarity is likely.

The ‘California effect,’ where regulatory activity in California catalyzes similar action elsewhere, is well documented in the privacy context. A key question for consumer privacy in the coming years is therefore whether other states will follow California’s lead and begin to enact their own age-appropriate design laws. Supporters of the AADC certainly intend for it to serve as a model for adoption in additional jurisdictions. However, as with breach notification statutes and comprehensive privacy laws, should other states consider and enact age-appropriate design legislation, there is no guarantee that they will follow neatly in the footsteps of California.

One AADC-style proposal that has already been introduced, the New York Child Data Privacy and Protection Act (S9563), would impose significant new obligations beyond California’s AADC. Perhaps most notably, S9563 would severely constrain product development by requiring that a risk assessment be completed for any new online feature of a service targeted toward children, and be reviewed and approved by the Attorney General’s Office, before the feature can be made available to the public. The California AADC also contains a broad grant of rulemaking authority, meaning that even if other states adopt identical laws, the contours of the AADC’s rights and responsibilities may continue to shift over the coming years. In sum, age-appropriate design legislation has the potential to dramatically alter online experiences for all users in the coming years; however, the ultimate impact of such frameworks is likely to come into greater focus over the coming months.

4. Will state legislatures prioritize protections for health and location data?

In June, the Supreme Court’s decision in Dobbs v. Jackson Women’s Health overturned decades of precedent to hold that the U.S. Constitution does not confer a right to receive an abortion. Following this decision, dozens of states took rapid action to either criminalize or shore up protections for receiving or providing reproductive health services. For example, California enacted AB-1242, which seeks to prohibit electronic communications providers from complying with out-of-state law enforcement inquiries relating to the investigation or enforcement of laws prohibiting abortion. However, there are indications that come 2023, some Democratic state lawmakers will pursue a new legislative response by regulating the collection, processing, and transfer of health and location data by businesses.

In New York, SB 9599 would impose strict consent requirements on companies that collect or sell personal health information for data processing, geofencing, or data brokering. The Washington State Attorney General’s office has announced that it will support similar legislation, the Consumer Health Data Privacy Act, in the coming year. Stakeholders will be watching closely to learn whether these legislative efforts converge around a shared approach to key definitions, rights, and business obligations, or move forward with diverging health privacy frameworks.

5. How effective will the laws taking effect be?

No matter what happens in state legislatures this year, 2023 will hold the distinction of being the year in which the new era of state privacy laws takes effect. On January 1st, California’s revised regime and Virginia’s law will become operational, followed by the Colorado and Connecticut statutes on July 1st, with Utah’s statute bringing up the rear with a December 31st effective date. In the impending shift from theory to practice, how will public and policymaker perceptions of these various laws change?

While privacy professionals have spent years debating and preparing for these impending state laws, 2023 will mark the first time that many U.S. consumers will be legally entitled to exercise new privacy rights over the businesses that collect and share their personal information. Depending on the public perception (both immediate and over time) of these new state privacy laws, legislative efforts in other jurisdictions could be impacted in a variety of ways. For example, successful rollouts of the new state laws could prompt lawmakers in other jurisdictions to move forward on similar bills, seizing upon a popular issue. On the other hand, if the new laws kick off with a whimper, lawmaker appetite to take up consumer privacy issues might wane. If these laws take effect and consumers face difficulty in exercising their rights (as Consumer Reports argues occurred following the enactment of the CCPA), perhaps lawmakers will consider statutes with stronger enforcement mechanisms and larger penalties in order to compel compliance. Alternatively, lawmakers may also consider establishing longer ‘on-ramps’ to compliance (particularly for small businesses) or seek to draft more explicit, self-executing statutory obligations.

Conclusion

This commentary has noted several privacy proposals already under serious consideration for the 2023 legislative calendar (particularly in New York, where many bills have already been introduced). These bills and efforts should be regarded as only the narrow, visible tip of the iceberg; lawmakers and stakeholders across the country are likely already at work on new proposals that will not be officially introduced until legislative sessions formally convene. This article has posed many questions but can offer only one clear forecast: a turbulent and exciting year in the efforts to advance and secure new consumer data privacy rights and protections is on the horizon. Be sure to follow the Future of Privacy Forum for help tracking emerging trends and key developments throughout the year.

FPF Releases Comparative Analysis of California and U.K. Age-Appropriate Design Codes

The Future of Privacy Forum (FPF) today released a new policy brief comparing the California Age-Appropriate Design Code Act (AADC), a first-of-its-kind privacy-by-design law in the United States, and the United Kingdom’s Age-Appropriate Design Code. While there are distinctions between the two codes, the California AADC, which is set to become enforceable on July 1, 2024, was modeled after the UK’s version and represents a significant change in the regulation of the technology industry and how children will experience online products and services. 

Download the POLICY BRIEF: Comparing the UK and California Age-Appropriate Design Codes.

“Understanding the requirements of both the UK and California codes, and in particular where they differ, is critical for companies in the US and abroad who may soon be covered under one – or both – codes,” said Chloe Altieri, Youth & Education Privacy policy counsel for FPF and an author of the report. “The explanations and examples in the UK code, many of which are not yet defined in California’s version, may provide helpful compliance insights.”

The report builds on FPF’s in-depth analysis of the California AADC, published in October, and contains a side-by-side comparison of the 15 standards laid out in the UK AADC to the corresponding text of the California AADC, including the “best interests of the child” standard, age assurance, default settings, parental controls, enforcement, and data protection impact assessments.

The report also outlines several broader distinctions between the California and UK codes, including, crucially, how the underlying regulatory frameworks differ. While both codes build on the aims of their respective consumer privacy laws (the UK’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act), the California AADC is standalone legislation that will be independently enforced, while the UK AADC and GDPR are linked, and enforcement falls to the UK Information Commissioner’s Office (ICO). The UK AADC is also “rooted” in Article 3 of the United Nations Convention on the Rights of the Child (UNCRC), an international treaty ratified by 195 countries, including the UK, but not the US. While the California AADC uses a similar “best interests of children” standard, without the foundation of the UNCRC, there is much less certainty in how businesses should make that determination. 

“As policymakers in other states start to consider similar age-appropriate design code legislation, understanding how and why the California and UK codes differ will be critical,” said Bailey Sanchez, Youth & Education Privacy policy counsel at FPF and an author of the report. “It is not as simple as copying and pasting the UK code in California or anywhere else. California’s version was adapted to fit the legal landscape in the US and the state, which has a unique consumer privacy landscape. Other states will need to make their own adjustments.”

FPF’s youth and education privacy team is closely monitoring the implementation of the California AADC. Catch up on previous blog posts tracking the bill’s progress and our formal analysis once the bill was signed into law. To access the Youth & Ed team’s child and student privacy resources, visit www.StudentPrivacyCompass.org and follow the team on Twitter at @SPrivacyCompass.

New Report Promotes Accountability-Based Approach to Data Protection in the APAC Region

In recent years, there has been an uptick in new comprehensive data protection laws across the Asia-Pacific (APAC) region. This trend introduces challenges for cross-border compliance, particularly for industry, legal practitioners, and the community of data protection regulators. As a result, these stakeholders acknowledge the need for greater consistency in regional data protection frameworks.

Today, the Future of Privacy Forum (FPF) — a global non-profit focused on privacy and data protection — and experts from the Asian Business Law Institute (ABLI) — a Singapore-based non-profit think tank dedicated to providing practical guidance in the field of Asian legal development — released a report providing a detailed comparison of the requirements for processing personal data in 14 APAC jurisdictions: Australia, China, India, Indonesia, Hong Kong SAR, Japan, Macau SAR, Malaysia, New Zealand, the Philippines, Singapore, South Korea, Thailand, and Vietnam. The comparative analysis follows a months-long dissemination of individual reports on each jurisdiction.

As part of the movement for greater consistency between different jurisdictions’ data protection laws, FPF and ABLI found that many APAC jurisdictions have engaged in conversations about moving away from consent-centric privacy practices, which are often specific to individual jurisdictions and their legal systems. FPF and ABLI’s new report aims to elevate this discussion by promoting alternatives to consent, thereby increasing the accountability of organizations that process personal data.

By using legal bases other than consent, and with an eye toward supporting greater accountability, organizations can balance their interest in using personal data with broader societal concerns, such as developing a vibrant digital economy or preventing harms to individuals from crime and fraud.

“The APAC region is presently undergoing a period of intensive law reform. This APAC comparative report provides an opportunity for lawmakers, governments, and regulators who draft, review or implement data protection laws to have a comprehensive overview and analysis of notice, consent, and related requirements that operate in the data protection frameworks of their respective jurisdictions, regional partners and neighbors,” said Josh Lee Kok Thong, Managing Director of FPF APAC in Singapore. “We hope the report serves as a catalyst for initiating a regional dialogue focused on clarifying existing uncertainties and enhancing the compatibility of regional data protection laws.”

Recognizing that compliance-based approaches to data privacy, such as consent, notice, and choice mechanisms, have serious limitations, FPF and ABLI propose an accountability-based approach to data protection. This approach places responsibility for protection on the organizations that use and control personal data, in part through data protection impact assessments (DPIAs). Assessing risk and enhancing accountability provides organizations with legal certainty and supports better privacy protection across the region.

Moreover, FPF and ABLI found that this accountability approach is necessary and important as data protection developments continue advancing across the APAC region. These developments present a rare opportunity to clarify existing uncertainties and enhance the compatibility of Asian data protection laws on these crucial privacy issues. The report’s comparative legal analysis and input from industry, privacy professionals, and regional practitioners demonstrate that, despite divergences, commonalities exist and can be leveraged to drive convergence between jurisdictions’ different legal systems.

“ABLI is delighted to see the fruit of its partnership with FPF culminating in the release of this comparative review at a time when digitalization is driving an increase in cross-border data flows,” said Mr. Rama Tiwari, Chief Executive of the Singapore Academy of Law, ABLI’s parent organization. “With the largest economies in APAC estimated to be able to reap economic benefits of approximately US$2 trillion if they fully capture their digital potential, the release of the report provides a timely and enabling reference for policymakers and practitioners alike as they design, reform and refine policies and practices for personal data processing both domestically and across borders.”

The GDPR and the AI Act Interplay: Highlights from FPF and Ada Lovelace Institute’s Joint Event

Authored by Christina Michelakaki, FPF Intern for Global Policy

On November 9, 2022, FPF, along with the Ada Lovelace Institute (Ada), organized a closed roundtable in Brussels where experts met to discuss the lessons that can be drawn from General Data Protection Regulation (GDPR) enforcement precedents when deciding on the scope and obligations of the European Commission (EC)’s Artificial Intelligence (AI) Act Proposal. The event hosted representatives from the European Parliament, civil society organizations, Data Protection Authorities (DPAs), and industry representatives.

The roundtable discussion was based on a comprehensive Report launched by FPF in May 2022 analyzing case law under the GDPR applied to real-life cases involving Automated Decision-Making (ADM). This blog outlines the main conclusions drawn by the speakers on four main topics:

  1. the complementarity between the AI Act and the GDPR;
  2. the needed clarity on transparency requirements and AI training;
  3. the difference between the risk-based approach and an exhaustive high-risk AI list; and
  4. the need for effective enforcement and redress avenues for affected persons.    

FPF’s Managing Director for Europe, Dr. Rob van Eijk, gave opening remarks, followed by a short introduction from FPF EU Policy Counsel Sebastião Barros Vale and Ada’s European Adviser, Alexandru Circiumaru.

Barros Vale then highlighted key points from a recent FPF analysis of the interplay between the two laws. This was followed by short interventions from the participants and a discussion.

  1. The AI Act and the GDPR have different but complementary scopes and obligations.

Speakers generally agreed that the AI Act’s departure from the GDPR’s distinction between data controllers and processors reflects the EC’s aim of regulating AI systems entering the EU market from a product safety perspective.

Industry representatives further explained that trying to interpret the requirements of the AI Act from a data protection point of view does not make practical sense. They added that listing specific types of AI systems in Annex III would not make them lawful per se, notably if they breach GDPR requirements.

Other panelists noted that the AI Act has a broader scope than the GDPR regarding AI systems, as it covers systems that may negatively affect individuals even where no personal data processing is involved.

Moreover, participants questioned whether the AI Act’s human oversight requirements would render Article 22 GDPR on ADM useless – notably, the rights for individuals to obtain human review of automated decisions. Some participants argued that this does not seem to be the case because users are not required by the AI Act Proposal to implement meaningful human oversight when they deploy AI systems. With regard to human oversight, the current draft of the AI Act only requires providers to embed measures into their AI systems that enable users to include human oversight in their deployment scheme. 

Additionally, some experts claimed that the interplay between the AI Act and the GDPR was neglected in the original Proposal, which may raise further issues. As an example, the initial text seems to overlook the fact that users (likely controllers under the GDPR) are often very dependent on AI providers when it comes to practical GDPR compliance. Moreover, speakers worried about issues arising in the AI system’s deployment phase – where AI users are in control – and pointed to the approach proposed at the Council of Europe’s Convention on AI as more tailored to the responsibilities of each party.

For another speaker, incorporating broader fundamental rights impact assessments into the AI Act could complement Data Protection Impact Assessments (DPIAs) under the GDPR. 

  2. Transparency requirements and AI training need further clarity

With regard to certain requirements set forth by the AI Act Proposal, speakers conveyed that the transparency obligations, and the rules governing how AI systems are trained, need further clarification.

  3. The difference between the risk-based approach and an exhaustive high-risk AI list

Pushing back against having a specific set of strictly regulated high-risk AI use cases in Annex III and prohibited practices in Article 5 of the AI Act, some voices suggested mimicking the GDPR’s open clauses and risk-based approach. More specifically, a few speakers agreed that having not a closed list but rather a set of overarching principles and risk assessment requirements would increase providers’ accountability and enable enforcers to verify compliance more flexibly.

Speakers that advocated for such a solution agreed that the concept of ‘risk’ and its underlying assessment criteria should be read in line with the GDPR, which could also provide less prescriptive indications of how to mitigate detected risks.

For other experts in the room, a clear definition of AI along with the list of Annex III is preferable, as it avoids enshrining subjective risk assessment criteria. According to this point of view, relying on providers’ self-assessments to decide what is considered high-risk could benefit larger players with the financial incentives and legal resources to take a less cautious approach to AI development.

Among the participants who defended keeping a high-risk AI list, some called for the ability to easily update the list, since novel risky AI use cases are constantly surfacing. One expert disagreed that it should fall on the EC to update the list, given the political considerations that drive the institution, and instead called for a bottom-up approach involving regulators and public participation. Other voices also advocated for adding emotion recognition systems and the analysis of biometrics-based data to the list of high-risk AI use cases.

  4. A need for effective enforcement and redress avenues for affected persons

Lastly, participants touched upon the AI Act’s governance mechanisms and redress avenues. 

FPF Urges Federal Trade Commission to Craft Practical Privacy Rules

FPF Comments Regarding FTC ANPR Urge the Commission to Provide Individuals with Strong, Enforceable Rights and Companies with Greater Clarity about their Obligations under Section 5 of the FTC Act.

The Future of Privacy Forum filed comments regarding the Federal Trade Commission’s Advance Notice of Proposed Rulemaking, recommending that the Commission prioritize practical rules that clearly define individuals’ rights and companies’ responsibilities. 

The Commission has spent decades enforcing prohibitions against unfair and deceptive data practices regarding a wide range of established and emerging technologies. Those privacy and security enforcement actions have been based on the FTC’s statutory authority, which provides flexibility to address consumer harms arising from novel technologies and business practices, but which does not articulate granular rights for consumers or requirements for businesses. Clear, practical rules can more specifically define what data practices the Commission considers unfair or deceptive. The current FTC rulemaking is an opportunity to provide individuals with strong, enforceable rights and companies with greater clarity about their obligations under Section 5 of the FTC Act.

FPF’s comments urge the Commission to prioritize practical rules that provide individuals with strong, enforceable rights and give companies greater clarity about their obligations under Section 5 of the FTC Act.

As a practical matter, the FTC acts as the primary U.S. privacy enforcement agency. Although FPF views a new, pragmatic, comprehensive federal privacy law as the ideal mechanism for grappling with complex technologies and data flows, clear and practical FTC rules defining unfair and deceptive practices would benefit individuals and businesses.

Understanding Extended Reality Technology & Data Flows: Privacy and Data Protection Risks and Mitigation Strategies

This post is the second in a two-part series. Click here for FPF’s XR infographic. The first post in this series focuses on the key functions that XR devices may feature, and analyzes the kinds of sensors, data types, data processing, and transfers to other parties that power these functions. 

I. Introduction

Today’s virtual (VR), mixed (MR), and augmented (AR) reality environments, collectively known as extended reality (XR), are powered by the interplay of multiple sensors, large volumes and varieties of data, and various algorithms and automated systems, such as machine learning (ML). These complex relationships enable functions like gesture-based controls and eye tracking, without which XR experiences would be less immersive or unable to function at all. However, these technologies often depend on sensitive personal information, and the collection, processing, and transfer of this data to other parties may pose privacy and data protection risks to both users and bystanders. 

This post examines the XR data flows that are featured in FPF’s infographic, and analyzes some of the data protection, privacy, and equity issues raised by the data that is processed by these devices, as well as strategies for mitigating these risks.

Key risks include the collection and processing of large volumes and varieties of sensitive personal data; sensitive, and potentially inaccurate, inferences drawn from eye tracking and other body-based data; the erosion of anonymity through digital fingerprinting; harms to bystanders, who may receive little or no notice; and uncertainty about how existing biometric and privacy laws apply to XR data.

Key mitigation strategies include on-device processing and storage; limiting collection, use, and sharing to specified purposes; policies governing third-party developers; privacy-enhancing technologies such as differential privacy and synthetic data; meaningful user controls; bystander protections such as automatic face blurring and recording indicators; and transparency commitments backed by internal oversight.

II. Processing Large Volumes and Varieties of Sensitive Personal Data

XR technologies raise traditional privacy and data protection risks, but also implicate larger questions around surveillance, social engineering, and freedom of expression. As noted in the first blog post in this series, XR technologies require large volumes and varieties of data about the user’s body and their environment. Certain collection and use limitations may therefore be challenging or impossible to implement, since some of XR’s core functions require extensive data collection and processing. Now and in the future, XR technologies may also transfer data to other users and third parties, such as software companies, hardware manufacturers, and advertisers. While devices generally process raw sensor data on device, they may transmit raw or processed sensor data to an application and other parties for further processing to improve representations of virtual content or enable shared experiences. While these transmissions may improve a user’s XR experiences, they can also create new privacy and data protection risks for users and bystanders.

Eye tracking underpins many current and future-facing use cases, such as enhanced graphics, expressive avatars, and personalized content, but it may pose privacy and data protection risks to users. This is due to eye tracking data’s sensitive nature, its potential role in significant decisions affecting users, and the unconscious nature of the behaviors from which some of this data is derived. Organizations could use data related to pupil dilation and gaze to potentially infer information—whether accurate or not—about the user, such as their sexual orientation, age, gender, race, and more. Organizations may also use this data to attempt to diagnose medical conditions, such as ADHD, autism, and schizophrenia. Despite the sensitive nature of this data, users often lack the capacity to meaningfully control its collection or use. Without proper controls, this information may be further shared with third parties. This raises the likelihood of organizations using this data to inform major decisions about a user, which could have real-world impacts on XR users.

Sensors that track a user’s bodily motions may also cause harm due to their potential to undermine anonymity. The first post in this blog series analyzed how tracking a user’s position can enable functions like mapping the user’s space to help place virtual content. But this tracking could also be a means to digitally fingerprint users and individuals, including bystanders, especially given the volume and variety of data that XR devices gather and process. At the same time, this tracking data raises the same de-identification and anonymization concerns that exist regarding similarly granular non-XR data types, such as behavioral biometrics, historical geolocation, and genetic information. Digital fingerprinting may therefore undermine individuals’ ability to maintain anonymity in XR environments. This may discourage users from fully expressing themselves or participating in certain activities due to worries about retaliation.
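
To illustrate the fingerprinting concern in concrete terms, the following sketch (written in Python with entirely synthetic data and hypothetical user labels, not drawn from any real device or dataset) shows how even coarse summary statistics of a head-motion trace can be distinctive enough to match a new session to a previously observed user via a simple nearest-neighbor comparison.

import numpy as np

def motion_features(trace: np.ndarray) -> np.ndarray:
    """Summarize an (N, 3) head-position trace as per-axis mean and standard deviation."""
    return np.concatenate([trace.mean(axis=0), trace.std(axis=0)])

rng = np.random.default_rng(0)

# Enrolled sessions: each synthetic "user" has a characteristic movement pattern.
enrolled = {
    f"user-{i}": motion_features(rng.normal(loc=i, scale=0.1, size=(500, 3)))
    for i in range(5)
}

# A new, unlabeled session generated from user-3's pattern is matched by nearest neighbor.
query = motion_features(rng.normal(loc=3, scale=0.1, size=(500, 3)))
closest = min(enrolled, key=lambda user: np.linalg.norm(enrolled[user] - query))
print(closest)  # very likely "user-3", illustrating the re-identification risk

Real motion traces are far richer than this toy example, which is why the de-identification and anonymization concerns noted above are difficult to resolve without deliberate safeguards such as aggregation or noise.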

III. Statutory Obligations 

It is unclear how well current legal protections mitigate the privacy risks posed by certain processing activities in the XR context. Whether bodily information like gaze and gait is covered by existing biometric regulations may depend on these laws’ definitions of biometric data. For example, under the EU’s comprehensive privacy law, the General Data Protection Regulation (GDPR), this type of data qualifies as “personal data” if it relates to an identified or identifiable person, such as a user or bystander. Thus, an organization that records, collects, assesses, or otherwise uses this data would be subject to GDPR obligations such as transparency, fairness, data minimization, and storage limitation.

Under the GDPR, “biometric data” means personal data resulting from specific technical processing of a person’s physical, physiological, or behavioral characteristics which allows or confirms that person’s unique identification. Organizations are subject to heightened obligations under the Regulation depending on the purpose for which they process biometric data. Specifically, the GDPR prohibits organizations from processing such data for the purpose of uniquely identifying a person unless one of the permissible grounds strictly defined by the law applies. The Regulation thus defines biometric data to include only data that an organization could use for identification purposes. As described in FPF’s prior blog post, however, an organization may process eye and other bodily information for non-identification purposes, such as debugging applications or improving products. This raises questions as to whether the GDPR’s protections for sensitive data categories would always apply to these XR functions. Notably, even if this eye and other bodily information does not meet the “sensitive data” criteria, the rest of the Regulation would still apply to it. Furthermore, European ePrivacy rules may apply to a user’s system that connects to or pairs with XR equipment.

Similar lack of certainty exists in U.S. law. For example, the Illinois Biometric Information Privacy Act (BIPA) applies to information based on “scans” of hand or face geometry, retinas or irises, and voiceprints. This definition of “biometric identifiers” does not explicitly cover the collection of behavioral characteristics or eye tracking. Whereas the GDPR may still apply to an organization that processes eye and other bodily information if it is personal data or qualifies as other sensitive data categories, BIPA may not apply at all. This highlights how existing laws’ protections for biometric data may not extend to every situation involving XR technologies. However, protections may apply to other special categories of data, given XR data’s potential to draw sensitive inferences about individuals.

IV. Bystander and Environmental Data

Bystanders’ privacy can also be impacted when XR devices and third parties collect and process sensor data. Some of the privacy and data protection issues affecting bystanders mirror the privacy risks to XR users. However, unique notice challenges arise with respect to bystanders. Non-users in proximity to an XR user may be unaware that the device is collecting and processing data about them, as well as for what purposes and with whom the device is sharing this information. Like users, bystanders also cannot control the unconscious behaviors that provide the sensor data inputs for XR experiences. Even if a bystander generally understands that a device is collecting data about them, the unconscious nature of some behaviors means that bystanders may neither be aware of the behaviors nor specifically understand that a device is processing data about these behaviors.

Bystander data could facilitate both use cases that are detrimental to a non-user’s privacy and decisions that negatively affect them. Future XR technologies will likely incorporate facial characterization or analysis technologies that can allegedly sense cognitive states or infer emotions—whether accurate or not—based on sensor data. Insights from these technologies could help organizations construct a portrait of the locations a non-user frequents, their interests, and medical conditions. 

V. Strategies for Mitigating Risks

Organizations that provide XR technologies can implement a number of strategies to address the risks raised by XR data collection, use, and sharing. While no single intervention mitigates all of these risks, a combination of strategies is likely to decrease risks and help minimize the harms that may result. For instance, processing and storing data on a user’s device, as opposed to remotely on a processor’s server, helps ensure that the data remains in the user’s hands and is not accessible to others. Organizations can also limit data collection, storage, and use, including third-party use, to particular, specified purposes, and provide notice to or obtain consent from users if they plan to use this data for a different purpose. Companies should set policies and guidelines for third-party developers’ data practices, and monitor to ensure compliance with those policies.
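
As a rough illustration of the purpose-limitation idea described above, the sketch below (in Python, with hypothetical record and purpose names; not any vendor’s actual implementation) tags each stored record with the purposes a user agreed to and refuses access for any other purpose.

from dataclasses import dataclass, field

@dataclass
class SensorRecord:
    user_id: str
    payload: bytes
    allowed_purposes: set = field(default_factory=set)  # e.g. {"rendering", "debugging"}

class PurposeLimitationError(Exception):
    """Raised when data is requested for a purpose the user has not agreed to."""

def access(record: SensorRecord, purpose: str) -> bytes:
    """Return the payload only if the requested purpose was agreed to by the user."""
    if purpose not in record.allowed_purposes:
        raise PurposeLimitationError(
            f"Purpose '{purpose}' is not permitted for {record.user_id}; "
            "fresh notice or consent would be required."
        )
    return record.payload

# Usage: hand- and eye-tracking data collected for rendering cannot be reused for advertising.
record = SensorRecord("user-123", b"<sensor payload>", {"rendering", "debugging"})
access(record, "rendering")       # permitted
# access(record, "advertising")   # would raise PurposeLimitationError

In practice, such checks would sit behind every internal and third-party data access path, alongside logging so that compliance with the stated purposes can be audited.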

Certain privacy-enhancing technologies (PETs) are useful tools for managing privacy risks. For example, advances in encryption and differential privacy can enable privacy-preserving data analysis and sharing, and the use of synthetic data sets can address concerns about data sharing or secondary data use. Another option is to provide greater user controls, allowing users to control the kinds of data collected about them—particularly sensitive data like eye tracking and facial expressions data—and with whom this data is shared. 
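
To make one of these PETs concrete, here is a minimal sketch of a differentially private release of an aggregate eye-tracking statistic (in Python; the dwell-time values, threshold, and epsilon are illustrative assumptions, not figures from any real XR product).

import numpy as np

rng = np.random.default_rng()

def dp_count(dwell_times, threshold, epsilon=1.0):
    """Release a noisy count of sessions whose gaze dwell time exceeds `threshold`.

    A counting query has sensitivity 1 (adding or removing one session changes the
    count by at most 1), so Laplace noise with scale 1/epsilon yields
    epsilon-differential privacy for this count.
    """
    true_count = sum(1 for t in dwell_times if t > threshold)
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Example: per-session dwell times (seconds) on a virtual object, aggregated across users.
dwell_times = [0.8, 2.5, 1.1, 3.0, 0.4]
print(dp_count(dwell_times, threshold=1.0, epsilon=0.5))

The released count remains useful for aggregate analytics while bounding what it reveals about any single session.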

Some organizations have chosen to design XR devices to ensure that bystanders’ data is not unduly collected, for instance by automatically blurring bystanders’ faces, or by using a system of lights on a head-mounted display to signal to non-users that the device is on and potentially collecting data.
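
The face-blurring approach mentioned above can be sketched with off-the-shelf tools; the example below uses OpenCV’s Haar-cascade face detector in Python purely as an illustration, since real headsets likely rely on more robust, on-device models.

import cv2

def blur_faces(frame):
    """Detect faces in a BGR frame and return a copy with each detected face blurred."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    blurred = frame.copy()
    for (x, y, w, h) in faces:
        # Replace each detected face region with a heavily blurred version of itself.
        blurred[y:y + h, x:x + w] = cv2.GaussianBlur(blurred[y:y + h, x:x + w], (51, 51), 0)
    return blurred

# Hypothetical usage on a single passthrough-camera frame:
# frame = cv2.imread("passthrough_frame.png")
# cv2.imwrite("blurred_frame.png", blur_faces(frame))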

Organizations using XR should be transparent about how they use and plan to use XR data, and should publicly commit to guidelines and/or ethical principles. This could include a body akin to an institutional review board (IRB) to ensure compliance with these principles. Finally, organizations can build privacy into their culture and processes and create bodies like oversight boards to ensure that privacy protections endure through changes in mission and values.

VI. Conclusion

The complex web of data, sensors, algorithms, automated systems, and parties that enables important and sometimes central XR functions can also raise privacy and data protection concerns. Devices and ML models may collect and process large volumes and varieties of sensitive personal data, over which users and bystanders may lack meaningful controls, and which other parties could use to make important decisions affecting these individuals. The disclosure of this data may also undermine user anonymity, which could discourage users from freely expressing themselves due to fears of retaliation. Providing bystanders with notice that a device is collecting data about them, let alone for what purpose and to whom the data is transmitted, is challenging and may not be possible. This creates difficulties for obtaining affirmative express consent to data processing activities in XR, where consent is predicated on the individual being informed. There is also uncertainty about how existing laws interact with XR technologies, such as how body-based data fits within existing legal definitions of biometrics. The risks to users and bystanders outlined in this post underscore the importance, and sometimes the challenge, of ensuring that appropriate safeguards exist at the technical, policy, and legal levels to mitigate the harms that may arise in this space.

Brussels Privacy Convening Focuses on Empowering Vulnerable and Marginalized People, Launches New Project

The Future of Privacy Forum (FPF), a global non-profit focused on data protection and privacy, and the Brussels Privacy Hub of Vrije Universiteit Brussel (VUB) will jointly present the sixth edition of the Brussels Privacy Symposium on November 15, 2022. The in-person event will convene in Brussels, bringing together policymakers, academic researchers, civil society, and industry representatives to discuss privacy research and scholarship. 

In line with this year’s topic, “Vulnerable People, Marginalization, and Data Protection,” participants will explore the extent to which data protection and privacy law — including GDPR and other modern data protection laws like Brazil’s LGPD — safeguard and empower vulnerable and marginalized people. They will also debate how to balance the right to privacy with the need to process sensitive personal information to uncover and prevent bias and marginalization. Stakeholders will discuss whether prohibiting the processing of personal data related to vulnerable people serves as a protection mechanism. 

The event marks the launch of “VULNERA,” the International Observatory on Vulnerable People in Data Protection, led by the Brussels Privacy Hub and supported by the Future of Privacy Forum. The observatory aims to foster a mature debate on the multifaceted notions of human “vulnerability” and “marginalization” in the data protection and privacy domains.

“I’m excited to begin the groundbreaking and much-needed work we have ahead of us,” said Gabriela Zanfir-Fortuna, FPF’s Vice President for Global Privacy. Zanfir-Fortuna also serves on VULNERA’s executive team as a Scientific Coordinator; the executive team is supported by a broader scientific network of more than 30 members. “This initiative will focus on understanding how data protection and privacy law puts safeguards in place to protect the rights of vulnerable and marginalized people in societies increasingly underpinned by digital data flows.”

Professor Gianclaudio Malgieri, Co-Director of the Brussels Privacy Hub of Vrije Universiteit Brussel, added: “The VULNERA International Observatory will explore theories of vulnerability, marginalization, and intersectionality, examining how data protection law and policy apply to people in certain contexts that may be vulnerable or marginalized, such as women, children, people on a low or zero income, racialized communities, and people of color, ethnic and religious groups, migrants, LGBTQIA+ and non-binary people, the elderly, and persons with disabilities.”

Representatives from the European Network Against Racism, Dutch Human Rights Council, European Commission, Irish DPC, European Digital Rights (EDRi), European Data Protection Supervisor, and other relevant organizations will share their expertise during the Brussels Privacy Symposium. 

“As we think about the next iteration of the digital age, it’s important that we have a more global consensus on how to protect those who have been historically marginalized,” said Rob van Eijk, FPF’s Managing Director for Europe. “The timing of the launch of VULNERA, and this symposium at large, could not have come at a more critical juncture.”

For more information about the event, the agenda, and speakers, visit the FPF site. To learn more about VULNERA, visit the Brussels Privacy Hub site.