Record Set: Assessing Points of Emphasis from Public Input on the FTC’s Privacy Rulemaking
More than 1,200 law firms, advocacy organizations, trade associations, companies, researchers, and others responded to the Federal Trade Commission’s Advance Notice of Proposed Rulemaking (ANPR) on “Commercial Surveillance and Data Security.” Significantly, the ANPR initiates a process that may result in comprehensive regulation of data privacy and security in the United States, and it marks a notable departure from the Commission’s historical case-by-case approach to addressing consumer data misuse. The comments received in response to the ANPR will generate a public record that informs the Commission’s decision whether to pursue one or more draft rules, and will remain generally available for any policymaker to draw on in future legislative proposals. The Future of Privacy Forum’s comment is available here.
The Future of Privacy Forum analyzed a sample of 70 comments (excluding our own), selected from stakeholders representing a range of sectors and perspectives, for common themes and areas of divergence. Below is a summary of key takeaways.
1. Areas of Agreement
a. Data Minimization
Many submissions encouraged the Commission to create a rule or standard requiring that companies engage in some form of data minimization. Data minimization is a foundational data protection principle, appearing in the Fair Information Practice Principles (FIPPs) and required by the European Union’s General Data Protection Regulation (GDPR) and other international regulations. The European Data Protection Supervisor (EDPS) emphasized that an FTC data minimization rule would help harmonize data protection standards between the European Union (EU) and the United States (U.S.), and would codify the data protection best practices established by the Commission’s history of enforcement. Several comments focused on the ability of data minimization to create “market wide” incentives, disrupting an environment that may currently confer competitive advantages on organizations that are not responsible data stewards, while Palantir noted that, unlike the exercise of data subject rights, data minimization requires no extra action from users.
A small group of responses noted that data minimization has implications for machine learning (ML) and the development of artificial intelligence (AI) systems. While such systems must be trained on vast quantities of data, commenters noted that it is equally important that the data be high quality. Palantir emphasized that data minimization, insofar as it requires the deletion of out-of-date or otherwise flawed data, would support this goal. EPIC noted that data minimization requirements would help ensure that businesses’ use of personal data is aligned with consumer expectations, observing that “[c]onsumers reasonably expect that when they interact with a business online, that business will collect and use their personal data for the limited purpose and duration necessary to provide the goods or services they have requested,” and will not retain their data beyond that duration or for other uses. Finally, Google, the Wikimedia Foundation, and other commenters emphasized that a data minimization rule would support data security objectives as well: if companies retain less personal data, breaches, when they do occur, will be less harmful to consumers.
b. Data Security
There was also broad, though not uniform, support for a data security rule requiring businesses to implement reasonable data security programs. Many commenters noted that data security incidents are a common occurrence, are not reasonably avoidable by consumers, and pose grave risks to individuals, including identity theft. The EDPS underscored the role of data security in protecting core rights and freedoms under EU law and the GDPR, and recommended that the Commission require organizations to conduct data protection impact assessments, practice data protection by design and default, and use encryption and pseudonymization to protect personal data.
The Computer & Communications Industry Association (CCIA) observed that any data security rulemaking should be harmonized with standards established by the Cybersecurity and Infrastructure Security Agency (CISA) and the National Institute of Standards and Technology (NIST). In the same vein, the Software Alliance (BSA) “encourage[d] the agency…to recognize that the best way to strengthen security practices [a]cross industry sectors is by connecting any rule on data security to existing and proposed regulations, standards, and frameworks.” The BSA also asked that the Commission recognize the “shared responsibilities of companies and their service providers” in protecting consumer data, and create rules that reflect this dual-responsibility framework as well as the different relationships that companies and service providers have with consumers. Some commenters also addressed how far data security requirements should reach. For example, the Center for Democracy and Technology (CDT) emphasized that reasonable data security requirements should extend to the government services context, including government-run educational settings, as well as to non-financial identity verification services.
2. Areas of Contention
a. The Commission’s Authority
By far, the most common disagreement among commenters was whether, and to what extent, the Commission could promulgate data privacy and security regulations under its statutory authority. Most commenters took a moderate approach, maintaining that the Commission has some limited rulemaking authority in this space. Commenters offered a variety of views on where the Commission could and could not create rules, and why. Some entities believed that the Commission could address only practices that clearly demonstrate consumer harm, while others encouraged the Commission to focus on FTC enforcement actions that have already survived judicial scrutiny. Google noted that the “FTC rests on solid ground” for a data security rule given the Third Circuit’s decision in FTC v. Wyndham Worldwide Corp., which affirmed the agency’s authority to regulate data security as an unfair trade practice. While many commenters argued that the Commission could only create rules that would not overlap with other regulatory jurisdictions, some advocates believed that the FTC can use its authority to regulate unfair and deceptive practices, such as discrimination, even when other agencies have concurrent jurisdiction. The Lawyers’ Committee for Civil Rights Under Law noted the FTC’s extensive experience sharing concurrent jurisdiction with other agencies, where “[i]t works on ECOA with the Consumer Financial Protection Bureau, on antitrust with the Department of Justice, and on robocalls with the Federal Communications Commission.”
However, comments also existed at both ends of the spectrum regarding the FTC’s authority to act. Some commenters, largely from civil society organizations, civil rights groups, and academia, argued that the Commission has substantial authority to address data privacy and security where data practices meet the statutory requirements for being “unfair or deceptive.” These commenters reasoned that Congress intended the Commission’s Section 5 authority to remain flexible enough to address evolving commercial practices, and that it therefore readily applies to data-driven activities that cause substantial and unavoidable injury to consumers.
Other commenters, largely trade associations, business groups, and some policymakers, questioned the Commission’s authority to conduct this rulemaking under the FTC Act. While some, like the Developers Alliance, argued that most data collection practices do not constitute “unfair or deceptive” trade practices, the majority of commenters in opposition argued that the Commission lacks the authority to conduct this type of rulemaking because the regulation of data privacy and security is a “major question” best left to Congress. These comments focused on the Supreme Court’s 2022 ruling in West Virginia v. EPA, which held that, absent clear congressional authorization, regulatory agencies cannot issue rules on major questions affecting a large portion of the American economy. Several Republican U.S. Senators noted that “simply stating within the ANPR that within Section 18 of the FTC Act, Congress authorized the Commission to propose a rule defining unfair or deceptive acts or practices…is hardly the clear Congressional authorization necessary to contemplate an agency rule that could regulate more than 10% of U.S. GDP ($2.1 trillion) and impact millions of U.S. consumers (if not the entire world).” The lawmakers further argued that even if the Commission could demonstrate clear Congressional authorization, the rulemaking would likely violate the FTC Act because the ANPR failed to describe the area of inquiry under consideration with the mandated level of specificity.
b. “Commercial Surveillance”
The Commission’s framing of the ANPR around “commercial surveillance” was another area that generated controversy. The ANPR defines “commercial surveillance” as “the collection, aggregation, analysis, retention, transfer, or monetization of consumer data and the direct derivatives of that information.”
Several comments supported the Commission’s framing and detailed the multitude of ways in which businesses track private individuals across time and space. The Electronic Privacy Information Center (EPIC) stated, “[t]he ability to monitor, profile, and target consumers at a mass scale have created a persistent power imbalance that robs individuals of their autonomy and privacy, stifles competition, and undermines democratic systems.” The practices commenters most commonly identified as “commercial surveillance” included targeted advertising, facial recognition, pervasive tracking of people across services and websites, unlimited sharing and sale of consumer information, and secondary uses of consumer information.
On the other side, commenters argued that the “commercial surveillance” terminology was both unfair and overly broad. Trade associations like the Information Technology Industry Council argued that the term attaches a negative connotation to any commercial activity that collects or processes data, including the many legitimate, necessary, and beneficial uses of data that make products and services work for users. Many comments emphasized the crucial role of consumer data in society and its use in fueling social research and innovation, including telehealth, studies of COVID-19 vaccine efficacy, the development of assistive AI for individuals with disabilities, efforts to identify bias and discrimination in school programs, and ad-supported services like newspapers, magazines, and television.
3. Other Notable Focus Areas
a. Automated Decision-Making and Civil Rights
A large contingent of advocacy organizations documented how automated decision-making systems can exacerbate discrimination against marginalized groups. Organizations including the National Urban League, Next Century Cities, CDT, Upturn, the Lawyers’ Committee for Civil Rights Under Law, and EPIC provided illustrative examples of discriminatory outcomes in housing, credit, employment, insurance, healthcare, and other areas brought about by algorithmic decision-making.
Industry groups including the National Association of Mutual Insurance Companies argued that discrimination concerns are best addressed through Congressional action, given that the FTC Act does not mention discrimination and does not answer the foundational legal question of “whether it is a regime of disparate treatment or disparate impact.” Many advocacy groups disputed this assertion and argued that a rule must address algorithmic discrimination because of gaps in existing civil rights law and because of the Commission’s history of exercising concurrent jurisdiction. For example, Upturn highlighted three major gaps, noting that current law leaves large categories of companies, such as hiring-screening technology firms, uncovered; fails to address modern-day harms such as discrimination by voice assistants; and does not require affirmative steps to measure and address algorithmic discrimination.
Commenters made a variety of suggestions about how the Commission could address these problems in a rule, including through data minimization (National Urban League), greater transparency (CDT), declaring the use of facial recognition technology an unfair practice in certain settings (Lawyers’ Committee, EPIC), and implementing principles from the Biden administration’s Blueprint for an AI Bill of Rights (Upturn, Lawyers’ Committee). Google emphasized that any rulemaking on AI should be risk-based and process-based and should promote transparency, adding that “a process-based approach with possible safe harbor provisions could encourage companies to continually audit their systems for fairness without fear that looking too closely at their systems could expose them to legal liability.”
b. Health Data and Other Considerations in Light of Dobbs
Another strong thread throughout the comments was concern about the privacy and integrity of health data, particularly in light of the Supreme Court’s 2022 decision in Dobbs v. Jackson Women’s Health Organization. Comments from Planned Parenthood, CDT, the American College of Obstetricians and Gynecologists (ACOG), and California Attorney General (AG) Rob Bonta all emphasized the impact of the Dobbs decision, which allows states to criminalize seeking and providing abortion services. For example, ACOG cited a Brookings Institution article demonstrating the extent to which user data such as geolocation data, app data, web search data, and communications and payments data can be used to make sensitive health inferences.
Responding specifically to concerns about the misuse of geolocation data, Planned Parenthood called upon the Commission to write tailored regulations requiring that the retention of location data be time-bound and linked to a direct consumer request. The Duke/Stanford Cyber Policy Program emphasized that the Commission should seek to establish comprehensive regulations to govern data brokers, and that, “[i]n some cases, the policy response should include restrictions or outright bans on the sale of certain categories of information, such as GPS, location, and health data.” AG Bonta recommended that, “[t]he Commission…prohibit [the] collection, retention or use of particularly sensitive geolocation data, including…data showing that a user has visited reproductive health and fertility clinics.”
Many comments addressed questions around sensitive health-related data that is not otherwise protected by the Health Insurance Portability and Accountability Act (HIPAA). The College of Healthcare Information Management Executives (CHIME) emphasized that many consumers do not understand the scope or scale of the use of their sensitive health data, including data collected by fitness and “femtech” apps. The American Clinical Laboratory Association (ACLA), meanwhile, emphasized that the Commission should not impose new requirements on entities already covered by HIPAA, and argued that de-identified data should be exempt from privacy and security protections. Finally, algorithmic discrimination in the healthcare context was a focus area for several commenters.
c. Children’s Data
Finally, many commenters also weighed in on the particular vulnerability of children online. The Software & Information Industry Association (SIIA), for example, recognized that children deserve unique consideration, but argued that FTC rulemaking on child and student privacy would duplicate the Commission’s ongoing efforts to update its rules under the Children’s Online Privacy Protection Act (COPPA), as well as existing federal and state education privacy statutes. Others suggested that a Commission rule could and should address child safety. Some of their most pressing concerns included:
- Age verification: One of the thorniest issues in protecting children’s privacy has been setting parameters for age verification. Many commenters, including the Wikimedia Foundation, the Data & Society Research Institute (Data & Society), and AG Bonta, outlined the limits of using biological age for age verification purposes, the harms of using biometric verification, and concerns about increasing data collection in order to verify age.
- Targeted advertising: The EDPS, among other commenters, addressed unique concerns about targeted advertising to children, with EPIC, the Center for Digital Democracy, and Fairplay calling explicitly for a ban on targeted advertising to minors.
- Data minimization: EPIC proposed a “strictly necessary” standard for the collection and processing of minors’ data. Data & Society highlighted the importance of data minimization for reducing privacy harms to children, but cautioned that it should not be used as a pretext to “deploy ‘strategic ignorance.’”
- Design: Data & Society called for some “basic floors,” such as those found in the United Kingdom’s Age-Appropriate Design Code. EPIC specifically proposed that the FTC make it “an unfair and deceptive practice for companies to make intentional design choices in order to facilitate the commercial surveillance of minors.”
- Deletion rights: Data & Society advocated for an “eraser button” that would allow young people to delete “the data collected about them and the algorithms fed by that data that shape the content they see,” modeled after GDPR Articles 16 and 17. In contrast, SIIA raised concerns about an unfettered deletion right, particularly in educational contexts, noting that deleting records such as grades and attendance data could negatively impact students and school programs.
What’s Next?
The ANPR is merely one step in a lengthy and arduous rulemaking process. Should the Commission decide to move forward with rulemaking after reviewing the public record, it will need to notify Congress, facilitate another public comment process on a proposed rule, conduct informal hearings, and survive judicial review. Regardless of the outcome, the ANPR comment period has provided an ample public record to inform any policymaker about the current digital landscape, the most pressing concerns faced by consumers, and the frameworks utilized by companies and other jurisdictions to mitigate privacy and security risks.
The authors would like to acknowledge FPF intern Mercedes Subhani for her significant contributions to this analysis.