Survey of Current Universal Opt-Out Mechanisms

With contributions from Aaron Massey, FPF Senior Policy Analyst and Technologist, Keir Lamont, Director for U.S. Legislation, and Tariq Yusuf, FPF Policy Intern

Several technologies can help individuals configure their devices to automatically opt out of the sale or sharing of their personal information for targeted advertising. Seven state privacy laws require organizations to honor opt-out requests transmitted through such tools. This blog post discusses the legal landscape governing Universal Opt-Out Mechanisms (UOOMs), as well as the key differences among the leading UOOMs in terms of setup, default settings, and whether those settings can be configured. We then offer guidance urging policymakers to prioritize clarity and consistency in establishing, interpreting, and enforcing UOOM mandates.

The legal environment behind Universal Opt-Out Mechanisms

Online advertising continues to evolve, particularly in reaction to new regulatory requirements, as an increasing number of international jurisdictions and U.S. states have enacted comprehensive privacy laws. As of October 2024, twelve states grant individuals the right to opt out of businesses selling their personal information or processing that data for targeted advertising. Of these twelve state privacy laws, seven include provisions that make it easier for individuals to opt out of certain uses of personal data. These rights cover the kinds of personal and pseudonymized information routinely shared with websites, such as browser information or information sent via cookies.

Historically, a significant practical hurdle existed in the implementation of opt-out rights: users wishing to exercise the right to opt out of the use of this information for targeted advertising had to locate and manually click the opt-out links that businesses provide on their web pages, and they generally had to do so for every site they visited. To make opting out easier, seven states’ privacy laws (California, Colorado, Connecticut, Delaware, Montana, Oregon, and Texas) require businesses to honor individuals’ opt-out preferences transmitted through Universal Opt-Out Mechanisms (UOOMs) as valid means of opting out of targeted advertising and data sales. UOOMs refer to a range of desktop and mobile tools designed to provide consumers with the ability to configure their devices to automatically opt out of the sale or sharing of their personal information with the internet-based entities they interact with. These tools transmit consumers’ opt-out preferences using technical specifications, chief among them the Global Privacy Control (GPC).
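For readers unfamiliar with the mechanics, the GPC specification expresses this preference in two places: a `Sec-GPC: 1` HTTP request header sent with each request, and a `globalPrivacyControl` property on the browser’s Navigator object. The following minimal TypeScript sketch shows how a page script can detect the signal; note that the property is not yet present in all standard DOM type definitions, so it is read defensively.

```ts
// Minimal sketch: detecting the Global Privacy Control signal in page script.
// Per the GPC specification, participating user agents send the HTTP header
// "Sec-GPC: 1" and expose navigator.globalPrivacyControl to scripts.

// The property is not in every TypeScript DOM typing yet, so widen the type.
const gpcEnabled: boolean =
  (navigator as Navigator & { globalPrivacyControl?: boolean })
    .globalPrivacyControl === true;

if (gpcEnabled) {
  // A site honoring the signal would treat this visitor as opted out of
  // "sale" or "sharing" of personal information for targeted advertising.
  console.log("GPC opt-out preference detected.");
}
```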

California became the first state to give opt-out signals the force of law as valid opt-outs through an Attorney General rulemaking process in August 2020. Specifically, businesses that do not honor the Global Privacy Control on their websites risk being found in noncompliance with the California Consumer Privacy Act (CCPA), which was the central issue in the recent enforcement action against Sephora, an online retailer. In the complaint, state authorities alleged that Sephora’s website was not configured to detect or process GPC signals and, as a result, failed to honor users’ opt-out preferences by not opting them out of sales of their data.
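To make concrete what “detecting and processing” a GPC signal involves on the server side, here is a minimal sketch using Node.js with Express (our choice of framework for illustration; nothing in the CCPA or the GPC specification mandates a particular stack). What it means to fully “honor” the signal downstream is a legal and business question the code cannot answer.

```ts
import express, { Request, Response, NextFunction } from "express";

const app = express();

// The GPC preference arrives as the HTTP request header "Sec-GPC: 1".
// Node's HTTP parser lower-cases header names before they reach handlers.
app.use((req: Request, res: Response, next: NextFunction) => {
  res.locals.gpcOptOut = req.headers["sec-gpc"] === "1";
  next();
});

app.get("/", (_req: Request, res: Response) => {
  if (res.locals.gpcOptOut) {
    // A site honoring the signal would suppress "sale"/"share" data flows
    // here, e.g., by not firing third-party ad-tech tags for this visitor.
    res.send("GPC signal detected; opt-out preference will be honored.");
  } else {
    res.send("No GPC signal received.");
  }
});

app.listen(3000);
```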

Survey of UOOM Tools Available to Consumers

The California Attorney General references the Global Privacy Control as the leading opt-out specification that meets CCPA standards. As of this writing, eight UOOMs are endorsed by the creators of the GPC specification.

Although other UOOMs exist (and more are likely to emerge), we focus exclusively on the tools endorsed by the creators of the Global Privacy Control specification. In 2023, the FPF team downloaded and installed each tool and evaluated each tool’s installation process, whether GPC signals were sent without additional configuration, and whether those settings could be adjusted (see Figure 1 below).

| Tool | Installation | GPC Signals Sent without Additional Configuration | Can the Configuration Be Adjusted? |
| --- | --- | --- | --- |
| IronVest | Requires account sign-up | No | Yes; GPC can be enabled only on a per-site basis, not globally. |
| Brave Browser | No steps required after installation | Yes | No; GPC cannot be disabled, either globally or per-site, even when other protections in the “Shields” feature are turned off. |
| Disconnect | No steps required after installation | No | Yes; GPC can be enabled globally, but not on a per-site basis, using a checkbox in the main browser plugin window. |
| DuckDuckGo Privacy Browser | No steps required after installation | Yes | Yes; GPC can be disabled globally but not on a per-site basis. |
| DuckDuckGo Privacy Essentials | No steps required after installation | Yes | Yes; GPC can be disabled either globally or on a per-site basis by disabling “Site Privacy Protection.” |
| Firefox | Requires technical configuration | No | Yes; GPC can be toggled globally in the browser’s technical configuration but not on a per-site basis. |
| OptMeowt | No steps required after installation | Yes | Yes; GPC can be disabled either globally or on a per-site basis by disabling the “Do Not Sell” feature. |
| Privacy Badger | No steps required after installation | Yes | Yes; GPC can be disabled either globally or on a per-site basis by disabling the “Do Not Sell” feature. |

Figure 1: Observations of eight leading UOOM tools as of October 12, 2023

Our survey allows us to make four key observations about the state of these UOOMs.

Next Steps & Policy Considerations

In 2023 alone, six states passed comprehensive privacy laws. We expect more states to join this list in the years ahead, and many of their laws are likely to include provisions regarding UOOMs. Policymakers must ensure that all UOOM requirements offer adequate clarity and consistency.

One place where greater detail from policymakers would benefit organizations seeking to comply with legal requirements is guidance not only for covered businesses, but also for vendors of consumer-facing privacy tools. Specifically, guidance would be useful regarding how a UOOM must be configured or implemented to provide assurance that the GPC signals it sends are a legally valid expression of individual intent. For example, a seemingly minor detail such as whether a tool contains a per-site toggle for the GPC may be significant in one state but not another.

Similarly, the question of “default settings” and their legal significance requires greater clarity in many jurisdictions. For example, under Colorado law, a GPC signal is a valid exercise of individuals’ opt-out rights only when it reflects an “affirmative, freely given, and unambiguous choice.” This requirement creates an engineering ambiguity for publishers and websites over the validity of the GPC signals they receive. A signal sent by a browser extension that requires a separate, affirmative user configuration before transmitting GPC is unambiguously a valid expression of individual choice. By contrast, an individual using a browser marketed with a variety of privacy-preserving features, including the GPC, may be sending a GPC signal that does not meet the law’s standards for defaults if those features are enabled by default and the browser does not provide notice to users. The user may have wanted a privacy feature other than GPC and been unaware that the GPC signal would be sent. Meanwhile, another user may both seek out and appreciate a default-on GPC and not want it legally ignored simply because they did not affirmatively enable it. Publishers and websites have no engineering mechanism to differentiate between these scenarios, incentivizing them to use nonstandard techniques, like fingerprinting, to discern which GPC signals are valid.
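The ambiguity is easy to see at the protocol level: however the preference was enabled, the signal a publisher receives is identical, with no field indicating whether the user affirmatively configured it. A short illustrative snippet:

```ts
// Two very different user scenarios produce byte-identical requests:
// one user deliberately toggled GPC on; the other installed a browser
// that enables it by default. The specification carries no provenance.
const affirmativeChoice = new Headers({ "Sec-GPC": "1" }); // user toggled GPC on
const browserDefault = new Headers({ "Sec-GPC": "1" });    // enabled by default

// A receiving server cannot tell these apart.
console.log(
  affirmativeChoice.get("Sec-GPC") === browserDefault.get("Sec-GPC") // true
);
```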

New states implementing comprehensive privacy laws also increase the odds that specific privacy rights will fracture across jurisdictions, potentially in irreconcilable ways. The current GPC specification does not support conveying users’ jurisdictions, so it is unclear how organizations should differentiate between signals originating from one jurisdiction or another. The result could be that entities facing conflicting requirements must choose which state’s law to risk violating in order to satisfy another’s.
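Absent a jurisdiction field in the signal itself, a business that wants state-specific handling is pushed toward inference. The hypothetical sketch below assumes a geo-IP lookup helper, lookupRegion(), which is not part of the GPC specification or any state law, and whose results can simply be wrong:

```ts
// Hypothetical sketch only: the Sec-GPC header carries no jurisdiction
// information, so state-specific handling requires external inference.
type Region = "CA" | "CO" | "CT" | "OTHER";

// Assumed helper backed by a geo-IP database; not part of any specification,
// and IP-based location inference is inherently imprecise.
declare function lookupRegion(ip: string): Region;

function handleGpcSignal(ip: string, gpcHeader: string | undefined): void {
  if (gpcHeader !== "1") return; // no opt-out preference expressed

  // The signal itself is identical for every visitor; only the inferred
  // region differs, and the inference may be incorrect.
  const region = lookupRegion(ip);
  console.log(`GPC received; applying opt-out rules assumed for ${region}`);
}
```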

As user-facing privacy tools are developed and updated, responsible businesses will likely err on the side of over-inclusion by treating all GPC signals as valid UOOMs. However, increased user adoption and the expansion of the GPC into new sectors (such as connected TVs or vehicles) could change expectations and put more pressure on different kinds of advertising activities. In the absence of uniform federal standards that would create guidance for such mechanisms, most businesses will aim to streamline compliance across states, providing a significant opportunity for policymakers to shape the direction of consumer privacy in the coming years. Policymakers must be aware of these developments and strive for clarity and consistency in order to best inform organizations, empower individuals, and set societal expectations and standards that can be applied in future cases.

FPF Weighs in on Automated Decisionmaking, Purpose Limitation, and Global Opt-Outs for California Stakeholder Sessions

This week, Future of Privacy Forum policy experts offered testimony in California’s public Stakeholder Sessions, providing independent policy recommendations for the California Privacy Protection Agency (CPPA). The Agency heard from a variety of speakers and members of the public on a broad range of issues relevant to forthcoming rulemaking under the California Privacy Rights Act (CPRA).

Specifically, FPF weighed in on automated decisionmaking (ADM), purpose limitation, and global opt-out preference signals. As a non-profit dedicated to advancing privacy leadership and scholarship, FPF typically weighs in with regulators when we identify opportunities to support meaningful privacy protections and principled business practices with respect to emerging and socially beneficial technologies. In California, the 5th largest economy in the world, the newly established California Privacy Protection Agency is tasked with setting standards that will impact data flows across the United States and globally for years to come.

Automated Decision-making (ADM). The subject of automated decision-making was discussed on Wednesday, May 4th. Although the California Privacy Rights Act does not provide specific statutory rights around ADM technologies, the Agency is tasked with rulemaking to elaborate on how the law’s individual access and opt-out rights should be interpreted with respect to profiling and ADM.

FPF’s Policy Counsel Tatiana Rice raised the following issues for the Agency on automated decision-making:

Purpose Limitation. The California Privacy Rights Act requires businesses to disclose the purposes for which the personal information they collect will be used, and prohibits them from collecting additional categories of personal information, or using the personal information collected, for additional purposes that are “incompatible with the disclosed purpose for which the personal information was collected,” without giving additional notice. Cal. Civ. Code § 1798.100(a)(1). As a general business obligation, this provision reflects the principle of “purpose limitation” in the Fair Information Practices (FIPs), and was discussed on Thursday, May 5th.

FPF’s Director of Legislative Research & Analysis Stacey Gray raised the following issues for the Agency on purpose limitation:

Opt-out preference signals. Finally, the California Privacy Rights Act envisions a new class of “opt-out preference signals,” sent by browser plug-ins and similar tools to convey an individual’s request to opt out of certain data processing. Because such signals are an emerging feature of several U.S. state privacy laws, there are open technical and policy questions about how to ensure that these ‘global’ signals succeed in lowering the burdens of individual privacy self-management.

FPF’s Senior Counsel Keir Lamont provided the following comments to the Agency on global opt-out preference signals on Thursday, May 5th:

Following the public Stakeholder Sessions this week, the Agency is expected to publish draft regulations as soon as Summer or Fall 2022, which will then be available for public comments. Although the timeline could be delayed, the Agency’s goal is to finalize regulations prior to the CPRA’s effective date of January 1, 2023.

What the Biden Executive Order Means for Data Protection

Last week, President Biden signed an Executive Order on “Promoting Competition in the American Economy” (“the Order” or “the EO”), published together with an explanatory Fact Sheet. The Order outlines a sweeping agenda for a “whole of government” approach to enforcement of antitrust laws in nearly every sector of the economy. Although there is a focus on particular markets, such as agriculture and healthcare, the Order includes a number of provisions with clear implications for data protection and privacy. 

In our view, the overarching theme of the Order is a concern with large platforms and the accumulation of data as an aspect of market dominance. This approach aligns with growing developments in the European Union and will have continued consequences for all sectors of the economy. In addition, the Order has implications for upcoming enforcement and privacy rulemaking at the Federal Trade Commission (FTC). Finally, we note a number of other federal agencies that are directed, or encouraged, to pursue particular goals that impact privacy or data protection. These include: drone privacy (DOT/FAA); studying the mobile app ecosystem (Dept. of Commerce); the right to repair (FTC, DoD); net neutrality (FCC); and financial data portability (CFPB).

Most of the Executive Order provisions do not have immediate effect for the collection of consumer data, but instead, call for federal agencies to study, take future action, incorporate the administration’s policy in procurement and enforcement decisions, or consider engaging in rulemaking to the extent of their statutory authority. Independent commissions, which do not report directly to the President, are “encouraged” to consider rulemaking and other actions. 

(1) An Overall Focus on Accumulation of Data as Relevant to Market Dominance 

Although the intersection of privacy and competition law has been discussed for many years, this Order represents an important development insofar as it explicitly frames data collection as a key aspect of market dominance. The Order specifically highlights the impact of serial mergers in the technology sector on user privacy, identifying privacy and competition among “free” products as factors that should be considered as part of the enhanced scrutiny of mergers. The Fact Sheet explains that this is particularly relevant in the case of “dominant internet platforms” and acquisition of nascent competitors.  

Section 1 of the EO states the policy of the administration, which all federal agencies are required to follow and independent commissions are encouraged to pursue:

“[It is] the policy of my Administration to enforce the antitrust laws to meet the challenges posed by new industries and technologies, including the rise of the dominant Internet platforms, especially as they stem from serial mergers, the acquisition of nascent competitors, the aggregation of data, unfair competition in attention markets, the surveillance of users, and the presence of network effects.”

This framing of privacy and data protection as core elements of competition aligns not only with growing momentum in the United States, but also with clear trends in the European Union. In the United States, the FTC and state Attorneys General have in recent years brought antitrust lawsuits against Facebook, and state Attorneys General have sued Google, for alleged violations of antitrust laws. Two such claims were recently dismissed, although they could be followed by further actions from the FTC.

In the EU, the European Data Protection Supervisor (EDPS) began pushing for EU competition policy to take into account the accumulation of personal data and other privacy risks as early as March 2014, in his Preliminary Opinion on privacy and competitiveness in the age of big data. Since then, antitrust regulators in Europe have paid increased attention not only to the role of personal data in digital markets and anticompetitive behavior, but also to the role that personal data protection and privacy safeguards might play in sanctioning that behavior or, on the contrary, in acting as a barrier to competition. Significantly, in 2019 the German antitrust regulator relied on General Data Protection Regulation (GDPR) provisions to find an abuse of dominant position in a case against Facebook, prohibiting the company from combining user data from different sources. Facebook has challenged those findings in court, and proceedings are ongoing.

A key development in Brussels is the legislative proposal for a Digital Markets Act (DMA), a draft regulation published by the European Commission last year that targets “gatekeepers,” or online intermediaries providing a “core platform service.” The DMA proposes a series of ex ante rules, including a prohibition on combining personal data from different sources in the absence of valid GDPR consent.

(2) Enforcement and Rulemaking Ahead for the Federal Trade Commission (FTC)

The Executive Order “encourages” the Chair of the FTC, as well as other agencies with authority to enforce the Clayton Act, to “enforce the antitrust laws fairly and vigorously,” including in oversight of mergers, but also in areas such as protecting workers from wage collusion or unfair non-compete clauses. In addition, the Chair of the FTC is “encouraged” to exercise the Commission’s statutory rulemaking authority in a number of specific areas to promote competition, including to address “unfair data collection and surveillance practices that may damage competition, consumer autonomy, and consumer privacy” and “unfair competition in major Internet marketplaces.” Sec. 5(h).

With respect to ongoing enforcement of antitrust laws, this language aligns with recent developments in the FTC, most significantly the appointment of antitrust expert Lina Khan as the Chair, who has already indicated that the FTC will take a greater role in antitrust cases.  The FTC’s Bureau of Competition, working with the Bureau of Economics, enforces antitrust laws in the United States, including the Sherman Act (15 U.S.C. 1 et seq) and the Clayton Act (15 U.S.C. 12 et seq).

Earlier statements from previous Acting Chair Rebecca Kelly Slaughter also indicated that the Commission would focus on bringing more cases under the “unfairness” prong of Section 5 of the FTC Act, followed by the announcement of a new Rulemaking Group. This rulemaking, which is set to proceed under “Magnuson-Moss” procedures, is far slower than typical agency rulemaking, but could be used to promulgate federal rules on what types of data collection and use are “unfair” under the FTC Act. For example, the agency recently noted in an FTC staff blog that the sale or use of “racially biased algorithms” is unfair under Section 5 of the FTC Act. Rulemaking could codify or further elaborate on this and other data collection issues.

(3) Other Agencies to Watch: Drones, Mobile Apps, Right to Repair, Net Neutrality, and Financial Data Portability

Consistent with the EO’s “whole-of-government” approach, the order outlines tasks and recommendations for a long list of other federal agencies, noting that each agency has the ability to influence market competition through both the procurement process and through rulemaking. Agencies that are under the direct control of the President (such as the Departments of Transportation or Defense) are expressly required to engage in particular tasks, such as conducting studies or commencing rulemaking. In contrast, independent federal agencies (such as the FTC, FCC, and CFPB) are “encouraged to consider” particular courses of action.

In order to help direct and coordinate the efforts of all agencies, the Order establishes a White House Competition Council in the Executive Office of the President, to monitor the implementation of the Order and to coordinate the government agencies’ response to “the rising power of large corporations in the economy.”

Conclusion

Overall, the Executive Order represents a leveraging of the enormous power of the US executive branch toward the promotion of competition in every sector, including by taking privacy and data protection into account. The Order frames this approach as a return to the policies reflected in the Sherman Act, the Clayton Act, and other laws passed in the 19th and early 20th centuries. Among other things, this signals that competition law in the United States must now incorporate notions of power and fairness that arise from data collection, use, and privacy.

Finally, the Administration’s argument that privacy should be a factor for competition policy would be far stronger if the United States, like other countries, had a comprehensive federal legislative standard for privacy. Proliferating state privacy laws are particularly unhelpful as a point of reference, as many of the larger or more dominant platforms can point to the fact that they do not “share or sell” data as defined by the recent state laws, while smaller companies do. We hope the administration will lend its weight to efforts to break the Capitol Hill logjam on data protection legislation.

Navigating Preemption through the Lens of Existing State Privacy Laws

This post is part of an ongoing series on federal preemption and enforcement in United States federal privacy legislation. See Preemption in US Privacy Laws (June 14, 2021).

In drafting a federal baseline privacy law in the United States, lawmakers must decide to what extent the law will override state and local privacy laws. In a previous post, we discussed a survey of 12 existing federal privacy laws passed between 1968 and 2003, and the extent to which they preempt similar state laws.

Another way to approach the same question, however, is to examine the hundreds of existing state privacy laws currently on the books in the United States. Conversations around federal preemption inevitably focus on comprehensive laws like the California Consumer Privacy Act, or the Virginia Consumer Data Protection Act — but there are hundreds of other state privacy laws on the books that regulate commercial and government uses of data. 

In reviewing existing state laws, we find that they can be categorized usefully into: laws that complement heavily regulated sectors (such as health and finance); laws of general applicability; common law; laws governing state government activities (such as schools and law enforcement); comprehensive laws; longstanding or narrowly applicable privacy laws; and emerging sectoral laws (such as biometrics or drones regulations). As a resource, we recommend: Robert Ellis Smith, Compilation of State and Federal Privacy Laws (last supplemented in 2018). 

  1. Heavily Regulated Sectoral Silos. Most federal proposals for a comprehensive privacy law would not supersede other existing federal laws that contain privacy requirements for businesses, such as the Health Insurance Portability and Accountability Act (HIPAA) or the Gramm-Leach-Bliley Act (GLBA). As a result, a new privacy law should probably not preempt state sectoral laws that: (1) supplement their federal counterparts and (2) were intentionally not preempted by those federal regimes. In many cases, robust compliance regimes have been built around federal and state parallel requirements, creating entrenched privacy expectations, privacy tools, and compliance practices for organizations (“lock in”).
  2. Laws of General Applicability. All 50 states have laws barring unfair and deceptive commercial and trade practices (UDAP), as well as generally applicable laws against fraud, unconscionable contracts, and other consumer protections. In cases where violations involve the misuse of personal information, such claims could be inadvertently preempted by a national privacy law.
  3. State Common Law. Privacy claims have been evolving in US common law over the last hundred years, and claims vary from state to state. A federal privacy law might preempt (or not preempt) claims brought under theories of negligence, breach of contract, product liability, invasions of privacy, or other “privacy torts.”
  4. State Laws Governing State Government Activities. In general, states retain the right to regulate their own government entities, and a commercial baseline privacy law is unlikely to affect such state privacy laws. These include, for example, state “mini Privacy Acts” applying to state government agencies’ collection of records, state privacy laws applicable to public schools and school districts, and state regulations involving law enforcement — such as government facial recognition bans.
  5. Comprehensive or Non-Sectoral State Laws. Lawmakers considering the extent of federal preemption should take extra care to consider the effect on different aspects of omnibus or comprehensive consumer privacy laws, such as the California Consumer Privacy Act (CCPA), the Colorado Privacy Act, and the Virginia Consumer Data Protection Act. In addition, however, there are a number of other state privacy laws that can be considered “non-sectoral” because they apply broadly to businesses that collect or use personal information. These include, for example, CalOPPA (requiring commercial privacy policies), the California “Shine the Light” law (requiring disclosures from companies that share personal information for direct marketing), data breach notification laws, and data disposal laws.
  6. Longstanding, Narrowly Applicable State Privacy Laws. Many states have relatively long-standing privacy statutes on the books that govern narrow use cases, such as: state laws governing library records, social media password laws, mugshot laws, anti-paparazzi laws, state laws governing audio surveillance between private parties, and laws governing digital assets of decedents. In many cases, such laws could be expressly preserved or incorporated into a federal law.
  7. Emerging Sectoral and Future-Looking Privacy Laws. New state laws have emerged in recent years in response to novel concerns, including for: biometric data; drones; connected and autonomous vehicles; the Internet of Things; data broker registration; and disclosure of intimate images. This trend is likely to continue, particularly in the absence of a federal law.

Congressional intent is the “ultimate touchstone” of preemption. Lawmakers should consider long-term effects on current and future state laws, including how they will be impacted by a preemption provision, as well as how they might be expressly preserved through a Savings Clause. In order to help build consensus, lawmakers should work with stakeholders and experts in the numerous categories of laws discussed above, to consider how they might be impacted by federal preemption.

ICYMI: Read the first blog in this series, Preemption in US Privacy Laws.

Manipulative Design: Defining Areas of Focus for Consumer Privacy

In consumer privacy, the phrase “dark patterns” is everywhere. Emerging from a wide range of technical and academic literature, it now appears in at least two US privacy laws: the California Privacy Rights Act and the Colorado Privacy Act (which, if signed by the Governor, will come into effect in 2025).

Under both laws, companies will be prohibited from using “dark patterns,” or “user interface[s] designed or manipulated with the substantial effect of subverting or impairing user autonomy, decision‐making, or choice,” to obtain user consent in certain situations–for example, for the collection of sensitive data.

When organizations give individuals choices, some forms of manipulation have long been barred by consumer protection laws, with the Federal Trade Commission and state Attorneys General prohibiting companies from deceiving or coercing consumers into taking actions they did not intend or striking bargains they did not want. But consumer protection law does not typically prohibit organizations from persuading consumers to make a particular choice. And it is often unclear where the lines fall between cajoling, persuading, pressuring, nagging, annoying, or bullying consumers. The California and Colorado laws seek to do more than merely bar deceptive practices; they prohibit design that “subverts or impairs user autonomy.”

What does it mean to subvert user autonomy, if a design does not already run afoul of traditional consumer protections law? Just as in the physical world, the design of digital platforms and services always influences behavior — what to pay attention to, what to read and in what order, how much time to spend, what to buy, and so on. To paraphrase Harry Brignull (credited with coining the term), not everything “annoying” can be a dark pattern. Some examples of dark patterns are both clear and harmful, such as a design that tricks users into making recurring payments, or a service that offers a “free trial” and then makes it difficult or impossible to cancel. In other cases, the presence of “nudging” may be clear, but harms may be less clear, such as in beta-testing what color shades are most effective at encouraging sales. Still others fall in a legal grey area: for example, is it ever appropriate for a company to repeatedly “nag” users to make a choice that benefits the company, with little or no accompanying benefit to the user?

In Fall 2021, Future of Privacy Forum will host a series of workshops with technical, academic, and legal experts to help define clear areas of focus for consumer privacy, and guidance for policymakers and legislators. These workshops will feature experts on manipulative design in at least three contexts of consumer privacy: (1) Youth & Education; (2) Online Advertising and US Law; and (3) GDPR and European Law. 

As lawmakers address this issue, we identify at least four distinct areas of concern:

This week at the first edition of the annual Dublin Privacy Symposium, FPF will join other experts to discuss principles for transparency and trust. The design of user interfaces for digital products and services pervades modern life and directly impacts the choices people make with respect to sharing their personal information. 

Preemption in US Federal Privacy Laws

This post is the first in an ongoing series on federal preemption and enforcement in United States federal privacy legislation.

As federal lawmakers consider proposals for a federal baseline privacy law in the United States, one of the most complex challenges is federal preemption, or the extent to which a federal law should nullify the state laws on the books and the emerging laws addressing the collection and use of personal information.

Many recognize the benefits to businesses and consumers of establishing uniform national standards for the collection, transfer, and sale of commercial personal information, if those standards are strong and flexible enough to meet new challenges as they arise. Such standards will require, at least to an extent, replacing individual state efforts. At the same time, however, there are hundreds of state privacy laws on the books. Many of these laws have a uniquely local character, such as laws governing student records, medical information, and library records. Preemption only becomes more complicated as additional states join recent leaders such as Virginia, California, and Colorado in passing omnibus data privacy laws that apply to data collected across borders from websites, apps, and other digital services.

What can we learn from how existing federal privacy laws have addressed preemption? As a starting point, FPF staff surveyed twelve (12) federal sectoral privacy laws passed between 1968 and 2003 and examined the extent to which they preempt similar state privacy laws. A comprehensive consumer privacy law would almost certainly preserve most of these sectoral laws and their state counterparts. They provide useful insight into how Congress has addressed federal preemption in the past.

In surveying these 12 federal privacy laws, we observe a few notable features, and offer some thoughts (below) on what factors have influenced Congressional decisions about preemption:

Factors Influencing Preemption Decisions

Given the case-by-case variability described above and in the Discussion Draft, what determines when and how Congress has chosen to preempt state and local regulations that overlap or supplement federal privacy laws?

Congress is a political body, and politics surely play a role. But our analysis suggests that Congress pursues an overall goal of balancing individual rights with practical business compliance. We suggest that Congress pursues that goal by weighing several factors aside from political considerations. These likely include, for example: (1) the existence of national consensus on harmful business practices (versus expected regional variation in what is considered harmful); (2) the comprehensiveness or prescriptive nature of the law; (3) the national versus localized nature of business practices; and (4) the localized nature of data (which is sometimes, but not always, related to the identifiability of data).

For example, a key difference between the federal commercial emailing law, CAN-SPAM (very preemptive), and the federal commercial telemarketing law, the Telephone Consumer Protection Act (not preemptive except with respect to certain inter-state standards) is the relative ease with which the personal data being regulated can be localized, or have its geographic location readily inferred. Email addresses, despite being personal information, give no indication of the owner’s location, while residential phone numbers were straightforward to relate to a particular state when the law was drafted in 1991. 

Thus, while differing state telemarketing laws can present compliance costs for marketing companies operating across state lines, such laws do not create impractical barriers to compliance. In addition, telemarketing represents an issue on which there may be much more regional variation than national consensus on appropriate local business practices: for example, some states ban political calls, some ban calls during certain times of day, and some maintain additional do-not-call registries (such as Texas’s do-not-call registry for businesses, to allow them to avoid commercial calls from electricity providers). 

As a contrasting example, the Fair Credit Reporting Act (largely preemptive) represented, in 1970, a strong national consensus on appropriate business practices applicable primarily to three dominant credit bureaus in the United States, all operating effectively nationwide. At the same time, credit reports are used in relatively localized business practices and contain identifiable information from which location can usually be inferred (e.g., from home addresses). As a result, business compliance with different state standards may not have been impossible, but it was perhaps, ultimately, not desirable given the comprehensive and prescriptive nature of the law and the relative national consensus on appropriate norms for credit bureaus.

These factors are just some of the myriad considerations that we suggest may influence preemption decisions for a federal privacy law, if the goal is to balance consumer privacy interests against concerns about practical business compliance. Further research might include, for example, a review of Congressional histories, or learning from other, non-privacy federal laws. We welcome feedback on the Discussion Draft.

Read the next blog in this series: Navigating Preemption through the Lens of Existing State Privacy Laws.