Public Comments Surface Fault Lines in Expectations for New California Privacy Law
In November 2020, California voters adopted the California Privacy Rights Act (“CPRA”) ballot initiative, which was developed to strengthen and expand upon the underlying California Consumer Privacy Act (“CCPA”) that the state legislature adopted in 2018. While the CPRA establishes significant new consumer rights and imposes responsible data processing obligations on covered businesses, many questions about the scope and practical operation of these requirements remain unresolved. A recently released set of public comments on the CPRA rulemaking process brings some of these contested issues into sharper focus.
The CPRA delegates both rulemaking and enforcement authority to a brand new, privacy-specific body, the California Privacy Protection Agency (“the Agency”). Following the appointment of a governing board, the Agency took its first public-facing steps toward rulemaking in September 2021, issuing an invitation for comment on eight topics focused on new and undecided issues introduced by the CPRA. Last week, the Agency published approximately 70 submissions that it received during its 45-day comment period.
A variety of individuals and organizations filed comments, including trade associations and companies representing diverse industry sectors, consumer rights groups, and academics. One noteworthy filing is from Californians for Consumer Privacy, a nonprofit organization helmed by Alastair Mactaggart. Given the group’s role in drafting the CPRA ballot initiative and driving the public advocacy campaign that led to its adoption, its comments are indicative of the intent behind some of the ambiguous and contested provisions of the CPRA.
Across hundreds of pages of comments, stakeholders displayed sharp disagreements about what the CPRA does and should require on multiple consequential issues. These contested topics for CPRA rulemaking include (1) how businesses should conduct and submit privacy and security risk assessments, (2) how automated decisionmaking technologies should be regulated, (3) whether the CPRA requires the recognition of user-enabled opt-out signals, (4) the scope of the Agency’s audit authority, and (5) how the Agency should further define and regulate manipulative design interfaces known as “dark patterns.”
1. Privacy and Security Risk Assessments
The CPRA brings California into greater alignment with other global and domestic privacy frameworks by requiring organizations whose data processing poses a “significant risk” to consumer privacy and security to conduct risk assessments and submit them to the Agency on a “regular basis.” However, the CPRA leaves many details to Agency regulations, including the specific activities that trigger the requirement to conduct an assessment, the scope of and procedures for completing assessments, and the cadence for submitting assessments to the Agency. Comments revealed a variety of preferences for how and when businesses should be required to conduct and submit assessments.
Filings from industry stakeholders frequently raised concerns that the adoption of overly formalistic procedures and reporting requirements for risk assessments would create unnecessary burdens for both businesses and the Agency. Multiple industry groups suggested that assessments should be submitted to the Agency only upon request (consistent with the Virginia and Colorado privacy laws), or, if mandatory, once every three years. Civil society organizations typically sought to impose more expansive assessment requirements on covered businesses, with one coalition arguing that assessments should be conducted in advance of any change in business practices that “might alter the resulting risks to individuals’ privacy,” and be resubmitted to the Agency at six-month intervals.
Californians for Consumer Privacy encouraged the Agency to adopt a graduated approach, with requirements to conduct risk assessments initially falling only on large processors of personal information. The group further suggested variable timing requirements for submitting those assessments, based on the “intensity” of personal information and sensitive personal information processing.
2. Automated Decisionmaking Technology
The CPRA directs the Agency to develop regulations “governing access and opt-out rights” with respect to the use of automated decisionmaking technology (“ADT”), including “profiling.” The Agency sought comments on multiple aspects of these rights, including the activities that should constitute regulated ADT, what businesses should do to provide consumers with “meaningful information about the logic” of automated decisionmaking processes, and the scope of consumers’ opt-out rights with regard to ADT. Industry and civil society comments differed on how to define the scope of ADT and on whether the CPRA creates a standalone consumer right to opt out of ADT beyond the CPRA’s rights to opt out of the sale and sharing of personal information and to limit the use of sensitive personal information.
Numerous commenters, including the Future of Privacy Forum, recommended that the Agency limit the scope of regulated ADT to decisions that produce “legal or similarly significant effects” for consumers, noting a similar standard under the GDPR. Legal or similarly significant effects would include, for example, the automatic refusal of an online credit application; decisions made by online job recruitment platforms; decisions that affect other financial, credit, employment, health, or education opportunities; and likely, in certain contexts, behavioral advertising.
Several industry groups, such as the California Grocers Association, further sought to ensure that the regulations will govern only “fully” automated processes that produce “final” decisions. Supporting this analysis, many commenters pointed to a universe of clearly low-risk, socially beneficial tools, such as calculators, spreadsheets, GPS systems, and spell-checkers, that could be swept up by overly broad regulation. Civil society groups including EFF and EPIC largely took a different approach, arguing that, given emerging concerns about algorithmic harm and bias, the Agency’s regulations should define ADT more broadly to include, for example, “systems that provide recommendations, support a decision, or contextualize information.”
Notably, Californians for Consumer Privacy argued that the Agency’s regulations should “specify that consumers have the right to opt-out of this automated decisionmaking” (referencing the online advertising ecosystem), and that the Agency should subsequently expand the right to opt out of ADT to “other areas of online and business activity.” In stark contrast to this view, several industry groups argued that the Agency cannot create a standalone consumer right to opt out of ADT because such a right is not provided for in the CPRA itself. Two prominent trade associations, CTIA and TechNet, further asserted that such a delegation of rulemaking authority would be “unconstitutional.”
3. Opt-Out Preference Signals
One of the most high-profile debates in the current consumer privacy landscape concerns the adoption of “user-enabled global privacy controls,” a potentially broad array of technical signals first recognized under the CCPA’s regulations. In July 2021, a California Attorney General FAQ page was updated to assert that one such tool, a browser signal named the Global Privacy Control (“GPC”), “must be honored by covered businesses as a valid consumer request to stop the sale of personal information.” The public comments revealed stark differences in statutory interpretation as to whether the CPRA requires businesses to honor this class of controls.
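For context on how such a signal operates at a technical level, the GPC specification transmits the consumer’s preference as an HTTP request header, Sec-GPC: 1, and exposes it to page scripts as the navigator.globalPrivacyControl property. The TypeScript sketch below shows one way a business’s server might detect the header; the recordOptOut function is a hypothetical placeholder for whatever process a business would use to act on an opt-out of the sale or sharing of personal information, not a statement of what the law requires.

```typescript
// Minimal sketch (not a compliance recommendation): detecting the GPC signal
// on the server. Per the GPC specification, participating browsers send the
// request header "Sec-GPC: 1"; Node.js lowercases incoming header names.
import { createServer, IncomingMessage } from "node:http";

function hasGpcSignal(headers: Record<string, string | string[] | undefined>): boolean {
  const value = headers["sec-gpc"];
  return value === "1" || (Array.isArray(value) && value.includes("1"));
}

// Hypothetical placeholder: a real business would persist the opt-out of
// "sale" or "sharing" against the consumer's account, cookie, or device.
function recordOptOut(req: IncomingMessage): void {
  console.log("GPC opt-out received from", req.socket.remoteAddress);
}

createServer((req, res) => {
  if (hasGpcSignal(req.headers)) {
    recordOptOut(req); // treat the signal as a valid opt-out request
  }
  res.end("ok");
}).listen(8080);

// Browsers that implement GPC also expose the preference to page scripts:
//   if (navigator.globalPrivacyControl) { /* suppress sale/sharing for this user */ }
```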
Industry groups including the ESA, the California Retailers Association, and the California Chamber of Commerce largely adopted the interpretation that the text of the CPRA makes business recognition of opt-out preference signals optional, based on the reading that CPRA sections 1798.135(b)(1) and (3) offer multiple paths to complying with the exercise of user rights. One exception came from Mozilla, which recently implemented the GPC in the Firefox browser; Mozilla noted that the enforceability of preference signals under the CCPA “remains ambiguous” and encouraged the Agency to expressly require that companies comply with the GPC under the CPRA.
On the other hand, civil society organizations tended to argue that the CPRA expressly mandates the recognition of global signals, pointing to section 1798.135(e), which concerns the exercise of consumer rights (including by opt-out signals) carried out by other authorized persons. Consumer Reports argued that recognition of these signals is required by the “plain language” of this provision and also noted that this interpretation would be consistent with the CPRA’s stated purpose of strengthening the CCPA. Californians for Consumer Privacy also took a firm stance, arguing that “there is no reading of the statute that would allow a business to [refuse] to honor a global opt-out signal enabled by a consumer” and criticized “misinformation we have seen from the advertising and technologies industries” on the scope of CPRA opt-out rights.
In the Future of Privacy Forum’s comments, we noted that regardless of whether the recognition of global opt-out signals is mandated or voluntary, the Agency has an important opportunity to set clear standards for the adoption of signals that will comply with the CPRA, GDPR, or the Colorado Privacy Act (which will require recognition of certain preference signals by 2025). In this context, the Agency should work with expert stakeholders to address many unresolved operational issues, such as how signals should be interpreted if they conflict with other consumer choices, and establish procedures for the approval of new signals over time.
4. Agency Audit Authority
The CPRA empowers the Agency to conduct audits of businesses to ensure compliance with the Act. Again, many of the details for the breadth and conduct of such audits are left to rulemaking, and the Agency requested expansive feedback on issues including the scope of its audit authority, the processes that audits should follow, and the criteria the Agency should use in selecting businesses to audit.
Californians for Consumer Privacy stated that the Agency Auditor’s scope should “only be limited by whether a request is reasonably linked to a potential violation of the CPRA.” The group further argued that the Agency should leave the determination of its auditing criteria to its Executive Director and Auditor rather than through rulemaking, so as not to alert businesses to these factors.
In contrast, industry groups suggested multiple approaches to clearly defining audit authority and criteria. Popular recommendations include requirements that the Agency (1) have evidence of a violation of a substantive provision of the CPRA that risks significant harm to consumers before initiating an audit, (2) provide 90 days’ notice to a business prior to an audit, (3) impose guardrails to ensure that audits are separate and independent from the Agency’s investigation and enforcement teams, and (4) create “fair and equal treatment” rules for determining which companies are audited.
5. “Dark Patterns”
Finally, the Agency requested feedback on a number of definitions used by the CPRA, including that of the manipulative design interfaces known as “dark patterns.” The CPRA defines a “dark pattern” as “a user interface designed or manipulated with the substantial effect of subverting or impairing user autonomy, decision-making, or choice, as further defined by regulation.” The Act contains relatively limited prohibitions on their use, stating that the use of “dark patterns” invalidates user “consent” and directing Agency rulemaking to ensure that web pages that permit users to opt back in to the sale or use of their information under the CPRA do not utilize “dark patterns.” Nevertheless, the concept of “dark patterns” has received increasing regulatory attention in recent years and has been flagged by Agency Board Chairperson Urban as a potential subject for discussion at a forthcoming series of “informational hearings.”
Industry groups such as the Internet Association raised concerns about the definition of “dark patterns” under the CPRA, arguing that essentially any interface, including one that uses privacy-protective default settings, could be interpreted as impairing user choice and therefore be considered a “dark pattern” under the Act. Several of these organizations requested that the definition of “dark patterns” be narrowed to focus on design practices that amount to consumer fraud and encouraged forthcoming regulations to provide clear examples of such conduct.
In contrast, a group of Stanford academics led by Professor Jen King suggested regulation on this subject beyond the context of consent interfaces and specifically requested an expanded definition of “dark patterns” to encompass novel interfaces such as voice-activated systems. Similarly, despite raising concerns about the suitability of the term “dark patterns,” Common Sense Media suggested defining manipulative designs “as broadly as possible” to include features that encourage children to share personal information.
Conclusion
The Agency’s request for comments has revealed significant divergences among stakeholders, in both policy preferences and statutory interpretation, over the appropriate scope and application of CPRA requirements. The forthcoming resolution of these contested issues through Agency rulemaking will likely carry significant implications for the exercise of consumer rights under the CPRA, as well as for the practical compliance obligations of covered businesses. Interested parties will hope to learn more about the ultimate scope and operation of the CPRA in early 2022, when the Agency intends to publish its initial set of proposed regulations and statement of reasons.