FPF Weighs in on Automated Decisionmaking, Purpose Limitation, and Global Opt-Outs for California Stakeholder Sessions
This week, Future of Privacy Forum (FPF) policy experts testified in California's public Stakeholder Sessions, offering independent policy recommendations to the California Privacy Protection Agency (CPPA). The Agency heard from a variety of speakers and members of the public on a broad range of issues relevant to forthcoming rulemaking under the California Privacy Rights Act (CPRA).
Specifically, FPF weighed in on automated decision-making (ADM), purpose limitation, and global opt-out preference signals. As a non-profit dedicated to advancing privacy leadership and scholarship, FPF typically engages with regulators when we identify opportunities to support meaningful privacy protections and principled business practices with respect to emerging and socially beneficial technologies. In California, the fifth-largest economy in the world, the newly established CPPA is tasked with setting standards that will impact data flows across the United States and globally for years to come.
Automated Decision-making (ADM). ADM was the subject of discussion on Wednesday, May 4th. Although the CPRA does not provide specific statutory rights around ADM technologies, the Agency is tasked with rulemaking to elaborate on how the law’s individual access and opt-out rights should be interpreted with respect to profiling and ADM.
FPF’s Policy Counsel Tatiana Rice raised the following issues for the Agency on automated decision-making:
- Consumers’ rights of access for ADM should center on systems that directly and meaningfully impact individuals’ lives, such as those affecting financial opportunities, housing, or employment. A “legal or similarly significant effects” standard has the benefit of capturing these high-risk use cases while encouraging interoperability with global frameworks, such as existing guidance and case law under Article 22 of the EU General Data Protection Regulation (GDPR).
- Explainability is a crucial principle for developing trustworthy automated systems, and information about ADM should be meaningful and understandable to the average consumer. As a starting point, the Agency should draw from the National Institute of Standards and Technology’s (NIST) Four Principles of Explainable Artificial Intelligence, which hold that explainable systems should (1) provide an explanation; (2) be understandable to their intended end users; (3) be accurate; and (4) operate within their knowledge limits, i.e., the conditions for which they were designed.
- All consumer rights of access should be inclusive and reflective of California’s diverse population, including those who are non-English speaking, are differently abled, or lack consistent access to broadband.
Purpose Limitation. The CPRA requires businesses to disclose the purposes for which the personal information they collect will be used, and prohibits them from collecting additional categories of personal information, or using the personal information collected, for additional purposes that are “incompatible with the disclosed purpose for which the personal information was collected,” without giving additional notice. Cal. Civ. Code § 1798.100(a)(1). As a general business obligation, this provision reflects the principle of “purpose limitation” in the Fair Information Practices (FIPs), and was discussed on Thursday, May 5th.
FPF’s Director of Legislative Research & Analysis Stacey Gray raised the following issues for the Agency on purpose limitation:
- Purpose limitation is a fundamental FIPs principle that serves to protect individual and societal privacy interests without relying solely on individual consent management – as such, we encourage the Agency to ensure that it is respected and to provide robust guidance on its application.
- “Incompatible” secondary uses of information should be interpreted strictly, to include uses not reasonably expected by the average person – for example, invasive profiling unrelated to providing the product or service requested by the consumer; training high-risk algorithmic systems such as facial recognition; or voluntarily sharing data with law enforcement.
- “Compatible” secondary uses of information should include scientific, historical, or archival research in the public interest, when subjected to appropriate privacy and security safeguards.
Opt-out preference signals. Finally, the CPRA envisions a new class of “opt-out preference signals,” sent by browser plug-ins and similar tools to convey an individual’s request to opt out of certain data processing. Such ‘global’ signals are an emerging feature of several U.S. state privacy laws, and open technical and policy questions remain about how to ensure that they succeed in lowering the burdens of individual privacy self-management (an illustrative sketch of how one such signal works follows the comments below).
FPF’s Senior Counsel Keir Lamont provided the following comments to the Agency on global opt-out preference signals on Thursday, May 5th:
- Rulemaking should tackle the primary practical consideration for opt-out preference signals: how to resolve conflicts between different signals, or between a signal and separate, business-specific privacy settings.
- The Agency should clarify the extent to which opt-out preference signals can be expected to, and should, apply to separate sets of personal data collected from different sources and in different contexts.
- The Agency should engage with regulators in other states, including Colorado and Connecticut, to establish a multistakeholder process to approve qualifying preference signals as they are developed and refined over time.
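For context, one widely deployed example of such a signal is the Global Privacy Control (GPC), which participating browsers and extensions transmit automatically on a user’s behalf, both as a “Sec-GPC: 1” request header and as a property visible to page scripts. The TypeScript sketch below, offered for illustration only and not drawn from FPF’s testimony, shows how a website might detect and honor the signal; the recordOptOut helper is a hypothetical stand-in for a site’s own compliance logic.

```typescript
// Illustrative sketch: detecting the Global Privacy Control (GPC) signal
// in the browser. Per the GPC specification, participating browsers
// expose the signal as navigator.globalPrivacyControl.
function honorOptOutSignal(): void {
  // The property is not yet part of TypeScript's built-in Navigator
  // type, so it is read defensively here.
  const gpcEnabled = (navigator as any).globalPrivacyControl === true;

  if (gpcEnabled) {
    // A covered business would treat the signal as a valid request to
    // opt out of the sale or sharing of personal information.
    recordOptOut();
  }
}

// Hypothetical stand-in for a site's own opt-out handling logic.
function recordOptOut(): void {
  console.log("GPC signal detected; honoring opt-out preference.");
}
```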
Following this week’s public Stakeholder Sessions, the Agency is expected to publish draft regulations as soon as Summer or Fall 2022, which will then be open for public comment. Although the timeline could slip, the Agency’s goal is to finalize regulations before the CPRA’s effective date of January 1, 2023.