FPF Highlights Intersection of AI, Privacy, and Civil Rights in Response to California’s Proposed Employment Regulations
On July 18, the Future of Privacy Forum submitted comments to the California Civil Rights Council (Council) in response to its proposed modifications to regulations under the state's Fair Employment and Housing Act (FEHA) regarding automated decision systems (ADS). As one of the first state agencies in the U.S. to modernize employment regulations to account for automated decision systems, the Council is likely to influence how other states, regulators, and policymakers assess how existing civil rights and data privacy laws apply to artificial intelligence.
To ensure these regulations provide clarity and constructive guidance for organizations and individuals alike within existing laws and frameworks, including California's consumer privacy laws, FPF offered four recommendations to the Council:
1. Definition Alignment: The Council's definition of "automated decision system" should align with similar regulations at the state and federal levels to facilitate greater clarity and compliance.
2. Role-Specific Responsibilities: The Council should create legal standards for when a developer of an AI system becomes an agent or employment agency, accounting for role-specific responsibilities and capabilities in the AI system lifecycle.
3. Data Retention and Privacy: Data retention and record-keeping requirements should be reasonable and align with California consumers' rights to data privacy and data minimization.
4. Additional AI Governance Measures: The Council should conduct additional inquiries about the use of ADS and existing civil rights laws, including assessing whether automated systems are fit for purpose.
Each recommendation is summarized briefly below. For more information, you can read FPF's full comments to the Council here.
Definition Alignment
With at least four California state governing bodies (the Council, the California Privacy Protection Agency, the California Government Operations Agency, and the California Legislature) considering regulatory action on automated decision-making technology, consistent terminology across regulations strengthens AI governance and prevents conflicts that could arise from divergent definitions. To ensure regulatory efforts target technologies that materially affect individuals' rights, FPF recommended aligning with the definitions in Government Code § 11546.45.51, the CPPA Draft Regulations, and Assembly Bill 2930, which require that an ADS play a "substantial" role in the decision-making process.
| Law / Proposal | Definition |
| --- | --- |
| Civil Rights Council Proposed Text | A computational process that screens, evaluates, categorizes, recommends, or otherwise makes a decision or facilitates human decisionmaking that impacts applicants or employees. |
| California Privacy Protection Agency Draft Regulations (March 2024) | Any technology that processes personal information and uses computation to execute a decision, replace human decision-making, or substantially facilitate human decisionmaking. |
| Government Code § 11546.45.51 | "High-risk automated decision system" means an automated decision system that is used to assist or replace human discretionary decisions that have a legal or similarly significant effect, including decisions that materially impact access to, or approval for, housing or accommodations, education, employment, credit, health care, and criminal justice. |
| Assembly Bill 2930 | A system or service that uses artificial intelligence and has been specifically developed to, or specifically modified to, make, or be a substantial factor in making, consequential decisions. |
Role-Specific Responsibilities
ADS governance structures and corresponding accountability mechanisms should account for developers' and deployers' role-specific responsibilities. As explained in FPF's Best Practices for AI and Workplace Assessment Technologies, "Developers and Deployers each have important roles in ensuring that Individuals understand when — and to what extent — AI tools have Consequential Impacts…[and p]articular disclosures should be provided by the entity that is best positioned to develop the content of the disclosure and communicate it to Individuals." Establishing a legal standard in the proposed modifications would help clarify the degree of involvement, control, and influence required for an AI developer to become accountable for discriminatory outcomes, in light of the role- and capability-specific responsibilities of developers and deployers and their relationship to one another.
Data Retention and Privacy
To minimize the risk that individuals' personal data is misused or breached and to uphold California consumers' privacy rights, FPF recommends that the Council align and clarify the proposed regulations' record-keeping and data retention requirements with existing privacy rights and obligations under the California Consumer Privacy Act (CCPA), the California Privacy Rights Act (CPRA), and regulations issued by the CPPA. As proposed, the modifications' retention requirements for employers and developers may not only violate California's data minimization principles but also raise questions about whether they are meant to override, or yield to, existing California privacy rights to delete such data or opt out of automated decisionmaking technology.
Additional AI Governance Measures
Finally, ADS should not perpetuate discrimination or exacerbate harm, but updates to existing employment regulations may not be enough to mitigate all forms of discriminatory conduct or provide sufficient guidance. We recommend that the Council make additional inquiries to understand the use of ADS and the impact of existing civil rights laws. To prevent discriminatory effects and broader harm, AI tools must be validated and tested to ensure they solve the problems they are designed to address. FPF acknowledges that discrimination can arise not only from faulty or inaccurate systems but also from systems that are simply not fit for their intended purpose. Accordingly, the Council should consider existing AI governance measures, such as "fit for purpose" tests, that further support civil rights protections and account for the limitations of AI.