A First for AI: A Close Look at the Colorado AI Act
Colorado made history on May 17, 2024, when Governor Polis signed into law the Colorado Artificial Intelligence Act (“CAIA”), the first law in the United States to comprehensively regulate the development and deployment of high-risk artificial intelligence (“AI”) systems. The law will come into effect on February 1, 2026, preceding the March 2026 effective date of (most of) the European Union’s AI Act.
To help inform public understanding of the law, the Future of Privacy Forum released a Policy Brief summarizing and analyzing key CAIA elements, as well as identifying significant observations about the law.
In the Brief, FPF provides the following analysis and observations:
1. Broader Potential Scope of Regulated Entities: Unlike state data privacy laws, which typically apply only to covered entities that meet certain thresholds, the CAIA applies to any person or entity that is a developer or deployer of a high-risk AI system. Under the Act, a high-risk AI system is one that makes, or is a substantial factor in making, a consequential decision: any legal or material decision affecting an individual’s access to critical life opportunities such as education, employment, insurance, healthcare, and more. Additionally, one section of the law applies to any entity offering or deploying any consumer-facing AI system. Therefore, despite a detailed list of exclusions, including a narrow exemption for small deployers, the law has broad applicability across a variety of businesses and sectors in Colorado.
2. Role-Specific Obligations: The CAIA assigns role-specific obligations to developers and deployers, akin to controllers and processors under data privacy regimes. Deployers, who directly interact with consumers and control how the AI system is used, take on more responsibilities than developers, including the following:
- Maintaining a Risk Management Policy & Program that governs their deployment of high-risk AI systems. It must be updated and reviewed regularly, specify the principles, processes, and personnel used to identify and mitigate algorithmic discrimination, and “be reasonable” in comparison to recognized frameworks such as the NIST Artificial Intelligence Risk Management Framework (NIST AI RMF).
- Conducting Impact Assessments annually, which must include the system’s purpose and intended use cases, any known or reasonably foreseeable risks of algorithmic discrimination, risk mitigation steps taken, categories of data processed for system use, the system’s performance metrics, transparency measures, and a description of post-deployment monitoring.
- Notifying Subjects about the use of high-risk AI systems, disclosing information about the system’s purpose and the data used to make decisions, and providing the relevant consumer rights (detailed below).
- Publicly Disclosing on their websites the types of high-risk AI systems currently deployed, and how known or reasonably foreseeable risks of algorithmic discrimination are being managed.
Developers are primarily tasked with providing documentation to help deployers fulfill their duties. This includes high-level summaries of training data types, system limitations, purposes, performance evaluations, and risk mitigation measures for algorithmic discrimination. Additionally, developers must publicly disclose on their websites summaries of high-risk AI systems sold or shared and detail how they manage risks of algorithmic discrimination.
Both developers and deployers must notify the Attorney General of any discovered instances of algorithmic discrimination.
3. Duty of Care to Mitigate Algorithmic Discrimination: Developers and deployers are also subject to a duty to use “reasonable care” to protect consumers from “any known or reasonably foreseeable risks of algorithmic discrimination from use of the high-risk AI system.” In the Brief, FPF notes that the CAIA’s algorithmic discrimination provisions appear to cover both intentional discrimination and disparate impact. Developers and deployers enjoy a rebuttable presumption of having used reasonable care under this provision if they satisfy their role-specific obligations. In contrast with a blanket prohibition on algorithmic discrimination, as seen in other legislative proposals, the duty of care approach likely means that enforcers of the CAIA will assess developer and deployer actions under a proportionality test, weighing the relevant facts, circumstances, and industry standards to determine whether they exercised reasonable care to prevent algorithmic discrimination.
4. Novel Consumer Rights: Like many proposals to regulate AI, the CAIA provides consumers with the right to be notified when a high-risk AI system is used to make decisions about them and to receive a statement disclosing the purpose of the system and the nature of its consequential decision. Because Colorado consumers already hold data privacy rights under their state privacy law, deployers must also inform consumers of their right under the Colorado Privacy Act to opt out of profiling in furtherance of solely automated decisions.
The CAIA also creates novel consumer rights where a deployer used a high-risk AI system to reach a consequential decision that is adverse to an individual. In those scenarios, the deployer must provide the individual with an explanation of the reasons for the decision, an opportunity to correct any inaccurate personal data the system processed in making the decision, and an opportunity to appeal the decision for human review. However, a deployer need not provide the right to appeal where doing so is not technically feasible or is not in the best interest of the individual, such as where delay would threaten the individual’s health or safety.
5. Attorney General Authority: Though the CAIA does not create a private right of action, it grants the Colorado Attorney General significant authority to enforce the law and implement necessary regulations. If the Attorney General brings an enforcement action, a developer, deployer, or other person may assert an affirmative defense based on their compliance with the NIST AI RMF, another recognized national or international risk management framework, or any other risk management framework designated by the Attorney General. The Attorney General also has permissive rulemaking authority in a variety of other areas, such as documentation requirements, requirements for developer and deployer notices and disclosures, and the content and requirements of deployer impact assessments.
Lastly, the enactment of the CAIA was informed by extensive stakeholder engagement efforts led by Colorado Senate Majority Leader Rodriguez and Connecticut Senator Maroney. Even so, FPF raises several questions and considerations in the Policy Brief about the implementation and enforcement of the CAIA, such as:
- Metrics: Because the CAIA does not mandate the use of particular metrics to identify and measure algorithmic discrimination, developers and deployers will have flexibility to choose how to measure and test for bias. Are there metrics or testing that may be considered unreasonable or not pass muster?
- Technical Feasibility: When would it not be “technically feasible” to provide a consumer a right to appeal an adverse decision? Are burden or lack of resources appropriate considerations? Is there, or should there be, a consumer remedy when a deployer inappropriately denies a consumer’s right to appeal?
- Enforcement: How will the law interact with existing civil rights statutes? Although the CAIA does not include a private right of action, can an individual use information disclosed under the law as a basis to exercise their existing civil rights? Conversely, if an action is brought against an entity for algorithmic discrimination under existing civil rights law, could the defendant use its compliance with the CAIA’s standards, or information produced under the CAIA, as a defense?
If the state legislature’s AI taskforce or the Attorney General does not address these questions in the next session, many of these issues may only be resolved through litigation.
Nonetheless, given concerns raised by the Governor, we may expect to see changes to the law that could alter the scope, substance, and allocation of responsibility. For now, though, the CAIA stands as it is currently written, and remains the first-in-the-nation law to regulate the AI industry, protect consumers, and mitigate the risks of algorithmic discrimination. FPF will continue to closely monitor updates and developments as they progress.
This blog post is for informational purposes only and should not be used or construed as legal advice.