Setting the Stage: Connecticut Senate Bill 2 Lays the Groundwork for Responsible AI in the States
NEW: Read Tatiana Rice’s op-ed in the CT Mirror on SB2
Last night, April 24, the Connecticut Senate passed SB 2, marking a significant step toward comprehensive AI regulation in the United States. The bill's risk-based approach has emerged as a leading state legislative framework for AI regulation. If enacted, SB 2 would stand as the first piece of legislation in the United States governing the private-sector development and deployment of AI on a scale comparable to the EU AI Act. The law would become effective February 1, 2026.
FPF has released a new Two-Pager Fact Sheet that summarizes core components of CT SB 2 pertaining to private-sector regulation.
“Connecticut Senate Bill 2 is a groundbreaking step towards comprehensive AI regulation that is already emerging as a foundational framework for AI governance across the United States. The legislation aims to strike an important balance of protecting individuals from harms arising from AI use, including creating necessary safeguards against algorithmic discrimination, while promoting a risk-based approach that encourages the valuable and ethical uses of AI. We look forward to continuing to work with Sen. Maroney and other policymakers in the future to build upon and refine this framework, ensuring it reflects best practices and is responsive to the dynamic AI landscape.”
–Tatiana Rice, Deputy Director for U.S. Legislation
At a high level, here’s our summary of the bill’s most significant private-sector provisions:
- Scope: The bill’s private-sector provisions primarily regulate developers and deployers of high-risk AI systems, i.e., those used to make, or that are a substantial factor in making, consequential decisions regarding education, employment, financial or lending services, healthcare, or other important life opportunities. There are small-business exceptions for deployers in certain circumstances. The bill also requires any person or entity deploying an artificial intelligence system that interacts with individuals to disclose to those individuals that they are engaging with an AI system and to watermark AI-generated content.
- Developer and Deployer Obligations: Both developers and deployers of high-risk AI systems would be subject to a duty of reasonable care to avoid algorithmic discrimination and would be required to issue a public statement regarding the use or sale of high-risk AI systems. Developers would also need to provide certain disclosures and documentation to deployers, including information regarding intended use, the data used to train the system, and risk mitigation measures. Deployers would be required to maintain a risk management policy, conduct impact assessments on high-risk AI systems, and ensure consumers are provided their relevant rights.
- Individual Rights: Individuals must be provided notice before a high-risk AI system is used to make, or to be a substantial factor in making, a consequential decision. If an adverse consequential decision is made, individuals have a right to an explanation of how the high-risk AI system reached its conclusion, including the personal data used to render the decision; a right to correct that personal data; and a right to appeal the decision for human review. If a deployer is also a controller under the Connecticut Data Privacy Act (CTDPA), it must also inform individuals of their rights under the CTDPA, including the right to opt out of profiling in furtherance of solely automated decisions.
- Enforcement: The Attorney General would have sole authority to enforce the provisions of the bill, though the bill explicitly does not supersede the existing authority of other state agencies to enforce anti-discrimination laws, including the Connecticut Commission on Human Rights and Opportunities (CHRO). However, the Attorney General may not bring an action for claims already being brought by the CHRO for the same conduct. Developers and deployers would have a 60-day right to cure any alleged violations until June 30, 2026.
- Compliance and Reciprocity: After the bill is enacted, entities would have almost two years to come into compliance with the Act. If an entity is otherwise in compliance with the National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework or another nationally or internationally recognized risk management framework, it may assert that compliance as an affirmative defense.
Beyond its private-sector regulations, SB 2 also establishes a new task force to develop recommendations regarding the regulation of generative and general-purpose AI, and contains provisions addressing AI-generated non-consensual intimate images, deepfakes in political communications, workforce development, and public-private partnerships, among other topics.
FPF will continue to track the bill’s developments in the coming weeks. Follow FPF on Twitter/X for the latest updates.