California’s SB 53: The First Frontier AI Law, Explained
California Enacts First Frontier AI Law as New York Weighs Its Own
On September 29, Governor Newsom (D) signed SB 53, the “Transparency in Frontier Artificial Intelligence Act (TFAIA),” authored by Sen. Scott Wiener (D). The law makes California the first state to enact a statute specifically targeting frontier artificial intelligence (AI) safety and transparency. SB 53 requires advanced AI developers to publish governance frameworks and transparency reports, establishes mechanisms for reporting critical safety incidents, extends whistleblower protections, and calls for the development of a public computing cluster.
In his signing statement, Newsom described SB 53 as a blueprint for other states, arguing for California’s role in shaping “well-balanced AI policies beyond our borders—especially in the absence of a comprehensive federal framework.” Supporters view the bill as a critical first step toward promoting transparency and reducing serious safety risks, while critics argue its requirements could be unduly burdensome for AI developers, potentially inhibiting innovation. These debates come as New York considers its own frontier AI bill, A 6953, the Responsible AI Safety and Education (RAISE) Act, which could become the second major state law in this space, and as Congress introduces its own frontier model legislation.
Understanding SB 53’s requirements, how it evolved from earlier proposals, and how it compares to New York’s RAISE Act is critical for anticipating where U.S. policy on frontier model safety may be headed.
SB 53: Scope, Requirements, and Enforcement
SB 53 regulates developers of the most advanced and resource-intensive AI models by imposing disclosure and transparency obligations, including the adoption of written governance frameworks and the reporting of safety incidents. To target this select set of developers, the law carefully defines “frontier model,” “frontier developer,” and “large frontier developer.”
Scope
The law regulates frontier developers, defined as entities that “trained or initiated the training” of high-compute frontier models. It separately defines large frontier developers as frontier developers with annual gross revenues above $500 million, concentrating the heaviest compliance obligations on the largest AI companies. SB 53 applies to frontier models, defined as foundation models trained using more than 10^26 computational operations, a threshold that counts cumulative compute from both initial training and any subsequent fine-tuning or modifications.
Notably, SB 53 is focused on preventing catastrophic risk, defined as a foreseeable and material risk that a frontier model could:
- Contribute to the death or serious injury of 50 or more people or cause at least $1 billion in damages;
- Provide expert-level assistance in creating or releasing a chemical, biological, radiological, or nuclear weapon;
- Engage in criminal conduct or a cyberattack without meaningful human intervention; or
- Evade the control of its developer or user.
Other proposed bills, like New York’s RAISE Act, set a narrower liability standard: harm must be a “probable consequence” of the developer’s activities, the developer’s actions must be a “substantial factor,” and the harm could not have been “reasonably prevented.” SB 53 lacks these qualifiers, applying a broader standard for when risk triggers compliance.
Requirements
SB 53 establishes four major obligations, dividing some responsibilities between all frontier developers and the narrower subset of “large frontier developers.”
- Frontier AI Framework: Large frontier developers must publish an annual frontier AI framework describing how catastrophic risks are identified, mitigated, and governed. Among other items, the framework must document governance structures, mitigation processes, cybersecurity practices, and the developer’s alignment with national and international standards. The framework must also assess catastrophic risk arising from internal use of models, expanding the scope of compliance obligations beyond externally deployed systems. Frontier developers may redact portions of the framework to protect trade secrets, cybersecurity, and national security.
- Transparency Report: Before deploying a frontier model, all frontier developers (not only “large” developers) must publish a transparency report. Reports must include model details (intended uses, modalities, restrictions), as well as summaries of catastrophic risk assessments, their results, and the role of any third-party evaluators.
- Disclosure of Safety Incidents: Frontier developers are required to report critical safety incidents to the Office of Emergency Services (OES). OES must also establish a mechanism for the public to report critical safety incidents. Covered incidents include unauthorized tampering with a model that causes serious harm, the materialization of a catastrophic risk, loss of control of a frontier model that results in injury or major property damage, or a model deliberately evading developer safeguards. Frontier developers are required to report any critical safety incident within 15 days of discovery, shortened to 24 hours if the incident poses imminent danger of death or serious injury.
- Whistleblower Protections: SB 53 prohibits retaliation against employees or contractors who report activities they believe pose a catastrophic risk. Employers must provide notice of employee rights and maintain anonymous reporting channels.
Enforcement
SB 53 authorizes the Attorney General (AG) to bring civil actions for violations, with penalties of up to $1 million per violation, scaled to the severity of the offense. The law also empowers the California Department of Technology to recommend updates to key statutory definitions, such as “frontier model” or “large frontier developer,” to reflect technological change. Any updates must be adopted by the Legislature, but the mechanism builds in definitional adaptability. Notably, earlier drafts of SB 53 would have given the AG direct rulemaking authority over these definitions; the final version removes that authority in favor of Department of Technology recommendations to the Legislature.
From SB 1047 to SB 53: How the Bill Narrowed
SB 53 is a pared-down successor to last year’s SB 1047, which Governor Newsom vetoed. In his veto statement, Newsom called for an approach to frontier model regulation “informed by an empirical trajectory analysis of AI systems and capabilities,” leading to the creation of the Joint California Policy Working Group on AI Frontier Models. The group released a report offering regulatory best practices, which emphasized whistleblower protections and alignment with leading safety practices.
When Sen. Wiener reintroduced frontier AI legislation in 2025 as SB 53, the bill passed without many of SB 1047’s most controversial provisions, including:
- Mandating full shutdown capabilities (or “kill switch”) for covered models, criticized as technically infeasible and a barrier to open-source development;
- Imposing pre-training requirements, obligating developers to implement safety protocols, cybersecurity protections, and full shutdown capabilities before beginning initial training of a covered model;
- Requiring annual audits by independent third-party assessors;
- Requiring a strict 72-hour reporting window for safety incidents; and
- Tying steep penalties to the cost of training compute, up to 10% for a first violation and 30% for subsequent violations.
By contrast, SB 53 focuses on deployment-stage obligations, lengthens reporting timelines to 15 days, caps penalties at $1 million per violation, and streamlines the information required in transparency reports and frameworks (removing, for example, testing disclosure requirements). These changes produced a narrower bill with reduced obligations for frontier developers, satisfying some but not all critics.
Comparison with New York’s RAISE Act
With SB 53 now law, attention turns to New York and the Responsible AI Safety and Education (RAISE) Act, which is pending on Governor Hochul’s desk. Like SB 53, the RAISE Act was inspired by last year’s SB 1047 and seeks to regulate frontier AI models. Hochul has until January 1, 2026, to sign the bill, veto it, or issue chapter amendments, a process that allows the governor to negotiate substantial changes with the legislature at the time of signature. With California’s law on the books, a central question is whether RAISE will be amended to align more closely with SB 53.
To help stakeholders track these dynamics, we’ve created a side-by-side comparison of the two bills. Broadly, SB 53 is more detailed in content—requiring frameworks, transparency reports, and whistleblower protections—while RAISE is stricter in enforcement, with higher penalties and liability provisions. Both bills share core elements, such as compute thresholds, catastrophic risk definitions, and mandatory frameworks/protocols. Key differences include:
- Strict Liability: RAISE prohibits deployment of frontier models that pose an “unreasonable risk of critical harm,” a standard absent from SB 53.
- Scope: SB 53 uses broader definitions of catastrophic risk and distinguishes between “frontier developers” and “large frontier developers,” which are those with $500M+ annual revenue. RAISE applies only to “large developers,” defined as those spending $100M+ on compute, which could bring a distinct group of companies into scope when compared to SB 53.
- Requirements: SB 53 imposes additional obligations, including employee whistleblower protections and public transparency reports. Where requirements overlap, such as safety incident reporting, SB 53 allows public reporting and offers a longer timeline (15 days), while RAISE sets a 72-hour window and uses stricter qualifiers.
- Enforcement: SB 53 caps penalties at $1 million per violation and empowers the California Department of Technology to recommend definitional updates. RAISE authorizes significantly higher penalties (up to $10 million for a first violation and $30 million for subsequent ones).
The bills highlight how state legislators are experimenting with comparable yet distinct approaches to frontier AI regulation: California emphasizes transparency and employee protections, while New York emphasizes stronger penalties and liability standards.
Conclusion
SB 53 makes California the first state to enact legislation focused on frontier AI, establishing transparency, disclosure, and governance requirements for high-compute model developers. Compared to last year’s broader SB 1047, the new law takes a narrower approach, scaling back several of the compliance obligations.
Attention now turns to New York, where the RAISE Act awaits action by the governor. Whether signed as written or amended through the chapter amendment process to reflect aspects of SB 53, the bill could become a second state-level framework for frontier AI. Other states, including Michigan, have introduced proposals of their own, illustrating the potential for a patchwork of requirements across jurisdictions.
As detailed in FPF’s recent report, State of State AI: Legislative Approaches to AI in 2025, this year’s legislative landscape highlights ongoing state experimentation in AI governance. With SB 53 enacted and the RAISE Act under consideration, state-level activity is moving from proposal to implementation, raising questions about how divergent approaches may shape compliance expectations and interact with future federal efforts.