The RAISE Act vs. SB 53: A Tale of Two Frontier AI Laws
What the enactment of New York’s RAISE Act reveals compared to California’s SB 53, the nation’s first frontier AI law
On December 19, New York Governor Hochul (D) signed the Responsible AI Safety and Education (RAISE) Act, ending months of uncertainty after the bill passed the legislature in June and making New York the second state to enact a statute specifically focused on frontier artificial intelligence (AI) safety and transparency.1 Sponsored by Assemblymember Bores (D) and Senator Gounardes (D), the law closely follows California’s enactment of SB 53 in late September, requiring advanced AI developers to publish governance frameworks and transparency reports, and establishing mechanisms for reporting critical safety incidents. As they moved through their respective legislatures, the RAISE Act and SB 53 shared a focus on transparency and catastrophic risk mitigation but diverged in scope, structure, and enforcement, raising concerns about a compliance patchwork for nationally operating developers.
The New York Governor’s chapter amendments ultimately narrowed those differences, revising the final version of the RAISE Act to more closely align with California’s SB 53, with conforming changes expected to be formally adopted by the Legislature in January. Even so, the two laws are not identical, and the remaining distinctions may be notable for frontier developers navigating compliance in both the Golden State and the Empire State.
Understanding the RAISE Act, and how it aligns with and diverges from California’s SB 53, offers a useful lens into how states are approaching frontier AI safety and transparency and where policymaking may be headed in 2026.
At a high level, the two statutes now share largely identical scope and core requirements. Still, several distinctions remain, including:
- Scope: Though the scope of regulated technologies and entities is largely identical, the RAISE Act includes explicit carveouts for universities engaged in research and, importantly, contains a territorial limitation (applying only to models developed or operated in whole or in part in New York) that is not present in SB 53. As a result, should either law face constitutional scrutiny, the RAISE Act may be more likely to survive a Dormant Commerce Clause challenge.
- Requirements: SB 53 includes employee whistleblower protections, which are absent from the RAISE Act. By contrast, the RAISE Act establishes a frontier developer disclosure program requiring additional information, such as ownership structure, that SB 53 does not mandate.
- Safety Incident Reporting: SB 53 allows a longer reporting window (15 days), while the RAISE Act requires reporting within 72 hours of a developer forming a reasonable belief that an incident occurred.
- Rulemaking Authority: SB 53 empowers the California Department of Technology to recommend definitional updates to the statute and to align with national and international standards. By contrast, the RAISE Act grants direct rulemaking authority to the Department of Financial Services (DFS), including the ability to consider additional reporting or publication requirements.
- Liability and Enforcement: The RAISE Act authorizes higher penalties (up to $1 million for a first violation and $3 million for subsequent ones, compared to SB 53’s cap of $1 million per violation).
RAISE Act: Scope and Requirements
Despite these distinctions, the RAISE Act largely mirrors California’s SB 53 in how it defines covered models, developers, and risks, resulting in a substantially similar compliance scope across the two states. The sections below summarize the RAISE Act’s scope and key requirements.
Scope:
The law regulates frontier developers, defined as entities that “trained or initiated the training” of high-compute frontier models, or foundation models trained with more than 10^26 computational operations. It separately defines large frontier developers as those with annual gross revenues above $500 million, concentrating the most significant compliance obligations on the largest AI companies.
Like California SB 53, the RAISE Act is focused on preventing catastrophic risk, defined as a foreseeable and material risk that a frontier model could:
- Contribute to the death or serious injury of 50 or more people or cause at least $1 billion in damages;
- Provide expert-level assistance in creating or releasing a chemical, biological, radiological, or nuclear weapon;
- Engage in criminal conduct or a cyberattack without meaningful human intervention; or
- Evade the control of its developer or user.
Requirements:
The RAISE Act establishes multiple compliance requirements, with certain requirements applying to all frontier developers and additional duties reserved for large frontier developers.
- Frontier AI Framework: Large frontier developers must annually publish a Frontier AI Framework describing their governance structures, risk assessment thresholds, mitigation strategies, cybersecurity practices, and alignment with national or international standards. The framework must also address catastrophic risk arising from internal model use. Limited redactions are permitted to protect trade secrets, cybersecurity, or national security.
- Transparency Report: Before deploying a frontier model, all frontier developers (not only “large” developers) must publish a transparency report detailing the model’s intended uses, modalities, and restrictions, as well as summaries of catastrophic risk assessments, their results, and the role of any third-party evaluators.
- Safety Incident Reporting: Frontier developers must report critical safety incidents to the DFS within 72 hours of forming a reasonable belief that an incident occurred, shortened to 24 hours where there is an imminent danger of death or serious injury. The law also requires a mechanism for public reporting of incidents.
- Frontier Developer Disclosure: Large frontier developers may not operate a frontier model in New York without filing a disclosure statement with DFS. Disclosures must be updated at least every two years, upon ownership transfer or following material changes, and must identify ownership structure, business addresses, and designated points of contact. Large developers are assessed pro rata fees to support administration of the program and DFS may impose penalties of up to $1,000 per day for noncompliance.
- Rulemaking: The Department of Financial Services is granted direct rulemaking authority, including the ability to consider additional reporting or publication requirements to advance the statute’s safety and transparency objectives.
Enforcement: The RAISE Act authorizes the Attorney General to bring civil actions for violations, with penalties up to $1 million for a first violation and up to $3 million for subsequent violations, scaled to the severity of the offense. The statute expressly does not create a private right of action. Unlike California’s SB 53, it also clarifies that a large frontier developer may assert that alleged harm or damage was caused by another person, entity, or contributing factor.
Before the Amendments: How the RAISE Act Changed
Before Governor Hochul’s chapter amendments, the RAISE Act would have diverged much more sharply from California’s SB 53. The earlier iteration of the bill that passed out of the Legislature took a more expansive approach, including higher penalties and stricter liability thresholds, raising the prospect of meaningfully different compliance regimes on opposite coasts.
Most notably, the original RAISE Act applied only to “large developers,” defined by annual compute spending above $100 million, rather than distinguishing between frontier developers and large frontier developers as SB 53 does. That threshold would have captured a different (and potentially broader) set of companies than the enacted framework, which now relies on a $500 million revenue benchmark aligned with California’s approach. The bill also originally framed its focus around “critical harm,” rather than the “catastrophic risk” standard now shared with California’s SB 53, and paired that definition with heightened liability requirements, including that harm be a probable consequence, that the developer’s conduct be a substantial factor, and that the harm could not have been reasonably prevented. Those qualifiers were ultimately removed in favor of the “catastrophic risk” standard used in SB 53, including utilizing the same 50-person harm threshold.
The RAISE Act’s requirements evolved as well. Earlier versions lacked both the transparency report obligation (now shared with SB 53) and the frontier developer disclosure program (a new New York-specific addition). While the original RAISE Act did include an obligation to maintain a “safety and security protocol,” that requirement was less prescriptive about governance and mitigation practices than the now-enacted “Frontier AI Framework.”
Perhaps the most significant change was the removal of a deployment prohibition. As passed by the Legislature, the RAISE Act would have barred deployment of models posing an unreasonable risk of critical harm, a restriction not found in SB 53. Chapter amendments left the final law focused on transparency and reporting, rather than direct deployment restrictions. Penalties were similarly scaled back, falling from a maximum of $10 million for a first violation and $30 million for subsequent violations to $1 million and $3 million, respectively.
Looking Ahead: What Comes Next in 2026?
With chapter amendments expected to be formally adopted in the coming weeks, the RAISE Act will take effect after California’s SB 53, which becomes operative on January 1, 2026. As a result, SB 53 will be the first real test of how a frontier AI statute operates in practice, with New York following shortly thereafter.
That rollout comes amid renewed uncertainty over the balance between state and federal AI policymaking. A recent White House executive order, Ensuring a National Policy Framework for Artificial Intelligence, seeks to apply federal pressure against state AI laws deemed excessive, including through an AI Litigation Task Force and funding restrictions tied to state enforcement of certain AI laws. While the practical impact of the EO remains unclear, it adds complexity for states and developers preparing for compliance.
Both SB 53 and the RAISE Act include severability clauses, which preserve the remainder of each statute if individual provisions are invalidated. While standard in complex legislation, those clauses may become more consequential if either law is drawn into these broader federal-state tensions. At the same time, the EO directs the Administration to engage Congress on a federal AI framework, raising the possibility that SB 53 and the RAISE Act could serve as reference points for future federal legislation. With other states, including Michigan, already introducing similar bills, it should become clearer in 2026 whether SB 53 and the RAISE Act function as models for broader adoption or face legal challenge.
- Passed by the Legislature as A 6453A and to be enacted through chapter amendments reflected in A 9449.