Understanding the New Wave of Chatbot Legislation: California SB 243 and Beyond
As more states consider how to govern AI-powered chatbots, California's SB 243 joins New York's S-3008C as one of the first enacted laws governing companion chatbots, and it stands out as the first to include protections tailored to minors. Signed by Governor Gavin Newsom this month, the law focuses on transparency and youth safety, requiring "companion chatbot" operators to adopt new disclosure and risk-mitigation measures. Notably, SB 243 creates a private right of action for injured individuals, a provision that has drawn attention for its potential to generate significant damages claims.
The law's passage comes amid a broader wave of state activity on chatbot legislation. As detailed in the Future of Privacy Forum's State of State AI Report, 2025 was the first year multiple states introduced or enacted bills explicitly targeting chatbots, including Utah, New York, California, and Maine.1 This attention reflects both the growing integration of chatbots into daily life – for instance, tools that personalize learning, travel, or writing – and increasing calls for transparency and user protection.2
While SB 243 is distinct in its focus on youth safeguards, it reflects broader state-level efforts to define standards for responsible chatbot deployment. As additional legislatures weigh similar proposals, understanding how these frameworks differ in scope, obligations, and enforcement will be key to interpreting the next phase of chatbot governance in 2026.
Why Chatbots Captured Lawmakers' Attention
A series of high-profile incidents and lawsuits in recent months has drawn sustained attention to AI chatbots, particularly companion chatbots: systems designed to simulate empathetic, human-like conversations and adapt to users' emotional needs. Unlike informational or customer-service bots, these chatbots often have names and personalities and sustain ongoing exchanges that can resemble real relationships. Some reports claim these chatbots are especially popular among teens.
Early research underscores the complex role that these systems can play in human lives. A Common Sense Media survey found that nearly three in four teens (72%) have used an AI companion, with many reporting frequent or emotionally oriented interactions. However, like many technologies, their impact is complex and evolving. A Stanford study found that 3% of young adults using a companion chatbot credited it with temporarily halting suicidal thoughts, and other studies have suggested that chatbots can help alleviate the U.S.'s loneliness epidemic. Yet several cases have also emerged in which chatbots allegedly encouraged children and teens to self-harm or die by suicide, leading to litigation and public outcry.
Parents involved in these cases have testified before state legislatures and in U.S. Senate hearings, prompting investigations by the Federal Trade Commission and Congress into AI chatbot policies.
This growing scrutiny has shaped how Congress and the states are legislating in 2025, with most proposals focusing on transparency, safety protocols, and youth protection. At the same time, these frameworks have prompted familiar policy debates around innovation, data privacy, and liability.
SB 243 Explained
According to the bill's author, Senator Padilla (D), California's SB 243 was enacted in response to these growing concerns. The law requires companion chatbot operators to provide certain disclosures, maintain safety protocols, and adopt additional safeguards when a user is known to be a minor.
While California is not the first state to regulate companion chatbots—New York’s S-3008C, enacted earlier this year, includes similar transparency and safety provisions—SB 243 is the first to establish youth-specific protections. Its requirements reflect a more targeted approach, combining user disclosure, crisis-intervention protocols, and minor-focused safeguards within a single framework. As one of the first laws to address youth interaction with companion chatbots, SB 243 may shape how other states craft their own measures, even as policymakers experiment with differing approaches.
A. Scope
California's SB 243 defines a "companion chatbot" as an AI system that provides "adaptive, human-like responses to user inputs," is capable of meeting a "user's social needs," exhibits "anthropomorphic features," and is able to "sustain a relationship across multiple interactions." Unlike New York's S-3008C, enacted earlier in 2025, SB 243 does not reference a chatbot's ability to retain user history or initiate unsolicited prompts, resulting in a slightly broader definition focused on foreseeable use in emotionally oriented contexts.
The law excludes several categories of systems from this definition, including chatbots used solely for customer service, internal research, or operational purposes; bots embedded in video games that cannot discuss mental health, self-harm, or sexually explicit content; and stand-alone consumer devices such as voice-activated assistants. It also defines an “operator” as any person making a companion chatbot platform available to users in California.
Even with these carveouts, however, compliance determinations may hinge on subjective interpretations; for example, whether a chatbot’s repeated customer interactions could still be viewed as “sustained.” As a result, entities may face ongoing uncertainty in determining which products fall within scope, particularly for more general-purpose conversational technologies.
B. Requirements
SB 243 imposes disclosure, safety protocol, and minor-specific safeguard requirements, and it creates a private right of action that allows individuals to seek damages of at least $1,000, along with injunctive relief and attorney's fees.
- Disclosure: The law requires operators to provide a “clear and conspicuous” notice that the chatbot is AI in cases where a reasonable person could be “misled to believe” they are interacting with a human. It also mandates a disclaimer that companion chatbots may not be suitable for minors.
- Safety Protocols: SB 243 requires operators to maintain procedures to prevent the generation of content related to suicidal ideation or self-harm, and to implement mechanisms to direct users to crisis helplines. These protocols must be publicly available on the operator's website and annually reported to the California Office of Suicide Prevention, including data on the number of crisis referrals but no personal user information.
- Safeguards for Minors: When an operator knows a user is a minor, the law also requires operators to disclose to the user that they are interacting with AI, provide a notification every three hours during sustained interactions reminding the user to take a break, and take reasonable steps to prevent chatbots from suggesting or engaging in sexually explicit content.
However, these requirements raise familiar concerns regarding data privacy, compliance, and youth safety. To identify and respond to risks of suicidal ideation, operators may need to monitor and analyze user interactions, potentially processing and retaining sensitive mental health information, which could create tension with existing privacy obligations. Similarly, what it means for an operator to "know" a user is a minor may depend on what information an operator collects about a user and how SB 243 interacts with other recent California laws, such as AB 1043, which establishes an age assurance framework.
Additionally, the law directs operators to use "evidence-based methods" for detecting suicidal ideation, though it does not specify what qualifies as "evidence-based" or how "suicidal ideation" is defined. This language introduces practical ambiguity, as developers must determine which conversational indicators trigger reporting and which methodologies satisfy the "evidence-based" requirement.
How SB 243 Fits into the Broader Landscape
SB 243 reflects many of the same themes found across state chatbot legislation introduced in 2025. Two central regulatory approaches emerged this year—identity disclosure through user notification and safety protocols to mitigate harm—both of which are incorporated into California’s framework. Across states, lawmakers have emphasized transparency, particularly in emotionally sensitive contexts, to ensure users understand when they are engaging with an AI system rather than a human.
A. Identity Disclosure and User Notification
Six of the seven key chatbot bills in 2025 included a user disclosure requirement, mandating that operators clearly notify users when they are interacting with AI rather than a human. While all require disclosures to be “clear and conspicuous,” states vary in how prescriptive they are about timing and format.
New York’s S-3008C (enacted) and S 5668 (proposed) require disclosure at the start of each chatbot interaction and at least once every three hours during ongoing conversations. California’s SB 243 includes a similar three-hour notification rule, but only when the operator knows the user is a minor. In contrast, Maine’s LD 1727 (enacted) simply requires disclosure “in a clear and conspicuous manner” without specifying frequency, while Utah’s SB 452 (enacted) ties disclosure to user engagement, requiring it before chatbot features are accessed or when a user asks whether AI is being used.
Lawmakers are increasingly treating disclosure as a baseline governance mechanism for AI, as noted in FPF’s State of State AI Report. From a compliance perspective, disclosure standards provide tangible obligations for developers to operationalize. From a consumer protection standpoint, legislators view them as tools to promote transparency, prevent deception, and curb excessive engagement by reminding users, especially minors, that they are interacting with an AI system.
B. Safety Protocols and Risk Mitigation
Alongside disclosure requirements, several 2025 chatbot bills, including California’s SB 243, introduce safety protocol obligations aimed at reducing risks of self-harm or related harms. Similar to SB 243, New York’s S-3008C (enacted) makes it unlawful to offer AI companions without taking “reasonable efforts” to detect and address self-harm, while New York’s S 5668 (proposed) would have expanded the scope to include physical or financial harm to others.
These provisions are intended to operate as accountability mechanisms, requiring operators to proactively identify and mitigate risks associated with companion chatbots. However, as discussed above, requiring chatbot operators to make interventions in response to perceived mental health crises or other potential harms increases the likelihood that operators will need to retain chat logs and make potentially sensitive inferences about users. Retention and processing of user data in this way may be inconsistent with users’ expressed privacy preferences and potentially conflict with operators’ obligations under privacy laws.
Notably, safety protocol requirements appeared only in companion chatbot legislation, not in broader chatbot bills such as Maine’s LD 1727 (enacted), reflecting lawmakers’ heightened concern about self-harm and mental health risks linked to ongoing litigation and public scrutiny of companion chatbots.
C. Other Trends and Influences
California's SB 243 also reflects other trends within 2025 chatbot legislation. For example, these bills generally did not include requirements to undertake impact assessments or audits. An earlier draft of SB 243 included a third-party audit requirement for companion chatbot operators, but the provision was removed before enactment, suggesting that state lawmakers continue to favor disclosure and protocols over more prescriptive oversight mechanisms.
Governor Newsom’s signature on SB 243 also coincided with his veto of California’s AB 1064, a more restrictive companion chatbot bill for minors. AB 1064 would have prohibited making companion chatbots available to minors unless they were “not foreseeably capable” of encouraging self-harm or other high-risk behaviors. In his veto message, Newsom cautioned that the measure’s prohibitions were overly broad and could “unintentionally lead to a total ban” on such products, while signaling interest in building on SB 243’s transparency-based framework for future legislation.
As of the close of 2025 legislative sessions, no state had enacted a ban on chatbot availability for minors or adults. SB 243’s emphasis on transparency and safety protocols, rather than outright restrictions, may preview future legislative debates.
Looking Ahead: What to Expect in 2026
The surge of chatbot legislation in 2025 offers a strong signal of where lawmakers may turn next. Companion chatbots are likely to remain central, particularly around youth safety and mental health, with future proposals potentially building on California’s SB 243 by adding youth-specific provisions or linking chatbot oversight to age assurance and data protection frameworks. A key question for 2026 is whether states will continue to favor these disclosure-based frameworks or begin shifting toward use restrictions. While Governor Newsom’s veto of AB 1064 suggested lawmakers may prioritize transparency and safety standards over outright bans, the newly introduced federal “Guidelines for User Age-Verification and Responsible Dialogue (GUARD) Act,” which includes both disclosure requirements and a ban on AI companions for minors, may reopen that debate.
The scope of regulation could also expand as states begin to explore sector-specific chatbots, particularly in mental health, where new legislation in New York and Massachusetts would prohibit AI chatbots for therapeutic use. Other areas such as education and employment, already the focus of broader AI legislation, may also draw attention as lawmakers consider how conversational AI shapes consumer and workplace experiences. Taken together, these developments suggest that 2026 may be the “year of the chatbots,” with states prepared to test new approaches to transparency, safety, and youth protection while continuing to define responsible chatbot governance.
1. Other bills enacted in 2025 include provisions that would cover chatbots within their broader scope of AI technologies; however, these figures reflect legislation that focused narrowly on chatbots.
2. See FPF's Issue Brief: Daniel Berrick and Stacey Grey, "Concepts in AI Governance: Personality vs. Personalization," September 2025, https://fpf.org/wp-content/uploads/2025/09/Concepts-in-AI-Governance_-Personality-vs.-Personalization.pdf