The Chatbot Moment: Mapping the Emerging 2026 U.S. Chatbot Legislative Landscape
Special thanks to Rafal Fryc, U.S. Legislation Intern, for his research and development of the resources referenced.
If there is one area of AI policy that lawmakers seem particularly eager to regulate in 2026, it’s chatbots. As state legislative sessions ramp up across the country, policymakers at both the state and federal levels have introduced dozens of bills aimed at chatbots, from so-called “AI companions” to “mental health chatbots.” The Future of Privacy Forum (FPF) is currently tracking 98 chatbot-specific bills across 34 states, as well as three federal proposals.
Yet despite the shared concerns driving these proposals (often tied to safety risks, youth protections, and several high-profile incidents involving chatbots and self-harm), the bills themselves look very different from one another. Definitions of “chatbot” vary widely across legislation. The result is the early contours of a potential regulatory patchwork, in which different tools may fall within the scope of different state laws and compliance obligations, like disclosures or safety protocols, could vary widely across jurisdictions. As states including Oregon and Washington near enactment of new chatbot legislation, it remains to be seen how closely 2026 frameworks will ultimately align.
To help make sense of this rapidly evolving landscape, FPF developed two one-pager resources summarizing key trends in chatbot legislation. The first highlights some of the definitional patterns beginning to appear, identifying eleven legislative frameworks used to define chatbots. The second maps the six most common regulatory provisions appearing across chatbot bills.
With these resources, we explore two central questions shaping the emerging chatbot policy debate: how lawmakers are defining chatbots and what regulatory approaches are beginning to emerge across states.
Chatbot Legislation in 2025
Chatbots began attracting legislative attention in 2025. Last year, five states enacted chatbot-specific legislation: California (SB 243), New York (S-3008C), New Hampshire (HB 143), Utah (HB 452), and Maine (LD 1727). Other laws enacted in 2025, such as Illinois (HB 1806) and Nevada (AB 406), do not define chatbots directly but regulate the use of AI systems, including chatbots, in the delivery of licensed mental or behavioral health services.1
Even with this activity in 2025, the scale of legislative interest in 2026 represents a significant expansion. In both volume and policy focus, chatbot legislation is emerging as one of the most active areas of AI policymaking. Interest is also broadly bipartisan: 53 percent of the chatbot bills tracked by FPF were introduced by Democrats and 46 percent by Republicans.
Defining Chatbots: Why It’s Harder Than It Looks
One of the central challenges lawmakers face is a deceptively simple question: what exactly counts as a chatbot?
In practice, chatbots appear in a wide range of contexts, from customer service assistants and tutoring tools to wellness apps, voice assistants, and AI companions designed for social interaction. Small shifts in how legislation defines these systems can dramatically impact which technologies fall inside or outside a bill’s scope.
Three Primary Terms Scope Chatbot Legislation
As detailed in the FPF one-pager resources, three primary terms are emerging to scope chatbot legislation in 2026: “chatbots,” “companion chatbots,” and “mental health chatbots.” Each term attempts to capture a different category of system and a different type of risk. For example, mental health chatbot bills typically focus on preventing AI systems from providing therapy without licensed professional oversight. Meanwhile, roughly half of the chatbot-related bills introduced this year focus specifically on companion chatbots, reflecting concern about systems designed to simulate interpersonal relationships with users, especially minors.
At the same time, states are experimenting with a wide range of definitional approaches. Some definitions focus on technology, such as limiting scope to systems that use generative AI (NE LB 939). Others define chatbots based on capabilities or interaction behaviors, like whether the tool sustains dialogue about personal matters (MI SB 760, TN HB 1455). Still others define chatbots based on deployment context, such as whether a system is publicly accessible or marketed as a companion (U.S. S 3062, HI SB 2788). Legislators are not converging on a single definition but rather exploring multiple models simultaneously.
Three Models for Companion Chatbot Definitions
In the case of companion chatbots, three definitional approaches are beginning to emerge as particularly influential:
- Capability-based definitions, modeled on California SB 243 (enacted), focus on whether a system is capable of simulating social or relational interaction with users.
- Behavior-based definitions, modeled on New York S-3008C (enacted), define companion chatbots based on how the system behaves during interactions, for example, whether it retains user information across sessions or initiates emotionally oriented dialogue.
- Intent-based definitions, reflected in the federal GUARD Act (proposed), focus on whether a system is designed or marketed to simulate companionship or emotional relationships.
Why Definitions Matter for Regulatory Scope
These definitional differences matter because they determine who must comply with a law and who does not. For example, a definition that focuses on conversational capability may capture general-purpose assistants, tutoring tools, or wellness applications even when companionship is not their primary function.
To narrow scope, many bills, like California’s SB 243 (enacted last year), include carveouts for tools such as customer service systems, research assistants, or workplace productivity tools. While these exclusions may reduce the likelihood that certain tools fall in scope, they also introduce interpretive questions. Chatbots often serve multiple purposes simultaneously: a chatbot might act as a customer support tool but also answer general informational questions or engage in broader dialogue. In these cases, it may be unclear when a system is “used for customer service” and when it becomes a more general conversational chatbot. Many proposals leave the issue unresolved, for example, by not specifying whether exclusions apply only when a system’s primary purpose falls within those categories.
Themes Within Chatbot Legislation
Chatbot bills vary widely not only in how they define chatbots, but also in the substantive regulatory requirements they propose. Still, several common policy themes are beginning to emerge. Across proposals introduced in 2026, six broad regulatory themes appear: transparency; age assurance and minors’ access controls; content safety and harm prevention; professional licensure and regulated services; data protection; and liability and enforcement.
These provisions reflect a notable shift from the first generation of chatbot legislation. Early laws such as California SB 243 and New York S-3008C focused primarily on disclosing to users that they were interacting with AI and on basic safety protocols, such as providing crisis hotline resources when users express suicidal ideation. (FPF previously analyzed SB 243 in an earlier blog.)
In 2026, however, lawmakers appear to be treating chatbot legislation as a broader regulatory vehicle. Bills now frequently address issues beyond disclosure and companion AI safety, including restrictions on engagement optimization, data governance provisions, and even regulatory sandbox programs (CT SB 86). In some cases, this expansion has prompted debate about whether chatbot bills may implicate speech concerns and raise potential First Amendment questions. For instance, many chatbot proposals would require recurring disclosures during conversations or mandate reporting about specific categories of user speech (e.g., statements of suicidal ideation), raising questions about compelled speech and editorial discretion. As detailed in the chatbot provisions chart, the six most common regulatory provisions appearing across proposed chatbot legislation include:
- Transparency: Nearly every chatbot bill includes a non-human disclosure requirement, mandating that operators inform users they are interacting with an AI system rather than a human. Most proposals require “clear and conspicuous” disclosure, though timing and format vary. Some require disclosure only at the start of an interaction or at three-hour intervals (PA SB 1090), while others require persistent reminders every thirty minutes or every hour during a conversation (SC HB 5138). A smaller subset goes further by requiring transparency reporting, such as public disclosures about safety protocols or incidents (WA HB 2225, UT HB 438).
- Age Assurance and Minors’ Access Controls: Youth safety has become a central focus of chatbot legislation in 2026. Several proposals require operators to determine whether a user is a minor and impose additional safeguards. Approaches vary widely: some bills require age verification (GA SB 540, SD SB 168), others restrict or prohibit minors’ access to certain content (IA SF 2417), and some require parental consent or monitoring tools (AZ HB 2311). Notably, none of the chatbot laws enacted in 2025 (and few bills advancing in 2026) impose robust, standalone age verification requirements.
- Content Safety and Harm Prevention: Almost every chatbot bill advancing in 2026 incorporates harm detection and response protocols. Similar to California and New York’s laws, many require operators to provide crisis resources, such as suicide hotline referrals, when detecting indicators of self-harm. A growing number also address anthropomorphic or manipulative interactions, including restrictions on emotional deception or features designed to foster dependency, like rewarding prolonged interaction (HI HB 2502, OR SB 1546).
- Professional Licensure and Regulated Services: Another category of provisions addresses the use of chatbots to deliver services traditionally regulated through professional licensure. Several bills prohibit AI systems from diagnosing, treating, or representing themselves as licensed professionals, particularly in mental or behavioral healthcare (VA HB 669, TN HB 1470). Others allow AI tools to assist licensed professionals but require human oversight of, or transparency around, AI-generated recommendations or treatment plans (VA HB 668, RI HB 7538).
- Data Protection: Chatbot legislation is also beginning to incorporate data governance requirements, particularly around conversational logs and sensitive user information. These proposals include restrictions on collecting, sharing, or selling chatbot input data, along with requirements for data minimization or deletion (MD SB 827). Some bills also restrict the use of minors’ data for AI training or advertising (UT HB 438).
- Liability and Enforcement: Most proposals grant enforcement authority to state attorneys general and establish civil penalties for violations. Some also introduce private rights of action (OR SB 1546), allowing individuals to bring lawsuits directly, as seen in California SB 243. A smaller number go further, establishing non-disclaimable liability for certain harms involving minors or creating criminal penalties for specific chatbot behaviors (TN SB 1493).
Litigation Is Shaping the Legislative Agenda
One notable feature of the 2026 chatbot landscape is how closely these proposed provisions mirror themes emerging from recent chatbot litigation and enforcement, such as Commonwealth of Kentucky v. Character Technologies Inc. and Garcia v. Character Technologies Inc. et al.
Across lawsuits and investigations,2 several recurring concerns appear:
- Anthropomorphic design features that create emotional dependency, like engagement optimization mechanisms and evocation of human-like qualities; and
- Unreliable or non-existent safety features, like a lack of age verification or parental controls.
Many of these concerns are now directly reflected in legislative proposals, particularly those targeting engagement optimization and emotional manipulation (WA HB 2225, OR SB 1546, IA SF 2417, GA SB 540, among others). At the same time, most proposed safety interventions remain reactive rather than preventative. Many bills require chatbots to provide crisis resources once a user expresses distress, but none mandate features that would automatically terminate risky conversations or set session limits.
Where Chatbot Legislation May Go Next
As more states move from proposal to enactment in 2026, the coming months will provide an early signal of which legislative approaches to chatbot governance will ultimately prevail.
Much of this experimentation is happening at the state level, where lawmakers are advancing a wide range of chatbot definitions and regulatory approaches. But the conversation is increasingly moving to the federal stage as well. Recent activity in the U.S. House to amend the KIDS Act—including the addition of the SAFE Bots Act establishing requirements for AI chatbots interacting with minors—demonstrates that chatbots are now firmly on the national policy agenda, even as the federal administration has expressed opposition to certain state regulatory efforts in this space.
Still, the regulatory picture remains unsettled. Several proposals gaining traction this year introduce provisions that were largely absent from the first generation of chatbot laws, including restrictions on engagement optimization practices, parental control tools for minors’ chatbot interactions and data, and limits on the use of conversational data for advertising or training purposes. As these bills move through legislatures, the next few months will help clarify which of these emerging approaches are most likely to shape the next phase of chatbot governance.
1. For more information on these laws and other enacted AI legislation, see FPF’s blog: From Proposal to Passage: Enacted U.S. AI Laws 2023-2025.
2. There have been 13 cases filed to date, most brought by parents on behalf of minor children who engaged in or attempted self-harm following interactions with AI chatbots.