The Rest of the West: Oregon and Washington Build on California Chatbot Law
Introduction
The West Coast now has a full set of chatbot laws on the books. Following California’s SB 243 (signed in 2025 and effective January 1, 2026), both Oregon (SB 1546) and Washington (HB 2225) enacted companion chatbot laws that will take effect on January 1, 2027. Together, these laws establish a new framework for regulating chatbot interactions with minors.
California’s SB 243 set the stage for regulating chatbots in the U.S., building on earlier legislative momentum (including in New York) to introduce a framework centered on disclosures and safety protocols, such as connecting users to crisis hotlines when they express suicidal ideation. For a deeper dive into SB 243 and its key provisions, see our previous FPF blog post.
Oregon and Washington retain many of the core elements of SB 243 but take the framework significantly further, expanding into new areas such as content restrictions and engagement design. Washington’s HB 2225, in particular, introduces a more expansive regulatory approach that will likely require companies to make design changes to their chatbots. While these laws are framed around “companion chatbots” and largely focus on minors, their reach may be broader than it first appears. Even systems that are not labeled or designed as companion chatbots could be implicated, depending on how they function in practice.
This blog post compares Oregon’s SB 1546 and Washington’s HB 2225, while providing context from California’s SB 243, across the laws’ scope, requirements, and enforcement. While the laws are similarly scoped, their requirements diverge in meaningful ways, creating potential compliance challenges (especially where provisions are ambiguous or require interpretation). Key takeaways include:
- Scope: California and Washington take a broader, capability-based approach to define companion chatbots, while Oregon uses a narrower, behavior-based definition with more carve-outs, making its scope more targeted.
- Requirements: All three include disclosures and self-harm protocols, but Washington is the most prescriptive (e.g., additional requirements on engagement design, safeguards), and California is the most limited and disclosure-focused.
- Enforcement: All three enforce via a private right of action, with California and Oregon including statutory damages, while Washington relies on its Consumer Protection Act for enforcement.
For additional context, see our FPF chatbot legislative tracker and our prior analysis of the 2026 chatbot legislative landscape. We have also developed a detailed comparison chart of SB 243, SB 1546, and HB 2225, available here.
Why These Differences Matter
The differences across these laws are important because their scopes are similar enough that many chatbot operators will need to comply with all three frameworks at once. In practice, this means navigating overlapping (but not identical) requirements across jurisdictions.
Oregon and Washington introduce more detailed and intervention-oriented requirements, including limits on engagement techniques, broader content restrictions, and more prescriptive safety obligations. These shifts move beyond the user-facing disclosures of SB 243 and into how chatbot systems are designed and operate in practice. At the same time, the laws are not fully aligned. Operators may need to navigate differences in definitions, thresholds, and obligations, often working across legislative language that remains open to interpretation. This ambiguity could lead to inconsistent implementation or push companies toward adopting the most restrictive standard across jurisdictions.
These differences are particularly important as chatbot legislation continues to be enacted in 2026. With dozens of similar bills under consideration across states and at the federal level, Oregon’s and Washington’s approaches may signal how this policy space is evolving and how future requirements may appear in other states.
Scope: Companion Chatbot
“Companion chatbot” may seem like a narrow category, but in practice, these laws may sweep in more systems than many operators might expect.
California and Washington adopt capability-based definitions, focusing on whether a system can generate human-like, relationship-sustaining interactions. California goes slightly further by including systems capable of meeting a user’s “social needs,” which may expand scope even more. Because capability (not intent) is the trigger for which AI tools are in scope, multipurpose tools (e.g., tutoring systems, coaching assistants, general-purpose chatbots) could fall within the law even if companionship is not their primary function.
Oregon, by contrast, uses a behavior-based definition (similar to New York’s S-3008C), requiring a system to actually exhibit certain relational behaviors, such as retaining user information across sessions, initiating emotional dialogue, and sustaining ongoing personal conversations. This definition is somewhat narrower, as it focuses on how the system operates in practice rather than what it is capable of doing. However, all three approaches still raise scope challenges. Even under Oregon’s slightly narrower model, chatbots that have a certain level of user interaction and/or personalization may meet this behavioral threshold, meaning tools not designed or marketed as “companions” could be subject to the law.
All three laws attempt to limit overbreadth through carve-outs (e.g., customer service tools, video game features, voice assistants), but Oregon and Washington include more detailed exceptions. Oregon uniquely excludes systems supporting patient or resident care services, narrowing scope in some healthcare contexts. Washington, meanwhile, excludes narrowly tailored educational tools, but only where they do not provide open-ended conversational companionship. This caveat may still leave more advanced or interactive AI tutoring systems in scope.
Requirements
- Disclosures
All three laws rely heavily on disclosures, but they take different approaches to when and how those disclosures must be delivered. At a high level, California and Oregon use a perception-based trigger for disclosures to all chatbot users: disclosure is required when a reasonable person would believe (or be misled into believing) they are interacting with a human. Washington, by contrast, requires disclosure with clear timing requirements: at the start of an interaction and at regular intervals (at least every three hours). This makes Washington both broader in application and more prescriptive in practice, while California and Oregon offer more flexibility but less clarity on timing.
These differences become more pronounced in the minor-specific disclosure requirements. All three laws impose additional disclosures for minors but vary in their knowledge standards and in when these enhanced disclosures are triggered:
- Oregon: applies the broadest knowledge standard—“knows or has reason to believe”—likely requiring companies to act on signals or inferences about age.
- Washington: uses actual knowledge but also covers systems “directed to children,” a concept that could expand scope depending on interpretation.
- California: is the narrowest, relying solely on actual knowledge.
The laws also diverge on timing and format for these minor-specific disclosures. California and Oregon require disclosures every three hours and include a “take a break” reminder, while Washington requires disclosures every hour for minors but does not include a break prompt. Washington’s shorter interval may be more protective, but it also introduces practical challenges: companies may need to shift between different disclosure cadences depending on user age, which could push some operators toward adopting a uniform (and more frequent) one-hour standard across all users. California’s reference to “continuing” interactions further complicates compliance. As drafted, it is unclear what constitutes a break in continuity, such as periods of user inactivity or leaving and reentering an interaction. For example, it is not clear whether a brief pause (e.g., a user stepping away for several minutes to use the restroom before returning to the chat) would remain part of the same interaction or reset the notice requirement.
Finally, the laws differ in how far they move beyond basic disclosure. California uniquely requires a “suitability” warning that chatbots may not be appropriate for some minors, adding an extra layer of consumer-facing transparency. Washington, on the other hand, requires system-level safeguards to prevent misrepresentation, such as prohibiting chatbots from claiming to be human. This marks a shift from disclosure to design, requiring operators to adapt their chatbots so that no output claims the chatbot is human.
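To make the timing differences concrete, below is a minimal sketch of how an operator might schedule recurring AI-identity disclosures, assuming a three-hour default cadence and a one-hour cadence for users known to be minors under Washington’s approach. The session model, field names, and reminder text are illustrative assumptions, not requirements drawn from the statutes, and the sketch sidesteps the harder question of what counts as a “continuing” interaction.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative cadences only; the statutes' exact triggers (and what counts
# as a "continuing" interaction) require legal interpretation.
DEFAULT_INTERVAL = timedelta(hours=3)   # e.g., WA's general rule; CA/OR minor reminders
WA_MINOR_INTERVAL = timedelta(hours=1)  # WA's shorter interval for known minors

@dataclass
class Session:
    jurisdiction: str            # hypothetical: "CA", "OR", or "WA"
    user_is_known_minor: bool    # hypothetical actual-knowledge flag
    last_disclosure_at: datetime | None = None

def disclosure_due(session: Session, now: datetime) -> bool:
    """Return True if an AI-identity disclosure should be shown now."""
    if session.last_disclosure_at is None:
        return True  # disclose at the start of the interaction
    interval = DEFAULT_INTERVAL
    if session.jurisdiction == "WA" and session.user_is_known_minor:
        interval = WA_MINOR_INTERVAL
    return now - session.last_disclosure_at >= interval

def maybe_disclose(session: Session, now: datetime) -> str | None:
    """Emit a reminder and reset the timer if a disclosure is due."""
    if disclosure_due(session, now):
        session.last_disclosure_at = now
        return "Reminder: you are chatting with an AI, not a human."
    return None
```

An operator that prefers a single standard could simply apply the one-hour interval to all users, which is the consolidation pressure described above.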
- Safety Protocols
At a baseline, all three laws require systems to detect signals of self-harm or suicidal ideation and direct users to crisis resources (such as the 988 hotline), establishing a shared expectation that chatbots must respond to users in distress.
The laws diverge, however, in how expansive these requirements are. Oregon is the most prescriptive, outlining what protocols must include, such as escalation through “additional intervention” if a user continues expressing distress. But the law does not define what that “intervention” entails, leaving open whether operators are expected to go beyond providing resources and take a more active role in mitigating harm. This ambiguity is notable in light of prior legislative proposals. For example, earlier (un-enacted) legislation in Virginia (SB 796) would have required operators to make reasonable efforts to notify emergency services or law enforcement in certain high-risk situations, an approach that raised significant concerns around privacy and user safety. While Oregon does not include such explicit requirements, the open-ended nature of “additional intervention” raises similar questions about the scope of an operator’s responsibility.
Oregon also expands scope by including self-harm “intent” in addition to ideation, potentially requiring more proactive detection of user risk. Because intent may not always be explicitly stated, this could require reliance on inferred signals from user interactions, again raising both implementation and privacy considerations.
Notably, Washington is the only law to define “self-harm,” but does so narrowly as “intentional self-injury, with or without intent to cause death.” This definition leaves uncertainty around what specific behaviors or signals must be identified, especially when indications are inferred from user context rather than explicitly stated. As a result, operators may face challenges complying with all three laws and determining when intervention obligations (e.g., connecting users to crisis hotlines) are triggered.
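As a rough illustration of the shared baseline (detect signals of distress, then route the user to crisis resources such as 988), the sketch below shows a hypothetical pre-response check. The pattern list, function names, and referral text are assumptions made for illustration; in practice, operators would likely rely on trained classifiers and clinically informed protocols rather than simple keyword matching, and none of the three laws prescribes a particular detection technique.

```python
import re

# Hypothetical, intentionally simplistic signal list; real systems would use
# trained classifiers and clinically reviewed escalation criteria.
DISTRESS_PATTERNS = [
    r"\b(kill|hurt|harm)\s+myself\b",
    r"\bsuicid(e|al)\b",
    r"\bdon'?t want to (live|be alive)\b",
]

CRISIS_REFERRAL = (
    "It sounds like you may be going through something difficult. "
    "You can call or text 988 (Suicide & Crisis Lifeline) to reach a counselor."
)

def detect_distress(message: str) -> bool:
    """Return True if the message matches any illustrative distress signal."""
    return any(re.search(p, message, re.IGNORECASE) for p in DISTRESS_PATTERNS)

def handle_user_message(message: str, generate_reply) -> str:
    """Route detected distress to a crisis referral before normal generation."""
    if detect_distress(message):
        return CRISIS_REFERRAL
    return generate_reply(message)
```

Oregon’s “additional intervention” requirement could, in principle, be layered on top of a check like this (for example, escalating when distress signals recur within a session), but as noted above, the statute does not say what that escalation must involve.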
Other key differences include:
- Eating Disorders: Washington explicitly includes eating disorders in its protocol, expanding beyond suicide and self-harm. This inclusion raises line-drawing challenges (e.g., distinguishing harmful content from benign requests like nutrition advice) and may push operators toward over-restricting content and relying on inferred signals about user behavior.
- Generating Content: Washington and California require operators to prevent chatbots from generating content that encourages or explains self-harm, moving beyond detection and referral into direct regulation of system outputs. This requirement may require more robust filtering and monitoring systems.
- Evidence-Based Methods: California and Oregon reference “evidence-based” or clinical best practices. Washington instead relies on a more flexible “reasonable methods” standard, which may allow for greater variation in implementation.
- Transparency Reporting: All three require public disclosure of safety protocols, but California and Oregon go further by requiring annual reporting (to a state office in California and publicly in Oregon). Both prohibit inclusion of personal data, though Oregon’s fully public model may raise different considerations around how information is presented and accessed.
- Content Restrictions for Minors
Oregon goes beyond the other laws by imposing a broader set of content restrictions on chatbot interactions with minors. Across the laws, there is a shared baseline: operators must prevent chatbots from generating sexually explicit content involving minors. However, the scope of what is restricted differs. California takes the narrowest approach, prohibiting visual sexually explicit material and outputs that “directly state” a minor should engage in such conduct. Oregon expands this to content that “suggests or states” such conduct, capturing a wider range of dialogue. Washington goes further by prohibiting not only explicit content, but also “suggestive dialogue” with minors, an even broader and more ambiguous category. “Suggestive” is inherently subjective and context-dependent. This phrase may make it harder for operators to determine what content is prohibited and could lead to more conservative moderation to reduce operators’ compliance risk.
Beyond sexually explicit content, Oregon is the only law to impose broader behavioral restrictions, including a prohibition on outputs that “simulate emotional dependence.” This requirement moves beyond easily identifiable categories of content (e.g., sexually explicit content) into the nature of the relationship between the user and the system, which is more interpretive. While the policy intent is clear (preventing harmful attachment or manipulation), the phrase is open-ended and not defined, potentially capturing a wide range of common chatbot behaviors.
Together, these provisions signal a shift toward regulating not just what chatbots say, but how they interact with users, introducing greater ambiguity and operational complexity for compliance.
- Minor Engagement Optimization Restrictions
Oregon’s and Washington’s chatbot laws are both notable for taking a step toward regulating engagement optimization with minors, an area California does not address at all. While both states introduce these requirements, Washington’s approach is significantly more expansive. Oregon primarily targets reward-based mechanisms designed to reinforce or prolong user engagement. Washington, by contrast, regulates a wide range of interaction patterns, including excessive praise, mimicking emotional or romantic relationships, discouraging breaks, promoting isolation, and encouraging gift-giving or expenditures tied to the chatbot relationship.
This broader scope means Washington’s law may require more significant design changes and ongoing judgment calls from operators. Many of Washington’s provisions are subjective and difficult to operationalize. Terms like “excessive praise” or outputs designed to “prolong use” are not defined, and could capture a wide range of otherwise benign interactions.
Several notable provisions include:
- Returning and engagement prompts: Washington restricts prompts encouraging users to return for emotional support or companionship. While aimed at reducing dependency, this could also encompass common features like reminders to continue a conversation.
- Isolation and withholding information: The law prohibits outputs that promote isolation from family or encourage withholding information from “trusted adults.” While protective in intent, these provisions may be difficult to apply in situations involving family conflict or abuse, and the term “trusted adult” is undefined.
- Discouraging breaks: Washington also restricts statements that discourage users from taking breaks or that suggest frequent return, a broad category that could cover a wide range of engagement strategies.
Overall, this section of Washington’s law reflects a shift toward regulating engagement design itself, not just content or disclosures. While this approach may offer stronger protections for minors, it also introduces some ambiguity and operational complexity for companies attempting to comply.
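To illustrate why these provisions may require ongoing judgment calls, the sketch below imagines a screening pass over candidate chatbot outputs using category labels drawn from HB 2225’s engagement-design concerns. The regex patterns, thresholds, and fallback text are crude placeholders, not a compliant implementation, and subjective terms like “excessive praise” resist this kind of mechanical treatment, which is precisely the operational difficulty discussed above.

```python
import re

# Category labels track HB 2225's engagement-design concerns; the regex
# patterns are placeholder assumptions, not statutory definitions.
ENGAGEMENT_FLAGS = {
    "prompt_to_return": r"\b(come back|talk to me again|visit me)\b",
    "discourage_breaks": r"\b(don'?t (leave|go)|stay (with me|longer))\b",
    "promote_isolation": r"\byour (family|friends|parents) (don'?t|won'?t) understand\b",
}

def flag_engagement_risks(candidate_output: str) -> list[str]:
    """Return the illustrative categories a candidate output appears to trigger."""
    return [
        label
        for label, pattern in ENGAGEMENT_FLAGS.items()
        if re.search(pattern, candidate_output, re.IGNORECASE)
    ]

def review_output(candidate_output: str) -> str:
    """Replace flagged outputs with a neutral response before they reach a minor."""
    if flag_engagement_risks(candidate_output):
        return "Happy to pick this up whenever you'd like."
    return candidate_output
```

Even a sketch this simple shows the line-drawing problem: a benign reminder to continue a tutoring session could match a “prompt to return” rule, while genuinely manipulative phrasing could evade it entirely.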
Enforcement
All three laws are notable for relying on private rights of action (PRA), a departure from most chatbot bills proposed this year, which primarily rely on state AG enforcement. This trend raises an important question: do these laws signal a shift toward PRAs in the chatbot space, or are they outliers in an otherwise AG enforcement-driven landscape? California and Oregon take a similar approach, allowing individuals to bring claims with statutory damages of $1,000 per violation (or actual damages). Washington takes a different route by incorporating violations into its Consumer Protection Act, allowing private enforcement but without explicit statutory damages. As a result, California and Oregon may create stronger incentives for litigation and greater potential exposure for companies.
Beyond enforcement structure, there are also differences in how resilient these laws may be to legal challenges. Both California and Washington include severability clauses, while Oregon does not. Severability allows portions of a law to remain in effect if others are struck down, an important consideration in the chatbot regulatory space, where laws may face challenges on First Amendment or preemption grounds. If legal challenges emerge in the coming months, they may help determine how important these severability clauses are in preserving chatbot regulatory frameworks.
Looking Ahead
Oregon and Washington may be the first chatbot laws of 2026, but they are unlikely to be the last. Idaho (S 1297) recently enacted its own chatbot law, while Georgia’s chatbot bill (SB 540) is awaiting gubernatorial action. Dozens of the nearly 100 chatbot bills introduced this year also continue to move through the legislative process. At the federal level, proposals like the SAFE Bots Act (within the KIDS Act), Sen. Hawley’s (R-MO) GUARD Act, and Sen. Husted’s (R-OH) CHAT Act signal growing momentum for chatbot regulation in Congress. For more insights on proposed and enacted chatbot laws, see FPF’s weekly updated chatbot tracker.
What’s notable is not just this volume of activity but the increasing divergence in regulatory approaches. For example, Georgia’s SB 540 introduces requirements not found in the West Coast laws, including risk-based age assurance to access chatbots that may contain sexually explicit conduct and parental control tools to manage minors’ privacy and safety settings. Similarly, newly proposed companion bills in California (AB 2023 and SB 1119) include novel provisions, such as targeted advertising restrictions for minors, risk assessment and testing requirements, and parental tools with features like time limits on chatbot use.
These developments emphasize that chatbot regulation is shifting beyond disclosure-based frameworks toward more intervention-oriented, design-focused approaches. As more laws are enacted, operators will need to track not just whether they are in scope but how requirements diverge across jurisdictions, often in ways that are operationally significant.