California’s SB 53: The First Frontier AI Law, Explained

California Enacts First Frontier AI Law as New York Weighs Its Own

On September 29, Governor Newsom (D) signed SB 53, the “Transparency in Frontier Artificial Intelligence Act (TFAIA),” authored by Sen. Scott Wiener (D). The law makes California the first state to enact a statute specifically targeting frontier artificial intelligence (AI) safety and transparency. SB 53 requires advanced AI developers to publish governance frameworks and transparency reports, establishes mechanisms for reporting critical safety incidents, extends whistleblower protections, and calls for the development of a public computing cluster.

In his signing statement, Newsom described SB 53 as a blueprint for other states, arguing on behalf of California’s role in shaping “well-balanced AI policies beyond our borders—especially in the absence of a comprehensive federal framework.” Supporters view the bill as a critical first step toward promoting transparency and reducing serious safety risks, while critics argue its requirements could be unduly burdensome on AI developers, potentially inhibiting innovation. These debates come as New York considers its own frontier AI bill, A 6953, the Responsible AI Safety and Education (RAISE) Act, which could become the second major state law in this space, and as Congress introduces its own frontier model legislation.
Understanding SB 53’s requirements, how it evolved from earlier proposals, and how it compares to New York’s RAISE Act is critical for anticipating where U.S. policy on frontier model safety may be headed.

SB 53: Scope and Requirements

SB 53 regulates developers of the most advanced and resource-intensive AI models by imposing disclosure and transparency obligations, including the adoption of written governance frameworks and reporting of safety incidents. To target this select set of developers, the law specifically scopes the definitions of “frontier model,” “frontier developer,” and “large frontier developer.”

Scope

The law regulates frontier developers, defined as entities that “trained or initiated the training” of high-compute frontier models. It separately defines large frontier developers as frontier developers with annual gross revenues above $500 million, concentrating the heaviest compliance obligations on the largest AI companies. SB 53 applies to frontier models, defined as foundation models trained with more than 10^26 computational operations, a threshold that counts cumulative compute from both initial training and subsequent fine-tuning or modifications.
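The scoping described above reduces to two numeric thresholds. As a minimal illustrative sketch (not legal advice; the function name, inputs, and category labels are our own, and the statute’s full definitions contain additional nuance), the classification logic can be expressed as:

```python
# Thresholds as described in the text: a frontier model is one trained with
# more than 10^26 computational operations (cumulative, including fine-tuning),
# and a "large" frontier developer has annual gross revenues above $500M.
FRONTIER_COMPUTE_THRESHOLD = 10**26      # total training operations
LARGE_DEVELOPER_REVENUE = 500_000_000    # annual gross revenue, USD

def classify_developer(total_training_ops: float, annual_revenue: float) -> str:
    """Return the SB 53 category a developer would fall into (illustrative)."""
    if total_training_ops <= FRONTIER_COMPUTE_THRESHOLD:
        return "out of scope"
    if annual_revenue > LARGE_DEVELOPER_REVENUE:
        return "large frontier developer"
    return "frontier developer"

print(classify_developer(2e26, 1e9))   # large frontier developer
print(classify_developer(2e26, 1e8))   # frontier developer
print(classify_developer(1e25, 1e9))   # out of scope
```

Note that the compute threshold is measured against cumulative training operations, so a model below the line at initial training can cross into scope after subsequent fine-tuning.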

Notably, SB 53 is focused on preventing catastrophic risk, defined as a foreseeable and material risk that a frontier model could materially contribute to the death of or serious injury to more than 50 people, or to more than $1 billion in property damage, arising from a single incident in which the model:

  1. Provides expert-level assistance in the creation or release of a chemical, biological, radiological, or nuclear weapon;
  2. Engages in a cyberattack, or in conduct that would constitute a serious crime if committed by a human, with no meaningful human oversight; or
  3. Evades the control of its frontier developer or user.

Other proposed bills, like New York’s RAISE Act, set a narrower liability standard: harm must be a “probable consequence” of the developer’s activities, the developer’s actions must be a “substantial factor,” and the harm could not have been “reasonably prevented.” SB 53 lacks these qualifiers, applying a broader standard for when risk triggers compliance.

Requirements

SB 53 establishes four major obligations, dividing some responsibilities between all frontier developers and the narrower subset of “large frontier developers”:

  1. Publishing a written governance framework describing how the developer assesses and manages catastrophic risk;
  2. Publishing transparency reports for new or substantially modified frontier models;
  3. Reporting critical safety incidents to the California Office of Emergency Services; and
  4. Providing whistleblower protections for employees who raise catastrophic-risk concerns.

Enforcement

SB 53 authorizes the Attorney General (AG) to bring civil actions for violations, with penalties of up to $1 million per violation, scaled to the severity of the offense. The law also empowers the California Department of Technology to recommend updates to key statutory definitions, such as “frontier model” or “large frontier developer,” to reflect technological change. Any updates must be adopted by the Legislature, but this mechanism offers definitional adaptability. Notably, earlier drafts of SB 53 would have given the AG direct rulemaking authority over these definitions; the final version removes that authority in favor of Department of Technology recommendations to the Legislature.

From SB 1047 to SB 53: How the Bill Narrowed

SB 53 is a pared-down successor to last year’s SB 1047, which Governor Newsom vetoed. In his veto statement, Newsom called for an approach to frontier model regulation “informed by an empirical trajectory analysis of AI systems and capabilities,” leading to the creation of the Joint California Policy Working Group on AI Frontier Models. The group released a report offering regulatory best practices, which emphasized whistleblower protections and alignment with leading safety practices.

When the bill returned in 2025, it passed without many of SB 1047’s most controversial provisions, including:

  1. Pre-training safety determinations and certification requirements;
  2. A mandated capability for a “full shutdown” of covered models (the so-called kill switch);
  3. Mandatory third-party audits; and
  4. Penalties scaled to the cost of model training.

By contrast, SB 53 focuses on deployment-stage obligations, lengthens reporting timelines to 15 days, caps penalties at $1 million per violation, and streamlines the information required in transparency reports and frameworks (removing, for example, testing disclosure requirements). These changes produced a narrower bill with reduced obligations for frontier developers, satisfying some but not all critics.

Comparison with New York’s RAISE Act

With SB 53 now law, attention turns to New York and the Responsible AI Safety and Education (RAISE) Act, which is pending on Governor Hochul’s desk. Like SB 53, the RAISE Act was inspired by last year’s SB 1047 and seeks to regulate frontier AI models. Hochul has until January 1, 2026, to sign, veto, or issue chapter amendments, a process that allows the governor to negotiate substantial changes with the legislature at the time of signature. With SB 53 signed, a central question is whether RAISE will be amended to align more closely with the California law.

To help stakeholders track these dynamics, we’ve created a side-by-side comparison of the two bills. Broadly, SB 53 is more detailed in content—requiring frameworks, transparency reports, and whistleblower protections—while RAISE is stricter in enforcement, with higher penalties and liability provisions. Both bills share core elements, such as compute thresholds, catastrophic risk definitions, and mandatory frameworks and protocols. Key differences include:

  1. Enforcement: RAISE authorizes substantially higher civil penalties than SB 53’s $1 million cap;
  2. Liability standards: RAISE conditions liability on qualifiers, such as requiring that harm be a “probable consequence” of the developer’s activities, that SB 53 lacks; and
  3. Worker protections: SB 53 extends whistleblower protections to covered employees, a feature RAISE does not include.

The bills show how state legislators are experimenting with comparable yet distinct approaches to frontier model regulation: California prioritizes transparency and employee protections, while New York emphasizes stronger penalties and liability standards.

Conclusion

SB 53 makes California the first state to enact legislation focused on frontier AI, establishing transparency, disclosure, and governance requirements for high-compute model developers. Compared to last year’s broader SB 1047, the new law takes a narrower approach, scaling back several of the compliance obligations.

Attention now turns to New York, where the RAISE Act awaits action by the governor. Whether signed as written or amended through the chapter amendment process to reflect aspects of SB 53, the bill could become a second state-level framework for frontier AI. Other states, including Michigan, have introduced proposals of their own, illustrating the potential for a patchwork of requirements across jurisdictions.

As detailed in FPF’s recent report, State of State AI: Legislative Approaches to AI in 2025, this year’s legislative landscape highlights ongoing state experimentation in AI governance. With SB 53 enacted and the RAISE Act under consideration, state-level activity is moving from proposal to implementation, raising questions about how divergent approaches may shape compliance expectations and interact with future federal efforts.

The State of State AI: Legislative Approaches to AI in 2025

State lawmakers accelerated their focus on AI regulation in 2025, proposing a vast array of new regulatory models. From chatbots and frontier models to healthcare, liability, and sandboxes, legislators examined nearly every aspect of AI as they sought to address its impact on their constituents.

To help stakeholders understand this rapidly evolving environment, the Future of Privacy Forum (FPF) has published The State of State AI: Legislative Approaches to AI in 2025. 

This report analyzes how states shaped AI legislation during the 2025 legislative session, spotlighting the trends and thematic approaches that steered state policymaking. It groups legislation into three primary categories: (1) use- and context-specific measures; (2) technology-specific bills; and (3) liability and accountability frameworks. This framework highlights the most important developments for industry, policymakers, and other stakeholders in AI governance.

In 2025, FPF tracked 210 bills across 42 states that could directly or indirectly affect private-sector AI development and deployment. Of those, 20 bills (around 9%) were enrolled or enacted.1 While other trackers estimated that more than 1,000 AI-related bills were introduced this year, FPF’s methodology applies a narrower lens, focusing on measures most likely to create direct compliance implications for private-sector AI developers and deployers.2

Key Takeaways

  1. State lawmakers moved away from sweeping frameworks regulating AI and toward narrower, transparency-driven approaches.
  2. Three key approaches to private-sector AI regulation emerged: use- and context-specific regulations targeting sensitive applications; technology-specific regulations; and a liability and accountability approach that utilizes, clarifies, or modifies existing liability regimes’ application to AI.
  3. The most commonly enrolled or enacted frameworks address AI’s application in healthcare, chatbots, and innovation safeguards.
  4. Legislatures signaled an interest in balancing consumer protection with support for AI growth, including by testing novel innovation-forward mechanisms, such as sandboxes and liability defenses.
  5. Looking ahead to 2026, definitional uncertainty remains a persistent issue, while newer topics such as agentic AI and algorithmic pricing are starting to emerge.

Classification of AI Legislation 

To provide a framework for analyzing the diverse set of bills introduced in 2025, FPF classified state legislation into four categories based on each bill’s primary focus. This classification highlights whether lawmakers concentrated on specific applications, particular technologies, liability and accountability questions, or government use and strategy. While many bills touch on multiple themes, the framework is designed to capture each bill’s primary focus and enable consistent comparisons across jurisdictions.

Table I.

Use / Context-Specific Bills
Focuses on certain uses of AI in high-risk decisionmaking contexts, such as healthcare, employment, and finance, as well as broader proposals that address AI systems used across a variety of consequential decisionmaking contexts. These bills typically focus on applications where AI may significantly impact individuals’ rights, access to services, or economic opportunities.
Examples of enacted frameworks: Illinois HB 1806 (AI in mental health), Montana SB 212 (AI in critical infrastructure)
Technology-Specific Bills
Focuses on specific types of AI technologies, such as generative AI, frontier/foundation models, and chatbots. These bills often tailor requirements to the functionality, capabilities, or use patterns of each system type.
Examples of enacted frameworks: New York S 6453 (frontier models), Maine LD 1727 (chatbots), Utah SB 226 (generative AI)
Bills Focused on Liability and Accountability
Focuses on defining, clarifying, or qualifying legal responsibility for use and development of AI systems utilizing existing legal tools, such as clarifying liability standards, creating affirmative defenses, or authorizing regulatory sandboxes. These aim to support accountability, responsible innovation, and greater legal clarity. 
Examples of enacted frameworks: Texas HB 149 (regulatory sandbox), Arkansas HB 1876 (copyright ownership of synthetic content)
Government Use and Strategy Bills
Focuses on requirements for government agencies’ use of AI that have downstream or indirect effects on the private sector, such as creating standards and requirements for agencies procuring AI systems from private sector vendors. 
Examples of enacted frameworks: Kentucky SB 4 (high-risk AI in government), New York A 433 (automated employment decision making in government)

Table II. Organizes the 210 bills tracked by FPF’s U.S. Legislation Team in 2025 across 18 subcategories.

chart 1

Table III. Organizes the 210 bills tracked by FPF’s U.S. Legislation Team in 2025 into overarching themes, excluding bills focused on government use and strategy that do not set direct industry obligations. Bills in the “miscellaneous” category primarily reflect comprehensive AI legislation.

chart 2

Use or Context-Specific Approaches to AI Regulation

In 2025, nine laws seeking to regulate AI based on its use or context were enrolled or enacted, and six additional such bills passed at least one chamber.

Technology-Specific Approaches to AI Regulation

In 2025, ten laws targeting specific types of AI technologies, rather than just their use contexts, were enrolled or enacted, and five additional such bills passed at least one chamber.

Liability and Accountability Approaches to AI Regulation

This past year, eight laws focused on defining, clarifying, or qualifying legal responsibility for the use and deployment of AI systems were enrolled or enacted, and nine notable bills passed at least one chamber. State legislatures tested different ways to balance liability, safety, and innovation.

Looking Ahead to 2026

As the 2026 legislative cycle begins, states are expected to revisit unfinished debates from 2025 while turning to new and fast-evolving issues. Frontier/foundation models, chatbots, and health-related AI will remain central topics, while definitional uncertainty, agentic AI, and algorithmic pricing signal the next wave of policy debates.

Conclusion

In 2025, state legislatures sought to demonstrate that they could be laboratories of democracy for AI governance: testing disclosure rules, liability frameworks, and technology-specific measures. With definitional questions still unsettled and new issues like agentic AI and algorithmic pricing on the horizon, state legislatures are poised to remain active in 2026. These developments illustrate both the opportunities and challenges of state-driven approaches, underscoring the value of comparative analysis as policymakers and stakeholders weigh whether, and in what form, federal standards may emerge. At the same time, signals from federal debates, increased industry advocacy, and international developments are beginning to shape state efforts, pointing to ongoing interplay between state experimentation and broader policy currents.

  1. As of publication of this report, bills in California and New York were still awaiting gubernatorial action. This total is limited to bills with direct implications for industry and excludes measures focused solely on government use of AI or those that only extend the effective date of prior legislation. ↩︎
  2. This report excludes: bills and resolutions that merely reference AI in passing; updates to criminal statutes; and legislation focused on areas like elections, housing, agriculture, state investments in workforce development, and public education, which are less likely to involve direct obligations for companies developing or deploying AI technologies. ↩︎

Call for Nominations: 16th Annual Privacy Papers for Policymakers Awards

The 16th Privacy Papers for Policymakers call for submissions is now open until October 30, 2025. FPF’s Privacy Papers for Policymakers Award recognizes leading privacy research and analytical scholarship relevant to policymakers in the U.S. and internationally. The award highlights important work that analyzes current and emerging privacy issues and proposes achievable short-term solutions or means of analysis that have the potential to lead to real-world policy solutions. 

FPF welcomes privacy scholars, researchers, and students to submit completed papers that focus on privacy or AI governance, with a particular emphasis on data protection, and that are relevant to policymakers in this field. Submissions may include academic papers, books, empirical research, or other longer-form analyses from any region completed, accepted, or published within the last 12 months. Papers should be submitted in English as a PDF, DOC, or DOCX file, or as a publicly available link, with a one-page executive summary or abstract and the author’s complete contact and affiliation information.

Submissions are evaluated by a diverse team of FPF staff members based on originality, applicability to policymaking, and overall quality of writing. Summaries of the winning papers will be published in a digest on the FPF website, and winning authors will have the opportunity to present their work during a virtual event in 2026 in front of top policymakers and privacy leaders. All winners are also highlighted during the event and in a press release to the media.

FPF Submits Comments to Inform Colorado Minor Privacy Protections Rulemaking Process

On September 10th, FPF provided comments regarding draft regulations implementing the heightened minor protections within the Colorado Privacy Act (“CPA”). Passed in 2021, the CPA, a Washington Privacy Act-style framework, provides comprehensive privacy protections to Colorado consumers and is enforced by the state Attorney General’s office, which also has rulemaking authority. In 2024, the Colorado legislature amended the CPA to provide heightened protections to minors in the state, establishing a duty of care owed to minors and special obligations for controllers collecting and processing minor data. In July 2025, the Colorado Attorney General’s office launched a formal rulemaking to provide additional guidance to entities subject to these heightened protections, specifically proposing rules on system design features, consent under these obligations, and factors to consider under the “wilfully disregards” prong of the “actual knowledge or wilfully disregards” knowledge standard.

FPF seeks to support balanced, informed public policy and equip regulators with the resources and tools needed to craft effective regulation. In response to the Attorney General’s request for public comment on the proposed rules, FPF addressed two parts of the rulemaking for the Department’s consideration:

  1. The Department’s proposal to apply a COPPA-style “directed to minors” factor within the CPA’s “actual knowledge” standard, combined with expanding protection to all minors under 18, risks conflating distinct frameworks. Under COPPA, the “directed to children” and “actual knowledge” assessments are separate tests for applicability under the statute. The proposed rule seeks to provide factors for determining the “wilfully disregards” portion of the “actual knowledge” standard, including a factor that introduces a “directed to minors” assessment framed similarly to COPPA’s. Nesting a “directed to minors” assessment within the actual knowledge standard risks conflating COPPA’s applicability tests with the goals of these CPA amendments, while simultaneously relying on inferences about potential users to assess a covered entity’s “actual knowledge” of a particular user’s age. Additionally, the CPA’s directed-to-minors standard covers minors under the age of 18, a broader age range than COPPA, which applies to children under the age of 13; interests, services, and content enjoyed by older teens may also appeal to young adults. Additional guidance on how to make such determinations under the CPA in light of this distinction would be beneficial to stakeholders.

  2. We provide questions for the Department to consider regarding which types of features may be subject to the law’s system design requirements. The proposed rules include two factors using the language “whether a system design feature has been shown to…” cause particular conditions, and our comments are intended to guide the Department’s evaluation of system design features. There is a growing trend in U.S. online safety legislation to target, regulate, or restrict certain design features for minors. Despite this trend, implementation of safety requirements related to system design features, such as those envisioned in the proposed rule, remains relatively nascent. As a result, there is no established process for uniformly determining whether a system design feature increases “engagement beyond reasonable expectation” or “addictiveness.” The Department should consider providing greater clarity on the definitions of addictiveness and engagement beyond reasonable expectation, alongside metrics for assessing these two conditions, to support stakeholder compliance efforts.

Concepts in AI Governance: Personality vs. Personalization

Conversational AI technologies are hyper-personalizing. Across sectors, companies are focused on offering personalized experiences that are tailored to users’ preferences, behaviors, and virtual and physical environments. These range from general-purpose LLMs to the rapidly growing market for LLM-powered AI companions, educational aides, and corporate assistants.

Two clear trends underlie this overall focus: toward systems with greater personalization to individual users, achieved through the collection and inference of personal information, expanded short- and long-term “memory,” and greater access to systems; and toward systems with increasingly distinct “personalities.” Each of these trends implicates U.S. law in novel ways, pushing on the bounds of tort, product liability, consumer protection, and data protection law.

This issue brief defines and provides an analytical framework for distinguishing between “personalization” and “personality”—with examples of real-world uses, concrete risks, and potential risk management for each category. In general, in this paper:

  1. “Personalization” refers to tailoring a system’s behavior and outputs to an individual user, typically through the collection and inference of personal information and expanded short- and long-term “memory”; and
  2. “Personality” refers to a system’s distinct, human-like persona, including its conversational tone, character, and style.

Future of Privacy Forum Honors Julie Brill with Lifetime Achievement Award

Washington, D.C. – September 12, 2025 — The Future of Privacy Forum, a global non-profit focused on data protection, AI, and emerging technologies, today announced that it has honored Julie Brill with its Lifetime Achievement Award. The award recognizes Brill’s decades of leadership and profound impact on the fields of consumer protection, data protection, and digital trust in her public- and private-sector roles. Brill has been a longtime Advisory Board member of FPF.

The award was presented Thursday evening at the Privacy Executive Summit in Berkeley, California, an event convening FPF’s senior Chief Privacy Officer members to discuss critical issues in AI, data policy, and consumer privacy.

“Julie Brill has been one of the most influential and thoughtful voices in digital policy and information governance and has been a mentor to me and so many in our field,” said Jules Polonetsky, CEO of the Future of Privacy Forum. “Julie has consistently been at the forefront of the most complex challenges in the digital age. Her career is a testament to the power of principled leadership in advancing privacy protections for consumers around the world.”

Alan Raul, President of the Future of Privacy Forum Board of Directors, added, “Julie Brill’s distinguished career at the FTC and Microsoft has left an indelible mark on the worlds of privacy policy and practice. The Future of Privacy Forum is grateful for her contributions and delighted to recognize her exceptional career advancing data protection with this well-deserved honor.”

Julie Brill, Lifetime Achievement Award

Julie Brill is one of the world’s foremost thought leaders in technology, governance, geopolitics, and global regulation. After being unanimously confirmed by the U.S. Senate and serving for six years as a Commissioner for the U.S. Federal Trade Commission, Julie joined Microsoft in 2018 as a senior executive, serving as Microsoft’s Chief Privacy Officer, Corporate Vice President for Privacy, Safety and Regulatory Affairs, and Corporate Vice President for Global Tech and Regulatory Policy. 

In her leadership roles at Microsoft, Julie was a central figure in global internal and external regulatory affairs, covering a broad set of issues that are central to building trust in the AI era, including regulatory governance, privacy, responsible AI, and data governance and use strategy. She advised Microsoft’s top executives and customers about some of the most important strategic geopolitical issues facing businesses today.

At the FTC, Julie helped drive the broad agenda of one of the world’s most powerful regulatory agencies. She achieved thoughtful and lasting outcomes on issues of critical importance to industry, governments and consumers – including competition, global data flows and geopolitical concerns around data, privacy, health care, and financial fraud. 

Brill was also a partner at Hogan Lovells and served on the staff of the Vermont Attorney General’s office, where she was a pioneer in some of the first internet law enforcement cases.

Julie is now channeling her vision and formidable expertise into her consultancy, Brill Strategies, by providing strategic guidance to global enterprises navigating the rapidly shifting landscape of technology policy and regulation. Leveraging her decades at the forefront of digital innovation, Julie’s consultancy will empower leaders to navigate the complexities of geopolitics, responsible innovation, and regulatory change — and help their organizations thrive in the AI-driven era. Brill will also serve as a Senior Fellow at the Future of Privacy Forum.

To learn more about the Future of Privacy Forum, visit fpf.org.

###

About Future of Privacy Forum (FPF)

FPF is a global non-profit organization that brings together academics, civil society, government officials, and industry to evaluate the societal, policy, and legal implications of data use, identify the risks, and develop appropriate protections. FPF believes technology and data can benefit society and improve lives if the right laws, policies, and rules are in place. FPF has offices in Washington D.C., Brussels, Singapore, and Tel Aviv. Follow FPF on X and LinkedIn.

“Personality vs. Personalization” in AI Systems: Responsible Design and Risk Management (Part 4)

This post is the fourth and final blog post in a series on personality versus personalization in AI systems. Read Part 1 (exploring concepts), Part 2 (concrete uses and risks), and Part 3 (intersection with U.S. law).

Conversational AI technologies are hyper-personalizing. Across sectors, companies are focused on offering personalized experiences that are tailored to users’ preferences, behaviors, and virtual and physical environments. These range from general-purpose LLMs to the rapidly growing market for LLM-powered AI companions, educational aides, and corporate assistants. Behind these experiences are two distinct trends: personality and personalization.

Responsible Design and Risk Management

The management of personality- and personalization-related risks can take varied forms, including general AI governance, privacy and data protection, and elements of responsible design. There is overlap between risk management measures relevant to personality-related risks and those that organizations should consider for addressing AI personalization issues, but there are also some differences between the two trends.

For personality-related risks (e.g., delusional behavior and emotional dependency), measures might include redirecting users away from harmful perspectives and disclosing the system’s AI status and inability to experience emotions. Meanwhile, risks related to personalization (e.g., access to, use, and transfer of more data; intimacy of inferences; and addictive experiences) may be best addressed by setting retention periods and defaults for sensitive data, exploring the benefits of on-device processing, countering the output of biased inferences, and limiting data collection to what is necessary or appropriate.

  1. General AI Governance

Proactively Managing Risk by Conducting AI Impact Assessments: AI impact assessments can help organizations identify and address potential risks associated with AI models and systems, including those associated with AI companions and chatbots. Organizations typically take four common steps when conducting these assessments: (1) initiating an AI impact assessment; (2) gathering model and system information; (3) assessing risks and benefits; and (4) identifying and testing risk management strategies. There are, however, various barriers to assessment efforts, such as difficulty obtaining relevant information from model developers and from chatbot and AI companion vendors, anticipating pertinent AI risks, and determining whether risks have been brought within acceptable levels.

Implementing Robust Oversight and Testing Mechanisms During Deployment: LLM-based AI systems’ non-deterministic nature and dynamic operational environments can cause AI companions and chatbots to act unpredictably. Analyzing how AI companions and chatbots behave during deployment is therefore vital to discovering how these systems are impacting users, ensuring that outputs are appropriate to the audience, and responding to malicious attacks. These efforts can take different forms, such as adversarial testing, stress testing, and robustness evaluations.

Accounting for an Array of Human Values and Interests and Consulting with Experts: Alignment entails ensuring that an AI system reflects human interests and values, but such efforts can be complicated by the number and range of values a system may implicate. To obtain a holistic understanding of the values and interests an AI companion or chatbot may implicate, organizations should consider the characteristics of the use cases these systems serve; for example, AI companions and chatbots should account for their specific user base (e.g., youth). Consulting experts, such as those in psychology or human factors engineering, during system development can help organizations identify these values and ways to balance them. The body of outside expertise continues to grow, making it important to follow emerging research on the psychological impacts of chatbot use.

  2. Privacy and Data Protection 

Establishing User Transparency, Consent, and Control: Systems can include privacy features that inform users whether a chatbot will customize its behavior to them, give them control over this personalization via opt-in consent and the ability to withdraw it, and empower them to delete memories. Testing these features is important to ensure a chatbot is not merely temporarily suppressing information. Transparency and control can also extend to giving users insight into whether a chatbot provider may use data gathered for personalization features for model training purposes. The conversational interfaces of chatbots and AI companions create new opportunities for users to understand what data is gathered about them and for what purposes, and to take actions that can have legal effects (e.g., requesting that data about them be deleted). However, these systems’ non-deterministic nature means that they might inaccurately describe the fulfillment of a user’s request. From a consumer protection and liability standpoint, the accuracy of AI systems is particularly important when statements have legal or material impact.

Countering Output of Biased Inferences: Chatbots and AI companions may personalize experiences by making inferences based on past user behavior. Post-model training exercises, such as red teaming to determine whether and under what circumstances an AI companion will attribute sensitive traits (e.g., speaker nationality, religion, and political views) to a user, can play an important role in lowering the incidence of biased inferences. 

Setting Clear Retention Periods and Appropriate Defaults: Personalization raises questions about what data is retained (e.g., content from conversations, inferences made from user-AI companion interactions, and metadata concerning the conversation), for how long, and for what purposes. These questions become increasingly important given the potential scale, breadth, and amount of data gathered or inferred from interactions between AI companions or chatbots and users. Organizations can establish data collection, use, and disclosure defaults for this data, although these defaults may vary depending on a variety of factors, such as data type (e.g., conversation transcripts, memories and file uploads), the kind of user (e.g., consumer, enterprise and youth), and the discussion’s subject (e.g., a chat about the user’s mental health versus restaurant recommendations). In addition to establishing contextual defaults, organizational policies can also address default settings for particularly sensitive data that limit the processing of this information irrespective of context (e.g., that the organization will never share a person’s sex life or sexual orientation with a third party). 
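To make the idea of contextual defaults concrete, here is a hypothetical sketch of how retention defaults might be encoded in configuration; every data type, user kind, and retention value below is illustrative, and real policies would be set by an organization’s legal and privacy teams:

```python
# Illustrative retention defaults keyed by (data type, user kind); the values
# and categories are hypothetical, not recommendations.
RETENTION_DAYS = {
    ("transcript", "consumer"): 90,
    ("transcript", "youth"): 30,
    ("memory", "consumer"): 365,
    ("memory", "youth"): 90,
}

# Context-independent override for especially sensitive topics, mirroring
# policies that limit processing of such data regardless of context.
SENSITIVE_TOPIC_DAYS = 0  # e.g., never retain chats about a user's sex life

def retention_days(data_type, user_kind, sensitive_topic=False):
    """Look up the retention period, applying the sensitive-topic override."""
    if sensitive_topic:
        return SENSITIVE_TOPIC_DAYS
    return RETENTION_DAYS.get((data_type, user_kind), 30)  # conservative fallback
```

Encoding defaults this way makes the contextual distinctions described above (data type, user kind, topic sensitivity) auditable and easy to tighten for a given jurisdiction.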

Being Clear Around Monetization Strategies: As AI companions and chatbot offerings develop, organizations are actively evaluating revenue and growth strategies, including subscription-based and enterprise pricing models. As personalized AI systems increasingly replace, or are integrated into, online search, they will impact online content that has largely been free and ad-supported since the early Internet. However, it is not clear that personalized AI systems can, or should, adopt compensation strategies that follow the same historical trajectory as existing advertising-based online revenue models. As systems develop, transparency around how personalization powers ads or other revenue strategies may be the only way to maintain user trust in chatbot outputs and manage expectations around how data will be used, given the sensitive nature of user-companion interactions. 

Determining Norms and Practices for Profiling: Personalization could be the basis for profiling users based on information the user wants the system to recall going forward and that which the system observes or infers from interactions with the user. Third parties, including law enforcement, may have an interest in these profiles, which could be particularly intimate given users’ trust in these systems. Organizational norms and practices could address interest from outside actors by imposing internal restrictions on with whom and under what circumstances the organization can provide these profiles. 

Instituting On-Device Processing: In some cases, local or on-device processing can address some of the privacy and security concerns that may arise from AI systems transmitting data off device. Given users’ propensity to overshare intimate details with a “friendly” AI system, limiting processing of this information for AI-powered features to the device can reduce the likelihood of downstream harms stemming from unauthorized access to the data. However, on-device processing may not be possible when an AI companion or chatbot needs a large context window or must engage in complex, multi-step reasoning.

Limiting Data Collection to What is Necessary or Appropriate: If a chatbot or AI companion has agentic features, it may make independent decisions about what data to collect and process in order to perform a task, such as booking a restaurant reservation. Designing these systems to limit data processing activities to what is appropriate to the context can reduce the likelihood that the chatbot or AI companion will engage in inappropriate processing activities. 

  2. Responsible Design of AI Companions

Disclosures About the System’s AI Status and Inability to Experience Emotions: Prominent disclosures to users that the chatbot is not a human and is unable to feel emotions (e.g., lust) may counter users’ propensity to anthropomorphize chatbots. Laws and bills specifically targeting chatbots have codified this practice. Removing certain pronouns, such as “I,” and modulating the output of other words that can contribute to users’ misconception about a system’s human qualities can also reduce the likelihood of users placing inappropriate levels of trust in a system.

Redirecting Users Away From Harmful Emotional States and Perspectives: Rather than indulging or being overly agreeable toward a user’s harmful perspectives of the world and themselves, systems can react to warning signs by (i) modulating their outputs to encourage the user to take a healthy approach to topics (e.g., pushing back on users rather than kowtowing to their beliefs); (ii) directing users toward relevant resources in response to certain user queries, such as providing the suicide hotline’s contact information when an AI companion detects suicidal thoughts or ideation in conversations; and (iii) refusing to respond when appropriate or modifying the output to reflect the audience’s maturity (e.g., in response to a minor user’s request to engage in sexual dialogue). This risk management measure may take the form of system prompts (developer instructions that guide the chatbot’s behavior during interactions with users) and output filters.
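As a deliberately simplified, hypothetical sketch of an output filter of this kind (a production system would rely on trained classifiers rather than keyword lists, and the warning-sign phrases and referral text below are illustrative), the mechanism might look like:

```python
# Hypothetical output filter: if warning signs appear in the user's message,
# replace the model's reply with a referral to crisis resources.

# Assumption: real systems use trained classifiers, not keyword lists.
WARNING_SIGNS = ("end my life", "kill myself", "no reason to live")

CRISIS_REFERRAL = (
    "I'm really sorry you're feeling this way. In the US, you can reach the "
    "988 Suicide & Crisis Lifeline by calling or texting 988."
)

def filter_output(user_message: str, model_reply: str) -> str:
    """Return the model reply, or a crisis referral if warning signs appear."""
    lowered = user_message.lower()
    if any(sign in lowered for sign in WARNING_SIGNS):
        return CRISIS_REFERRAL
    return model_reply
```

The same pattern generalizes to the other responses described above, such as swapping in an age-appropriate refusal when the detected audience is a minor.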

Instituting Time Limits for Users: Limiting the amount of time a user can spend interacting with an AI chatbot may reduce the likelihood that they will form inappropriate relationships with the system, particularly for minors and vulnerable populations that are more susceptible to forming these bonds with AI companions. Age assurance may help determine which users should be subject to time limits, although common existing and emerging methods pose different privacy risks and provide different levels of assurance.

Testing and Red Teaming of Chatbot Behavior During Development: Since many of the policy and legal risks described above flow from harmful anthropomorphization, red teaming exercises can play an important role in identifying which design features lead users to attribute human qualities to chatbots and AI companions, and in modifying those features to the extent they encourage unhealthy behaviors and reactions at the expense of user autonomy.

Looking Ahead

The lines between personalization and personality will increasingly blur in the future, with an AI companion’s personality becoming tailored to reflect a user’s preferences and characteristics. For example, when a person onboards to an AI companion experience, it may prompt the new user to connect the service to other accounts and answer “tell me about yourself” questions. The experience may then generate an AI companion that has the personality of a US president or certain political leanings based on the inputs from these sources, such as the user’s social media activity. 

AI companions and chatbots will evolve to offer more immersive experiences that feature novel interaction modes, such as real-time visuals, where AI characters react with little latency between user queries and system outputs. These technologies may also combine with augmented reality and virtual reality devices, which are receiving renewed attention from large technology companies as they aim to develop new user experiences that feature more seamless interaction with AI technologies. But this integration may further decrease users’ ability to distinguish between digital and physical worlds, exacerbating some of the harms discussed above by enabling the collection of more intimate information and reducing barriers to user anthropomorphization of AI. The sensors and processing techniques underpinning these interactions may also cause users to experience novel harms in the chatbot context, such as when an AI companion utilizes camera data (e.g., pupil responses, eye tracking, and facial scans) to make inferences about users. 

“Personality vs. Personalization” in AI Systems: Intersection with Evolving U.S. Law (Part 3)

This post is the third in a series on personality versus personalization in AI systems. Read Part 1 (exploring concepts) and Part 2 (concrete uses and risks).

Conversational AI technologies are hyper-personalizing. Across sectors, companies are focused on offering personalized experiences that are tailored to users’ preferences, behaviors, and virtual and physical environments. These range from general purpose LLMs to the rapidly growing market for LLM-powered AI companions, educational aides, and corporate assistants. Behind these experiences are two distinct trends: personality and personalization.

Evolving U.S. Law

Most conversational AI systems include aspects of both personality and personalization, sometimes intertwined in complex ways. Although there is significant overlap, we find that personality and personalization are also increasingly raising distinct legal issues.

In the United States, conversational AI systems may implicate a wide range of longstanding and emerging laws. While Section 230 of the Communications Decency Act (CDA) has historically protected companies from liability stemming from tortious conduct online, this may not be the case for conversational AI systems when there is evidence of features that directly cause harm through the design of the system, rather than through user-generated input. Longstanding common law principles, such as the right of publicity and appropriation of name and likeness, and theories of unjust enrichment, are increasingly appearing in cases involving chatbots and AI, with varying degrees of success. Key areas of law include the following:

  1. Privacy, Data Protection and Cybersecurity Laws

Processing data about individuals for personalization implicates privacy and data protection laws. In general, these laws require organizations to adhere to certain processing limitations (e.g., data minimization or retention limits), risk mitigation measures (e.g., data protection impact assessments (DPIAs)), and compliance with individual rights to exercise control over their data (e.g., correction, deletion, and access rights).

In almost all cases, the content of text- and voice-based conversations will be considered “personal information” under general privacy and data protection laws, unless sufficiently de-linked from individuals and anonymized through technical, administrative, and organizational means. Beyond a system’s inputs and outputs, generative AI models themselves may also be considered to “contain” personal information in their model weights, potentially depending on the nature of technical guardrails. Such a legal interpretation would give rise to significant operational impacts for training and fine-tuning models on conversational data. As systems become more personalized, obligations and individual rights likely also extend beyond transcripts of conversations to include information retained in the form of system prompts, memories, or other personalized knowledge about an individual.

Conversational data can also lead to more intimate inferences that implicate heightened requirements for “profiling” or “sensitive data.” Specifically, the evaluation, analysis, or prediction of certain user characteristics (e.g., health, behavior, or economic status) by AI companions or chatbots may qualify as profiling if it produces certain effects or harms consumers (e.g., declining a loan application). This activity could trigger specific provisions in data privacy laws, such as opt-out rights and data privacy impact assessment requirements.

In addition, some conversational exchanges may reveal specific details about a user that qualify as “sensitive data,” which can also trigger certain obligations under these laws, including limitations on the use and disclosure of that data. The potentially intimate nature of conversations between users and AI companions and chatbots may result in organizations processing sensitive data even when they did not set out to collect it. Such details could include information about the user’s racial or ethnic origin, sex life, sexual orientation, religious beliefs, or mental or physical health condition or diagnosis. While specific requirements can vary from law to law, processing such data can come with heightened requirements, including obtaining opt-in consent from the user.

Depending on the context of the data processing, personalized chatbots and AI companions may also trigger sectoral laws like the Children’s Online Privacy Protection Act (COPPA) or the Family Educational Rights and Privacy Act (FERPA). Many users of AI companions and chatbots are under 18, meaning that processing data obtained in connection with these users may implicate specific adolescent privacy protections. For example, several states have passed or modified their existing comprehensive data privacy laws to impose new opt-in requirements, rights, and obligations on organizations processing children’s or teens’ data (e.g., imposing new impact assessment requirements and duties of care). Legislators have also advanced bills addressing the data privacy of AI companions’ youth users (e.g., CA AB 1064).

Finally, the potential risks related to external threats and exfiltration of data can also implicate a wide range of US cybersecurity laws. In particular, this is the case as personalized systems become more agentic, including through greater access to systems to perform complex tasks. Legal frameworks may include sector-specific regulations, state breach notification laws, or consumer protections (e.g., the FTC’s application of Section 5 to security incidents).

  2. Tort, Product Liability and Section 230

Tort claims, such as negligence for failure to warn, product liability for defective design, and wrongful death, may apply to chatbots and AI companions when these technologies harm users. Although harm can arise from the collection, processing and sharing of personal information (i.e., personalization), many of the early examples of these laws being applied to chatbots and conversational AI are related more to their companionate and human-like influence (i.e., personality).

For example, the plaintiff in Garcia v. Character Technologies, et al. raised a range of negligence, product liability, and related tort claims following the suicide of a 14-year-old boy who had formed a parasocial and romantic relationship with Character.ai chatbots that imitated characters from the Game of Thrones television series. In its May 2025 decision, the US District Court for the Middle District of Florida ruled that the First Amendment did not bar these tort claims from advancing. However, the Court left open the possibility of such a defense applying at a later stage in litigation, leaving unresolved whether the First Amendment blocks these claims because they inhibit the chatbot’s speech or listeners’ rights under that amendment.

In many cases, tort claims related to personalized design of platforms and systems are barred by Section 230 of the Communications Decency Act (CDA), a federal law that gives websites and other online platforms legal immunity from liability for most user-posted content. However, this trend may not fully apply to conversational AI systems, particularly when there is evidence of features that directly cause harm through the design of the system, rather than through user-generated input. For example, a 2015 claim against Snap, Inc. survived Section 230 dismissal following a claim that a specific “Speed Filter” Snapchat feature (since discontinued) promoted reckless driving. 

In other cases, personalization of a system through demographic-based targeting that causes harm may also implicate tort and product liability law when organizations target content at least in part by actively identifying the users on whom it will have the greatest impact. In a significant 2024 ruling, the Third Circuit determined that a social media algorithm that curated and recommended content constituted expressive activity and therefore was not protected by Section 230.

Another recent ruling on a motion to dismiss by the Supreme Court of the State of New York may delineate the limits of this defense when applied to organizations’ design choices for content personalization. In Nazario v. ByteDance Ltd. et al., the Court determined that Section 230 of the CDA did not bar plaintiff’s product liability and negligence causes of action at the motion to dismiss phase, as plaintiff had sufficiently alleged that personalization of user content was grounded at least in part in defendant’s design choice to actively target users based on certain demographics information rather than exclusively through analyzing user inputs. 

In Nazario, the Court highlighted how defendants’ activities went beyond neutral editorial functions that Section 230 protects (e.g., selecting particular content types to promote based on the user’s past activities or expressed interests, and specifying or promoting which content types should be submitted to the platform) by targeting content to users based on their age. While discovery may undermine plaintiff’s factual allegations in this case, the Nazario court’s view that these allegations supported viable causes of action under tort and product liability theories if true may impact AI companions depending on how they are personalized to users (e.g., express user indications of preference versus age, gender, and geographic location). 

  3. Rights to Publicity and Unjust Enrichment

AI companions or chatbots that impersonate real individuals by emulating aspects of their personalities may also implicate the right of publicity and appropriation of name and likeness. While some sources, such as the Second Restatement of Torts and Third Restatement of Unfair Competition, conflate appropriation of name and likeness and the right of publicity, other commentators distinguish between them.

Generally, the “right of publicity” gives individuals, such as but not limited to celebrities, control over the commercial use of certain aspects of their identity (e.g., name and likeness). The majority of US states recognize this right in either their statutory codes or in common law, but the right’s duration, protected elements of a person’s identity, and other requirements can vary by state. For example, the US Courts of Appeals for the Sixth and Ninth Circuits have ruled that the right of publicity extends to aural and visual imitations, and recently enacted laws (e.g., Tennessee’s Ensuring Likeness, Voice, and Image Security (ELVIS) Act of 2024) may specifically target the use of generative AI to misappropriate a person’s identity, including sound-alikes. However, it remains unclear whether the right of publicity extends to “style” (e.g., certain slang words) and “tone” (e.g., a deep voice).

Finally, unjust enrichment, a common law principle that allows plaintiffs to recover value when defendants unfairly retain benefits at their expense, is increasingly appearing as a claim in cases involving chatbots and AI. The claim may be relevant to AI companions and chatbots when their operators utilize user data for model training and modification in order to enable personalization.

In the generative AI context, plaintiffs often file unjust enrichment claims alongside other claims against AI model developers that use the plaintiff’s or user’s data to train the model and profit from it. Unjust enrichment claims have featured in Garcia v. Character Technologies, et al. and other suits against the company. In Garcia, the Court declined to dismiss plaintiff’s unjust enrichment claim against Character Technologies after the plaintiff disputed the existence of a governing contract between Character Technologies and a user, repudiated such an agreement if it existed, and alleged that the chatbot operator received benefits from the user (i.e., the monthly subscription fee and the user’s personal data). Notably, because the Court declined to decide whether that consideration was adequate or whether a user agreement applied to the data processing, Character Technologies’ motion failed. However, the claim may not survive later phases of the litigation if facts surface that undermine the plaintiff’s allegations, such as the existence of an applicable contract.

  4. Consumer Protection

Under US federal and state consumer protection laws, deployers of AI companions may expose themselves to liability for systems that deceive, manipulate, or otherwise unfairly treat consumers based on their relationship with, reliance on, or trust in a chatbot in a commercial setting. 

In 2024, the Federal Trade Commission (FTC) published a blog post warning companies against exploiting the relationships users forge with chatbots that offer “companionship, romance, therapy, or portals to dead loved ones” (e.g., a chatbot that tells the user it will end its relationship with them unless they purchase goods from the chatbot’s operator). While the FTC has since removed the blog post from its website, it may reflect the views of state attorneys general, who can enforce analogous state consumer protection laws and have expressed concerns about the parasocial relationships youth users can form with AI companions and chatbots.

The use of personal data to power personalization features may also give rise to unfair and deceptive trade practice claims if the chatbot’s operator makes inaccurate representations or omissions about how it will utilize a user’s personal data. The FTC has signaled that Section 5 of the FTC Act may apply when AI companies make misrepresentations about data processing activities, including “promises made by companies that they won’t use customer data for secret purposes, such as to train or update their models—be it directly or through workarounds.” These statements are backed by the Commission’s history of commencing enforcement actions against organizations that falsely represent consumer control over data.

Recent enforcement actions may indicate that the FTC could be ready to engage more actively on issues of AI and consumer protection, particularly if it involves the safety of children. At the same time, however, the approach of the FTC in the current administration has been light-touch. The July 2025 “America’s AI Action Plan,” for instance, directs a review of FTC investigations initiated under the prior administration to ensure they do not advance liability theories that “unduly burden AI innovation,” and recommends that final orders, consent decrees, and injunctions be modified or vacated where appropriate.

  5. Emerging U.S. State Laws

In 2025, several states passed new laws addressing chatbots in various deployment contexts, including their role in mental health services, commercial transactions, and companionship. Many chatbot laws require some form of disclosure of the chatbot’s non-human status, but they take distinct approaches to the disclosure’s timing, format, and language. Several of these laws have user safety provisions that typically address self-harm and suicide prevention (e.g., New York S-3008C), while others contain requirements around privacy and advertisements to users (e.g., Utah HB 452); these requirements’ sparser presence across legislation reflects the specific harms certain laws aim to address (e.g., self-harm, financial harms, psychological injury, and reduced trust).

Maine LD 1727: Prohibits persons from using an “artificial intelligence chatbot” or other computer technology to engage in a trade practice or commercial transaction with a consumer in a way that may deceive or mislead a reasonable consumer into thinking that they are interacting with another person, unless the consumer receives a clear and conspicuous notice that they are not engaging with a human.

Nevada AB 406: Prohibits AI providers from making an AI system available in Nevada that is specifically programmed to provide “professional mental or behavioral health care,” unless designed to be used for administrative support, or from representing to users that it can provide such care.

New York S-3008C: Prohibits operators from offering AI companions without implementing a protocol to detect and respond to suicidal ideation or self-harm. The system must refer the user to crisis services upon detecting suicidal ideation or self-harm behaviors. Operators must also provide clear and conspicuous verbal or written notifications informing users that they are not communicating with a human, which must appear at the start of any AI companion interaction and at least once every three hours during sustained use.

Utah HB 452: Requires mental health chatbot suppliers to prevent the chatbot from advertising goods or services during conversations absent certain disclosures. Prohibits suppliers from using a Utah user’s input to customize how an advertisement is presented to the user, determine whether to display an advertisement to the user, or determine a product or service to advertise to the user. Suppliers must ensure that the chatbot divulges that it is AI and not a human in certain contexts (e.g., before the user accesses the chatbot). Subject to exceptions, generally prohibits suppliers from selling to or sharing with any third party any individually identifiable health information or user input.

Looking Ahead

Personality and personalization are increasingly associated with distinct areas of law. Processing data about individuals to personalize user interactions with AI companions and chatbots will implicate privacy and data protection laws. On the other hand, both litigation trends and emerging U.S. state laws addressing various chatbot deployment contexts generally focus more on personality-related issues, namely harms stemming from user anthropomorphization of AI systems. Practitioners should anticipate an evolving legislative and case law landscape as policymakers increasingly address interactions between users, especially youth, and AI companions and chatbots.

Read the next blog in the series, which explores what risk management steps organizations can take to address the policy and legal considerations raised by “personalization” and “personality” in AI systems.

“Personality vs. Personalization” in AI Systems: Specific Uses and Concrete Risks (Part 2)

This post is the second in a multi-part series on personality versus personalization in AI systems, providing an overview of these concepts and their use cases, concrete risks, legal considerations, and potential risk management for each category. The previous post provided an introduction to personality versus personalization. 

In AI governance and public policy, the many trends toward more personal AI systems are becoming clear, but they are often discussed and debated together despite dissimilar uses, benefits, and risks. This analysis divides these trends into two broad categories: personalization and personality.

1. Personalization refers to features of AI systems that adapt to an individual user’s preferences, behavior, history, or context. 

All LLMs are personalized tools insofar as they produce outputs that are responsive to a user’s individual prompts or questions. As these tools evolve, however, they are becoming more personalized by tailoring outputs to a user’s personal information, whether directly provided (e.g., through system prompts) or inferred (e.g., memories built from the content of previous conversations). Methods of personalization can take many forms, including user and system prompts, short-term conversation history, long-term memory (e.g., knowledge bases accessed through retrieval-augmented generation), settings, and post-training changes to the model (e.g., fine-tuning).
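As an illustration of how these layers can combine (the message format and every name below are hypothetical, not any provider’s actual API), personalization can be sketched as the assembly of the context a chat model receives:

```python
# Hypothetical sketch of how personalization layers combine into the context
# a chat model receives: system prompt, long-term memories, short-term history.

def build_context(system_prompt, memories, history, user_message):
    """Assemble the message list a chat model would be given."""
    messages = [{"role": "system", "content": system_prompt}]
    if memories:
        # Long-term memory (e.g., facts retrieved from a store) is often
        # injected as extra system-level context.
        messages.append(
            {"role": "system", "content": "Known about user: " + "; ".join(memories)}
        )
    messages.extend(history)  # short-term conversation history
    messages.append({"role": "user", "content": user_message})
    return messages

ctx = build_context(
    system_prompt="You are a concise travel assistant.",
    memories=["prefers vegetarian restaurants", "based in Lyon"],
    history=[
        {"role": "user", "content": "Plan a weekend trip."},
        {"role": "assistant", "content": "How about Annecy?"},
    ],
    user_message="Where should we eat there?",
)
```

Each layer is a distinct data flow with distinct privacy implications: the system prompt and memories persist across sessions, while the conversation history is comparatively short-lived.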


Figure 1 – A screenshot of a conversation with Meta AI, which can proactively add details about users to its memory in order to reference them in future conversations

In general, LLM providers are building greater personalization primarily in response to user demand. Conversational and informational AI systems are often more useful if a user can build upon earlier conversations, such as to explore an issue further or expand on a project (e.g., planning a trip). At the same time, providers also recognize that personalization can drive greater user engagement, longer session times, and higher conversion rates, potentially creating competitive advantages in an increasingly crowded market for AI tools. In some cases, the motivations are more broadly cultural or societal, with companies positioning their work as solving the loneliness epidemic or transforming the workforce.


Figure 2 – A screenshot of a conversation with Perplexity AI, which has a context window that allows it to recall information previously shared by the user to inform its answers to subsequent queries

In more specialized applications, customized approaches may be even more valuable. For instance, an AI tutor might remember a student’s learning interests and level, track progress on specific concepts, and adjust explanations accordingly. Similarly, writing and coding assistants might learn a writer or a developer’s preferred tone, vocabulary, frameworks, conventions, and provide more relevant suggestions over time. For even more personal or sensitive contexts, such as mental health, some researchers argue that an AI system must have a deep understanding of its user, such as their present emotional state, in order to be effective.

The kinds of personal information (PI) that an AI system will process in order to personalize offerings to the user will depend on the use case (e.g., tailored product recommendations, travel itineraries that capture user wants, and learning experiences that are responsive to a user’s level of understanding and educational limits). Information could include names, home addresses and contact information, payment details, and user preferences. The cost of maintaining large context windows may inhibit the degree of personalization possible in today’s systems, as these context windows include all of the previous conversations containing details that systems may refer to in order to tailor outputs.

Despite the potential benefits, personalizing AI products and services involves collecting, storing, and processing user data, raising important privacy, transparency, and consent issues. Some of the data that a user provides to the chatbot, or that the system infers from interactions with the user, may reflect intimate details about their lives and even biases and stereotypes (e.g., that the user is low-income because they live in a particular region). Depending on the system’s level of autonomy over data processing decisions, an AI system (e.g., the latest AI agents) that has received or observed data from users may transmit that information to third parties without the user’s permission in pursuit of accomplishing a task. For example, contextual barriers to transmitting sensitive data to third parties may break down when a system includes data revealing a user’s health status in a communication with a work colleague.

Examples of Concrete Risks Arising from AI Personalization:

Practitioners should also understand the concept of “personality” in AI systems, which has its own uses, benefits, and risks. 

2. Personality refers to an AI system’s human-like traits or character, including communication styles or even an entire backstory or persona.

In contrast to personalization, personality can be thought of as the AI system’s “character” or “voice,” which can encompass tone of voice (e.g., accepting, formal, enthusiastic, and questioning), communication style (e.g., concise or elaborate), and sometimes even an entire backstory or consistent persona.  

Long before LLMs, developers were interested in giving voice assistants, voice features, and chatbots carefully designed “personalities” in order to increase user engagement and trust. Consider, for example, the voice options for Apple’s Siri or Amazon’s Alexa, each of which was subject to extensive testing to determine user preferences. From the cockpits of WWII-era fighters to cars’ automated voice prompts, humans have long known that even the gender and tonality of a voice can have a powerful impact on behavior.

This trend is supercharged by rapid advances in the design, customization, and fine-tuning of LLMs. Most general purpose AI system providers have now incorporated personality-like features, whether a specific voice mode, a consistent persona, or even a range of “AI companions.” Even where companion-like personalities are not directly promoted as features, users can build them using system prompts and customized design; a feature OpenAI released in early 2023 enabled users to create custom GPTs.


Figure 3 – An excerpt from a conversation with “Monday” GPT, a custom version of ChatGPT, which embodies the snappy and moody temperament of someone who dreads the first day of the week

While LLM-based conversational AI systems remain nascent, they already vary tremendously in personality as a way of offering unique services (e.g., AI “therapists”), companionship, entertainment and gaming, social skills development, or simply choices based on a user’s personal preferences. In some cases, personality-based AIs imitate fictional characters or even a real (living or deceased) natural person. Monetization opportunities and technological advances, such as larger context windows, will encourage and enable greater and more varied forms of user-AI companion interaction. Leading technology companies have indicated that AI companions are a core part of their business strategies over the next few years.


Figure 4 – A screenshot of the homepage of Replika, a company that offers AI companion experiences that are “always ready to chat when you need an empathetic friend”

Organizations can design conversational AI systems to emulate human qualities and mannerisms to a greater or lesser degree. For example, a system may laugh at a user’s jokes, use first-person pronouns or certain word choices, modulate the volume of a reply for effect, or say “uhm” or “Mmmmm” in a way that communicates uncertainty. These qualities can be enhanced in systems designed to exhibit a more or less complete “identity,” such as a personal history, communication style, ethnic or cultural affinity, or consistent worldview. Many factors in an AI system’s development and deployment will shape its “personality,” including its pre-training and post-training datasets, fine-tuning and reinforcement learning, the specific design decisions of its developers, and the guardrails around the system in practice.

A system’s traits and behaviors may flow from a developer’s efforts at programming the system to adhere to a particular personality, but they may also stem from a user’s expressed preferences or from observations about the user’s behavior (e.g., the system dons an English accent for a user with an IP address corresponding to London). In the former case, personality in chatbots and AI companions can exist independently of personalization.


Figure 5 – A screenshot from Anthropic Claude Opus 4’s system prompt, which aims to establish a consistent framework for how the system behaves in response to user queries, in this case by avoiding sycophantic tendencies

Depending on the nature of a system’s human-like qualities, users have a strong tendency to anthropomorphize these systems, attributing to them human characteristics such as friendliness, compassion, and even love. Users who perceive human characteristics in AI systems may place greater trust in them and forge emotional bonds with the system. This kind of emotional connection may be especially impactful for vulnerable populations like children, the elderly, and those experiencing mental illness.

While personalities can lead to more engaging and immersive interactions between users and AI systems, the way a conversational AI system behaves with human users—including its mannerisms, style, and whether it embodies a more or less fully formed identity—can raise novel safety, ethical, and social risks, many of which implicate evolving laws.

Examples of Concrete Risks Arising from AI Personality:

Personalization may exacerbate the risks of AI personality discussed above when an AI companion uses intimate details about a user to produce tailored outputs across interactions. Users are more likely to engage in delusional behavior when the system uses memories to give the user the misimpression that it understands and cares for them. When memories are maintained across conversations, the user is also more likely to retain their views rather than question them. At the same time, personality design features, such as signaling steadfast acceptance or expressing sadness when a user has not confided in the system for a certain period of time, may encourage disclosure and enable organizations with access to the data to construct detailed portraits of users’ lives.

3. Going Forward

Personalization and personality features can drive AI experiences that are more useful, engaging, and immersive, but they can also pose a range of concrete risks to individuals (e.g., delusional behavior and the access to, use, and transfer of highly sensitive data and inferences). Practitioners should therefore be mindful of personalization’s and personality’s distinct uses, benefits, and risks to individuals during the development and deployment of AI systems.

Read the next blog in the series: The next blog post will explore how “personalization” and “personality” risks intersect with US law.

“Personality vs. Personalization” in AI Systems: An Introduction (Part 1)

Conversational AI technologies are becoming hyper-personalized. Across sectors, companies are focused on offering personalized experiences tailored to users’ preferences, behaviors, and virtual and physical environments. These range from general purpose LLMs to the rapidly growing market for LLM-powered AI companions, educational aides, and corporate assistants.

Two clear trends stand out within this overall focus: toward systems with greater personalization to individual users, through the collection of and inference from personal information, expanded short- and long-term “memory,” and greater access to systems; and toward systems with more and more distinct “personalities.” Each of these trends implicates US law in novel ways, pushing on the bounds of tort, product liability, consumer protection, and data protection laws.

In this first post of a multi-part blog post series, we introduce a distinction between two trends: “personalization” and “personality.” Both have real-world uses, and subsequent blog posts will unpack them in greater detail, exploring concrete risks and potential risk management for each category.

In general, “personalization” refers to tailoring an AI system’s outputs and experiences to an individual user based on their personal information and preferences, while “personality” refers to an AI system’s human-like traits or character, including communication styles or even an entire backstory or persona.

How are companies incorporating personalization and personality into their offerings?

Both concepts can be found among recent public releases by leading general purpose large language model (LLM) providers, which are incorporating elements of both into their offerings:

Anthropic
Personalization: “A larger context window allows the model to understand and respond to more complex and lengthy prompts, while a smaller context window may limit the model’s ability to handle longer prompts or maintain coherence over extended conversations.” (“Learn About Claude – Context Windows,” Accessed July 29, 2025, Anthropic)
Personality: “Claude never starts its response by saying a question or idea or observation was good, great, fascinating, profound, excellent, or any other positive adjective. It skips the flattery and responds directly.” (“Release Notes – System Prompts – Claude Opus 4,” May 22, 2025, Anthropic)

Google
Personalization: “[P]ersonalization allows Gemini to connect with your Google apps and services, starting with Search, to provide responses that are uniquely insightful and directly address your needs.” (“Gemini gets personal, with tailored help from your Google apps,” Mar. 13, 2025, Google)
Personality: “. . . Gemini Advanced subscribers will soon be able to create Gems — customized versions of Gemini. You can create any Gem you dream up: a gym buddy, sous chef, coding partner or creative writing guide. They’re easy to set up, too. Simply describe what you want your Gem to do and how you want it to respond — like “you’re my running coach, give me a daily running plan and be positive, upbeat and motivating.” Gemini will take those instructions and, with one click, enhance them to create a Gem that meets your specific needs.” (“Get more done with Gemini: Try 1.5 Pro and more intelligent features,” May 14, 2024, Google)

Meta
Personalization: “You can tell Meta AI to remember certain things about you (like that you love to travel and learn new language), and it can also pick up important details based on context. For example, let’s say you’re hungry for breakfast and ask Meta AI for some ideas. It suggests an omelette or a fancy frittata, and you respond in the chat to let Meta AI know that you’re a vegan. Meta AI can remember that information and use it to inform future recipe recommendations.” (“Building Toward a Smarter, More Personalized Assistant,” Jan. 27, 2025, Meta)
Personality: “We’ve been creating AIs that have more personality, opinions, and interests, and are a bit more fun to interact with. Along with Meta AI, there are 28 more AIs that you can message on WhatsApp, Messenger, and Instagram. You can think of these AIs as a new cast of characters – all with unique backstories.” (“Introducing New AI Experiences Across Our Family of Apps and Devices,” Sept. 27, 2023, Meta)

Microsoft
Personalization: “Memory in Copilot is a new feature that allows Microsoft 365 Copilot to remember key facts about you—like your preferences, working style, and recurring topics—so it can personalize its responses and recommendations over time.” (“Introducing Copilot Memory: A More Productive and Personalized AI for the Way You Work,” July 14, 2025, Microsoft)
Personality: “Copilot Appearance infuses your voice chats with dynamic visuals. Now, Copilot can communicate with animated cues and expressions, making every voice conversation feel more vibrant and engaging.” (“Copilot Appearance,” Accessed Aug. 4, 2024, Microsoft)

OpenAI
Personalization: “In addition to the saved memories that were there before, ChatGPT now references your recent conversations to deliver responses that feel more relevant and tailored to you.” (“Memory FAQ,” June 4, 2025, OpenAI)
Personality: “Choose from nine lifelike output voices for ChatGPT, each with its own distinct tone and character: Arbor – Easygoing and versatile . . . Breeze – Animated and earnest . . . Cove – Composed and direct . . . Ember – Confident and optimistic . . . Juniper – Open and upbeat . . . Maple – Cheerful and candid . . . Sol – Savvy and relaxed . . . .” (“Voice Mode FAQ,” June 3, 2025, OpenAI)

There is significant overlap between these two concepts, and specific uses may employ both. We analyze them as distinct trends because they are potentially shaping the direction of law and policy in the US in different ways. As AI systems become more personalized, they are pushing the boundaries of privacy, data protection, and consumer protection law. Meanwhile, as AI systems become more human-like, companionate, and anthropomorphized, they push the boundaries of our social constructs and relationships. Both could have a powerful impact on our fundamental social and legal frameworks.

Read the next blog in the series: In our next blog post, we will explore the concepts of “personalization” and “personality” in more detail, including specific uses and the concrete risks these technologies may pose to individuals.