“Personality vs. Personalization” in AI Systems: Intersection with Evolving U.S. Law (Part 3)
This post is the third in a series on personality versus personalization in AI systems. Read Part 1 (exploring concepts) and Part 2 (concrete uses and risks).
Conversational AI technologies are hyper-personalizing. Across sectors, companies are focused on offering personalized experiences tailored to users’ preferences, behaviors, and virtual and physical environments. These range from general-purpose LLMs to the rapidly growing market for LLM-powered AI companions, educational aides, and corporate assistants. Behind these experiences are two distinct trends: personality and personalization.
- Personality refers to the human-like traits and behaviors (e.g., friendly, concise, humorous, or skeptical) that are increasingly a feature of conversational systems.
- Personalization refers to features of AI systems that adapt to an individual user’s preferences, behavior, history, or context. Conversational AI systems can expand their abilities to infer and retain information through a variety of mechanisms (e.g., larger context windows and memory).
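To ground the distinction, the sketch below (in Python, with hypothetical names such as `MemoryStore` and `build_system_prompt` that are not drawn from any real product) illustrates one common pattern: a fixed “personality” instruction combined with per-user memories that accumulate across sessions and are injected into the system prompt. It is a minimal, assumption-laden illustration of the pattern, not a depiction of any particular vendor’s implementation.

```python
from dataclasses import dataclass, field

# Hypothetical example: a fixed "personality" instruction combined with
# per-user "personalization" memory that persists across sessions.

PERSONALITY_INSTRUCTION = "You are a friendly, concise assistant with a light sense of humor."

@dataclass
class MemoryStore:
    """Retains facts inferred or stated during prior conversations (personalization)."""
    facts: list[str] = field(default_factory=list)

    def remember(self, fact: str) -> None:
        self.facts.append(fact)

def build_system_prompt(memory: MemoryStore) -> str:
    """Combine the static personality with user-specific memories."""
    memory_block = "\n".join(f"- {fact}" for fact in memory.facts) or "- (no stored memories)"
    return (
        f"{PERSONALITY_INSTRUCTION}\n\n"
        f"Known facts about this user from earlier sessions:\n{memory_block}"
    )

# Usage: the personality stays constant, while the prompt grows more
# personalized as memories accumulate.
memory = MemoryStore()
memory.remember("Prefers short answers")
memory.remember("Mentioned an upcoming medical appointment")  # potentially sensitive detail
print(build_system_prompt(memory))
```

The second stored memory also hints at how personalization can quietly accumulate sensitive details, a thread the legal analysis below returns to.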
Evolving U.S. Law
Most conversational AI systems include aspects of both personality and personalization, sometimes intertwined in complex ways. Although there is significant overlap, we find that personality and personalization are also increasingly raising distinct legal issues.
In the United States, conversational AI systems may implicate a wide range of longstanding and emerging laws, including the following:
- Chatbots that personalize interactions by processing personal information implicate privacy, data protection, and cybersecurity laws. Depending on factors like the deployment context, user base, and impact on individuals, this includes provisions governing activities that some AI companions and chatbots may conduct (e.g., profiling users) as well as sectoral laws (e.g., COPPA). Conversational data can also lead to more intimate inferences that implicate heightened requirements for “profiling” or “sensitive data.”
- In contrast, AI systems with human-like traits and behaviors (and sometimes personalization features) are increasingly triggering tort and product liability law, consumer protection statutes, and emerging state legislation. A diverse range of traits and behaviors may implicate these laws, such as manipulating or otherwise unfairly treating consumers based on their relationship with, reliance on, or trust in a chatbot in a commercial setting, or deploying chatbots in mental health services and companionship roles.
- Section 230 of the Communications Decency Act (CDA) has historically protected companies from liability stemming from tortious conduct online, but this protection may not extend to conversational AI systems where features of a system’s design, rather than user-generated input, directly cause harm.
- Longstanding common law principles, such as the right of publicity, appropriation of name and likeness, and theories of unjust enrichment, are increasingly appearing in cases involving chatbots and AI, with varying degrees of success.
- Privacy, Data Protection and Cybersecurity Laws
Processing data about individuals for personalization implicates privacy and data protection laws. In general, these laws require organizations to adhere to certain processing limitations (e.g., data minimization and retention limits), implement risk mitigation measures (e.g., data protection impact assessments (DPIAs)), and comply with individuals’ rights to exercise control over their data (e.g., correction, deletion, and access rights).
In almost all cases, the content of text- and voice-based conversations will be considered “personal information” under general privacy and data protection laws, unless it is sufficiently de-linked from individuals and anonymized through technical, administrative, and organizational means. Even apart from a system’s inputs and outputs, it remains unsettled whether generative AI models themselves “contain” personal information in their model weights, an interpretation that may depend in part on the technical guardrails in place. Such a legal interpretation would have significant operational impacts for training and fine-tuning models on conversational data. As systems become more personalized, obligations and individual rights likely also extend beyond transcripts of conversations to include information retained about an individual in the form of system prompts, memories, or other personalized knowledge.
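To make that operational point concrete, the minimal sketch below (in Python, using hypothetical structures such as `UserRecord` and `handle_deletion_request` that are not drawn from any real system) illustrates how a deletion request might need to reach beyond conversation transcripts to stored memories and per-user prompt customizations. It is an illustrative assumption about one possible internal data model, not a description of how any particular vendor handles such requests.

```python
from dataclasses import dataclass, field

@dataclass
class UserRecord:
    """Hypothetical inventory of the places personal information about one user may live."""
    transcripts: list[str] = field(default_factory=list)        # raw conversation logs
    memories: list[str] = field(default_factory=list)           # distilled facts retained across sessions
    prompt_customizations: dict[str, str] = field(default_factory=dict)  # per-user system prompt additions

def handle_deletion_request(record: UserRecord) -> UserRecord:
    """Apply a deletion request to every store holding the user's data, not just transcripts."""
    record.transcripts.clear()
    record.memories.clear()
    record.prompt_customizations.clear()
    # Data already absorbed into model weights during training or fine-tuning is
    # not addressed here; as noted above, whether weights "contain" personal
    # information remains an open legal question.
    return record
```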
Conversational data can also lead to more intimate inferences that implicate heightened requirements for “profiling” or “sensitive data.” Specifically, the evaluation, analysis, or prediction of certain user characteristics (e.g., health, behavior, or economic status) by AI companions or chatbots may qualify as profiling if it produces certain effects or harms consumers (e.g., declining a loan application). This activity could trigger specific provisions in data privacy laws, such as opt-out rights and data privacy impact assessment requirements.
In addition, some conversational exchanges may reveal specific details about a user that qualify as “sensitive data,” such as information about the user’s racial or ethnic origin, sex life, sexual orientation, religious beliefs, or mental or physical health condition or diagnosis. The potentially intimate nature of conversations between users and AI companions and chatbots may result in organizations processing sensitive data even when the information does not come from a child (children’s data is itself often categorized as sensitive). While specific requirements vary from law to law, processing such data can trigger heightened obligations, including limitations on the use and disclosure of sensitive data and requirements to obtain opt-in consent from the user.
Depending on the context of the data processing, personalized chatbots and AI companions may also trigger sectoral laws like the Children’s Online Privacy Protection Act (COPPA) or the Family Educational Rights and Privacy Act (FERPA). Many users of AI companions and chatbots are under 18, meaning that processing data obtained in connection with these users may implicate specific child and adolescent privacy protections. For example, several states have passed or modified their existing comprehensive data privacy laws to impose new opt-in requirements, rights, and obligations on organizations processing children’s or teens’ data (e.g., imposing new impact assessment requirements and duties of care). Legislators have also advanced bills addressing the data privacy of AI companions’ youth users (e.g., CA AB 1064).
Finally, the potential risks related to external threats and data exfiltration can also implicate a wide range of US cybersecurity laws. This is particularly the case as personalized systems become more agentic, including through greater access to other systems and tools to perform complex tasks. Relevant legal frameworks may include sector-specific regulations, state breach notification laws, or consumer protection laws (e.g., the FTC’s application of Section 5 to security incidents).
- Tort, Product Liability and Section 230
Tort claims, such as negligence for failure to warn, product liability for defective design, and wrongful death, may apply to chatbots and AI companions when these technologies harm users. Although harm can arise from the collection, processing, and sharing of personal information (i.e., personalization), many of the early examples of these laws being applied to chatbots and conversational AI relate more to their companionate and human-like influence (i.e., personality).
For example, the plaintiff in Garcia v. Character Technologies, et al. raised a range of negligence, product liability, and related tort claims following the suicide of a 14-year-old boy who had formed a parasocial and romantic relationship with Character.ai chatbots that imitated characters from the Game of Thrones television series. In its May 2025 decision, the US District Court for the Middle District of Florida ruled that the First Amendment did not bar these tort claims from advancing. However, the Court left open the possibility of such a defense applying at a later stage in litigation, leaving unresolved whether the First Amendment blocks these claims because they inhibit the chatbot’s speech or listeners’ rights under that amendment.
In many cases, tort claims related to the personalized design of platforms and systems are barred by Section 230 of the Communications Decency Act (CDA), a federal law that gives websites and other online platforms legal immunity from liability for most user-posted content. However, this trend may not fully apply to conversational AI systems, particularly when there is evidence of features that directly cause harm through the design of the system rather than through user-generated input. For example, a 2015 claim against Snap, Inc. survived Section 230 dismissal based on allegations that a specific “Speed Filter” Snapchat feature (since discontinued) promoted reckless driving.
In other cases, personalizing a system through demographic-based targeting that causes harm may also implicate tort and product liability law when organizations target content to users at least in part by actively identifying the users on whom that content will have the greatest impact. In a significant 2024 ruling, the Third Circuit determined that a social media algorithm that curated and recommended content constituted expressive activity and therefore was not protected by Section 230.
Another recent ruling on a motion to dismiss by the Supreme Court of the State of New York may delineate the limits of this defense when applied to organizations’ design choices for content personalization. In Nazario v. ByteDance Ltd. et al., the Court determined that Section 230 of the CDA did not bar plaintiff’s product liability and negligence causes of action at the motion to dismiss phase, as plaintiff had sufficiently alleged that the personalization of user content was grounded at least in part in defendant’s design choice to actively target users based on certain demographic information, rather than exclusively through analyzing user inputs.
In Nazario, the Court highlighted how defendants’ activities went beyond the neutral editorial functions that Section 230 protects (e.g., selecting particular content types to promote based on the user’s past activities or expressed interests, and specifying or promoting which content types should be submitted to the platform) by targeting content to users based on their age. While discovery may undermine plaintiff’s factual allegations in this case, the Nazario court’s view that these allegations, if true, support viable causes of action under tort and product liability theories may affect AI companions depending on how they are personalized to users (e.g., based on express user indications of preference versus age, gender, and geographic location).
- Rights to Publicity and Unjust Enrichment
AI companions or chatbots that impersonate real individuals by emulating aspects of their personalities may also implicate the right of publicity and appropriation of name and likeness. While some sources such as the Second Restatement of Torts and Third Restatement of Unfair Competition conflate appropriation of name and likeness and the right of publicity, other commentators distinguish between them.
Generally, the “right of publicity” gives individuals (celebrities and non-celebrities alike) control over the commercial use of certain aspects of their identity (e.g., name and likeness). The majority of US states recognize this right in either their statutory codes or in common law, but the right’s duration, the protected elements of a person’s identity, and other requirements vary by state. For example, the US Courts of Appeals for the Sixth and Ninth Circuits have ruled that the right of publicity extends to aural and visual imitations, and recently enacted laws (e.g., Tennessee’s Ensuring Likeness, Voice, and Image Security (ELVIS) Act of 2024) may specifically target the use of generative AI to misappropriate a person’s identity, including sound-alikes. However, it remains unclear whether the right of publicity extends to “style” (e.g., certain slang words) and “tone” (e.g., a deep voice).
Finally, unjust enrichment, a common law principle that allows plaintiffs to recover value when defendants unfairly retain benefits at their expense, is increasingly appearing as a claim in cases involving chatbots and AI. The claim may be relevant to AI companions and chatbots when their operators use user data for model training and modification in order to enable personalization.
In the generative AI context, plaintiffs often file unjust enrichment claims alongside other claims against AI model developers that use the plaintiff’s or user’s data to train the model and profit from it. Unjust enrichment claims have featured in Garcia v. Character Technologies, et al. and other suits against the company. In Garcia, the Court declined to dismiss plaintiff’s unjust enrichment claim against Character Technologies after the plaintiff disputed the existence of a governing contract between Character Technologies and a user, repudiated such an agreement if it existed, and alleged that the chatbot operator received benefits from the user (i.e., the monthly subscription fee and the user’s personal data). Notably, Character Technologies’ motion failed because the Court declined to conclude at this stage whether either form of consideration was adequate or whether a user agreement applied to the data processing. However, the claim may not survive later phases of the litigation if facts surface that undermine the plaintiff’s allegations, such as the existence of an applicable contract.
- Consumer Protection
Under US federal and state consumer protection laws, deployers of AI companions may expose themselves to liability for systems that deceive, manipulate, or otherwise unfairly treat consumers based on their relationship with, reliance on, or trust in a chatbot in a commercial setting.
In 2024, the Federal Trade Commission (FTC) published a blog post warning companies against exploiting the relationships users forge with chatbots that offer “companionship, romance, therapy, or portals to dead loved ones” (e.g., a chatbot that tells the user it will end its relationship with them unless they purchase goods from the chatbot’s operator). While the FTC has since removed the blog post from its website, it may still reflect the views of state attorneys general, who enforce analogous state consumer protection laws and have expressed concerns about the parasocial relationships youth users can form with AI companions and chatbots.
The use of personal data to power personalization features may also give rise to unfair and deceptive trade practice claims if the chatbot’s operator makes inaccurate representations or omissions about how they will utilize a user’s personal data. The FTC has signaled that Section 5 of the FTC Act may apply when AI companies make misrepresentations about data processing activities, including “promises made by companies that they won’t use customer data for secret purposes, such as to train or update their models—be it directly or through workarounds.” These statements are backed up by the Commission’s history of commencing enforcement actions against organizations that falsely represent consumer control over data.
Recent enforcement actions may indicate that the FTC could be ready to engage more actively on issues of AI and consumer protection, particularly if it involves the safety of children. At the same time, however, the approach of the FTC in the current administration has been light-touch. The July 2025 “America’s AI Action Plan,” for instance, directs a review of FTC investigations initiated under the prior administration to ensure they do not advance liability theories that “unduly burden AI innovation,” and recommends that final orders, consent decrees, and injunctions be modified or vacated where appropriate.
- Emerging U.S. State Laws
In 2025, several states passed new laws addressing chatbots and AI companions in various deployment contexts, including mental health services, commercial transactions, and companionship. Many chatbot laws require some form of disclosure of the chatbot’s non-human status, but they take distinct approaches to the disclosure’s timing, format, and language. Several of these laws include user safety provisions that typically address self-harm and suicide prevention (e.g., New York S-3008C), while others contain requirements around privacy and advertisements to users (e.g., Utah HB 452); the sparser presence of these requirements across legislation reflects the specific harms certain laws aim to address (e.g., self-harm, financial harms, psychological injury, and reduced trust).
| Law | Description |
| --- | --- |
| Maine LD 1727 | Prohibits persons from using an “artificial intelligence chatbot” or other computer technology to engage in a trade practice or commercial transaction with a consumer in a way that may deceive or mislead a reasonable consumer into thinking that they are interacting with another person, unless the consumer receives a clear and conspicuous notice that they are not engaging with a human. |
| Nevada AB 406 | Prohibits AI providers from making an AI system available in Nevada that is specifically programmed to provide “professional mental or behavioral health care,” unless designed to be used for administrative support, or from representing to users that it can provide such care. |
| New York S-3008C | Prohibits operators from offering AI companions without implementing a protocol to detect and respond to suicidal ideation or self-harm; requires the system to refer the user to crisis services upon detecting suicidal ideation or self-harm behaviors; requires clear and conspicuous verbal or written notifications informing users that they are not communicating with a human, which must appear at the start of any AI companion interaction and at least once every three hours during sustained use. |
| Utah HB 452 | Requires mental health chatbot suppliers to prevent the chatbot from advertising goods or services during conversations absent certain disclosures; prohibits suppliers from using a Utah user’s input to customize how an advertisement is presented to the user, determine whether to display an advertisement to the user, or determine a product or service to advertise to the user; requires the chatbot to divulge that it is AI and not a human in certain contexts (e.g., before the user accesses the chatbot); subject to exceptions, generally prohibits suppliers from selling or sharing any individually identifiable health information or user input with any third party. |
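As one example of how such a requirement might translate into engineering terms, the sketch below (in Python, with hypothetical names like `DisclosureTimer`) models the disclosure cadence described in New York S-3008C: a notice at the start of an AI companion interaction and at least once every three hours of sustained use. This is an illustrative reading of the statute’s timing requirement only, not legal guidance or a complete, compliant implementation.

```python
import time
from typing import Optional

# Hypothetical disclosure cadence loosely modeled on NY S-3008C: notify the user
# at the start of the interaction and at least once every three hours of use.
DISCLOSURE_INTERVAL_SECONDS = 3 * 60 * 60
DISCLOSURE_TEXT = "Reminder: you are chatting with an AI companion, not a human."

class DisclosureTimer:
    def __init__(self) -> None:
        self.last_disclosure: Optional[float] = None  # None means no notice has been shown yet

    def disclosure_due(self, now: Optional[float] = None) -> bool:
        """True at the start of a session or once the three-hour interval has elapsed."""
        now = time.time() if now is None else now
        if self.last_disclosure is None:
            return True
        return (now - self.last_disclosure) >= DISCLOSURE_INTERVAL_SECONDS

    def mark_disclosed(self, now: Optional[float] = None) -> None:
        self.last_disclosure = time.time() if now is None else now

# Usage: before sending each AI companion response, prepend the notice if due.
timer = DisclosureTimer()
if timer.disclosure_due():
    print(DISCLOSURE_TEXT)
    timer.mark_disclosed()
```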
Looking Ahead
Personality and personalization are increasingly associated with distinct areas of law. Processing data about individuals to personalize user interactions with AI companions and chatbots will implicate privacy and data protection laws. By contrast, both litigation trends and emerging U.S. state laws addressing various chatbot deployment contexts generally focus more on personality-related issues, namely harms stemming from users’ anthropomorphization of AI systems. Practitioners should anticipate an evolving legislative and case law landscape as policymakers increasingly address interactions between users, especially youth, and AI companions and chatbots.
Read the next post in the series, which will explore what risk management steps organizations can take to address the policy and legal considerations raised by “personalization” and “personality” in AI systems.