Concepts in AI Governance: Personality vs. Personalization
Conversational AI technologies are becoming hyper-personalized. Across sectors, companies are focused on offering personalized experiences tailored to users’ preferences, behaviors, and virtual and physical environments. These offerings range from general-purpose LLMs to the rapidly growing market for LLM-powered AI companions, educational aides, and corporate assistants.
Two clear trends run through this overall focus: toward systems with greater personalization to individual users through the collection and inference of personal information, expanded short- and long-term “memory,” and broader access to users’ systems; and toward systems with more and more distinct “personalities.” Each of these trends implicates U.S. law in novel ways, pushing on the bounds of tort, product liability, consumer protection, and data protection laws.
This issue brief defines and provides an analytical framework for distinguishing between “personalization” and “personality”—with examples of real-world uses, concrete risks, and potential risk management for each category. In general, in this paper:
- Personalization refers to features of AI systems that adapt to an individual user’s preferences, behavior, history, or context. As conversational AI systems gain greater ability to infer and retain information through a variety of mechanisms (e.g., larger context windows and enhanced memory), and as they are given greater access to data and content, these systems raise critical privacy, transparency, and consent challenges.
- Personality refers to the human-like traits and behaviors (e.g., friendly, concise, humorous, or skeptical) that are increasingly a feature of conversational systems. Even without memory or data-driven personalization, the increasingly human-like qualities of interactive AI systems can pose novel risks, including manipulation, over-reliance, and emotional dependency, which in severe cases have led to delusional behavior or self-harm.