“Personality vs. Personalization” in AI Systems: An Introduction (Part 1)
Conversational AI technologies are hyper-personalizing. Across sectors, companies are focused on offering personalized experiences tailored to users’ preferences, behaviors, and virtual and physical environments. These offerings range from general purpose large language models (LLMs) to the rapidly growing market for LLM-powered AI companions, educational aides, and corporate assistants.
There are two clear trends within this overall focus: towards systems that are more personalized to individual users (through the collection and inference of personal information, expanded short- and long-term “memory,” and greater access to users’ data and systems), and towards systems with more and more distinct “personalities.” Each of these trends is implicating US law in novel ways, pushing on the bounds of tort, product liability, consumer protection, and data protection law.
In this first post of a multi-part blog series, we introduce the distinction between these two trends: “personalization” and “personality.” Both have real-world uses, and subsequent posts will unpack each in greater detail, exploring the concrete risks these technologies pose and potential risk management approaches for each category.
In general:
- Personalization refers to features of AI systems that adapt to an individual user’s preferences, behavior, history, or context. As conversational AI systems expand their ability to infer and retain information through a variety of mechanisms (e.g., larger context windows, memory, system prompts, and retrieval-augmented generation), and are given greater access to data and content, they raise critical privacy, transparency, and consent challenges.
- Personality refers to the human-like traits and behaviors (e.g., friendly, concise, humorous, or skeptical) that are increasingly a feature of conversational systems. Even without memory or data-driven personalization, the increasingly human-like qualities of interactive AI systems can pose novel risks, including manipulation, over-reliance, and emotional dependency, which in severe cases have led to delusional behavior or self-harm.
How are companies incorporating personalization and personality into their offerings?
Both concepts can be found in recent public releases by leading general purpose LLM providers, which are incorporating elements of each into their offerings:
| Provider | Example of Personalization | Example of Personality |
| --- | --- | --- |
| Anthropic | “A larger context window allows the model to understand and respond to more complex and lengthy prompts, while a smaller context window may limit the model’s ability to handle longer prompts or maintain coherence over extended conversations.” “Learn About Claude – Context Windows,” Accessed July 29, 2025, Anthropic | “Claude never starts its response by saying a question or idea or observation was good, great, fascinating, profound, excellent, or any other positive adjective. It skips the flattery and responds directly.” “Release Notes – System Prompts – Claude Opus 4,” May 22, 2025, Anthropic |
| Google | “[P]ersonalization allows Gemini to connect with your Google apps and services, starting with Search, to provide responses that are uniquely insightful and directly address your needs.” “Gemini gets personal, with tailored help from your Google apps,” Mar. 13, 2025, Google | “. . . Gemini Advanced subscribers will soon be able to create Gems — customized versions of Gemini. You can create any Gem you dream up: a gym buddy, sous chef, coding partner or creative writing guide. They’re easy to set up, too. Simply describe what you want your Gem to do and how you want it to respond — like “you’re my running coach, give me a daily running plan and be positive, upbeat and motivating.” Gemini will take those instructions and, with one click, enhance them to create a Gem that meets your specific needs.” “Get more done with Gemini: Try 1.5 Pro and more intelligent features,” May 14, 2024, Google |
| Meta | “You can tell Meta AI to remember certain things about you (like that you love to travel and learn new language), and it can also pick up important details based on context. For example, let’s say you’re hungry for breakfast and ask Meta AI for some ideas. It suggests an omelette or a fancy frittata, and you respond in the chat to let Meta AI know that you’re a vegan. Meta AI can remember that information and use it to inform future recipe recommendations.” “Building Toward a Smarter, More Personalized Assistant,” Jan. 27, 2025, Meta | “We’ve been creating AIs that have more personality, opinions, and interests, and are a bit more fun to interact with. Along with Meta AI, there are 28 more AIs that you can message on WhatsApp, Messenger, and Instagram. You can think of these AIs as a new cast of characters – all with unique backstories.” “Introducing New AI Experiences Across Our Family of Apps and Devices,” Sept. 27, 2023, Meta |
| Microsoft | “Memory in Copilot is a new feature that allows Microsoft 365 Copilot to remember key facts about you—like your preferences, working style, and recurring topics—so it can personalize its responses and recommendations over time.” “Introducing Copilot Memory: A More Productive and Personalized AI for the Way You Work,” July 14, 2025, Microsoft | “Copilot Appearance infuses your voice chats with dynamic visuals. Now, Copilot can communicate with animated cues and expressions, making every voice conversation feel more vibrant and engaging.” “Copilot Appearance,” Accessed Aug. 4, 2024, Microsoft |
| OpenAI | “In addition to the saved memories that were there before, ChatGPT now references your recent conversations to deliver responses that feel more relevant and tailored to you.” “Memory FAQ,” June 4, 2025, OpenAI | “Choose from nine lifelike output voices for ChatGPT, each with its own distinct tone and character: Arbor – Easygoing and versatile . . . Breeze – Animated and earnest . . . Cove – Composed and direct . . . Ember – Confident and optimistic . . . Juniper – Open and upbeat . . . Maple – Cheerful and candid . . . Sol – Savvy and relaxed . . . .” “Voice Mode FAQ,” June 3, 2025, OpenAI |
There is significant overlap between these two concepts, and specific uses may employ both. We analyze them as distinct trends because they are potentially shaping the direction of law and policy in the US in different ways. As AI systems become more personalized, they are pushing the boundaries of privacy, data protection, and consumer protection law. Meanwhile, as AI systems become more human-like, companionate, and anthropomorphized, they push the boundaries of our social constructs and relationships. Both could have a powerful impact on our fundamental social and legal frameworks.
Read the next blog in the series: In our next blog post, we will explore the concepts of “personalization” and “personality” in more detail, including specific uses and the concrete risks these technologies may pose to individuals.