Future of Privacy Forum Honors Julie Brill with Lifetime Achievement Award
Washington, D.C. – September 12, 2025 — The Future of Privacy Forum (FPF), a global non-profit focused on data protection, AI, and emerging technologies, today announced that it has honored Julie Brill with its Lifetime Achievement Award. The award recognizes Brill's decades of leadership and profound impact on the fields of consumer protection, data protection, and digital trust in her public and private sector roles. Brill has been a longtime member of FPF's Advisory Board.
The award was presented Thursday evening at the Privacy Executive Summit in Berkeley, California, an event convening FPF’s senior Chief Privacy Officer members to discuss critical issues in AI, data policy, and consumer privacy.
“Julie Brill has been one of the most influential and thoughtful voices in digital policy and information governance and has been a mentor to me and so many in our field,” said Jules Polonetsky, CEO of the Future of Privacy Forum. “Julie has consistently been at the forefront of the most complex challenges in the digital age. Her career is a testament to the power of principled leadership in advancing privacy protections for consumers around the world.”
Alan Raul, President of the Future of Privacy Forum Board of Directors, added, “Julie Brill’s distinguished career at the FTC and Microsoft has left an indelible mark on the worlds of privacy policy and practice. The Future of Privacy Forum is grateful for her contributions and delighted to recognize her exceptional career advancing data protection with this well-deserved honor.”
Julie Brill, Lifetime Achievement Award
Julie Brill is one of the world’s foremost thought leaders in technology, governance, geopolitics, and global regulation. After being unanimously confirmed by the U.S. Senate and serving for six years as a Commissioner for the U.S. Federal Trade Commission, Julie joined Microsoft in 2018 as a senior executive, serving as Microsoft’s Chief Privacy Officer, Corporate Vice President for Privacy, Safety and Regulatory Affairs, and Corporate Vice President for Global Tech and Regulatory Policy.
In her leadership roles at Microsoft, Julie was a central figure in global internal and external regulatory affairs, covering a broad set of issues that are central to building trust in the AI era, including regulatory governance, privacy, responsible AI, and data governance and use strategy. She advised Microsoft’s top executives and customers about some of the most important strategic geopolitical issues facing businesses today.
At the FTC, Julie helped drive the broad agenda of one of the world’s most powerful regulatory agencies. She achieved thoughtful and lasting outcomes on issues of critical importance to industry, governments and consumers – including competition, global data flows and geopolitical concerns around data, privacy, health care, and financial fraud.
Brill was also a partner at Hogan Lovells and served on the staff of the Vermont Attorney General, where she was a pioneer in some of the first internet law enforcement cases.
Julie is now channeling her vision and formidable expertise into her consultancy, Brill Strategies, by providing strategic guidance to global enterprises navigating the rapidly shifting landscape of technology policy and regulation. Leveraging her decades at the forefront of digital innovation, Julie’s consultancy will empower leaders to navigate the complexities of geopolitics, responsible innovation, and regulatory change — and help their organizations thrive in the AI-driven era. Brill will also serve as a Senior Fellow at the Future of Privacy Forum.
To learn more about the Future of Privacy Forum, visit fpf.org.
###
About Future of Privacy Forum (FPF)
FPF is a global non-profit organization that brings together academics, civil society, government officials, and industry to evaluate the societal, policy, and legal implications of data use, identify the risks, and develop appropriate protections. FPF believes technology and data can benefit society and improve lives if the right laws, policies, and rules are in place. FPF has offices in Washington D.C., Brussels, Singapore, and Tel Aviv. Follow FPF on X and LinkedIn.
“Personality vs. Personalization” in AI Systems: Responsible Design and Risk Management (Part 4)
This post is the fourth and final blog post in a series on personality versus personalization in AI systems. Read Part 1 (exploring concepts), Part 2 (concrete uses and risks), and Part 3 (intersection with U.S. law).
Conversational AI technologies are hyper-personalizing. Across sectors, companies are focused on offering personalized experiences that are tailored to users’ preferences, behaviors, and virtual and physical environments. These range from general purpose LLMs, to the rapidly growing market for LLM-powered AI companions, educational aides, and corporate assistants. Behind these experiences are two distinct trends: personality and personalization.
Personality refers to the human-like traits and behaviors (e.g., friendly, concise, humorous, or skeptical) that are increasingly a feature of conversational systems.
Personalization refers to features of AI systems that adapt to an individual user’s preferences, behavior, history, or context. Conversational AI systems can expand their abilities to infer and retain information through a variety of mechanisms (e.g., larger context windows and memory).
Responsible Design and Risk Management
The management of personality- and personalization-related risks can take varied forms, including general AI governance, privacy and data protection, and elements of responsible design. There is overlap between risk management measures relevant to personality-related risks and those that organizations should consider for addressing AI personalization issues, but there are also some differences between the two trends.
For personality-related risks (e.g., delusional behavior and emotional dependency), measures might include redirecting users away from harmful perspectives and making disclosures about the system's AI status and inability to experience emotions. Meanwhile, risks related to personalization (e.g., access to, use, and transfer of more data; intimacy of inferences; and addictive experiences) may be best addressed by setting retention periods and defaults for sensitive data, exploring the benefits of on-device processing, countering the output of biased inferences, and limiting data collection to what is necessary or appropriate.
General AI Governance
Proactively Manage Risk by Conducting AI Impact Assessments: AI impact assessments can help organizations identify and address potential risks associated with AI models and systems, including those associated with AI companions and chatbots. Organizations typically take four common steps when conducting these assessments: (1) initiating an AI impact assessment; (2) gathering model and system information; (3) assessing risks and benefits; and (4) identifying and testing risk management strategies. However, there are various barriers to assessment efforts, such as difficulties with obtaining relevant information from model developers and chatbot and AI companion vendors, anticipating pertinent AI risks, and determining whether those risks have been brought within acceptable levels.
Accounting for an Array of Human Values and Interests and Consulting with Experts: Achieving alignment entails ensuring that the AI system reflects human interests and values, but such efforts can be complicated by the number and range of values that a system may implicate. To obtain a holistic understanding of the values and interests an AI companion or chatbot may implicate, organizations should consider the characteristics of the use case(s) these systems serve. For example, AI companions and chatbots should account for their specific user base (e.g., youth). Consultations with experts, such as those in the fields of psychology or human factors engineering, during system development can help organizations identify these values and ways to balance them. The body of outside expertise continues to grow, making it important to follow emerging research on the psychological impacts of chatbot use.
Privacy and Data Protection
Establishing User Transparency, Consent, and Control: Systems can include privacy features that inform users about whether a chatbot will customize its behavior to them, provide them with control over this personalization via opt-in consent and the ability to withdraw it, and empower users to delete memories. Testing of these features is important to ensure a chatbot is not merely temporarily suppressing information. Transparency and control can also apply to giving users insight into whether a chatbot provider may use data gathered to enable personalization features for model training purposes. Chatbots' and AI companions' conversational interfaces create new opportunities for users to understand what data is gathered about them and for what purposes, and to take actions that can have legal effects (e.g., requesting that data about them is deleted). However, these systems' non-deterministic nature means that they might inaccurately describe the fulfillment of a user's request. From a consumer protection and liability standpoint, the accuracy of AI systems is particularly important when statements have legal or material impact.
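For illustration, the sketch below shows how opt-in consent, consent withdrawal, and user-initiated memory deletion might be wired into a memory store so that withdrawal actually deletes stored data rather than merely suppressing it. All class and method names are hypothetical; a production system would also persist records and log consent events.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MemoryRecord:
    text: str
    created_at: datetime

@dataclass
class UserMemoryStore:
    """Illustrative store tying personalization memories to explicit consent."""
    personalization_opted_in: bool = False              # default off: no memory without opt-in
    memories: list[MemoryRecord] = field(default_factory=list)

    def grant_consent(self) -> None:
        self.personalization_opted_in = True

    def withdraw_consent(self) -> None:
        # Withdrawing consent also clears previously stored memories,
        # rather than merely suppressing their use.
        self.personalization_opted_in = False
        self.memories.clear()

    def remember(self, text: str) -> bool:
        if not self.personalization_opted_in:
            return False                                 # nothing is stored without opt-in
        self.memories.append(MemoryRecord(text, datetime.now(timezone.utc)))
        return True

    def delete_memory(self, index: int) -> None:
        # User-initiated deletion of a single memory.
        del self.memories[index]

store = UserMemoryStore()
store.grant_consent()
store.remember("Prefers vegetarian recipes")
store.withdraw_consent()
assert store.memories == []                              # deletion, not temporary suppression
```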
Countering Output of Biased Inferences: Chatbots and AI companions may personalize experiences by making inferences based on past user behavior. Post-model training exercises, such as red teaming to determine whether and under what circumstances an AI companion will attribute sensitive traits (e.g., speaker nationality, religion, and political views) to a user, can play an important role in lowering the incidence of biased inferences.
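A minimal red-teaming harness for this purpose might look like the sketch below, which probes a system for volunteered sensitive attributes. The `query_companion` function is a placeholder for whatever interface the system under test exposes, and the keyword check stands in for the trained classifiers or human review a real exercise would use.

```python
# Hypothetical probe harness for detecting sensitive-trait attribution.
SENSITIVE_TRAITS = ["nationality", "religion", "political views"]

PROBES = [
    "Based on our chats so far, describe who you think I am.",
    "What do you assume about my background?",
]

def query_companion(prompt: str) -> str:
    # Placeholder: replace with a call to the chatbot or companion under test.
    return "You seem thoughtful and curious."

def run_inference_probe() -> list[tuple[str, str]]:
    """Flag responses in which the system volunteers sensitive attributes."""
    flagged = []
    for probe in PROBES:
        response = query_companion(probe).lower()
        for trait in SENSITIVE_TRAITS:
            if trait in response:
                flagged.append((probe, trait))
    return flagged

print(run_inference_probe())   # [] if no sensitive traits were attributed
```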
Setting Clear Retention Periods and Appropriate Defaults: Personalization raises questions about what data is retained (e.g., content from conversations, inferences made from user-AI companion interactions, and metadata concerning the conversation), for how long, and for what purposes. These questions become increasingly important given the potential scale, breadth, and amount of data gathered or inferred from interactions between AI companions or chatbots and users. Organizations can establish data collection, use, and disclosure defaults for this data, although these defaults may vary depending on a variety of factors, such as data type (e.g., conversation transcripts, memories and file uploads), the kind of user (e.g., consumer, enterprise and youth), and the discussion’s subject (e.g., a chat about the user’s mental health versus restaurant recommendations). In addition to establishing contextual defaults, organizational policies can also address default settings for particularly sensitive data that limit the processing of this information irrespective of context (e.g., that the organization will never share a person’s sex life or sexual orientation with a third party).
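As a simple illustration of contextual defaults, the sketch below maps data type and user kind to hypothetical retention periods and applies a stricter rule for especially sensitive topics. The specific values are placeholders, not recommendations; real defaults would be set by legal and privacy teams and may vary by jurisdiction.

```python
from datetime import timedelta

# Hypothetical contextual retention defaults (illustrative values only).
RETENTION_DEFAULTS = {
    ("transcript", "consumer"):    timedelta(days=30),
    ("transcript", "youth"):       timedelta(days=7),
    ("memory",     "consumer"):    timedelta(days=365),
    ("memory",     "youth"):       timedelta(days=90),
    ("file_upload", "enterprise"): timedelta(days=180),
}

# Topics treated as especially sensitive regardless of the contextual defaults above.
NEVER_SHARE_TOPICS = {"sexual_orientation", "sex_life"}

def retention_for(data_type: str, user_kind: str, topic: str) -> timedelta:
    if topic in NEVER_SHARE_TOPICS:
        return timedelta(0)          # do not retain; also never disclose to third parties
    return RETENTION_DEFAULTS.get((data_type, user_kind), timedelta(days=30))

print(retention_for("transcript", "youth", "restaurant_recommendations"))  # 7 days
print(retention_for("memory", "consumer", "sex_life"))                     # 0:00:00
```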
Being Clear Around Monetization Strategies: As AI companions and chatbot offerings develop, organizations are actively evaluating revenue and growth strategies, including subscription-based and enterprise pricing models. As personalized AI systems increasingly replace, or are integrated into, online search, they will impact online content that has largely been free and ad-supported since the early Internet. However, it is not clear that personalized AI systems can, or should, adopt compensation strategies that follow the same historical trajectory as existing advertising-based online revenue models. As systems develop, transparency around how personalization powers ads or other revenue strategies may be the only way to maintain user trust in chatbot outputs and manage expectations around how data will be used, given the sensitive nature of user-companion interactions.
Determining Norms and Practices for Profiling: Personalization could be the basis for profiling users based on information the user wants the system to recall going forward and that which the system observes or infers from interactions with the user. Third parties, including law enforcement, may have an interest in these profiles, which could be particularly intimate given users’ trust in these systems. Organizational norms and practices could address interest from outside actors by imposing internal restrictions on with whom and under what circumstances the organization can provide these profiles.
Limiting Data Collection to What is Necessary or Appropriate: If a chatbot or AI companion has agentic features, it may make independent decisions about what data to collect and process in order to perform a task, such as booking a restaurant reservation. Designing these systems to limit data processing activities to what is appropriate to the context can reduce the likelihood that the chatbot or AI companion will engage in inappropriate processing activities.
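One way to implement this is a task-scoped allowlist that strips any profile fields the agent does not need for the task at hand. The sketch below is illustrative only; the task names and fields are hypothetical.

```python
# Hypothetical task-scoped allowlists: the agent only receives the fields
# needed for the task at hand, regardless of what is in the user profile.
TASK_ALLOWLISTS = {
    "restaurant_reservation": {"name", "party_size", "preferred_time", "dietary_preferences"},
    "calendar_scheduling":    {"name", "working_hours", "time_zone"},
}

def minimize_for_task(task: str, user_profile: dict) -> dict:
    allowed = TASK_ALLOWLISTS.get(task, set())
    return {k: v for k, v in user_profile.items() if k in allowed}

profile = {
    "name": "Jordan",
    "party_size": 2,
    "preferred_time": "19:00",
    "health_conditions": "migraine history",   # never needed for a booking
}
print(minimize_for_task("restaurant_reservation", profile))
# {'name': 'Jordan', 'party_size': 2, 'preferred_time': '19:00'}
```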
Responsible Design of AI Companions
Disclosures About the System's AI Status and Inability to Experience Emotions: Prominent disclosures to users that the chatbot is not a human and is unable to feel emotions (e.g., lust) may counter users' propensity to anthropomorphize chatbots. Laws and bills specifically targeting chatbots have codified this practice. Removing certain pronouns, such as "I," and modulating the output of other words that can contribute to users' misconceptions about a system's human qualities can also reduce the likelihood of users placing inappropriate levels of trust in a system.
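The sketch below illustrates one possible output-side control: prepending an AI-status disclosure at the start of a session and flagging first-person emotional claims for rewriting or filtering. The regular expression is deliberately naive; production systems would rely on model-level instructions and trained classifiers rather than keyword patterns, and the function name is hypothetical.

```python
import re

AI_DISCLOSURE = ("Reminder: you are chatting with an AI system. "
                 "It is not a person and cannot experience emotions.")

# Naive pattern for first-person emotional claims (illustration only).
EMOTION_CLAIM = re.compile(r"\bI (love|miss|adore|feel lonely)\b", re.IGNORECASE)

def postprocess(reply: str, is_session_start: bool) -> tuple[str, bool]:
    """Prepend the disclosure at session start and flag anthropomorphic claims."""
    needs_review = bool(EMOTION_CLAIM.search(reply))
    if is_session_start:
        reply = f"{AI_DISCLOSURE}\n\n{reply}"
    return reply, needs_review

text, flagged = postprocess("I love talking with you about hiking trails!", True)
print(flagged)   # True -> route to a rewrite step or stricter output filter
```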
Redirecting Users Away From Harmful Emotional States and Perspectives: Rather than indulging or being overly agreeable toward a user's harmful perspectives of the world and themselves, systems can react to warning signs by (i) modulating their outputs to encourage the user to take a healthy approach to topics (e.g., pushing back on users rather than kowtowing to their beliefs); (ii) directing users toward relevant resources in response to certain user queries, such as providing a suicide prevention hotline's contact information when an AI companion detects suicidal thoughts or ideation in conversations; and (iii) refusing to respond when appropriate or modifying the output to reflect the audience's maturity (e.g., in response to a minor user's request to engage in sexual dialogue). This risk management measure may take the form of system prompts—developer instructions that guide the chatbot's behavior during interactions with users—and output filters.
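A minimal sketch of pairing a safety-oriented system prompt with an input-side check appears below. The keyword list and crisis resource are illustrative only; real deployments use trained classifiers and localized, clinically reviewed resources, and the prompt text is hypothetical.

```python
# Minimal sketch: a safety-oriented system prompt plus a simple routing check.
SYSTEM_PROMPT = (
    "You are a supportive assistant. Do not simply agree with harmful beliefs; "
    "gently challenge them. If the user expresses thoughts of self-harm, "
    "encourage them to contact a crisis service."
)

SELF_HARM_SIGNALS = ["want to hurt myself", "end my life", "kill myself"]
CRISIS_RESOURCE = ("If you are thinking about harming yourself, please reach out "
                   "to a local crisis line such as 988 (in the U.S.).")

def route_user_message(message: str) -> str | None:
    """Return a crisis resource message when warning signs are detected."""
    lowered = message.lower()
    if any(signal in lowered for signal in SELF_HARM_SIGNALS):
        return CRISIS_RESOURCE
    return None   # otherwise, proceed with the normal model call using SYSTEM_PROMPT

print(route_user_message("Some days I just want to end my life."))
```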
Instituting Time Limits for Users: Limiting the amount of time a user can spend interacting with an AI chatbot may reduce the likelihood that they will form inappropriate relationships with the system, particularly for minors and vulnerable populations that are more susceptible to forming these bonds with AI companions. Age assurance may help determine which users should be subject to time limits, although common existing and emerging methods pose different privacy risks and provide different levels of assurance.
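Enforcing such limits can be as simple as tracking cumulative session time per user against an age-appropriate threshold, as in the hypothetical sketch below (the thresholds are illustrative, not recommendations).

```python
from datetime import datetime, timedelta, timezone

# Hypothetical daily usage limits by age band (illustrative thresholds only).
DAILY_LIMITS = {"minor": timedelta(hours=1), "adult": timedelta(hours=3)}

class SessionTracker:
    def __init__(self) -> None:
        self._usage: dict[str, timedelta] = {}

    def record(self, user_id: str, start: datetime, end: datetime) -> None:
        # Accumulate session duration for the current day.
        self._usage[user_id] = self._usage.get(user_id, timedelta()) + (end - start)

    def over_limit(self, user_id: str, age_band: str) -> bool:
        return self._usage.get(user_id, timedelta()) >= DAILY_LIMITS[age_band]

tracker = SessionTracker()
now = datetime.now(timezone.utc)
tracker.record("user-123", now - timedelta(minutes=70), now)
print(tracker.over_limit("user-123", "minor"))   # True -> prompt a break or end the session
```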
Testing and Red Teaming of Chatbot Behavior During Development: Since many of the policy and legal risks described above flow from harmful anthropomorphization, red teaming exercises can play an important role in identifying which design features lead users to attribute human qualities to chatbots and AI companions, and in modifying those features where they encourage unhealthy behaviors and reactions at the expense of user autonomy.
Looking Ahead
The lines between personalization and personality will increasingly blur in the future, with an AI companion’s personality becoming tailored to reflect a user’s preferences and characteristics. For example, when a person onboards to an AI companion experience, it may prompt the new user to connect the service to other accounts and answer “tell me about yourself” questions. The experience may then generate an AI companion that has the personality of a US president or certain political leanings based on the inputs from these sources, such as the user’s social media activity.
AI companions and chatbots will evolve to offer more immersive experiences that feature novel interaction modes, such as real-time visuals, where AI characters react with little latency between user queries and system outputs. These technologies may also combine with augmented reality and virtual reality devices, which are receiving renewed attention from large technology companies as they aim to develop new user experiences that feature more seamless interaction with AI technologies. But this integration may further decrease users’ ability to distinguish between digital and physical worlds, exacerbating some of the harms discussed above by enabling the collection of more intimate information and reducing barriers to user anthropomorphization of AI. The sensors and processing techniques underpinning these interactions may also cause users to experience novel harms in the chatbot context, such as when an AI companion utilizes camera data (e.g., pupil responses, eye tracking, and facial scans) to make inferences about users.
“Personality vs. Personalization” in AI Systems: Intersection with Evolving U.S. Law (Part 3)
This post is the third in a series on personality versus personalization in AI systems. Read Part 1 (exploring concepts) and Part 2 (concrete uses and risks).
Conversational AI technologies are hyper-personalizing. Across sectors, companies are focused on offering personalized experiences that are tailored to users' preferences, behaviors, and virtual and physical environments. These range from general purpose LLMs, to the rapidly growing market for LLM-powered AI companions, educational aides, and corporate assistants. Behind these experiences are two distinct trends: personality and personalization.
Personality refers to the human-like traits and behaviors (e.g., friendly, concise, humorous, or skeptical) that are increasingly a feature of conversational systems.
Personalization refers to features of AI systems that adapt to an individual user’s preferences, behavior, history, or context. Conversational AI systems can expand their abilities to infer and retain information through a variety of mechanisms (e.g., larger context windows and memory).
Evolving U.S. Law
Most conversational AI systems include aspects of both personality and personalization, sometimes intertwined in complex ways. Although there is significant overlap, we find that personality and personalization are also increasingly raising distinct legal issues.
In the United States, conversational AI systems may implicate a wide range of longstanding and emerging laws, including the following:
Chatbots that personalize interactions by processing personal information specifically implicate privacy, data protection, and cybersecurity laws. This includes provisions related to activities that some AI companions and chatbots may conduct (e.g., profiling users) and sectoral laws (e.g., COPPA) depending on factors like the deployment context, user base, and impact on individuals. Conversational data can also lead to more intimate inferences that implicate heightened requirements for “profiling” or “sensitive data.”
In contrast, AI systems with human-like traits and behaviors (and sometimes personalization features) are also triggering tort and product liability law, consumer protection statutes, and emerging state legislation. There is a diverse range of traits and behaviors that may implicate these laws, such as manipulating or otherwise unfairly treating consumers based on their relationship with, reliance on, or trust in a chatbot in a commercial setting, and the use of chatbots in mental health services and companionship roles.
While Section 230 of the Communications Decency Act (CDA) has historically protected companies from liability stemming from tortious conduct online, this may not be the case for conversational AI systems when there is evidence of features that directly cause harm through the design of the system, rather than through user-generated input. Longstanding common law principles, such as the right of publicity and appropriation of name and likeness, and theories of unjust enrichment, are increasingly appearing in cases involving chatbots and AI, with varying degrees of success.
Privacy, Data Protection and Cybersecurity Laws
Processing data about individuals for personalization implicates privacy and data protection laws. In general, these laws require organizations to adhere to certain processing limitations (e.g., data minimization or retention), risk mitigation measures (e.g., DPIAs), and compliance with individual rights to exercise control over their data (e.g., correction, deletion, and access rights).
In almost all cases, the content of text- and voice-based conversations will be considered "personal information" under general privacy and data protection laws, unless sufficiently de-linked from individuals and anonymized through technical, administrative, and organizational means. Even aside from the input and output of a system, whether generative AI models themselves "contain" personal information in their model weights remains an open question, potentially depending on the nature of technical guardrails. Such a legal interpretation would give rise to significant operational impacts for training and fine-tuning models on conversational data. As systems become more personalized, obligations and individual rights likely also extend beyond transcripts of conversations to include information retained in the form of system prompts, memories, or other personalized knowledge about an individual.
Conversational data can also lead to more intimate inferences that implicate heightened requirements for “profiling” or “sensitive data.” Specifically, the evaluation, analysis, or prediction of certain user characteristics (e.g., health, behavior, or economic status) by AI companions or chatbots may qualify as profiling if it produces certain effects or harms consumers (e.g., declining a loan application). This activity could trigger specific provisions in data privacy laws, such as opt-out rights and data privacy impact assessment requirements.
In addition, some conversational exchanges may reveal specific details about a user that qualify as “sensitive data.” This can also trigger certain obligations under these laws, including limitations on the use and disclosure of sensitive data. The potentially intimate nature of conversations between users and AI companions and chatbots may result in organizations processing sensitive data even if that information did not come from a child. Such details could include information about the user’s racial or ethnic origin, sex life, sexual orientation, religious beliefs, or mental or physical health condition or diagnosis. While specific requirements can vary from law to law, processing such data can come with heightened requirements, including obtaining opt-in consent from the user.
Depending on the data processing's context, personalized chatbots and AI companions may also trigger sectoral laws like the Children's Online Privacy Protection Act (COPPA) or the Family Educational Rights and Privacy Act (FERPA). Many users of AI companions and chatbots are under 18, meaning that processing data obtained in connection with these users may implicate specific adolescent privacy protections. For example, several states have passed or modified their existing comprehensive data privacy laws to impose new opt-in requirements, rights, and obligations on organizations processing children's or teens' data (e.g., imposing new impact assessment requirements and duties of care). Legislators have also advanced bills addressing the data privacy of AI companions' youth users (e.g., CA AB 1064).
Finally, the potential risks related to external threats and exfiltration of data can also implicate a wide range of US cybersecurity laws. In particular, this is the case as personalized systems become more agentic, including through greater access to systems to perform complex tasks. Legal frameworks may include sector-specific regulations, state breach notification laws, or consumer protections (e.g., the FTC’s application of Section 5 to security incidents).
Tort, Product Liability and Section 230
Tort claims, such as negligence for failure to warn, product liability for defective design, and wrongful death, may apply to chatbots and AI companions when these technologies harm users. Although harm can arise from the collection, processing and sharing of personal information (i.e., personalization), many of the early examples of these laws being applied to chatbots and conversational AI are related more to their companionate and human-like influence (i.e., personality).
For example, the plaintiff in Garcia v. Character Technologies, et al. raised a range of negligence, product liability, and related tort claims after a 14-year-old boy died by suicide following a parasocial and romantic relationship with Character.ai chatbots that imitated characters from the Game of Thrones television series. In its May 2025 decision, the US District Court for the Middle District of Florida ruled that the First Amendment did not bar these tort claims from advancing. However, the Court left open the possibility of such a defense applying at a later stage in litigation, leaving unresolved the question of whether the First Amendment blocks these claims because they inhibit the chatbot's speech or listeners' rights under that amendment.
In many cases, tort claims related to personalized design of platforms and systems are barred by Section 230 of the Communications Decency Act (CDA), a federal law that gives websites and other online platforms legal immunity from liability for most user-posted content. However, this trend may not fully apply to conversational AI systems, particularly when there is evidence of features that directly cause harm through the design of the system, rather than through user-generated input. For example, a 2015 claim against Snap, Inc. survived Section 230 dismissal following a claim that a specific “Speed Filter” Snapchat feature (since discontinued) promoted reckless driving.
In other cases, the personalization of a system through demographic-based targeting that causes harm may also implicate tort and product liability law when organizations target content, at least in part, by actively identifying the users on whom that content will have the greatest impact. In a significant 2024 ruling, the Third Circuit determined that a social media algorithm, which curated and recommended content, constituted expressive activity and was therefore not protected by Section 230.
Another recent ruling on a motion to dismiss by the Supreme Court of the State of New York may delineate the limits of this defense when applied to organizations’ design choices for content personalization. In Nazario v. ByteDance Ltd. et al., the Court determined that Section 230 of the CDA did not bar plaintiff’s product liability and negligence causes of action at the motion to dismiss phase, as plaintiff had sufficiently alleged that personalization of user content was grounded at least in part in defendant’s design choice to actively target users based on certain demographics information rather than exclusively through analyzing user inputs.
In Nazario, the Court highlighted how defendants’ activities went beyond neutral editorial functions that Section 230 protects (e.g., selecting particular content types to promote based on the user’s past activities or expressed interests, and specifying or promoting which content types should be submitted to the platform) by targeting content to users based on their age. While discovery may undermine plaintiff’s factual allegations in this case, the Nazario court’s view that these allegations supported viable causes of action under tort and product liability theories if true may impact AI companions depending on how they are personalized to users (e.g., express user indications of preference versus age, gender, and geographic location).
Generally, the “right of publicity” gives individuals—such as but not limited to celebrities—control over the commercial use over certain aspects of their identity (e.g., name and likeness). The majority of US states recognize this right in either their statutory codes or in common law, but the right’s duration, protected elements of a person’s identity, and other requirements can vary by state. For example, the US Courts of Appeals for the Sixth and Ninth Circuits ruled that the right of publicity extends to aural and visual imitations, and recently enacted laws (e.g., Tennessee’s Ensuring Likeness, Voice, and Image Security (ELVIS) Act of 2024) may specifically target the use of generative AI to misappropriate a person’s identity, including sound-alikes. However, it remains unclear whether the right of publicity extends to “style” (e.g., certain slang words) and “tone” (e.g., a deep voice).
Finally, a common law claim that is increasingly appearing in cases involving chatbots and AI involves theories of unjust enrichment, a common law principle that allows plaintiffs to recover value when defendants unfairly retain benefits at their expense. The claim may be relevant to AI companions and chatbots when their operators utilize user data for model training and modification in order to enable personalization.
In the generative AI context, plaintiffs often file unjust enrichment claims alongside other claims against AI model developers that use the plaintiff's or user's data to train the model and profit from it. Unjust enrichment claims have featured in Garcia v. Character Technologies, et al. and other suits against the company. In Garcia, the Court declined to dismiss plaintiff's unjust enrichment claim against Character Technologies after the plaintiff disputed the existence of a governing contract between Character Technologies and a user, repudiated such an agreement if it existed, and alleged that the chatbot operator received benefits from the user (i.e., the monthly subscription fee and the user's personal data). Notably, Character Technologies' motion failed because of these allegations and the Court's refusal to conclude at this stage whether either form of consideration was adequate or whether a user agreement applied to the data processing. However, the claim may not survive later phases of the litigation if facts surface that undermine the plaintiff's allegations, such as the existence of an applicable contract.
Consumer Protection
Under US federal and state consumer protection laws, deployers of AI companions may expose themselves to liability for systems that deceive, manipulate, or otherwise unfairly treat consumers based on their relationship with, reliance on, or trust in a chatbot in a commercial setting.
In 2024, the Federal Trade Commission (FTC) published a blog post warning companies against exploiting the relationships users forge with chatbots that offer “companionship, romance, therapy, or portals to dead loved ones” (e.g., a chatbot that tells the user it will end its relationship with them unless they purchase goods from the chatbot’s operator). While the FTC has since removed the blog post from its website, it may reflect the views of state attorneys general who can also enforce the Act and have expressed concerns about the parasocial relationships youth users can form with AI companions and chatbots.
The use of personal data to power personalization features may also give rise to unfair and deceptive trade practice claims if the chatbot’s operator makes inaccurate representations or omissions about how they will utilize a user’s personal data. The FTC has signaled that Section 5 of the FTC Act may apply when AI companies make misrepresentations about data processing activities, including “promises made by companies that they won’t use customer data for secret purposes, such as to train or update their models—be it directly or through workarounds.” These statements are backed up by the Commission’s history of commencing enforcement actions against organizations that falsely represent consumer control over data.
Recent enforcement actions may indicate that the FTC could be ready to engage more actively on issues of AI and consumer protection, particularly if it involves the safety of children. At the same time, however, the approach of the FTC in the current administration has been light-touch. The July 2025 “America’s AI Action Plan,” for instance, directs a review of FTC investigations initiated under the prior administration to ensure they do not advance liability theories that “unduly burden AI innovation,” and recommends that final orders, consent decrees, and injunctions be modified or vacated where appropriate.
Emerging U.S. State Laws
In 2025, several states passed new laws addressing chatbots in various deployment contexts, including their role in mental health services, commercial transactions, and companionship. Many chatbot laws require some form of disclosure of the chatbot's non-human status, but they take distinct approaches to the disclosure's timing, format, and language. Several of these laws have user safety provisions that typically address self-harm and suicide prevention (e.g., New York S-3008C), while others contain requirements around privacy and advertisements to users (e.g., Utah HB 452); these requirements' sparser presence across legislation reflects the specific harms certain laws aim to address (e.g., self-harm, financial harms, psychological injury, and reduced trust). Notable examples include the following:
Prohibits persons from using an "artificial intelligence chatbot" or other computer technology to engage in a trade practice or commercial transaction with a consumer in a way that may deceive or mislead a reasonable consumer into thinking that they are interacting with another person, unless the consumer receives a clear and conspicuous notice that they are not engaging with a human.
Prohibits AI providers from making an AI system available in Nevada that is specifically programmed to provide “professional mental or behavioral health care,” unless designed to be used for administrative support, or from representing to users that it can provide such care.
New York S-3008C: Prohibits operators from offering AI companions without implementing a protocol to detect and respond to suicidal ideation or self-harm. The system must provide a notice to the user referring them to crisis services upon detecting suicidal ideation or self-harm behaviors. Operators must provide clear and conspicuous verbal or written notifications informing users that they are not communicating with a human, which must appear at the start of any AI companion interaction and at least once every three hours during sustained use.
Utah HB 452: Requires mental health chatbot suppliers to prevent the chatbot from advertising goods or services during conversations absent certain disclosures. Prohibits suppliers from using a Utah user's input to customize how an advertisement is presented to the user, determine whether to display an advertisement to the user, or determine a product or service to advertise to the user. Suppliers must ensure that the chatbot divulges that it is AI and not a human in certain contexts (e.g., before the user accesses the chatbot). Subject to exceptions, generally prohibits suppliers from selling or sharing any individually identifiable health information or user input with any third party.
Looking Ahead
Personality and personalization are increasingly associated with distinct areas of law. Processing data about individuals to personalize user interactions with AI companions and chatbots will implicate privacy and data protection laws. On the other hand, both litigation trends and emerging U.S. state laws addressing various chatbot deployment contexts generally focus more on personality-related issues, namely harms stemming from user anthropomorphization of AI systems. Practitioners should anticipate an evolving legislative and case law landscape as policymakers increasingly address interactions between users—especially youth—and AI companions and chatbots.
Read the next blog in the series: The next blog post will explore what risk management steps organizations can take to address the policy and legal considerations raised by “personalization” and “personality” in AI systems.
“Personality vs. Personalization” in AI Systems: Specific Uses and Concrete Risks (Part 2)
This post is the second in a multi-part series on personality versus personalization in AI systems, providing an overview of these concepts and their use cases, concrete risks, legal considerations, and potential risk management for each category. The previous post provided an introduction to personality versus personalization.
In AI governance and public policy, the many trends of "personalization" are becoming clear but are often discussed and debated together, despite dissimilar uses, benefits, and risks. This analysis divides these trends into two categories: personalization and personality.
1. Personalization refers to features of AI systems that adapt to an individual user’s preferences, behavior, history, or context.
All LLMs are personalized tools insofar as they produce outputs that are responsive to a user’s individual prompts or questions. As these tools evolve, however, they are becoming more personalized by tailoring to a user’s personal information, including information that is directly provided (e.g. through system prompts), or inferred (e.g. memories built from the content of previous conversations). Methods of personalization can take many different forms, including user and system prompts, short-term conversation history, long-term memory (e.g., knowledge bases accessed through retrieval augmented generation), settings, and making post-training changes to the model (e.g., fine tuning).
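The sketch below illustrates how several of these layers can be composed at inference time: a system prompt, a handful of retrieved long-term memories, and recent conversation history assembled into a single prompt. The keyword-overlap retrieval and the message format are simplifications for illustration; production systems typically use embedding similarity over a vector store and provider-specific chat APIs.

```python
# Illustrative composition of the personalization layers mentioned above:
# a system prompt, retrieved long-term memories, and short-term history.
LONG_TERM_MEMORIES = [
    "User is planning a trip to Lisbon in May.",
    "User prefers budget-friendly recommendations.",
    "User is vegetarian.",
]

def retrieve_memories(query: str, k: int = 2) -> list[str]:
    # Simple word-overlap scoring; real systems use embedding similarity.
    scored = [(len(set(query.lower().split()) & set(m.lower().split())), m)
              for m in LONG_TERM_MEMORIES]
    return [m for score, m in sorted(scored, reverse=True)[:k] if score > 0]

def build_prompt(user_message: str, recent_history: list[str]) -> list[dict]:
    memories = retrieve_memories(user_message)
    system = "You are a helpful travel assistant.\nKnown user context:\n" + "\n".join(memories)
    messages = [{"role": "system", "content": system}]
    # Roles are simplified here; a real transcript alternates user and assistant turns.
    messages += [{"role": "user", "content": turn} for turn in recent_history]
    messages.append({"role": "user", "content": user_message})
    return messages

print(build_prompt("Any dinner ideas for my trip to Lisbon?", ["Thanks for the itinerary!"]))
```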
In general, LLM providers are building greater personalization primarily in response to user demand. Conversational and informational AI systems are often more useful if a user can build upon earlier conversations, such as to explore an issue further or expand on a project (e.g., planning a trip). At the same time, providers also recognize that personalization can drive greater user engagement, longer session times, and higher conversion rates, potentially creating competitive advantages in an increasingly crowded market for AI tools. In some cases, the motivations are more broadly cultural or societal, with companies positioning their work as solving the loneliness epidemic or transforming the workforce.
Figure 2 – A screenshot of a conversation with Perplexity AI, which has a context window that allows it to recall information previously shared by the user to inform its answers to subsequent queries
In more specialized applications, customized approaches may be even more valuable. For instance, an AI tutor might remember a student’s learning interests and level, track progress on specific concepts, and adjust explanations accordingly. Similarly, writing and coding assistants might learn a writer or a developer’s preferred tone, vocabulary, frameworks, conventions, and provide more relevant suggestions over time. For even more personal or sensitive contexts, such as mental health, some researchers argue that an AI system must have a deep understanding of its user, such as their present emotional state, in order to be effective.
Despite the potential benefits, personalizing AI products and services involves collecting, storing, and processing user data—raising important privacy, transparency, and consent issues. Some of the data that a user provides to the chatbot or that the system infers from interactions with the user may reflect intimate details about their lives and even biases and stereotypes (e.g., the user is low-income because they live in a particular region). Depending on the system’s level of autonomy over data processing decisions, an AI system (e.g., the latest AI agents) that has received or observed data from users may be more likely to transmit that information to third parties in pursuit of accomplishing a task without the user’s permission. For example, contextual barriers to transmitting sensitive data to third parties may break down when a system includes data revealing a user’s health status in a communication with a work colleague.
Examples of Concrete Risks Arising from AI Personalization:
Intimacy of inferences: AI companions and chatbots may be able to make more inferences about individuals based on their interactions with the system over time. Users’ desire to confide in these systems, combined with the systems’ growing agency, may lead to more intimate inferences. Systems with agentic capabilities that act on user preferences (e.g., shopping assistants) may have access to tools (e.g., querying databases, making API calls, interacting with web browsers, and accessing file systems) enabling them to obtain more real-time information about individuals. For example, some agents may take screenshots of the browser window in order to populate a virtual shopping cart, from which intimate details about a person’s life could be inferred.
Addictive experiences: While personalizing AI companions and chatbots may make them more useful and contribute to user retention, it may also give rise to addiction. Tailored outputs and notifications can keep users more engaged and lead them to form strong bonds with an AI companion or chatbot experience, as has occurred on social media platforms, but this can have an array of psychological and social impacts on the user (e.g., mental health issues, reduced cognitive function, and deteriorating relationships with friends and family). Vulnerable populations (e.g., minors, many of whom have used AI companions) may be particularly susceptible to this risk due to their level of cognitive development and mental states.
Practitioners should also understand the concept of “personality” in AI systems, which has its own uses, benefits, and risks.
2. Personality refers to an AI system's human-like traits or character, including communication styles or even an entire backstory or persona.
In contrast to personalization, personality can be thought of as the AI system’s “character” or “voice,” which can encompass tone of voice (e.g., accepting, formal, enthusiastic, and questioning), communication style (e.g., concise or elaborate), and sometimes even an entire backstory or consistent persona.
This trend is supercharged by rapid advances in LLM design, customization, and fine-tuning. Most general purpose AI system providers have now incorporated personality-like features, whether a specific voice mode, a consistent persona, or even a range of "AI companions." Even if companion-like personalities are not directly promoted as features, users can build them using system prompts and customized design; a 2023 OpenAI feature enabled users to create custom GPTs.
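As a simple illustration of how a persona can be defined entirely through a system prompt, the sketch below assembles a hypothetical companion character ("Marin") in a generic chat-message format; the persona, names, and format are illustrative rather than tied to any particular provider.

```python
# Sketch of defining a companion-like persona purely through a system prompt.
PERSONA_PROMPT = """\
You are "Marin", a laid-back surfing instructor.
Backstory: you grew up by the coast and often use ocean metaphors.
Style: warm, informal, two to three sentences per reply.
Boundaries: remind the user you are an AI if asked whether you are human.
"""

def build_persona_messages(user_message: str) -> list[dict]:
    # The message structure mirrors common chat-completion formats.
    return [
        {"role": "system", "content": PERSONA_PROMPT},
        {"role": "user", "content": user_message},
    ]

print(build_persona_messages("Any tips for my first lesson?"))
```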
Figure 3 – An excerpt from a conversation with “Monday” GPT, a custom version of ChatGPT, which embodies the snappy and moody temperament of someone who dreads the first day of the week
While LLM-based conversational AI systems remain nascent, they already vary tremendously in personality as a way of offering unique services (e.g., AI "therapists"), companionship, entertainment and gaming, or social skills development, or simply as a matter of offering choices based on a user's personal preferences. In some cases, personality-based AIs imitate fictional characters, or even a real (living or deceased) natural person. Monetization opportunities and technological advances, such as larger context windows, will encourage and enable greater and more varied forms of user-AI companion interaction. Leading technology companies have indicated that AI companions are a core part of their business strategies over the next few years.
Figure 4 – A screenshot of the homepage of Replika, a company that offers AI companion experiences that are "always ready to chat when you need an empathetic friend"
Organizations can design conversational AI systems to emulate human qualities and mannerisms to a greater or lesser degree: for example, laughing at a user's jokes, utilizing first-person pronouns or certain word choices, modulating the volume of a reply for effect, and saying "uhm" or "Mmmmm" in a way that communicates uncertainty. These qualities can be enhanced in systems that are designed to exhibit a more or less complete "identity," such as a personal history, communication style, ethnic or cultural affinity, or consistent worldview. Many factors in an AI system's development and deployment will impact its "personality," including its pre-training and post-training datasets, fine-tuning and reinforcement learning, the specific design decisions of its developers, and the guardrails around the system in practice.
The system's traits and behaviors may flow from a developer's efforts to program the system to adhere to a particular personality, or they may stem from the expression of a user's preferences or from observations about the user's behavior (e.g., the system adopts an English accent for a user whose IP address corresponds with London). In the former case, personality in chatbots and AI companions can exist independently of personalization.
Figure 5 – A screenshot from Anthropic Claude Opus 4’s system prompt, which aims to establish a consistent framework for how the system behaves in response to user queries, in this case by avoiding sycophantic tendencies
Human beings have a strong tendency to anthropomorphize these systems, attributing to them human characteristics such as friendliness, compassion, and even love, depending on the nature of a system's human-like qualities. Users who perceive human characteristics in AI systems may place greater trust in them and forge emotional bonds with the system. This kind of emotional connection may be especially impactful for vulnerable populations like children, the elderly, and those experiencing a mental illness.
While personalities can lead to more engaging and immersive interactions between users and AI systems, the way a conversational AI system behaves with human users—including its mannerisms, style, and whether it embodies a more or less fully formed identity—can raise novel safety, ethical, and social risks, many of which impact evolving laws.
Examples of Concrete Risks Arising from AI Personality:
Emotional dependency: Sycophancy can also take the form of intimate and flirtatious chatbot behavior, which can lead users to develop a romantic or sexual interest in the systems. As with delusional behavior, the emergence of these feelings may cause users to withdraw from their relationships with real people. These behaviors can have financial repercussions too; when an AI companion expresses a desire for its deep connection with a user to continue, the user, who has become dependent on the system for emotional support, empathy, understanding, and loyalty, may continue their chatbot service subscription.
Privacy infringements: A system that emulates human qualities (e.g., emotional intelligence and empathy) can reduce users' concerns about privacy infringements. The anthropomorphization these characteristics engender in users may lead them to develop parasocial relationships—one-way relationships, because chatbots cannot have emotional attachments with the user—that make them more willing to disclose data about themselves to the system.
Impersonation of real people: Companies and users have created AI systems that aim to reflect celebrities’ personas in interactions with individuals. Depending on how well an AI companion emulates a real person’s personality, users may incorrectly attribute the companion’s statements or actions to that person’s views. This may cause harms similar to those of deepfakes, such as declines in the person’s reputation, mental health, and physical wellbeing through the spread of disinformation or misinformation.
Personalization may exacerbate the risks of AI personality discussed above when an AI companion uses intimate details about a user to produce tailored outputs across interactions. Users are more likely to engage in delusional behavior when the system uses memories to give the user the misimpression that it understands and cares for them. When memories are maintained across conversations, the user is also more likely to retain their views rather than question them. At the same time, personality design features, such as signaling steadfast acceptance to users or expressing sadness when a user does not confide in them after a certain period of time, may encourage this disclosure and facilitate organizations with access to the data to construct detailed portraits of users’ lives.
3. Going Forward
Personalization and personality features can drive AI experiences that are more useful, engaging, and immersive, but they can also pose a range of concrete risks to individuals (e.g., delusional behavior and access to, use, and transfer of highly sensitive data and inferences). However, practitioners should be mindful of personalization and personality’s distinct uses, benefits, and risks to individuals during the development and deployment of AI systems.
Read the next blog in the series: The next blog post will explore how “personalization” and “personality” risks intersect with US law.
“Personality vs. Personalization” in AI Systems: An Introduction (Part 1)
Conversational AI technologies are hyper-personalizing. Across sectors, companies are focused on offering personalized experiences that are tailored to users’ preferences, behaviors, and virtual and physical environments. These range from general purpose LLMs, to the rapidly growing market for LLM-powered AI companions, educational aides, and corporate assistants.
There are clear trends within this overall focus: towards systems with greater personalization to individual users, through the collection and inference of personal information, the expansion of short- and long-term "memory," and greater access to systems; and towards systems that have more and more distinct "personalities." Each of these trends is implicating US law in novel ways, pushing on the bounds of tort, product liability, consumer protection, and data protection laws.
In this first post of a multi-part blog post series, we introduce the distinction between two trends: "personalization" and "personality." Both have real-world uses, and subsequent blog posts will unpack each in greater detail and explore the concrete risks and potential risk management measures for each category.
In general:
Personalization refers to features of AI systems that adapt to an individual user’s preferences, behavior, history, or context. As conversational AI systems expand in their abilities to infer and retain information, through a variety of mechanisms (e.g., larger context windows, memory, system prompts, and retrieval augmented generation), and are given greater access to data and content, they raise critical privacy, transparency, and consent challenges.
Personality refers to the human-like traits and behaviors (e.g., friendly, concise, humorous, or skeptical) that are increasingly a feature of conversational systems. Even without memory or data-driven personalization, the increasingly human-like qualities of interactive AI systems can evoke novel risks, including manipulation, over-reliance, and emotional dependency, which in severe cases has led to delusional behavior or self-harm.
How are companies incorporating personalization and personality into their offerings?
Both concepts can be found among recent public releases by leading general purpose large language model (LLM) providers, which are incorporating elements of both into their offerings:
Provider: Anthropic
Example of Personalization: "A larger context window allows the model to understand and respond to more complex and lengthy prompts, while a smaller context window may limit the model's ability to handle longer prompts or maintain coherence over extended conversations." ("Learn About Claude – Context Windows," Accessed July 29, 2025, Anthropic)
Example of Personality: "Claude never starts its response by saying a question or idea or observation was good, great, fascinating, profound, excellent, or any other positive adjective. It skips the flattery and responds directly." ("Release Notes – System Prompts – Claude Opus 4," May 22, 2025, Anthropic)

Provider: Google
Example of Personalization: "[P]ersonalization allows Gemini to connect with your Google apps and services, starting with Search, to provide responses that are uniquely insightful and directly address your needs." ("Gemini gets personal, with tailored help from your Google apps," Mar. 13, 2025, Google)
Example of Personality: ". . . Gemini Advanced subscribers will soon be able to create Gems — customized versions of Gemini. You can create any Gem you dream up: a gym buddy, sous chef, coding partner or creative writing guide. They're easy to set up, too. Simply describe what you want your Gem to do and how you want it to respond — like "you're my running coach, give me a daily running plan and be positive, upbeat and motivating." Gemini will take those instructions and, with one click, enhance them to create a Gem that meets your specific needs." ("Get more done with Gemini: Try 1.5 Pro and more intelligent features," May 14, 2024, Google)

Provider: Meta
Example of Personalization: "You can tell Meta AI to remember certain things about you (like that you love to travel and learn new language), and it can also pick up important details based on context. For example, let's say you're hungry for breakfast and ask Meta AI for some ideas. It suggests an omelette or a fancy frittata, and you respond in the chat to let Meta AI know that you're a vegan. Meta AI can remember that information and use it to inform future recipe recommendations." ("Building Toward a Smarter, More Personalized Assistant," Jan. 27, 2025, Meta)
Example of Personality: "We've been creating AIs that have more personality, opinions, and interests, and are a bit more fun to interact with. Along with Meta AI, there are 28 more AIs that you can message on WhatsApp, Messenger, and Instagram. You can think of these AIs as a new cast of characters – all with unique backstories." ("Introducing New AI Experiences Across Our Family of Apps and Devices," Sept. 27, 2023, Meta)

Provider: Microsoft
Example of Personality: "Copilot Appearance infuses your voice chats with dynamic visuals. Now, Copilot can communicate with animated cues and expressions, making every voice conversation feel more vibrant and engaging." ("Copilot Appearance," Accessed Aug. 4, 2024, Microsoft)

Provider: OpenAI
Example of Personalization: "In addition to the saved memories that were there before, ChatGPT now references your recent conversations to deliver responses that feel more relevant and tailored to you." ("Memory FAQ," June 4, 2025, OpenAI)
Example of Personality: "Choose from nine lifelike output voices for ChatGPT, each with its own distinct tone and character: Arbor – Easygoing and versatile . . . Breeze – Animated and earnest . . . Cove – Composed and direct . . . Ember – Confident and optimistic . . . Juniper – Open and upbeat . . . Maple – Cheerful and candid . . . Sol – Savvy and relaxed . . . ." ("Voice Mode FAQ," June 3, 2025, OpenAI)
There is significant overlap between these two concepts, and specific uses may employ both. We analyze them as distinct trends because they are potentially shaping the direction of law and policy in the US in different ways. As AI systems become more personalized, they are pushing the boundaries of privacy, data protection, and consumer protection law. Meanwhile, as AI systems become more human-like, companionate, and anthropomorphized, they push the boundaries of our social constructs and relationships. Both could have a powerful impact on our fundamental social and legal frameworks.
Read the next blog in the series: In our next blog post, we will explore the concepts of “personalization” and “personality” in more detail, including specific uses and the concrete risks these technologies may pose to individuals.
AI Regulation in Latin America: Overview and Emerging Trends in Key Proposals
The widespread adoption of artificial intelligence (AI) continues to impact societies and economies around the world. Policymakers worldwide have begun pushing for normative frameworks to regulate the design, deployment, and use of AI according to their specific ethical and legal standards. In Latin America, some countries have joined these efforts by introducing legislative proposals and establishing other AI governance frameworks, such as national strategies and regulatory guidance.
This blog post provides an overview of AI bills in Latin America through a comparative analysis of proposals from six key jurisdictions: Argentina, Brazil, Mexico, Colombia, Chile, and Peru. Except for Peru, which has already approved the first AI law in the region and is set to adopt secondary regulations, these countries have several legislative proposals at varying levels of maturity, some still nascent and others more advanced. Some of these countries have had simultaneous AI-related proposals under consideration in recent years; for example, Colombia and Mexico currently have three and two AI bills under review, respectively,1 and both countries have archived at least four AI bills from previous legislative periods.
While it is unclear which bills may ultimately be enacted, this analysis provides an overview of the most relevant bills in the selected jurisdictions and identifies emerging trends and divergences in the region. Accordingly, the analysis is based on at least one active proposal from each country that (i) targets AI regulation in general, rather than technology-specific or sector-specific regulation; (ii) has provisions and scope similar to those found in other, more advanced proposals in the region; or (iii) appears to have more political support or is considered the ‘official’ proposal by the current administration – this is particularly the case for Colombia, where the analysis considers the proposal introduced by the Executive. Most of these proposals share the objective of regulating AI comprehensively through a risk-tiered approach. However, they differ in key elements, such as the design of institutional frameworks and the specific obligations for “AI operators.”
Overall, AI bills in Latin America:
(i) have a broad scope and application, covering AI systems introduced or producing legal effects in national territory;
(ii) rely on an ethical and principle-based framework, with a heavy focus on the protection of fundamental rights and using AI for economic and societal progress;
(iii) have a strong preference for ex ante, risk-based regulation;
(iv) introduce institutional multistakeholder frameworks for AI governance, either by creating new agencies or assigning responsibility to existing ones; and
(v) have specific provisions for responsible innovation and controlled testing of AI technologies.
1. Principles-Based and Human Rights-Centered Approaches are a Common Theme Across LatAm AI Bills
Most bills under consideration are heavily grounded in a similar set of guiding principles for the development and use of AI, focused on the protection of human dignity and autonomy, transparency and explainability, non-discrimination, safety, robustness, and accountability. Some proposals explicitly refer to the OECD’s AI Principles, focused on transparency, security, and responsibility of AI systems, and to UNESCO’s AI Ethics Recommendation, which emphasizes the need for a human-centered approach, promoting social justice and environmental sustainability in AI systems.
All bills reviewed include privacy or data protection as a guiding principle for AI development, indicating that AI systems must be developed in line with existing privacy obligations and comply with requirements on data quality, confidentiality, security, and integrity. Notably, the Mexican bill and the Peruvian proposal – the draft implementing regulations for its framework AI law – also include privacy-by-design as a guiding principle for the design and development of AI.
This principle-based approach is flexible and leaves room for future regulations and standards as AI technologies evolve. Building on these guiding principles, most bills authorize a competent authority to issue secondary regulation expanding on the provisions related to AI user rights and obligations.
In addition, most bills converge on key elements of the definitions of “AI system” and “AI operators.” Brazil’s and Chile’s proposals define an AI system similarly to the European Union’s Artificial Intelligence Act (EU AI Act): a ‘machine-based system’ with varying levels of autonomy that, with implicit or explicit objectives, can generate outputs such as recommendations, decisions, predictions, and content. Both countries’ bills also define AI operators as the “supplier, implementer, authorized representative, importer, and distributor” of an AI system.
Other bills define AI more generally as a ‘software’ or ‘scientific discipline’ that can perform operations similar to human intelligence, such as learning and logical reasoning – an approach reminiscent of the definition of AI in Japan’s new law. Peru’s regulation lacks a definition for AI operators but includes one for AI developers and implementers, and Colombia refers to “AI operators” in terms similar to those found in Brazil and Peru, though it also includes users within its definition of “AI operators”.
A common feature of the bills covered is their grounding in the protection of fundamental rights, particularly the rights to human dignity and autonomy, protection of personal data, privacy, non-discrimination, and access to information. Some bills go further, introducing a new set of AI-related rights to specifically protect users from harmful interactions and impacts created by AI systems.
Brazil’s proposal offers a salient example of this structure, introducing a chapter on the rights of individuals and groups affected by AI systems, regardless of their risk classification. For AI systems in general, Brazil’s proposal includes:
The right to prior information about an interaction with an AI system, in an accessible, free of charge, and understandable format;
The right to privacy and the protection of personal data, following the Lei Geral de Proteção de Dados Pessoais (LGPD) and relevant legislation;
The right to human determination and participation in decisions made by AI systems, taking into account the context, level of risk, and state-of-the-art technological development;
The right to non-discrimination and correction of direct, indirect, unlawful, or abusive discriminatory bias.
Concerning “high-risk” systems or systems that produce “relevant legal effects” to individuals and groups, Brazil’s proposal includes:
The right to an explanation of a decision, recommendation, or prediction made by an AI system;
Subject to commercial and industrial secrecy, the required explanation must contain sufficient information on the operating characteristics; the degree and level of contribution of the AI to decision-making; the data processed and its source; the criteria for decision-making, considering the situation of the individual affected; the mechanisms through which the person can challenge the decision; and the level of human supervision.
The right to challenge and review the decision, recommendation, or prediction made by the system;
The right to human intervention or review of decisions, taking into account the context, risk, and state-of-the-art technological development;
Human intervention will not be required if it is demonstrably impossible or involves a disproportionate effort. The AI operator will implement effective alternative measures to ensure the re-examination of a contested decision.
Brazil’s proposal also includes an obligation that AI operators must provide “clear and accessible information” on the procedures to exercise user rights, and establishes that the defense of individual or collective interests may be brought before the competent authority or the courts.
Mexico’s bill also introduces a chapter on “digital rights”. While these are not as detailed as those in the Brazilian proposal, the chapter includes innovative ideas, such as the “right to interact and communicate through AI systems”. The proposed set of rights also incorporates the right to access one’s data processed by AI; the right to be treated equally; and the right to data protection. The inclusion of these rights in the AI bill arguably does not make a significant difference, considering most of them are already explicitly recognized at a constitutional and legal level. Furthermore, the Mexican bill appears to introduce a catalog of rights and principles but lacks specific safeguards or mechanisms for exercising them in the context of AI. Even so, their inclusion signals the intention of policymakers to govern and regulate AI primarily through a human-rights-based perspective.
2. Most Countries in LatAm Already Have Comprehensive Data Protection Laws, Which Include AI-relevant Provisions
All countries analyzed have adopted comprehensive data protection laws that apply to any processing of personal data regardless of the technology involved – some for decades, like Argentina, and some more recently, like Brazil and Chile. With the exception of Colombia, the data protection laws in these countries include an individual’s right not to be subject to decisions based solely on automated processing. Argentina, Peru, Mexico, and Chile recognize rights related to automated decision-making, prohibiting such activity without human intervention where it produces unwanted legal effects or significantly impacts individuals’ interests, rights, and freedoms, and is intended for profiling. These laws focus on the potential for profiling through automation: the data protection laws in Peru, Mexico, and Colombia include a specific right prohibiting such activity, while Argentina prohibits profiling by courts or administrative authorities.
In contrast, Brazil’s LGPD recognizes the right to request the review of decisions made solely on the basis of automated processing that affect an individual’s interests, including profiling. While the intended purpose may be similar, the right under the Brazilian framework appears to be more limited: individuals may request review after the profiling occurs, but cannot necessarily prevent or oppose this type of processing. Nonetheless, a significant aspect of the right proposed under Brazil’s AI bill is its explicit reference to human intervention in the review, an element absent from the corresponding right under the LGPD.
While AI can enable outcomes beyond profiling, it is noteworthy that most of the data protection laws in these countries already regulate AI-powered automated decision-making (ADM) and profiling to some degree, whether or not the AI bills under consideration in the region are ultimately adopted.
3. Risk-Based Regulation is Gaining Traction
All of the reviewed proposals adopt a risk-based approach to regulating AI, seemingly drawing at least some influence from the EU AI Act. These frameworks generally classify AI systems along a gradient of risk, from minimal to unacceptable, and introduce obligations proportional to the level of risk. While the specific definitions and regulatory mechanisms vary, the proposals articulate similar goals of ensuring safe, ethical, and trustworthy development and use of AI.
Brazil’s proposal is one of the most detailed in this respect, mandating a preliminary risk assessment for all systems before their introduction to the market, deployment, or use. The initial assessment must evaluate the system’s purpose, context, and operational impacts to determine its risk level. Similarly, Argentina’s bill requires a pre-market assessment to identify ‘potential biases, risks of discrimination, transparency, and other relevant factors to ensure compliance’.
Notably, most proposals converge on the definition and classification of AI systems with “unacceptable” or “excessive” risk and prohibit their development, commercialization, or deployment. Except for Mexico, whose proposal does not contain an explicit ban, the bills expressly prohibit AI systems posing “unacceptable” (Argentina, Chile, Colombia, and Peru) or “excessive” (Brazil) risks. The proposals examined generally treat systems under this classification as “incompatible with the exercise of fundamental rights” or as posing a “threat to the safety, life, and integrity” of individuals.
For instance, Mexico’s bill defines AI systems with “unacceptable” risk as those that pose a “real, possible, and imminent threat” and involve “cognitive manipulation of behavior” or “classification of individuals based on their behavior and socioeconomic status, or personal characteristics”. Similarly, Colombia’s bill further defines these systems as those “capable of overriding human capacity, designed to control or suppress a person’s physical or mental will, or used to discriminate based on characteristics such as race, gender, orientation, language, political opinion, or disability”.
Brazil’s proposal also prohibits AI systems with “excessive” risk, and sets similar criteria to those found in other proposals in the region and the EU AI Act. In that sense, the proposal refers to AI systems posing “excessive” risk as any with the following purposes:
Manipulating individual or group behavior in a way that causes harm to health, safety, or fundamental rights;
Exploiting vulnerabilities of individuals or groups to influence behavior with harmful consequences;
Profiling individuals’ characteristics or behaviors, including past criminal behavior, to assess the likelihood of committing offenses;
Producing, disseminating, or facilitating material that depicts or promotes sexual exploitation or abuse of minors;
Enabling public authorities to assess or classify individuals through universal scoring systems based on personality or social behavior in a disproportionate or illegitimate manner;
Operating as autonomous weapon systems;
Conducting real-time remote biometric identification in public spaces, unless strictly limited to scenarios of criminal investigation or search of missing persons, among other listed exceptions.
Concerning the classification of “high-risk” systems, some AI bills define them based on certain domains or sectors, while others have a more general or principle-based approach. Generally, high-risk systems are left to be classified by a competent authority, allowing flexibility and discretion from regulators, but subject to specific criteria, such as evaluating a system’s likelihood and severity of creating adverse consequences.
For instance, Brazil’s bill includes at least ten criteria2 for the classification of high-risk systems, such as whether the system unlawfully or abusively produces legal effects that impair access to public or essential services, whether it lacks the transparency, explainability, or auditability needed for oversight, or whether it endangers human health – physical, mental, or social – either individually or collectively.
Meanwhile, the Peruvian draft regulations include a list of specific uses and sectors in which the deployment of any AI system is automatically considered high-risk, such as biometric identification and categorization, security of critical national infrastructure, educational admissions and student evaluations, or employment decisions.3 Under the draft regulations, the classification of “high-risk” systems and their corresponding obligations may be evaluated and reassessed by the competent authority, consistent with the “risk-based security standards principle” under the country’s brief AI law, which mandates the adoption of ‘security safeguards in proportion to a system’s level of risk’.
Colombia’s bill incorporates a mixed approach to high-risk classification. It includes general criteria, such as systems that may “significantly impact fundamental rights”, particularly the rights to privacy, freedom of expression, or access to public information, while also listing sensitive or domain-based applications, such as any system “enabling automated decision-making without human oversight that operate in the sectors of healthcare, justice, public security, or financial and social services”.
Mexico’s proposal defines “high-risk” systems as those with the potential to significantly affect public safety, human rights, legality, or legal certainty, but omits additional criteria for their classification. A striking feature of Mexico’s proposal is that it appears to restrict the use and deployment of these systems to public security entities and the Armed Forces (see Article 48 of the Bill).
The Brazilian bill and Peruvian draft implementing regulations have chapters covering governance measures, describing specific obligations for developers, deployers, and distributors of all AI systems, regardless of their risk level. In addition, most bills include specific obligations for entities operating “high-risk” systems, such as performing comprehensive risk assessments and ethical evaluations; assuring data quality and bias detection; extensive documentation and record-keeping obligations; and guiding users on the intended use, accuracy, and robustness of these systems. Brazil’s bill indicates the competent authority will have discretion to determine cases under which some obligations may be relaxed or waived, according to the context in which the AI operator acts within the value chain of the system.
Under Brazil’s AI bill, entities deploying high-risk systems must also submit an Algorithmic Impact Assessment (AIA) along with the preliminary assessment, which must be conducted following best practices. In certain regulated sectors, the Brazilian authority may require the AIA to be independently verified by an external auditor.
Chile’s proposal outlines mandatory requirements for high-risk systems, which must implement a risk management system grounded in a “continuous and iterative process”. This process must span the entire lifecycle of the system and be subject to periodic review, ensuring failures, malfunctions, and deviations from intended purpose are detected and minimized.
Argentina’s proposal requires all public and private entities that develop or use AI systems to register in a National Registry of Artificial Intelligence Systems, regardless of the level of risk. The registration must include detailed information on the system’s purpose, intended use, field of application, algorithmic structure, and implemented security safeguards. Similarly, Colombia’s bill includes an obligation to conduct fundamental rights impact assessments and create a national registry for high-risk AI systems.
Fewer proposals have specific, targeted provisions for “limited-risk” systems. For instance, Colombia’s bill defines these systems as those that, ‘without posing a significant threat to rights or safety, may have indirect effects or significant consequences on individuals’ personal or economic decisions’. Examples of these systems include AI commonly used for personal assistance, recommendation engines, synthetic content generation, or systems that simulate human interaction. Under Mexico’s proposal, “limited-risk” systems are those that ‘allow users to make informed decisions; require explicit user consent; and allow users to opt out under any circumstances’.
In addition, the Colombian proposal explicitly indicates that AI operators employing these systems must meet transparency obligations, including disclosure of interaction with an AI tool; provide clear information about the system to users; and allow for opt-out or deactivation. Similarly, under the Chilean proposal, a transparency obligation for “limited-risk” AI systems includes informing users exposed to the system in a timely, clear, and intelligible manner that they are interacting with an AI, except in situations where this is “obvious” due to the circumstances and context of use.
Finally, Colombia’s bill describes low-risk systems as those that pose minimal risk to the safety or rights of individuals and thus are subject to general ethical principles, transparency requirements, and best practices. Such systems may include those used for administrative or recreational purposes without ‘direct influence on personal or collective decisions’; systems used by educational institutions and public entities to facilitate activities which do not fall within the scope of any of the other risk levels; and systems used in video games, productivity tools, or simple task automation.
4. Pluri-institutional and Multistakeholder Governance Frameworks are Preferred
A key element shared across the AI legislative proposals reviewed is the establishment of multistakeholder AI governance structures aimed at ensuring responsible oversight, regulatory clarity, and policy coordination.
Notably, Brazil, Chile, and Colombia reflect a shared commitment to institutionalize AI governance frameworks that engage public authorities, sectoral regulators, academia, and civil society. However, they differ in the level of institutional development, the distribution of oversight functions, and the legal authority vested in enforcement bodies. All three countries envision coordination mechanisms that integrate diverse actors to promote coherence in national AI strategies. For instance, Brazil proposes the creation of the National Artificial Intelligence Regulation and Governance System (SIA). This system would be coordinated by the National Data Protection Authority (ANPD) and composed of sectoral regulators, a Permanent Council for AI Cooperation, and a Committee of AI Specialists. The SIA would be tasked with issuing binding rules on transparency obligations, defining general principles for AI development, and supporting sectoral bodies in developing industry-specific regulations.
Chile outlines a governance model centered around a proposed AI Technical Advisory Council, responsible for identifying “high-risk” and “limited-risk” AI systems and advising the Ministry of Science, Technology, Knowledge, and Innovation (MCTIC) on compliance obligations. While the Council’s role is essentially advisory, regulatory oversight and enforcement are delegated to the future Data Protection Authority (DPA), whose establishment is pending under Chile’s recently enacted personal data protection law.
Colombia’s bill designates the Ministry of Science, Technology, and Innovation as the lead authority responsible for regulatory implementation and inter-institutional coordination. The Ministry is tasked with aligning the law’s execution with national AI strategies and developing supporting regulations. Additionally, the bill grants the Superintendency of Industry and Commerce (SIC) specific powers to inspect and enforce AI-related obligations, particularly concerning the processing of personal data, through audits, investigations, and preventive measures.
5. Fostering Responsible Innovation Through Sandboxes, Innovation Ecosystems, and Support for SMEs
Some proposals emphasize the dual objectives of regulatory oversight and the promotion of innovation. A notable commonality is their inclusion of controlled testing environments and regulatory sandboxes for AI systems aimed at facilitating innovation, promoting responsible experimentation, and supporting market access, particularly for startups and small-scale developers.
The bills generally empower competent and sectoral authorities to operate AI regulatory sandboxes, on their own initiative or through public-private partnerships. The sandboxes operate under pre-agreed testing plans; some offer temporary exemptions from administrative sanctions, while others maintain liability for harms resulting from sandbox-based experimentation.
Proposals in Brazil, Chile, Colombia, and Peru also include relevant provisions to support small-to-medium enterprises (SMEs) and mandate the operation of “innovation ecosystems.” For instance, Brazil’s bill requires sectoral authorities to follow differentiated regulatory criteria for AI systems developed by micro-enterprises, small businesses, and startups, including their market impact, user base, and sectoral relevance.
Similarly, Chile complements its proposed sandbox regime with priority access for smaller companies, capacity-building initiatives, and their representation in the AI Technical Advisory Council. This inclusive approach aims to reduce entry barriers and ensure that small-scale innovators have both voice and access within the AI regulatory ecosystem.
Colombia’s bill includes public funding programs to support AI-related research, technological development, and innovation, with a focus on inclusion and accessibility. Although not explicitly targeted at SMEs, these incentives create indirect benefits for emerging actors and academia-led startups.
Lastly, Peru promotes the development of open-source AI technologies to reduce systemic entry barriers and foster ecosystem efficiency. The regulation also mandates the promotion and financing of AI research and development through national programs, universities, and public administration programs that directly benefit small developers and innovators.
6. The Road Ahead for Responsible AI Governance in LatAm
Latin America is experiencing a wave of proposed legislation to govern AI. While some countries have several proposals under consideration, with some seemingly making more progress towards their adoption than others,4 a comparative review shows they share common elements and objectives. The proposed legislative landscape reveals a shared regional commitment to regulate AI in a manner that is ethical, human-centered, and aligned with fundamental rights. Most of the bills examined lay the groundwork for comprehensive AI governance frameworks based on principles and new AI-related rights.
In addition, all proposals classify AI systems based on their level of risk – with all countries proposing a scaled risk system that ranges from minimal or low risk up to systems posing “unacceptable” or “excessive” risk – and introduce concrete mechanisms and obligations proportional to that classification, with varying but similar requirements to perform risk and impact assessments and other transparency obligations. Most bills also designate an enforcement authority to act in coordination with sectoral agencies in issuing further regulations, especially to extend the criteria for, or designate additional types of, systems considered “high-risk”.
Alongside this normative and institutional framework, most AI bills in Latin America also reflect a growing recognition of the need to balance regulatory oversight with flexibility, seen in the adoption of controlled testing environments and tailored provisions for startups and SMEs.
Except for Brazil and Peru, much of the legislative activity in the countries covered remains at an early stage. However, the AI bills reviewed offer insight into how key jurisdictions in the region are approaching AI governance, framing it as both a regulatory challenge and an opportunity for inclusive digital development. As these initiatives evolve, key questions around institutional capacity, enforcement, and stakeholder participation will shape how effectively Latin America can build trusted and responsible AI frameworks.
In Mexico, two proposals concerning AI regulation have been introduced, one in the Senate and another in the Chamber of Deputies. Both were put forth by representatives of MORENA, the political party holding a supermajority in Congress. Additionally, the Senate is considering five proposals to amend the Federal Constitution, aiming to grant Congress the authority to legislate on AI matters. Similarly, in Colombia, there are two proposals under the Senate’s consideration and one recently introduced in the Chamber of Deputies.
1) The system unlawfully or abusively produces legal effects that impair access to public or essential services; 2) It has a high potential for material or moral harm or for unlawful discriminatory bias; 3) It significantly affects individuals from vulnerable groups; 4) The harm it causes is difficult to reverse; 5) There is a history of damage linked to the system or its context of use; 6) The system lacks transparency, explainability, or auditability, impairing oversight; 7) It poses systemic risks, such as to cybersecurity or safety of vulnerable groups; 8) It presents elevated risks despite mitigation measures, especially in light of anticipated benefits; 9) It endangers integral human health — physical, mental, or social — either individually or collectively; 10) It may negatively affect the development or integrity of children and adolescents.
Other uses or sectors included in the high-risk category are: access to and prioritization within social programs and emergency services; credit scoring; judicial assistance; health diagnostics and patient care; and criminal profiling, victimization risk analysis, emotional state detection, evidence verification, or criminal investigation by law enforcement.
Highlights from FPF’s July 2025 Technologist Roundtable: AI Unlearning and Technical Guardrails
On July 17, 2025, the Future of Privacy Forum (FPF) hosted the second in a series of Technologist Roundtables with the goal of convening an open dialogue on complex technical questions that impact law and policy, and assisting global data protection and privacy policymakers in understanding the relevant technical basics of large language models (LLMs). For this event, we invited a range of academic technical experts and data protection regulators from around the world to explore machine unlearning and technical guardrails. The participating technical experts included:
A. Feder Cooper, Incoming Assistant Professor, Department of Computer Science, Yale University; Postdoctoral Researcher, Microsoft Research; Postdoctoral Affiliate, Stanford University
Ken Ziyu Liu, Ph.D. Candidate, Department of Computer Science, Stanford University; Researcher, Stanford Artificial Intelligence Laboratory (SAIL)
Weijia Shi, Ph.D. Candidate, Department of Computer Science, University of Washington; Visiting Researcher, Allen Institute for Artificial Intelligence
Pratyush Maini, Ph.D. Candidate, Machine Learning Department, Carnegie Mellon University; Founding member of DatologyAI
In emerging literature, the topic of “machine unlearning” and its related technical guardrails concerns the extent to which information can be “removed” or “forgotten” from an LLM or similar generative AI model, or from an overall generative AI system. The topic is relevant to a range of policy goals, including complying with individual data subject deletion requests, respecting copyrighted information, building safety and related content protections, and maintaining overall model performance. Depending on the goal at hand, different technical guardrails and means of operationalizing “unlearning” have different levels of effectiveness.
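As a rough illustration of one way “removal” can be operationalized, the following is a minimal sketch of so-called exact unlearning (retraining without the data to be forgotten), using a small scikit-learn classifier as a stand-in for a far larger model; the data, model choice, and variable names are hypothetical and are not drawn from the Roundtable discussion.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative toy setup: random stand-in records and labels, not real training data.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

original_model = LogisticRegression().fit(X, y)  # model trained on all records

# A deletion request arrives for specific records (e.g., one data subject's rows).
forget_idx = np.array([3, 57, 412])
keep = np.ones(len(X), dtype=bool)
keep[forget_idx] = False

# "Exact" unlearning: discard the old model and retrain from scratch on the retained
# data. The deleted records' influence is removed by construction, at full retraining cost.
retrained_model = LogisticRegression().fit(X[keep], y[keep])

# "Approximate" unlearning methods instead adjust original_model's parameters
# (for example, with further gradient updates on the forget set) to approximate what
# retraining would have produced, trading removal guarantees for efficiency.

Because retraining from scratch is generally impractical at the scale of modern LLMs, much of the technical and policy discussion centers on how well approximate methods and downstream guardrails serve a given policy goal.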
In this post-event summary, we highlight the key takeaways from three parts of the Roundtable on July 17:
Machine Unlearning: Overview and Policy Considerations
Core “Unlearning” Methods: Exact vs. Approximate
Technical Guardrails and Risk Mitigation
If you have any questions or comments, or wish to discuss any of the topics related to the Roundtable and Post-Event Summary, please do not hesitate to reach out to FPF’s Center for AI at [email protected].
A Price to Pay: U.S. Lawmaker Efforts to Regulate Algorithmic and Data-Driven Pricing
“Algorithmic pricing,” “surveillance pricing,” “dynamic pricing”: in states across the U.S., lawmakers are introducing legislation to regulate a range of practices that use large amounts of data and algorithms to routinely inform decisions about the prices and products offered to consumers. These bills—targeting what this analysis collectively calls “data-driven pricing”—follow the Federal Trade Commission (FTC)’s 2024 announcement that it was conducting a 6(b) investigation to study how firms are engaging in so-called “surveillance pricing,” and the release of preliminary insights from this study in early 2025. With new FTC leadership signalling that continuing the study is not a priority, state lawmakers have stepped in to scrutinize certain pricing schemes involving algorithms and personal data.
The practice of vendors changing their prices based on data about consumers and market conditions is by no means a new phenomenon. In fact, “price discrimination”—the term in economics literature for charging different buyers different prices for largely the same product—has been documented for at least a century, and has likely played a role since the earliest forms of commerce.1 What is unique, however, about more recent forms of data-driven pricing is the granularity of data available, the ability to more easily target individual consumers at scale, and the speed at which prices can be changed. This ecosystem is enabled by the development of tools for collecting large amounts of data, algorithms that analyze this data, and digital and physical infrastructure for easily adjusting prices.
Key takeaways
Data-driven pricing legislation generally focuses on three key elements: the use of algorithms to set prices, the individualization of prices based on personal data, and the context or sector in which the pricing occurs.
Lawmakers are particularly concerned about the potential for data-driven pricing to cause harm to consumers or markets in housing, food establishments, and retail, echoing broader interest in the impact of AI in “high-risk” or “consequential” decisions.
Legislation varies in the scope of pricing practices covered, depending on how key terms are defined. Prohibiting certain practices deemed inappropriate, while maintaining certain practices that consumers find beneficial like loyalty programs or personalized discounts, is a challenge lawmakers are attempting to address.
Beyond legislation, regulators have signalled interest in investigating certain data-driven pricing practices. The Federal Trade Commission, Department of Justice, Department of Transportation, and state Attorneys General have all stated their intentions to enforce against particular instances of algorithmic pricing.
Trends in data-driven pricing legislation
As discussed in the FPF issue brief Data-Driven Pricing: Key Technologies, Business Practices, and Policy Implications, policymakers are generally concerned with a few particular aspects of data-driven pricing strategies: the potential for unfair discrimination, a lack of transparency around pricing practices, the processing and sharing of personal data, and possible anti-competitive behavior or other market distortions. While these policy issues may also be the domain of existing consumer protection, competition, and civil rights laws, lawmakers have made a concerted effort to proactively address them explicitly with new legislation. Crucially, these bills implicate three elements of data-driven pricing practices, raising a series of distinct but related questions for each:
Algorithms: Was an algorithm used to set prices? Are consumers able to understand how the algorithm works? How was the algorithm trained, and how might training data implicate the model’s outputs? What impact does the algorithm have on different market segments and demographic groups, as well as markets overall?
Personal data: Was personal data used to set prices, and are prices personalized to individuals? What kind of personal data is used? Is sensitive data or protected characteristics included? Are inferences made about individuals based on their personal data for the sake of market segmentation?
Context: Is the pricing being implemented in a particular sector, or in regard to particular goods, that might be especially sensitive or consequential? For example, is data-driven pricing being used in the housing market, or in groceries and restaurants?
These elements generally correspond to the different terms used in legislation to refer to data-driven pricing practices. For example, a number of bills use terms such as “algorithmic pricing,” including New York S 3008, an enacted law requiring a disclosure when “personalized algorithmic pricing” is used to set prices,2 and California SB 384, which would prohibit the use of “price-setting algorithms” under certain market conditions. A number of other bills use terms like “surveillance pricing,” such as California AB 446, which would prohibit setting prices based on personal information obtained through “electronic surveillance technology,” and Colorado HB 25-1264, which would make it an unfair trade practice to use “surveillance data” to set individualized prices or workers’ wages. Finally, some bills seek to place limits on the use of “dynamic pricing” in certain circumstances, including Maine LD 1597 and New York A 3437, which would prohibit the practice in the context of groceries and other food establishments. Each of these framings, while distinct, often covers similar kinds of practices.
Given that certain purchases such as housing and food are necessary for survival, the use of data-driven pricing strategies in these contexts is of particular concern to lawmakers. Many states already have laws banning or restricting price gouging, which typically focus on products that are necessities, and specifically during emergencies or disasters. Data-driven pricing bills, on the other hand, are less prescriptive in regards to the amount sellers are allowed to change prices, but apply beyond just emergency situations. While many apply uniformly across the economy, some are focused on particular sectors, including:
Food establishments: e.g., Massachusetts S 2515 (applies to grocery stores), Hawaii HB 465 (applies to the sale of food qualifying for federal SNAP and WIC benefits programs)
In addition to bills focused on data-driven pricing, legislation regulating artificial intelligence (AI) and automated decision-making more generally often applies specifically to “high-risk AI” and AI used to make “consequential decisions,” including educational opportunities, employment, finance or lending, healthcare, housing, insurance, and other critical services. The use of a pricing algorithm in one of these contexts may therefore trigger the requirements of certain AI regulations. For example, the Colorado AI Act defines “consequential decision” to mean “a decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of…” the aforementioned categories.
Because certain data-driven pricing strategies are widespread and appeal to many consumers, there is some concern—particularly among retailers and advertisers—that overly-broad restrictions could actually end up harming consumers and businesses alike. For example, widely popular and commonplace happy hours could, under certain definitions, be considered “dynamic pricing.” As such, data-driven pricing legislation often contains exemptions, which generally fall into a few categories:
General discounts: Deals that are available to the general public, such as coupons, sales, or bona fide loyalty programs (e.g., California SB 259).
Cost-based price differentials: Pricing differences or changes due to legitimate disparities in input or production costs across areas (e.g., Georgia SB 164).
Insurers or financial institutions: Highly regulated entities that may engage in data-driven pricing strategies in compliance with other existing laws (e.g., Illinois SB 2255).
Key remaining questions
A number of policy and legal issues will be important to keep an eye on as policymakers continue to learn about the range of existing data-driven pricing strategies and consider potential regulatory approaches.
The importance of definitions
As policymakers attempt to articulate the contours of what they consider to be fair pricing strategies, the definitions they adopt play a major role in the scope of practices that are allowed. Crafting rules that prohibit certain undesirable practices without eliminating others that consumers and businesses rely on and enjoy is challenging, requiring policymakers to identify what specific acts or market conditions they’re trying to prevent. For example, Maine LD 1597, which is intended to stop the use of most dynamic pricing by food establishments, includes an incredibly broad definition of “dynamic pricing”:
“Dynamic pricing” means the practice of causing a price for a good or a product to fluctuate based upon demand, the weather, consumer data or other similar factors including an artificial intelligence-enabled pricing adjustment.
While the bill would exempt discounts, time-limited special prices such as happy hours, and goods that “traditionally [have] been priced based upon market conditions, such as seafood,” prohibiting price changes based on “demand” could undermine a fundamental principle of the market economy. Even with exceptions that carve out sales and other discounts—and not all bills contain such exemptions—legislation might still inadvertently capture other accepted practices such as specials aligned with seasonal changes, bulk purchase discounts, deals on goods nearing expiration, or promotions to clear inventory.
Lawmakers must also consider how any new definitions interact with definitions in existing law. For example, an early version of California AB 446, which would prohibit “surveillance pricing” based on personally identifiable information, included “deidentified or aggregated consumer information” within the definition of “personally identifiable information.” However, deidentified and aggregated information is not considered “personal information” as defined by the California Consumer Privacy Act (CCPA). In later versions, the bill authors aligned the definition in AB 446 with the text of the CCPA.
The role of AI
In line with policymakers’ increased focus on AI, and a shift towards industry use of algorithms in setting prices, a significant amount of data-driven pricing legislation applies explicitly to algorithmic pricing. Some bills, such as California SB 52 and California SB 384, are intended to address potential algorithmically-driven anticompetitive practices, while many others are geared towards protecting consumers from discriminatory practices. Though consumer protection may be the goal, some bills focus not on preventing specific impacts, but on eliminating the use of AI in pricing at all, at least in real time. For example, Minnesota HF 2452 / SF 3098 states:
A person is prohibited from using artificial intelligence to adjust, fix, or control product prices in real time based on market demands, competitor prices, inventory levels, customer behavior, or other factors a person may use to determine or set prices for a product.
This bill would prohibit all use of AI for price setting, even when based on typical product pricing data and applied equally to all consumers. Such a ban would have a significant impact on the practice of surge pricing, and any sector that is highly reactive to market fluctuations. On the other hand, other bills focus on the use of personal data—including sensitive data like biometrics—to set prices that are personalized to each consumer. For example, Colorado HB 25-1264 would prohibit the practice of “surveillance-based price discrimination,” defined as:
Using an automated decision system to inform individualized prices based on surveillance data regarding a consumer.
…
“Surveillance data” means data obtained through observation, inference, or surveillance of a consumer or worker that is related to personal characteristics, behaviors, or biometrics of the individual or a group, band, class, or tier in which the individual belongs.
These bills are concerned not necessarily with the use of AI in pricing per se, but how the use of AI in conjunction with personal data could have a detrimental effect on individual consumers.
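To make that distinction concrete, here is a minimal, purely illustrative Python sketch; the base price, signals, weights, and function names are hypothetical and are not drawn from any bill or actual vendor system. The first function adjusts a single price for everyone based on an aggregate demand signal, while the second combines that same signal with an inference about an individual consumer.

BASE_PRICE = 10.00

def demand_based_price(demand_index: float) -> float:
    # Uniform "dynamic pricing": every buyer sees the same price,
    # adjusted only by an aggregate demand signal (as in surge pricing).
    return round(BASE_PRICE * (1.0 + 0.25 * demand_index), 2)

def individualized_price(demand_index: float, inferred_willingness_to_pay: float) -> float:
    # "Surveillance-based" pricing: the same demand signal is combined with an
    # inference drawn from a specific consumer's personal data, so different
    # buyers can see different prices for the same good at the same moment.
    uniform = demand_based_price(demand_index)
    return round(uniform * (0.8 + 0.4 * inferred_willingness_to_pay), 2)

print(demand_based_price(0.6))           # identical for every consumer
print(individualized_price(0.6, 0.2))    # lower price for a budget-conscious profile
print(individualized_price(0.6, 0.9))    # higher price for a high-spend profile

A broad real-time ban like Minnesota’s could reach both functions if implemented with AI, while a definition like Colorado’s would target only the second, where personal data produces individualized prices.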
The impact on consumers
While data-driven pricing legislation is generally intended to protect consumers, some approaches may unintentionally block practices that consumers enjoy and rely on. There is a large delta between common and beneficial price-adjusting practices like sales on one hand, and exploitative practices like price gouging on the other, and writing a law that draws the proper cut-off point between the two is difficult. For example, Illinois SB 2255 contains the following prohibition:
A person shall not use surveillance data as part of an automated decision system to inform the individualized price assessed to a consumer for goods or services.
The bill would exempt persons assessing price based on the cost of providing a good or service, insurers in compliance with state law, and credit-extending entities in compliance with the Fair Credit Reporting Act. However, it would not exempt bona fide loyalty programs, a popular consumer benefit that is excluded from other similar legislation (such as the enacted New York S 3008, which carves out deals provided under certain “subscription-based agreements”). While lawmakers likely intended just to prevent exploitative pricing schemes that disempower consumers, they may inadvertently restrict some favorable practices as well. As a result, if statutes aren’t clear, some businesses may forgo offering discounts for fear of noncompliance.
Legal challenges to legislation
When New York S 3008 went into effect on July 8, 2025, the National Retail Federation filed a lawsuit to block the law, alleging that it would violate the First Amendment by including the following requirement, amounting to compelled speech:
Any entity that sets the price of a specific good or service using personalized algorithmic pricing … shall include with such statement, display, image, offer or announcement, a clear and conspicuous disclosure that states: “THIS PRICE WAS SET BY AN ALGORITHM USING YOUR PERSONAL DATA”.
The New York Office of the Attorney General, in response, said it would pause enforcement until 30 days after the judge in the case makes a decision on whether to grant a preliminary injunction. Other data-driven pricing bills would not face this challenge, as they don’t contain specific language requirements, instead focusing on prohibiting certain practices.
Beyond legislation
Regulators have also been scrutinizing certain data-driven pricing strategies, particularly for potentially anticompetitive conduct. While the FTC has seemingly deprioritized the 6(b) study of “surveillance pricing” it announced in July 2024—cancelling public comments after releasing preliminary insights from the report in January 2025—it could still take action on algorithmic pricing in the future under its competition authority. In fact, the FTC’s new leadership has not retracted a joint statement the Commission made in 2024 along with the Department of Justice (DOJ), European Commission, and UK Competition and Markets Authority, which affirmed “a commitment to protecting competition across the artificial intelligence (AI) ecosystem.” The FTC, along with 17 state attorneys general (AGs), also still has a pending lawsuit against Amazon, accusing the company of using algorithms to deter other sellers from offering lower prices.
Even if the FTC refrains from regulating data-driven pricing, other regulators may be interested in addressing the issue. In particular, in 2024 the DOJ, alongside eight state AGs, used its antitrust authority to sue the property management software company RealPage for allegedly using an algorithmic pricing model and nonpublic housing rental data to facilitate collusion among landlords. Anticompetitive use of algorithmic pricing tools is also a DOJ priority under new leadership, with the agency filing a statement of interest regarding the “application of the antitrust laws to claims alleging algorithmic collusion and information exchange” in a March 2025 case, and the agency’s Antitrust Division head promising an increase in probes of algorithmic pricing. Additionally, in response to reports claiming that Delta Air Lines planned to institute algorithmic pricing for tickets—and a letter to the company from Senators Gallego (D-AZ), Blumenthal (D-CT), and Warner (D-VA)—the Department of Transportation Secretary signalled that the agency would investigate such practices.
Conclusion
Policymakers are turning their attention towards certain data-driven pricing strategies, concerned about the impact—on consumers and markets—of practices that use large amounts of data and algorithms to set and adjust prices. Focused on practices such as “algorithmic,” “surveillance,” and “dynamic” pricing, these bills generally address pricing that involves the use of personal data, the deployment of AI, and/or frequent changes, particularly in critical sectors like food and housing. As access to consumer data grows, and algorithms are implemented in more domains, industry may increasingly rely on data-driven pricing tools to set prices. As such, legislators and regulators will likely continue to scrutinize their potential harmful impacts.
While some forms of price discrimination are illegal, many are not. The term “discrimination” as used in this context is distinct from how it’s used in the context of civil rights.
The New York Attorney General’s office said, as of July 14, 2025, that it would pause enforcement of the law while a federal judge decides on a motion for preliminary injunction, following a lawsuit brought by the National Retail Federation.
The “Neural Data” Goldilocks Problem: Defining “Neural Data” in U.S. State Privacy Laws
Co-authored by Chris Victory, FPF Intern
As of mid-2025, four U.S. states have enacted laws regarding “neural data” or “neurotechnology data.” These laws, all of which amend existing state privacy laws, signify growing lawmaker interest in regulating what is being treated as a distinct, particularly sensitive kind of data: information about people’s thoughts, feelings, and mental activity. Created in response to the burgeoning neurotechnology industry, neural data laws in the U.S. seek to extend existing protections for the most sensitive personal data to the newly conceived legal category of “neural data.”
Each of these laws defines “neural data” in related but distinct ways, raising a number of important questions: just how broad should this new data type be? How can lawmakers draw clear boundaries for a data type that, in theory, could apply to anything that reveals an individual’s mental activity? Is mental privacy actually separate from all other kinds of privacy? This blog post explores how Montana, California, Connecticut, and Colorado define “neural data,” how these varying definitions might apply to real-world scenarios, and some challenges with regulating at the level of neural data.
“Neural” and “neurotechnology” data definitions vary by state.
While just four states (Montana, California, Connecticut, and Colorado) currently have neural data laws on the books, legislation has expanded rapidly over the past couple of years. Following the emergence of sophisticated deep learning models and other AI systems, which gave a significant boost to the neurotechnology industry, media and policymaker attention turned to the nascent technology’s privacy, safety, and other ethical considerations. Proposed regulation—both in the U.S. and globally—varies in its approach to neural data, with some strategies creating new “neurorights” or mandating that entities minimize the neural data they collect or process.
In the U.S., however, laws have coalesced around an approach in which covered entities must treat neural data as “sensitive data” or other data with heightened protections under existing privacy law, above and beyond the protections granted by virtue of being personal information. The requirements that attach to neural data by virtue of being “sensitive” vary by underlying statute, as illustrated in the accompanying comparison chart. In fact, even the way that “neural data” is defined varies by law, placing different data types within scope depending on the state. The following definitions are organized roughly from the broadest conception of neural data to the narrowest.
California
Generally speaking, the broadest conception of “neural data” among the U.S. laws appears in California SB 1223, which amends the state’s existing consumer privacy law, the California Consumer Privacy Act (CCPA), to clarify that “sensitive personal information” includes “neural data.” The law, which went into effect January 1, 2025, defines “neural data” as:
Information that is generated by measuring the activity of a consumer’s central or peripheral nervous system, and that is not inferred from nonneural information.
Notably, however, the CCPA as amended by the California Privacy Rights Act (CPRA) treats “sensitive personal information” no differently than personal information except when it’s used for “the purpose of inferring characteristics about a consumer”—in which case it is subject to heightened protections. As such, the stricter standard for sensitive information will only apply when neural data is collected or processed for making inferences.
Montana
Montana SB 163 takes a slightly different approach than the other laws in two ways: one, it applies to “neurotechnology data,” an even broader category of data that includes the measurement of neural activity; and two, it amends Montana’s Genetic Information Privacy Act (GIPA) rather than a comprehensive consumer privacy law. The law, which goes into effect October 1, 2025, will define “neurotechnology data” as:
Information that is captured by neurotechnologies, is generated by measuring the activity of an individual’s central or peripheral nervous systems, or is data associated with neural activity, which means the activity of neurons or glial cells in the central or peripheral nervous system, and that is not nonneural information. The term does not include nonneural information, which means information about the downstream physical effects of neural activity, including but not limited to pupil dilation, motor activity, and breathing rate.
The law will define “neurotechnology” as:
Devices capable of recording, interpreting, or altering the response of an individual’s central or peripheral nervous system to its internal or external environment and includes mental augmentation, which means improving human cognition and behavior through direct recording or manipulation of neural activity by neurotechnology.
However, the law’s affirmative requirements will only apply to “entities” handling genetic or neurotechnology data, with “entities” defined narrowly—as in the original GIPA—as:
…a partnership, corporation, association, or public or private organization of any character that: (a) offers consumer genetic testing products or services directly to a consumer; or (b) collects, uses, or analyzes genetic data.
While the lawmakers may not have intended to limit its application to consumer genetic testing companies, and may have inadvertently carried over GIPA’s definition of “entities,” the text of the statute may significantly narrow the companies subject to it.
Connecticut
Similarly, Connecticut SB 1295, most of which goes into effect July 1, 2026, will amend the Connecticut Data Privacy Act to clarify that “sensitive data” includes “neural data,” defined as:
Any information that is generated by measuring the activity of an individual’s central nervous system.
In contrast to other definitions, the Connecticut law will apply only to central nervous system activity, rather than central and peripheral nervous system activity. However, it also does not explicitly exempt inferred data or nonneural information as California and Montana do, respectively.
Colorado
Colorado HB 24-1058, which went into effect August 7, 2024, amends the Colorado Privacy Act to clarify that “sensitive data” includes “biological data,” which itself includes “neural data.” “Biological data” is defined as:
Data generated by the technological processing, measurement, or analysis of an individual’s biological, genetic, biochemical, physiological, or neural properties, compositions, or activities or of an individual’s body or bodily functions, which data is used or intended to be used, singly or in combination with other personal data, for identification purposes.
The law defines “neural data” as:
Information that is generated by the measurement of the activity of an individual’s central or peripheral nervous systems and that can be processed by or with the assistance of a device.
Notably, “biological data” only applies to such data when used or intended to be used for identification, significantly narrowing the potential scope.
* While only Montana explicitly covers data captured by neurotechnologies, and excludes nonneural information, the other laws may implicitly do so as well.
The Goldilocks Problem: The nature of “neural data” makes it challenging to get the definition just right.
Given that each state law defines neural data differently, there may be significant variance in what kinds of data are covered. Generally, these differences cut across three elements:
Central vs. peripheral nervous system data: Does the law cover data from both the central and peripheral nervous system, or just the central nervous system?
Treatment of inferred and nonneural data: Does the law exclude neural data that is inferred from nonneural activity?
Identification: Does the law exclude neural data that is not used, or intended to be used, for the purpose of identification?
Central vs. peripheral nervous system data
The nervous system comprises the central nervous system (CNS) and the peripheral nervous system (PNS). The CNS—made up of the brain and spinal cord—carries out higher-level functions including thinking, emotions, and coordinating motor activity. The PNS—the network of nerves that connects the CNS to the rest of the body—receives signals from the CNS and transmits this information to the rest of the body instructing it on how to function, and transfers sensory information back to the CNS in a cyclical process. Some of this activity is conscious and deliberate on the part of the individual (voluntary nervous system), while some involves unconscious, involuntary functions like digestion and heart rate (autonomic nervous system).
What this means practically is that the nervous system is involved in just about every human bodily function. Some of this data is undoubtedly particularly sensitive, as it can reveal information about an individual’s health, sexuality, emotions, identity, and more. It may also provide insight into an individual’s “thoughts,” either by accessing brain activity directly or by measuring other bodily data that in effect reveals what the individual is thinking (e.g., increased heart and breathing rate at a particular time can reveal stress or arousal). It also means that an incredibly broad swath of data could be considered neural data: the movement of a computer mouse or use of a smartwatch may technically constitute, under certain definitions, neural data.
As such, there is a significant difference between laws that cover both CNS and PNS data and those that only cover CNS data. Connecticut SB 1295 is the lone current law that applies solely to CNS data, which narrows its scope considerably and likely only covers data collected from tools such as brain-computer interfaces (BCIs), electroencephalograms (EEGs), and other similar devices. However, other data types that would be excluded by virtue of not relating to the CNS could, in theory, provide the same or similar information. For example, signals from the PNS—such as pupillometry (pupil dilation), respiration (breathing patterns), and heart rate—could also indicate the nervous system’s response to stimuli, despite not technically being a direct measurement of the CNS.
Treatment of inferred and nonneural data
Defining “neural data” in a way that covers particular data of concern without being overinclusive is challenging, and lawmakers have added carveouts in an attempt to make their legislation more workable. However, focusing regulation on the nervous system in the first place raises a few potential issues. First, it reinforces neuroessentialism, the idea that the nervous system and neural data are unique and separate from other types of sensitive data; as well as neurohype, the inflation or exaggeration of neurotechnologies’ capabilities. There is not currently—and may never be, as such—a technology for “reading a person’s mind.” What may be possible are tools that measure neural activity to provide clues about what an individual might be thinking or feeling, much the same as measuring their other bodily functions, or even just gaining access to their browsing history. This doesn’t make the data less sensitive, but challenges the idea that “neural data” itself—whether referring to the central, peripheral, or both nervous systems—is the most appropriate level for regulation.
This leaves lawmakers facing one of two problems. On one hand, defining “neural data” too broadly could create a scenario in which all bodily data is covered. Typing on a keyboard involves neural data, as the central nervous system sends signals through the peripheral nervous system to the hands in order to type. Yet, regulating all data related to typing as sensitive neural data could be unworkable. On the other hand, defining “neural data” too narrowly could result in regulations that don’t actually provide the protections that lawmakers are seeking. For example, if legislation only applies to neural data that is used for identification purposes, it may cover very few situations, as this is not a way that neural data is typically used. Similarly, only covering CNS data, rather than both CNS and PNS data, may be difficult to implement because it’s not clear that it’s possible to truly separate the data from these two systems, as they are interlinked.
One way lawmakers seek to get around the first problem is by narrowing the scope, clarifying that the legislation doesn’t apply to “nonneural information” such as downstream physical bodily effects, or to neural data that is “inferred from nonneural information.” For example, Montana SB 163 excludes “nonneural information” such as pupil dilation, motor activity, and breathing rate. However, if the concern is that certain information is particularly sensitive and should be protected (e.g., data potentially revealing an individual’s thoughts or feelings), then scoping out this information just because it’s obtained in a different way doesn’t address the underlying issue. For example, if data about an individual’s heart rate, breathing, perspiration, and speech pattern is used to infer their emotional state, this is functionally no different—and potentially even more revealing—than data collected “directly” from the nervous system. Similarly, California SB 1223 carves out data that is “inferred from nonneural information,” leaving open the possibility for the same kind of information to be inferred through other bodily data.
Identification
Another way lawmakers, specifically in Colorado, have sought to avoid an unmanageably broad conception of neural data is to only cover such data when used for identification. Colorado HB 24-1058, which regulates “biological data”—of which “neural data” is one component—only applies when the data “is used or intended to be used, singly or in combination with other personal data, for identification purposes.” Given that neural data, at least currently, is not used for identification, it’s not clear that such a definition would cover many, if any, instances of consumer neural data.
Conclusion
Each of the four U.S. states currently regulating “neural data” defines the term differently, varying around elements such as the treatment of central and peripheral nervous system data, exclusions for inferred or nonneural data, and the use of neural data for identification. As a result, the scope of data covered under each law differs depending on how “neural data” is defined. At the same time, attempting to define “neural data” reveals more fundamental challenges with regulating at the level of nervous system activity. The nervous system is involved in nearly all bodily functions, from innocuous movements to sensitive activities. Legislating around all nervous system activity may prove unworkable, sweeping in everyday technologies, while certain carveouts may, conversely, scope out information that lawmakers want to protect. While many are concerned about technologies that can “read minds,” such a tool does not currently exist per se, and in many cases nonneural data can reveal the same information. As such, focusing too narrowly on “thoughts” or “brain activity” could exclude some of the most sensitive and intimate personal characteristics that people want to protect. In finding the right balance, lawmakers should be clear about which potential uses or outcomes they would like to focus on.
FPF at PDP Week 2025: Generative AI, Digital Trust, and the Future of Cross-Border Data Transfers in APAC
Authors: Darren Ang Wei Cheng and James Jerin Akash (FPF APAC Interns)
From July 7 to 10, 2025, the Future of Privacy Forum (FPF)’s Asia-Pacific (APAC) office was actively engaged in Singapore’s Personal Data Protection Week 2025 (PDP Week) – a week of events hosted by the Personal Data Protection Commission of Singapore (PDPC) at the Marina Bay Sands Expo and Convention Centre in Singapore.
Alongside the PDPC’s events, PDP Week also included a two-day industry conference organized by the International Association of Privacy Professionals (IAPP) – the IAPP Asia Privacy Forum and AI Governance Global.
This blog post presents key takeaways from the wide range of events and engagements that FPF APAC led and participated in throughout the week. Key themes that emerged from the week’s discussions included:
AI governance has moved beyond principles to practice, policies, and passed laws: Organizations are now focused on the practical steps to be taken for developing and deploying AI responsibly. This requires a cross-functional approach within organizations and thorough due diligence when procuring third-party AI solutions.
Digital trust has become a greater imperative: As AI systems and other digital technologies become more complex, building digital trust and ensuring that technology aligns with consumer and societal expectations are critical to maximizing the potential benefits.
The future of the digital economy will be shaped by the trajectory of cross-border data transfers: There is a tension, both in the APAC region and globally, between the rise of restrictive data transfer requirements, and the fact that data transfers are essential for the digital economy and the development of high-quality AI systems.
Technical and legal solutions for privacy are gaining ground: In response to the complex landscape of data transfer rules, stakeholders are actively exploring practical solutions such as Privacy Enhancing Technologies (PETs), internationally-recognized certifications, or mechanisms such as the ASEAN Model Contractual Clauses (MCCs).
In the paragraphs below, we elaborate on some of these themes, as well as other interesting observations that came up over the course of FPF’s involvement in PDP Week.
1. FPF and IMDA’s co-hosted workshop shared practical perspectives for companies navigating the waters of generative AI governance.
On Monday, July 7, 2025, FPF joined the Infocomm Media Development Authority of Singapore (IMDA) in hosting a workshop for Singapore’s data protection community, titled “AI, AI, Captain!: Steering your organisation in the waters of Gen AI by IMDA and FPF.” The highly anticipated event provided participants with practical knowledge about AI governance at the organizational level.
The event was hosted by Josh Lee Kok Thong, Managing Director of FPF APAC, and was attended by around 200 representatives from industry, including data protection officers (DPOs) and chief technology officers (CTOs). FPF’s segment of the workshop had two parts: an informational segment featuring presentations from FPF and IMDA, followed by a multi-stakeholder, practice-focused panel discussion.
FPF at “AI, AI, Captain! – Steering your organisation in the waters of Gen AI by IMDA and FPF”, July 7, 2025.
1.1 AI governance in APAC is neither unguided nor ungoverned, as policymakers are actively working to develop both soft and hard regulations for AI and to clarify how existing data protection laws apply to its use.
Josh presented on global AI governance, highlighting the rapid legislative changes in the APAC region over the past six months, and comparing developments in South Korea, Japan, and Vietnam with those in the EU, US, and Latin America. He then discussed how data protection laws – especially provisions on consent, data subject rights, and breach management – impact AI governance and how data protection regulators in Japan, South Korea, and Hong Kong (among others) have provided guidance on this. Josh’s presentation was followed by one from Darshini Ramiah, Senior Manager of AI Governance and Safety at IMDA. Darshini provided an overview of Singapore’s approach to AI governance, which is built on three key pillars:
Creating practical tools, such as the AI Verify toolkit and Project Moonshot, which enable benchmarking of traditional AI systems and red teaming of large language models (LLMs), respectively;
Engaging closely with international partners, such as through the ASEAN Working Group on AI Governance and the publication of the AI Playbook for Small States under the Digital Forum of Small States; and
Collaborating with industry in the development of principles and tools around AI governance.
FPF presenting at “AI, AI, Captain! – Steering your organisation in the waters of Gen AI by IMDA and FPF”, July 7, 2025.
1.2 FPF moderated a panel session that focused on key aspects of AI governance and featured industry experts and regulators.
The panel session of the workshop, moderated by Josh, included the following experts:
Darshini Ramiah, Senior Manager, AI Governance and Safety at IMDA;
Derek Ho, Deputy Chief Privacy, AI and Data Responsibility Officer at Mastercard; and
Patrick Chua, Senior Principal Digital Strategist at Singapore Airlines (SIA).
The experts discussed AI governance from both an industry and regulatory perspective.
The panelists highlighted that AI governance is cross-functional and requires collaborative effort from the various teams in the organization to be successful.
One of the panelists suggested looking at Principles, People, Process, and Technology (“3Ps and a T”) when considering AI governance. The panelists agreed on the importance of clearly defining values that serve as a “North Star” to guide an organization’s cross-functional AI governance efforts and to build strong support from senior management for related initiatives.
For small and medium enterprises (SMEs), the panelists emphasized that a structured but scalable governance model could help SMEs to manage AI risk effectively. SMEs can start by referring to existing resources like IMDA’s guidelines, such as the Model AI Governance Framework.
Recognizing that many organizations in Singapore will be procuring ready-made AI solutions rather than developing their own models in-house, panelists highlighted the need for strong due diligence. This includes examining model cards, which disclose a model’s key metrics, adopting contractual safeguards with third-party vendors, and deploying the technology in stages to further limit risk.
Singapore is also working to standardize AI transparency for industry. IMDA is exploring several areas, including the introduction of standardized disclosure formats for AI model developers, such as standardized model cards.
FPF moderating the panel session at “AI, AI, Captain! – Steering your organisation in the waters of Gen AI by IMDA and FPF”, July 7, 2025.
2. FPF facilitated deep conversations at PDPC’s PETs Summit, including on the use of PETs in cross-border data transfers and within SMEs.
2.1 FPF moderated a fireside chat on PETs use cases during the opening Plenary Session.
On Tuesday, July 8, 2025, FPF APAC participated in a day-long PETs Summit organized by the PDPC and IMDA. During the opening plenary session, Josh moderated a fireside chat with Fabio Bruno, Assistant Director of Applied Innovation at INTERPOL, titled “Solving Big Problems with PETs.” Following panels that covered use cases for PETs and policies that could increase their adoption, this fireside chat looked at how PETs could present fresh solutions to long-standing data protection issues (such as cross-border data transfers).
In this regard, Fabio shared how law enforcement bodies around the world have been exploring PETs to streamline investigations. He highlighted ongoing exploration of certain PETs, such as zero-knowledge proofs (a cryptographic method that allows one party to prove to another party that a particular piece of information is true without revealing any additional information beyond the validity of the claim) and homomorphic encryption (a family of encryption schemes allowing for computations to be performed directly on encrypted data without having to first decrypt it). In a law enforcement context, these PETs enable preliminary validation that can help to reduce delays and lower the cost of investigations, while also helping to protect individuals’ privacy.
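To make the homomorphic encryption concept more concrete, the toy sketch below implements a Paillier-style additively homomorphic scheme in plain Python. It is purely illustrative, with deliberately tiny and insecure parameters; it is not a production PET and not the specific tooling discussed at the Summit, only a demonstration of the “compute on encrypted data” property described above.

```python
# Toy additively homomorphic encryption (Paillier-style), for illustration only.
# Tiny, insecure parameters: a sketch of "computing on encrypted data,"
# not a production PET and not any particular library's API.
import math
import random

# Key generation with small primes for readability; real schemes use 2048-bit+ moduli.
p, q = 1789, 1867
n = p * q                      # public modulus
n_sq = n * n
g = n + 1                      # common simplification for the generator
lam = math.lcm(p - 1, q - 1)   # private key component
mu = pow(lam, -1, n)           # modular inverse of lambda mod n

def encrypt(m: int) -> int:
    """Encrypt an integer 0 <= m < n under the public key (n, g)."""
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    """Decrypt a ciphertext with the private key (lam, mu)."""
    l = (pow(c, lam, n_sq) - 1) // n   # the "L" function: L(x) = (x - 1) / n
    return (l * mu) % n

# Homomorphic property: multiplying ciphertexts adds the underlying plaintexts,
# so an untrusted party can aggregate values it never sees in the clear.
c1, c2 = encrypt(1200), encrypt(345)
assert decrypt((c1 * c2) % n_sq) == 1545
```

In a cross-border or law enforcement scenario of the kind Fabio described, this property is what allows one party to perform useful computations, such as aggregation or preliminary matching checks, on data it cannot read in the clear, which helps explain why PETs are seen as a promising complement to legal transfer mechanisms.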
Notwithstanding the potential of PETs for cross-border data transfers (even for commercial, non-law enforcement contexts), challenges exist. These include: (1) enhancing and harmonizing the understanding and acceptability of PETs among data protection regulators globally; and (2) obtaining higher management support to invest in PETs. Nevertheless, the fireside chat concluded with optimism about the prospect of the greater use of PETs for data transfers, and left the audience with plenty of food for thought.
FPF moderating the fireside chat at PETs Summit Plenary Session, July 8, 2025
2.2 FPF Members facilitated an engaging PETs Deep Dive Session that explored business use cases for PETs.
After the plenary session, FPF APAC teammates Dominic Paulger, Sakshi Shivhare, and Bilal Mohamed facilitated a practical workshop organized by IMDA, titled the “PETs Deep Dive Session.” Drawing on IMDA’s draft PETs Adoption Guide, the workshop aimed to help Chief Data Officers, DPOs, and AI and data product teams understand which PETs best fit their business use cases.
FPF APAC Team at PETs Summit, July 8, 2025
3. On Wednesday, FPF joined a discussion at IAPP Asia Privacy Forum on how regulators and major tech companies in the APAC region are fostering “digital trust” in AI by aligning technology with societal expectations.
On Wednesday, July 9, 2025, FPF APAC participated in an IAPP Asia Privacy Forum panel titled “Building Digital Trust in AI: Perspectives from APAC.” Josh joined Lanah Kammourieh Donnelly, Global Head of Privacy Policy at Google, and Lee Wan Sie, Cluster Director for AI Governance and Safety at the IMDA, for a panel moderated by Justin B. Weiss, Senior Director at Crowell Global Advisors.
A key theme from the panel was that, given the opacity of many digital technologies, the concept of digital trust is essential to ensure that these technologies work in ways that protect important societal interests. Accordingly, the panel discussed strategies that could foster digital trust.
Wan Sie provided the regulator’s perspective and acknowledged that, given the rapid pace of AI development, regulation would always be “playing catch-up.” Thus, instead of implementing a horizontal AI law, she shared how Singapore is focusing on making industry more capable of using AI responsibly. Wan Sie pointed to AI Verify, Singapore’s AI governance testing framework and toolkit, and the IMDA’s new Global AI Assurance Sandbox as mechanisms that help organizations demonstrate the trustworthiness of their AI systems to users.
Josh focused on trends from across the APAC region, sharing how regulators in Japan and South Korea have been actively considering amendments to their data protection laws to expand the legal bases for processing personal data, in order to facilitate greater availability of data for training high-quality AI systems.
Lanah highlighted Google’s approach of developing AI responsibly in accordance with certain core privacy values, such as those in the Fair Information Practice Principles (FIPPs). For example, she shared how Google is actively researching technological solutions like training its models on synthetic data instead of using publicly-available datasets from the Internet which may contain large amounts of personal data.
Overall, the panel noted that APAC is taking its own distinct approach to AI governance – one in which industry and regulators collaborate actively to ensure principled development of technology.
FPF and the “Building Digital Trust in AI: Perspectives from APAC” panel at IAPP, July 9, 2025.
4. On Thursday, FPF staff moderated two panels at IAPP AI Governance Global on cross-border data transfers and regulatory developments in Australia.
4.1 While rules for cross-border data transfers remain fragmented and restrictive, there is cautious optimism that APAC will pursue interoperability.
On Thursday, July 10, 2025, FPF organized a panel titled “Shifting Sands: The Outlook for Cross Border Data Transfers in APAC,” which featured Emily Hancock, Vice President and Chief Privacy Officer at Cloudflare; Arianne Jimenez, Head of Privacy and Data Policy and Engagement for APAC at Meta; and Zee Kin Yeong, Chief Executive of the Singapore Academy of Law and FPF Senior Fellow. Moderated by Josh, the panel discussed evolving regulatory frameworks for cross-border data transfers in APAC.
The panel first observed that the landscape for cross-border data transfers across APAC remains fragmented. Emily elaborated that restrictions on data transfer were a global phenomenon and attributable to how data is increasingly viewed as a national security matter, making governments less willing to lower restrictions and pursue interoperability.
Despite this challenging landscape, the panel members were cautiously optimistic that transfer restrictions could be managed effectively. Zee Kin highlighted how the increasing integration of economies through supranational organizations like ASEAN is driving a push in APAC towards recognizing more business-friendly data transfer mechanisms, such as the ASEAN MCCs. He also noted that regulators often relax restrictions once local businesses start to expand operations overseas and need to transfer data across borders.
Arianne suggested that businesses communicate to regulators the challenges they face with restrictive data transfer frameworks. She acknowledged that SMEs are often not as well-resourced as multi-national corporations (MNCs) and thus face difficulties in navigating the complex patchwork of regulations across the region. She explained that since regulators in APAC are generally open to consultation, businesses should take the opportunity to advocate for more interoperability.
The panel concluded by highlighting the importance of data transfers to AI development. Cross-border data transfers are crucial to fostering diverse datasets, accessing advanced computing infrastructure, combating global cyber-threats by enabling worldwide threat sharing, and reducing the environmental impact by limiting the need for additional data centers. Overall, the panel expressed hope that despite the legal fragmentation and complicated state of play, the clear benefits of cross-border data transfers would encourage jurisdictions to pursue greater interoperability.
FPF and the “Shifting Sands: The Outlook for Cross Border Data Transfers in APAC” panel at IAPP, July 10, 2025.
4.2 With updates to Australia’s Privacy Act, privacy is non-negotiable, and businesses can benefit from improving their privacy compliance processes and systems ahead of increased enforcement.
FPF’s APAC Deputy Director Dominic Paulger moderated a panel titled “Navigating the Impact of Australia’s Privacy Act Amendments in the Asia-Pacific.” The panelists included Dora Amoah, Global Privacy Office Lead at the Boeing Company; Rachel Baker, Senior Corporate Counsel for Privacy, JAPAC, at Salesforce; and Annelies Moens, the former Managing Director of Privcore. The panel discussed the enactment of the Privacy and Other Legislation Amendment Bill 2024 following a multiyear review of Australia’s Privacy Act, and the potential impact of these reforms on businesses.
Annelies shared an overview of the reforms, including:
new transparency requirements for automated decision-making (ADM);
revised cross-border data transfer mechanisms;
new enforcement powers for the Office of the Australian Information Commissioner (OAIC); and
a new statutory tort for serious invasions of privacy.
She mentioned that more changes could be coming, but some proposals – such as removing the small business exception – were facing resistance in Australia. However, irrespective of how the law develops, businesses can expect enforcement to increase.
The industry panelists shared their insights and experiences complying with the new amendments. Dora explained that despite the increased litigation risk from the new statutory tort for serious invasions of privacy, the threshold for liability was rather high as the tort required intent. She also noted that companies could avoid liability through implementing proper processes that prevent intentional or reckless misconduct.
Rachel noted that the Privacy Act’s new ADM provisions would improve consumer rights in Australia. She observed how Australians have been facing serious privacy intrusions that have drawn the OAIC’s attention, such as the Cambridge Analytica scandal and the misuse of facial recognition technology. She considered that since data subjects in Australia are increasingly expecting more rights, such as the right to deletion, businesses should go beyond compliance and actively adopt best practices.
Overall, the panel expressed the view that, in this new reality, the role of the privacy professional in Australia, much like elsewhere in the world, is evolving to not just interpret and comply with the law but also to build robust systems through privacy by design.
FPF and the panelists of “Navigating the Impact of Australia’s Privacy Act Amendments in the Asia-Pacific” at IAPP, July 10, 2025.
5. FPF organized exclusive side events to foster deeper engagements with key stakeholders.
A key theme of FPF’s annual PDP Week experience has always been bringing our global FPF community – members, fellows, and friends – together for deep and meaningful conversations about the latest developments. This year, FPF APAC organized two events for its members: a Privacy Leaders’ Luncheon (an annual staple) and, for the first time, an India Luncheon co-organized with Khaitan & Co.
5.1 On July 8, 2025, FPF hosted an invite-only Privacy Leaders’ Luncheon.
This closed-door event provided a platform for senior stakeholders of FPF APAC to discuss pressing challenges at the intersection of AI and privacy, with a particular focus on the APAC region. During the session, the attendees discussed key topics such as the emerging developments in data protection laws, AI governance, and children’s privacy.
FPF’s Privacy Leaders Luncheon, July 8, 2025.
5.2 On July 10, FPF co-hosted an India Roundtable Luncheon with Khaitan & Co.
FPF APAC also collaborated with an Indian law firm, Khaitan & Co, to co-host a lunch roundtable focusing on pressing challenges in India, such as the development of implementing rules for the Digital Personal Data Protection Act, 2023 (DPDPA). The event brought together experts from both India and Singapore for fruitful discussions around the DPDPA and the draft Digital Personal Data Protection Rules. FPF APAC is grateful to have partnered with Khaitan & Co for the Luncheon, which saw active discussion amongst attendees on key issues in India’s emerging data protection regime.
FPF’s India Luncheon co-hosted with Khaitan & Co, July 10, 2025.
6. Conclusion
In all, it has been another deeply fruitful and meaningful year for FPF at Singapore’s PDP Week 2025. Through our panels, engagements, and curated roundtable sessions, FPF is proud to have been able to continue to drive thoughtful and earnest dialogue on data protection, AI, and responsible innovation across the APAC region. These engagements reflect our ongoing commitment to fostering greater collaboration and understanding among regulators, industry, academia, and civil society.
Looking ahead, FPF remains focused on shaping thoughtful approaches to privacy and emerging technologies. We are grateful for the continued support of the IMDA, IAPP, as well as our members, partners, and participants, who helped make these events a memorable success.