Future of Privacy Forum Launches the FPF Center for Artificial Intelligence

The FPF Center for Artificial Intelligence will serve as a catalyst for AI policy and compliance leadership globally, advancing responsible data and AI practices for public and private stakeholders

Today, the Future of Privacy Forum (FPF) launched the FPF Center for Artificial Intelligence, established to better serve policymakers, companies, non-profit organizations, civil society, and academics as they navigate the challenges of AI policy and governance. The Center will expand FPF’s long-standing AI work, introduce large-scale novel research projects, and serve as a source for trusted, nuanced, nonpartisan, and practical expertise. 

The Center’s work will be international in scope, as AI continues to be deployed rapidly around the world. Cities, states, countries, and international bodies are already grappling with implementing laws and policies to manage the risks. “Data, privacy, and AI are intrinsically interconnected issues that we have been working on at FPF for more than 15 years, and we remain dedicated to collaborating across the public and private sectors to promote their ethical, responsible, and human-centered use,” said Jules Polonetsky, FPF’s Chief Executive Officer. “But we have reached a tipping point in the development of the technology that will affect future generations for decades to come. At FPF, the word Forum is a core part of our identity. We are a trusted convener positioned to build bridges between stakeholders globally, and we will continue to do so under the new Center for AI, which will sit within FPF.”

The Center will help the organization’s 220+ members navigate AI through the development of best practices, research, legislative tracking, thought leadership, and public-facing resources. It will be a trusted evidence-based source of information for policymakers, and it will collaborate with academia and civil society to amplify relevant research and resources. 

“Although AI is not new, we have reached an unprecedented moment in the development of the technology that marks a true inflection point. The complexity, speed and scale of data processing that we are seeing in AI systems can be used to improve people’s lives and spur a potential leapfrogging of societal development, but with that increased capability comes associated risks to individuals and to institutions,” said Anne J. Flanagan, Vice President for Artificial Intelligence at FPF. “The FPF Center for AI will act as a collaborative force for shared knowledge between stakeholders to support the responsible development of AI, including its fair, safe, and equitable use.”

The Center will officially launch at FPF’s inaugural summit, DC Privacy Forum: AI Forward. The in-person, public-facing summit will feature high-profile representatives from the public and private sectors in the world of privacy, data, and AI.

FPF’s new Center for Artificial Intelligence will be supported by a Leadership Council of experts from around the globe. The Council will consist of members from industry, academia, civil society, and current and former policymakers.

See the full list of founding FPF Center for AI Leadership Council members here.

I am excited about the launch of the Future of Privacy Forum’s new Center for Artificial Intelligence and honored to be part of its leadership council. This announcement builds on many years of partnership and collaboration between Workday and FPF to develop privacy best practices and advance responsible AI, which has already generated meaningful outcomes, including last year’s launch of best practices to foster trust in this technology in the workplace.  I look forward to working alongside fellow members of the Council to support the Center’s mission to build trust in AI and am hopeful that together we can map a path forward to fully harness the power of this technology to unlock human potential.

Barbara Cosgrove, Vice President, Chief Privacy Officer, Workday

I’m honored to be a founding member of the Leadership Council of the Future of Privacy Forum’s new Center for Artificial Intelligence. AI’s impact transcends borders, and I’m excited to collaborate with a diverse group of experts around the world to inform companies, civil society, policymakers, and academics as they navigate the challenges and opportunities of AI governance, policy, and existing data protection regulations.

Dr. Gianclaudio Malgieri, Associate Professor of Law & Technology at eLaw, University of Leiden

“As we enter this era of AI, we must require the right balance between allowing innovation to flourish and keeping enterprises accountable for the technologies they create and put on the market. IBM believes it will be crucial that organizations such as the Future of Privacy Forum help advance responsible data and AI policies, and we are proud to join others in industry and academia as part of the Leadership Council.”

Learn more about the FPF Center for AI here.

About Future of Privacy Forum (FPF)

The Future of Privacy Forum (FPF) is a global non-profit organization that brings together academics, civil society, government officials, and industry to evaluate the societal, policy, and legal implications of data use, identify the risks, and develop appropriate protections. 

FPF believes technology and data can benefit society and improve lives if the right laws, policies, and rules are in place. FPF has offices in Washington D.C., Brussels, Singapore, and Tel Aviv. Learn more at fpf.org.

FPF Develops Checklist & Guide to Help Schools Vet AI Tools for Legal Compliance

FPF’s Youth and Education team has developed a checklist and accompanying policy brief to help schools vet generative AI tools for compliance with student privacy laws. Vetting Generative AI Tools for Use in Schools is a crucial resource as the use of generative AI tools continues to increase in educational settings. It’s critical for school leaders to understand how existing federal and state student privacy laws, such as the Family Educational Rights and Privacy Act (FERPA), apply to the complexities of machine learning systems in order to protect student privacy. With these resources, FPF aims to provide much-needed clarity and guidance to educational institutions grappling with these issues.

Click here to access the checklist and policy brief.

“AI technology holds immense promise in enhancing educational experiences for students, but it must be implemented responsibly and ethically,” said David Sallay, the Director for Youth & Education Privacy at the Future of Privacy Forum. “With our new checklist, we aim to empower educators and administrators with the knowledge and tools necessary to make informed decisions when selecting generative AI tools for classroom use while safeguarding student privacy.”

The checklist, designed specifically for K-12 schools, outlines key considerations for incorporating generative AI into a school or district’s edtech vetting process. 

These include: 

By prioritizing these steps, educational institutions can promote transparency and protect student privacy while maximizing the benefits of technology-driven learning experiences for students. 

The in-depth policy brief outlines the relevant laws and policies a school should consider, the unique compliance considerations of generative AI tools (including data collection, transparency and explainability, product improvement, and high-risk decision-making), and their most likely use cases (student, teacher, and institution-focused).

The brief also encourages schools and districts to update their existing edtech vetting policies to address the unique considerations of AI technologies (or to create a comprehensive policy if one does not already exist) instead of creating a separate vetting process for AI. It also highlights the role that state legislatures can play in ensuring the efficiency of school edtech vetting and oversight and calls on vendors to be proactively transparent with schools about their use of AI.


Check out the LinkedIn Live with CEO Jules Polonetsky and Youth & Education Director David Sallay about the Checklist and Policy Brief.

To read more of the Future of Privacy Forum’s youth and student privacy resources, visit www.StudentPrivacyCompass.org

FPF Releases “The Playbook: Data Sharing for Research” Report and Infographic

Today, the Future of Privacy Forum (FPF) published “The Playbook: Data Sharing for Research,” a report on best practices for instituting research data-sharing programs between corporations and research institutions. FPF also developed a summary of recommendations from the full report.

Facilitating data sharing for research purposes between corporate data holders and academia can unlock new scientific insights and drive progress in public health, education, social science, and a myriad of other fields for the betterment of broader society. Academic researchers use this data to consider consumer, commercial, and scientific questions at a scale they cannot reach using conventional research data-gathering techniques alone. Such data has also helped researchers answer questions on topics ranging from bias in targeted advertising and the influence of misinformation on election outcomes to early diagnosis of diseases through data collected by fitness and health apps.

The playbook addresses vital steps for data management, sharing, and program execution between companies and researchers. Creating a data-sharing ecosystem that positively advances scientific research requires a better understanding of the established risks, opportunities to address challenges, and the diverse stakeholders involved in data-sharing decisions. This report aims to encourage safe, responsible data-sharing between industries and researchers.

“Corporate data sharing connects companies with research institutions, by extension increasing the quantity and quality of research for social good,” said Shea Swauger, Senior Researcher for Data Sharing and Ethics. “This Playbook showcases the importance, and advantages, of having appropriate protocols in place to create safe and simple data sharing processes.”

In addition to the Playbook, FPF created a companion infographic summarizing the benefits, challenges, and opportunities of data sharing for research outlined in the larger report.


As a longtime advocate for facilitating the privacy-protective sharing of data by industry to the research community, FPF is proud to have created this set of best practices for researchers, institutions, policymakers, and data-holding companies. In addition to the Playbook, the Future of Privacy Forum has also opened nominations for its annual Award for Research Data Stewardship.

“Our goal with these initiatives is to celebrate the successful research partnerships transforming how corporations and researchers interact with each other,” Swauger said. “Hopefully, we can continue to engage more audiences and encourage others to model their own programs with solid privacy safeguards.”

Shea Swauger, Senior Researcher for Data Sharing and Ethics, Future of Privacy Forum

Established by FPF in 2020 with support from The Alfred P. Sloan Foundation, the Award for Research Data Stewardship recognizes excellence in the privacy-protective stewardship of corporate data shared with academic researchers. The call for nominations is open and closes on Tuesday, January 17, 2023. To submit a nomination, visit the FPF site.

FPF has also launched a newly formed Ethics and Data in Research Working Group; this group receives late-breaking analyses of emerging US legislation affecting research and data, meets to discuss the ethical and technological challenges of conducting research, and collaborates to create best practices to protect privacy, decrease risk, and increase data sharing for research, partnerships, and infrastructure. Learn more and join here.

FPF Testifies Before House Energy and Commerce Subcommittee, Supporting Congress’s Efforts on the “American Data Privacy and Protection Act”

This week, FPF’s Senior Policy Counsel Bertram Lee testified before the U.S. House Energy and Commerce Subcommittee on Consumer Protection and Commerce at its hearing, “Protecting America’s Consumers: Bipartisan Legislation to Strengthen Data Privacy and Security,” regarding the bipartisan, bicameral privacy discussion draft bill, the “American Data Privacy and Protection Act” (ADPPA). FPF has a history of supporting the passage of a comprehensive federal consumer privacy law, which would provide businesses and consumers alike with the benefit of clear national standards and protections.

Lee’s testimony opened by applauding the Committee for its efforts towards comprehensive federal privacy legislation and emphasized that the “time is now” for its passage. As written, the ADPPA would address gaps in the sectoral approach to consumer privacy, establish strong national civil rights protections, and create new rights and safeguards for the protection of sensitive personal information. 

“The ADPPA is more comprehensive in scope, inclusive of civil rights protections, and provides individuals with more varied enforcement mechanisms in comparison to some states’ current privacy regimes,” Lee said in his testimony. “It also includes corporate accountability mechanisms, such as requiring privacy designations, data security officers, and executive certifications showing compliance, which are missing from current states’ laws. Notably, the ADPPA also requires ‘short-form’ privacy notices to aid consumers in understanding how their data will be used by companies and their rights — a provision that is not found in any state law.” 

Lee’s testimony also provided four recommendations to strengthen the bill, which include: 

Many of the recommendations would ensure that the legislation gives individuals meaningful privacy rights and places clear obligations on businesses and other organizations that collect, use and share personal data. The legislation would expand civil rights protections for individuals and communities harmed by algorithmic discrimination as well as require algorithmic assessments and evaluations to better understand how these technologies can impact communities. 

The submitted testimony and a video of the hearing can be found on the House Committee on Energy & Commerce site.

Reading the Signs: the Political Agreement on the New Transatlantic Data Privacy Framework

The President of the United States, Joe Biden, and the President of the European Commission, Ursula von der Leyen, announced last Friday, in Brussels, a political agreement on a new Transatlantic framework to replace the Privacy Shield. 

This is a significant escalation of the topic within Transatlantic affairs, compared to the 2016 announcement of a new deal to replace the Safe Harbor framework. Back then, it was Commission Vice-President Andrus Ansip and Commissioner Vera Jourova who announced at the beginning of February 2016 that a deal had been reached. 

The draft adequacy decision was only published a month after the announcement, and the adequacy decision was adopted 6 months later, in July 2016. Therefore, it should not be at all surprising if another 6 months (or more!) pass before the adequacy decision for the new Framework produces legal effects and is actually able to support transfers from the EU to the US, especially since the US side still has to adopt at least one Executive Order to provide for the agreed-upon new safeguards.

This means that transfers of personal data from the EU to the US may still be blocked in the coming months, possibly without a lawful alternative to continue them, as a consequence of Data Protection Authorities (DPAs) enforcing Chapter V of the General Data Protection Regulation in light of the Schrems II judgment of the Court of Justice of the EU, whether as part of the 101 noyb complaints submitted in August 2020, which are slowly starting to be resolved, or as part of other individual complaints and court cases. 

After the agreement “in principle” was announced at the highest possible political level, EU Justice Commissioner Didier Reynders doubled down on the point that this agreement was reached “on the principles” for a new framework, rather than on its details. Later on, he also gave credit to Commerce Secretary Gina Raimondo and US Attorney General Merrick Garland for their hands-on involvement in working towards this agreement. 

In fact, “in principle” became the leitmotif of the announcement, as the first EU Data Protection Authority to react to the announcement was the European Data Protection Supervisor, who wrote that he “Welcomes, in principle”, the announcement of a new EU-US transfers deal – “The details of the new agreement remain to be seen. However, EDPS stresses that a new framework for transatlantic data flows must be sustainable in light of requirements identified by the Court of Justice of the EU”.

Of note, there is no catchy name for the new transfers agreement, which was referred to as the “Trans-Atlantic Data Privacy Framework”. Nonetheless, FPF’s CEO Jules Polonetsky submits the “TA DA!” Agreement, and he has my vote. For his full statement on the political agreement being reached, see our release here.

Some details of the “principles” agreed on were published hours after the announcement, both by the White House and by the European Commission. Below are a couple of things that caught my attention from the two brief Factsheets.

The US has committed to “implement new safeguards” to ensure that SIGINT activities are “necessary and proportionate” (a legal standard under EU law – see Article 52 of the EU Charter on how the exercise of fundamental rights can be limited) in the pursuit of defined national security objectives. Therefore, the new agreement is expected to address the lack of safeguards for government access to personal data as specifically outlined by the CJEU in the Schrems II judgment.

The US also committed to creating a “new mechanism for the EU individuals to seek redress if they believe they are unlawfully targeted by signals intelligence activities”. This new mechanism was characterized by the White House as having “independent and binding authority”. Per the White House, this redress mechanism includes “a new multi-layer redress mechanism that includes an independent Data Protection Review Court that would consist of individuals chosen from outside the US Government who would have full authority to adjudicate claims and direct remedial measures as needed”. The EU Commission mentioned in its own Factsheet that this would be a “two-tier redress system”. 

Importantly, the White House mentioned in the Factsheet that oversight of intelligence activities will also be boosted – “intelligence agencies will adopt procedures to ensure effective oversight of new privacy and civil liberties standards”. Oversight and redress are different issues and are both equally important – for details, see this piece by Christopher Docksey. However, they tend to be thought of as being one and the same. Being addressed separately in this announcement is significant.

One of the remarkable things about the White House announcement is that it includes several EU law-specific concepts: “necessary and proportionate”, “privacy, data protection” mentioned separately, “legal basis” for data flows. In another nod to the European approach to data protection, the entire issue of ensuring safeguards for data flows is framed as more than a trade or commerce issue – with references to a “shared commitment to privacy, data protection, the rule of law, and our collective security as well as our mutual recognition of the importance of trans-Atlantic data flows to our respective citizens, economies, and societies”.

Last, but not least, Europeans have always framed their concerns related to surveillance and data protection as fundamental rights concerns. The US also gives a nod to this approach by referring a couple of times to “privacy and civil liberties” safeguards (thus adding the “civil liberties” dimension) that will be “strengthened”. All of these are positive signs of a “rapprochement” between the two legal systems and are certainly an improvement over the “commerce”-focused approach of the past on the US side. 

Finally, it should also be noted that the new framework will continue to be a self-certification scheme managed by the US Department of Commerce.

What does all of this mean in practice? As the White House details, this means that the Biden Administration will have to adopt (at least) an Executive Order (EO) that includes all these commitments and on the basis of which the European Commission will draft an adequacy decision.

Thus, there are great expectations in sight following the White House and European Commission Factsheets, and the entire privacy and data protection community is waiting to see further details.

In the meantime, I’ll leave you with an observation made by my colleague, Amie Stepanovich, VP for US Policy at FPF, who highlighted that Section 702 of the Foreign Intelligence Surveillance Act (FISA) is set to expire on December 31, 2023. This presents Congress with an opportunity to act, building on the extensive work done by the US Government in the context of the Transatlantic Data Transfers debate.

Privacy Best Practices for Rideshare Drivers Using Dashcams

FPF & Uber Publish Guide Highlighting Privacy Best Practices for Drivers who Record Video and Audio on Rideshare Journeys

FPF and Uber have created a guide for US-based rideshare drivers who install “dashcams” – video cameras mounted on a vehicle’s dashboard or windshield. Many drivers install dashcams to improve safety, security, and accountability; the cameras can capture crashes or other safety-related incidents outside and inside cars. Dashcam footage can be helpful to drivers, passengers, insurance companies, and others when adjudicating legal claims. At the same time, dashcams can pose substantial privacy risks if appropriate safeguards are not in place to limit the collection, use, and disclosure of personal data. 

Dashcams typically record video outside a vehicle. Many dashcams also record in-vehicle audio and some record in-vehicle video. Regardless of the particular device used, ride-hail drivers who use dashcams must comply with applicable audio and video recording laws.

The guide explains relevant laws and provides practical tips to help drivers be transparent, limit data use and sharing, retain video and audio only for practical purposes, and use strict security controls. The guide highlights ways that drivers can employ physical signs, in-app notices, and other means to ensure passengers are informed about dashcam use and can make meaningful choices about whether to travel in a dashcam-equipped vehicle. Drivers seeking advice concerning specific legal obligations or incidents should consult legal counsel.

Privacy best practices for dashcams include: 

  1. Give individuals notice that they are being recorded
    • Place recording notices inside and on the vehicle.
    • Mount the dashcam in a visible location.
    • Consider, in some situations, giving an oral notification that recording is taking place.
    • Determine whether the ride sharing service provides recording notifications in the app, and utilize those in-app notices.
  2. Only record audio and video for defined, reasonable purposes
    • Only keep recordings for as long as needed for the original purpose.
    • Inform passengers as to why video and/or audio is being recorded.
  3. Limit sharing and use of recorded footage
    • Only share video and audio with third parties for relevant reasons that align with the original reason for recording.
    • Thoroughly review the rideshare service’s privacy policy and community guidelines if using an app-based rideshare service, and be aware that many rideshare companies maintain policies against widely disseminating recordings.
  4. Safeguard and encrypt recordings and delete unused footage
    • Identify dashcam vendors that provide the highest privacy and security safeguards.
    • Carefully read the terms and conditions when buying dashcams to understand the data flows.

Uber will be making these best practices available to drivers in its app and on its website. 

Many ride-hail drivers use dashcams in their cars, and the guidance and best practices published today provide practical guidance to help drivers implement privacy protections. But driver guidance is only one aspect of ensuring individuals’ privacy and security when traveling. Dashcam manufacturers must implement privacy-protective practices by default and provide easy-to-use privacy options. At the same time, ride-hail platforms must provide drivers with the appropriate tools to notify riders, and carmakers must safeguard drivers’ and passengers’ data collected by OEM devices.

In addition, dashcams are only one example of increasingly sophisticated sensors appearing in passenger vehicles as part of driver monitoring systems and related technologies. Further work is needed to apply comprehensive privacy safeguards to emerging technologies across the connected vehicle sector, from carmakers and rideshare services to mobility services providers and platforms. Comprehensive federal privacy legislation would be a good start. And in the absence of Congressional action, FPF is doing further work to identify key privacy risks and mitigation strategies for the broader class of driver monitoring systems that raise questions about technologies beyond the scope of this dashcam guide.

12th Annual Privacy Papers for Policymakers Awardees Explore the Nature of Privacy Rights & Harms

The winners of the 12th annual Future of Privacy Forum (FPF) Privacy Papers for Policymakers Award ask big questions about what the foundational elements of data privacy and protection should be and who will make key decisions about the application of privacy rights. Their scholarship will inform policy discussions around the world about privacy harms, corporate responsibilities, oversight of algorithms, and biometric data, among other topics.

“Policymakers and regulators in many countries are working to advance data protection laws, often seeking in particular to combat discrimination and unfairness,” said FPF CEO Jules Polonetsky. “FPF is proud to highlight independent researchers tackling big questions about how individuals and society relate to technology and data.”

This year’s papers also explore smartphone platforms as privacy regulators, the concept of data loyalty, and global privacy regulation. The award recognizes leading privacy scholarship that is relevant to policymakers in the U.S. Congress, at U.S. federal agencies, and among international data protection authorities. The winning papers will be presented at a virtual event on February 10, 2022. 

The winners of the 2022 Privacy Papers for Policymakers Award are:

From the record number of nominated papers submitted this year, these six papers were selected by a diverse team of academics, advocates, and industry privacy professionals from FPF’s Advisory Board. The winning papers were selected for presenting research and solutions that are relevant to policymakers and regulators in the U.S. and abroad.

In addition to the winning papers, FPF has selected two papers for Honorable Mention: Verification Dilemmas and the Promise of Zero-Knowledge Proofs by Kenneth Bamberger, University of California, Berkeley – School of Law; Ran Canetti, Boston University, Department of Computer Science, Boston University, Faculty of Computing and Data Science, Boston University, Center for Reliable Information Systems and Cybersecurity; Shafi Goldwasser, University of California, Berkeley – Simons Institute for the Theory of Computing; Rebecca Wexler, University of California, Berkeley – School of Law; and Evan Zimmerman, University of California, Berkeley – School of Law; and A Taxonomy of Police Technology’s Racial Inequity Problems by Laura Moy, Georgetown University Law Center.

FPF also selected a paper for the Student Paper Award, A Fait Accompli? An Empirical Study into the Absence of Consent to Third Party Tracking in Android Apps by Konrad Kollnig and Reuben Binns, University of Oxford; Pierre Dewitte, KU Leuven; Max van Kleek, Ge Wang, Daniel Omeiza, Helena Webb, and Nigel Shadbolt, University of Oxford. The Student Paper Award Honorable Mention was awarded to Yeji Kim, University of California, Berkeley – School of Law, for her paper, Virtual Reality Data and Its Privacy Regulatory Challenges: A Call to Move Beyond Text-Based Informed Consent.

The winning authors will join FPF staff to present their work at a virtual event with policymakers from around the world, academics, and industry privacy professionals. The event will be held on February 10, 2022, from 1:00 – 3:00 PM EST. The event is free and open to the general public. To register for the event, visit https://bit.ly/3qmJdL2.

Organizations must lead with privacy and ethics when researching and implementing neurotechnology: FPF and IBM Live event and report release

The Future of Privacy Forum (FPF) and the IBM Policy Lab released recommendations for promoting privacy and mitigating risks associated with neurotechnology, specifically with brain-computer interface (BCI). The new report provides developers and policymakers with actionable ways this technology can be implemented while protecting the privacy and rights of its users.

“We have a prime opportunity now to implement strong privacy and human rights protections as brain-computer interfaces become more widely used,” said Jeremy Greenberg, Policy Counsel at the Future of Privacy Forum. “Among other uses, these technologies have tremendous potential to treat people with diseases and conditions like epilepsy or paralysis and make it easier for people with disabilities to communicate, but these benefits can only be fully realized if meaningful privacy and ethical safeguards are in place.”

Brain-computer interfaces are computer-based systems that are capable of directly recording, processing, analyzing, or modulating human brain activity. The sensitivity of data that BCIs collect and the capabilities of the technology raise concerns over consent, as well as the transparency, security, and accuracy of the data. The report offers a number of policy and technical solutions to mitigate the risks of BCIs and highlights their positive uses.

“Emerging innovations like neurotechnology hold great promise to transform healthcare, education, transportation, and more, but they need the right guardrails in place to protect individuals’ privacy,” said IBM Chief Privacy Officer Christina Montgomery. “Working together with the Future of Privacy Forum, the IBM Policy Lab is pleased to release a new framework to help policymakers and businesses navigate the future of neurotechnology while safeguarding human rights.”

FPF and IBM have outlined several key policy recommendations to mitigate the privacy risks associated with BCIs, including:

FPF and IBM have also included several technical recommendations for BCI devices, including:

FPF-curated educational resources, policy & regulatory documents, academic papers, thought pieces, and technical analyses regarding brain-computer interfaces are available here.

Read FPF’s four-part series on Brain-Computer Interfaces (BCIs), providing an overview of the technology, use cases, privacy risks, and proposed recommendations for promoting privacy and mitigating risks associated with BCIs.

FPF Launches Asia-Pacific Region Office, Global Data Protection Expert Clarisse Girot Leads Team

The Future of Privacy Forum (FPF) has appointed Clarisse Girot, PhD, LLM, an expert on Asian and European privacy legislation, as Director of its new FPF Asia-Pacific office, based in Singapore. This new office expands FPF’s international reach in Asia and complements FPF’s offices in the U.S., Europe, and Israel, as well as partnerships around the globe.
 
Dr. Clarisse Girot is a privacy professional with over twenty years of experience in the privacy and data protection fields. Since 2017, Clarisse has been leading the Asian Business Law Institute’s (ABLI) Data Privacy Project, focusing on regulations governing cross-border data transfers in 14 Asian jurisdictions. Prior to her time at ABLI, Clarisse served as Counsellor to the President of the French Data Protection Authority (CNIL), who also chaired the Article 29 Working Party. She previously served as head of CNIL’s Department of European and International Affairs, where she sat on the Article 29 Working Party, the group of EU Data Protection Authorities, and was involved in major international cases in data protection and privacy.
 
“Clarisse is joining FPF at an important time for data protection in the Asia-Pacific region. The two most populous countries in the world, India and China, are introducing general privacy laws, and established data protection jurisdictions, like Singapore, Japan, South Korea, and New Zealand, have recently updated their laws,” said FPF CEO Jules Polonetsky. “Her extensive knowledge of privacy law will provide vital insights for those interested in compliance with regional privacy frameworks and their evolution over time.”
 
FPF Asia-Pacific will focus on several priorities through the end of the year, including hosting an event at this year’s Singapore Data Protection Week. The office will provide expertise in digital data flows and discuss emerging data protection issues in a way that is useful for regulators, policymakers, and legal professionals. Rajah & Tann Singapore LLP is supporting the work of the FPF Asia-Pacific office.
 
“The FPF global team will greatly benefit from the addition of Clarisse. She will advise FPF staff, advisory board members, and the public on the most significant privacy developments in the Asia-Pacific region, including data protection bills and cross-border data flows,” said Gabriela Zanfir-Fortuna, Director for Global Privacy at FPF. “Her past experience in both Asia and Europe gives her a unique ability to confront the most complex issues dealing with cross-border data protection.”
 
As over 140 countries have now enacted a privacy or data protection law, FPF continues to expand its international presence to help data protection experts grapple with the challenges of ensuring responsible uses of data. Following the appointment of Malavika Raghavan as Senior Fellow for India in 2020, the launch of the FPF Asia-Pacific office further expands FPF’s international reach.
 
Dr. Gabriela Zanfir-Fortuna leads FPF’s international efforts and works on global privacy developments and European data protection law and policy. The FPF Europe office is led by Dr. Rob van Eijk, who prior to joining FPF worked at the Dutch Data Protection Authority as Senior Supervision Officer and Technologist for nearly ten years. FPF has created thriving partnerships with leading privacy research organizations in the European Union, such as Dublin City University and the Brussels Privacy Hub of the Vrije Universiteit Brussel (VUB). FPF continues to serve as a leading voice in Europe on issues of international data flows, the ethics of AI, and emerging privacy issues. FPF Europe recently published a report comparing the regulatory strategy for 2021-2022 of 15 Data Protection Authorities to provide insights into the future of enforcement and regulatory action in the EU.
 
Outside of Europe, FPF has launched a variety of projects to advance tech policy leadership and scholarship in regions around the world, including Israel and Latin America. The work of the Israel Tech Policy Institute (ITPI), led by Managing Director Limor Shmerling Magazanik, includes publishing a report on AI Ethics in Government Services and organizing an OECD workshop with the Israeli Ministry of Health on access to health data for research.
 
In Latin America, FPF has partnered with the leading research association Data Privacy Brasil and provided in-depth analysis of Brazil’s LGPD privacy legislation and of various data privacy cases decided by the Brazilian Supreme Court. FPF recently organized a panel during the CPDP LatAm Conference which explored the state of Latin American data protection laws alongside experts from Uber, the University of Brasilia, and the Interamerican Institute of Human Rights.
 

Read Dr. Girot’s Q&A on the FPF blog. Stay updated: Sign up for FPF Asia-Pacific email alerts.
 

FPF and Leading Health & Equity Organizations Issue Principles for Privacy & Equity in Digital Contact Tracing Technologies

With support from the Robert Wood Johnson Foundation, FPF engaged leaders within the privacy and equity communities to develop actionable guiding principles and a framework to help bolster the responsible implementation of digital contact tracing technologies (DCTT). Today, seven privacy, civil rights, and health equity organizations signed on to these guiding principles for organizations implementing DCTT.

“We learned early in our Privacy and Pandemics initiative that unresolved ethical, legal, social, and equity issues may challenge the responsible implementation of digital contact tracing technologies,” said Jules Polonetsky, CEO of the Future of Privacy Forum. “So we engaged leaders within the civil rights, health equity, and privacy communities to create a set of actionable principles to help guide organizations implementing digital contact tracing that respects individual rights.”

Contact tracing has long been used to monitor the spread of various infectious diseases. In light of COVID-19, governments and companies began deploying digital exposure notification using Bluetooth and geolocation data on mobile devices to boost contact tracing efforts and quickly identify individuals who may have been exposed to the virus. However, as DCTT begins to play an important role in public health, it is important to take necessary steps to ensure equity in access to DCTT and understand the societal risks and tradeoffs that might accompany its implementation today and in the future. Governance efforts that seek to better understand these risks will be better able to bolster public trust in DCTT technologies. 

“LGBT Tech is proud to have participated in the development of the Principles and Framework alongside FPF and other organizations. We are heartened to see that the focus of these principles is on historically underserved and under-resourced communities everywhere, like the LGBTQ+ community. We believe the Principles and Framework will help ensure that the needs and vulnerabilities of these populations are at the forefront during today’s pandemic and future pandemics.”

Carlos Gutierrez, Deputy Director and General Counsel, LGBT Tech

“If we establish practices that protect individual privacy and equity, digital contact tracing technologies could play a pivotal role in tracking infectious diseases,” said Dr. Rachele Hendricks-Sturrup, Research Director at the Duke-Margolis Center for Health Policy. “These principles allow organizations implementing digital contact tracing to take ethical and responsible approaches to how their technology collects, tracks, and shares personal information.”

FPF, together with Dialogue on Diversity, the National Alliance Against Disparities in Patient Health (NADPH), BrightHive, and LGBT Tech, developed the principles, which advise organizations implementing DCTT to commit to the following actions:

  1. Be Transparent About How Data Is Used and Shared.
  2. Apply Strong De-Identification Techniques and Solutions.
  3. Empower Users Through Tiered Opt-in/Opt-out Features and Data Minimization.
  4. Acknowledge and Address Privacy, Security, and Nondiscrimination Protection Gaps.
  5. Create Equitable Access to DCTT.
  6. Acknowledge and Address Implicit Bias Within and Across Public and Private Settings.
  7. Democratize Data for Public Good While Employing Appropriate Privacy Safeguards.
  8. Adopt Privacy-By-Design Standards That Make DCTT Broadly Accessible.

Additional supporters of these principles include the Center for Democracy and Technology and Human Rights First.

To learn more and sign on to the DCTT Principles visit fpf.org/DCTT.

Support for this program was provided by the Robert Wood Johnson Foundation. The views expressed here do not necessarily reflect the views of the Foundation.

Navigating Preemption through the Lens of Existing State Privacy Laws

This post is the second of two posts on federal preemption and enforcement in United States federal privacy legislation. See Preemption in US Privacy Laws (June 14, 2021).

In drafting a federal baseline privacy law in the United States, lawmakers must decide to what extent the law will override state and local privacy laws. In a previous post, we discussed a survey of 12 existing federal privacy laws passed between 1968 and 2003, and the extent to which they preempt similar state laws. 

Another way to approach the same question, however, is to examine the hundreds of existing state privacy laws currently on the books in the United States. Conversations around federal preemption inevitably focus on comprehensive laws like the California Consumer Privacy Act, or the Virginia Consumer Data Protection Act — but there are hundreds of other state privacy laws on the books that regulate commercial and government uses of data. 

In reviewing existing state laws, we find that they can be categorized usefully into: laws that complement heavily regulated sectors (such as health and finance); laws of general applicability; common law; laws governing state government activities (such as schools and law enforcement); comprehensive laws; longstanding or narrowly applicable privacy laws; and emerging sectoral laws (such as biometrics or drones regulations). As a resource, we recommend: Robert Ellis Smith, Compilation of State and Federal Privacy Laws (last supplemented in 2018). 

  1. Heavily Regulated Sectoral Silos. Most federal proposals for a comprehensive privacy law would not supersede other existing federal laws that contain privacy requirements for businesses, such as the Health Insurance Portability and Accountability Act (HIPAA) or the Gramm-Leach-Bliley Act (GLBA). As a result, a new privacy law should probably not preempt state sectoral laws that: (1) supplement their federal counterparts and (2) were intentionally not preempted by those federal regimes. In many cases, robust compliance regimes have been built around federal and state parallel requirements, creating entrenched privacy expectations, privacy tools, and compliance practices for organizations (“lock in”).
  2. Laws of General Applicability. All 50 states have laws barring unfair and deceptive commercial and trade practices (UDAP), as well as generally applicable laws against fraud, unconscionable contracts, and other consumer protections. In cases where violations involve the misuse of personal information, such claims could be inadvertently preempted by a national privacy law.
  3. State Common Law. Privacy claims have been evolving in US common law over the last hundred years, and claims vary from state to state. A federal privacy law might preempt (or not preempt) claims brought under theories of negligence, breach of contract, product liability, invasions of privacy, or other “privacy torts.”
  4. State Laws Governing State Government Activities. In general, states retain the right to regulate their own government entities, and a commercial baseline privacy law is unlikely to affect such state privacy laws. These include, for example, state “mini Privacy Acts” applying to state government agencies’ collection of records, state privacy laws applicable to public schools and school districts, and state regulations involving law enforcement — such as government facial recognition bans.
  5. Comprehensive or Non-Sectoral State Laws. Lawmakers considering the extent of federal preemption should take extra care to consider the effect on different aspects of omnibus or comprehensive consumer privacy laws, such as the California Consumer Privacy Act (CCPA), the Colorado Privacy Act, and the Virginia Consumer Data Protection Act. In addition, however, there are a number of other state privacy laws that can be considered “non-sectoral” because they apply broadly to businesses that collect or use personal information. These include, for example, CalOPPA (requiring commercial privacy policies), the California “Shine the Light” law (requiring disclosures from companies that share personal information for direct marketing), data breach notification laws, and data disposal laws.
  6. Longstanding, Narrowly Applicable State Privacy Laws. Many states have relatively long-standing privacy statutes on the books that govern narrow use cases, such as: state laws governing library records, social media password laws, mugshot laws, anti-paparazzi laws, state laws governing audio surveillance between private parties, and laws governing digital assets of decedents. In many cases, such laws could be expressly preserved or incorporated into a federal law.
  7. Emerging Sectoral and Future-Looking Privacy Laws. New state laws have emerged in recent years in response to novel concerns, including for: biometric data; drones; connected and autonomous vehicles; the Internet of Things; data broker registration; and disclosure of intimate images. This trend is likely to continue, particularly in the absence of a federal law.

Congressional intent is the “ultimate touchstone” of preemption. Lawmakers should consider long-term effects on current and future state laws, including how they will be impacted by a preemption provision, as well as how they might be expressly preserved through a Savings Clause. In order to help build consensus, lawmakers should work with stakeholders and experts in the numerous categories of laws discussed above, to consider how they might be impacted by federal preemption.

ICYMI: Read the first blog in this series, Preemption in US Privacy Laws.

Manipulative Design: Defining Areas of Focus for Consumer Privacy

In consumer privacy, the phrase “dark patterns” is everywhere. Emerging from a wide range of technical and academic literature, it now appears in at least two US privacy laws: the California Privacy Rights Act and the Colorado Privacy Act (which, if signed by the Governor, will come into effect in 2023).

Under both laws, companies will be prohibited from using “dark patterns,” or “user interface[s] designed or manipulated with the substantial effect of subverting or impairing user autonomy, decision‐making, or choice,” to obtain user consent in certain situations–for example, for the collection of sensitive data.

When organizations give individuals choices, some forms of manipulation have long been barred by consumer protection laws, with the Federal Trade Commission and state Attorneys General prohibiting companies from deceiving or coercing consumers into taking actions they did not intend or striking bargains they did not want. But consumer protection law does not typically prohibit organizations from persuading consumers to make a particular choice. And it is often unclear where the lines fall between cajoling, persuading, pressuring, nagging, annoying, or bullying consumers. The California and Colorado laws seek to do more than merely bar deceptive practices; they prohibit design that “subverts or impairs user autonomy.”

What does it mean to subvert user autonomy, if a design does not already run afoul of traditional consumer protections law? Just as in the physical world, the design of digital platforms and services always influences behavior — what to pay attention to, what to read and in what order, how much time to spend, what to buy, and so on. To paraphrase Harry Brignull (credited with coining the term), not everything “annoying” can be a dark pattern. Some examples of dark patterns are both clear and harmful, such as a design that tricks users into making recurring payments, or a service that offers a “free trial” and then makes it difficult or impossible to cancel. In other cases, the presence of “nudging” may be clear, but harms may be less clear, such as in beta-testing what color shades are most effective at encouraging sales. Still others fall in a legal grey area: for example, is it ever appropriate for a company to repeatedly “nag” users to make a choice that benefits the company, with little or no accompanying benefit to the user?

In Fall 2021, the Future of Privacy Forum will host a series of workshops with technical, academic, and legal experts to help define clear areas of focus for consumer privacy, and guidance for policymakers and legislators. These workshops will feature experts on manipulative design in at least three contexts of consumer privacy: (1) Youth & Education; (2) Online Advertising and US Law; and (3) GDPR and European Law. 

As lawmakers address this issue, we identify at least four distinct areas of concern:

This week at the first edition of the annual Dublin Privacy Symposium, FPF will join other experts to discuss principles for transparency and trust. The design of user interfaces for digital products and services pervades modern life and directly impacts the choices people make with respect to sharing their personal information. 

India’s new Intermediary & Digital Media Rules: Expanding the Boundaries of Executive Power in Digital Regulation


Author: Malavika Raghavan

India’s new rules on intermediary liability and regulation of publishers of digital content have generated significant debate since their release in February 2021. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (the Rules) have:

The majority of these provisions were unanticipated, resulting in a raft of petitions filed in High Courts across the country challenging the validity of various aspects of the Rules, including with regard to their constitutionality. On 25 May 2021, the three-month compliance period for some new requirements for significant social media intermediaries (so designated by the Rules) expired without many intermediaries being in compliance, opening them up to liability under the Information Technology Act as well as wider civil and criminal laws. This has reignited debates about the impact of the Rules on business continuity and liability, citizens’ access to online services, and privacy and security. 

Following FPF’s previous blog post highlighting some aspects of these Rules, this article presents an overview of the Rules before diving into critical issues regarding their interpretation and application in India. It concludes by taking stock of some of the emerging effects of these new regulations, which have major implications for millions of Indian users as well as for digital service providers serving the Indian market. 

1. Brief overview of the Rules: Two new regimes for ‘intermediaries’ and ‘publishers’ 

The new Rules create two regimes for two different categories of entities: ‘intermediaries’ and ‘publishers’.  Intermediaries have been the subject of prior regulations – the Information Technology (Intermediaries guidelines) Rules, 2011 (the 2011 Rules), now superseded by these Rules. However, the category of “publishers” and related regime created by these Rules did not previously exist. 

The Rules begin with commencement provisions and definitions in Part I. Part II of the Rules applies to intermediaries (as defined in the Information Technology Act 2000 (IT Act)) that transmit electronic records on behalf of others, including online intermediary platforms (like Youtube, Whatsapp, Facebook). The rules in this part primarily flesh out the protections offered in Section 79 of the IT Act, which gives passive intermediaries the benefit of a ‘safe harbour’ from liability for objectionable information shared by third parties using their services, somewhat akin to the protections under Section 230 of the US Communications Decency Act. To claim this protection from liability, intermediaries need to undertake certain ‘due diligence’ measures, including informing users of the types of content that cannot be shared, and following content take-down procedures (for which safeguards evolved over time through important case law). The new Rules supersede the 2011 Rules and also significantly expand on them, introducing new provisions and additional due diligence requirements that are detailed further in this blog. 

Part III of the Rules applies to a new, previously non-existent category of entities designated as ‘publishers’, further classified into the subcategories of ‘publishers of news and current affairs content’ and ‘publishers of online curated content’. Part III then sets up extensive requirements for publishers: adherence to specific codes of ethics, onerous content take-down requirements, and a three-tier grievance process with appeals lying to an Executive Inter-Departmental Committee of Central Government bureaucrats. 

Finally, the Rules contain two provisions relating to content-blocking orders that apply to all entities (i.e. intermediaries and publishers). They lay out a new process by which Central Government officials can issue directions to intermediaries and publishers to delete, modify, or block content, either following a grievance process (Rule 15) or through “emergency” blocking orders which may be passed ex parte. These provisions stem from the power to issue directions to intermediaries to block public access to any information through any computer resource (Section 69A of the IT Act). Interestingly, they have been introduced separately from the existing rules for blocking, the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009.

2. Key issues for intermediaries under the Rules

2.1 A new class of ‘social media intermediaries’

The term ‘intermediary’ is broadly defined in the IT Act, covering a range of entities involved in the transmission of electronic records. The Rules introduce two new sub-categories:

Given that a popular messaging app like Whatsapp has over 400 million users in India, the threshold appears to be fairly conservative. The Government may order any intermediary to comply with the same obligations as SSMIs (under Rule 6) if its services are adjudged to pose a risk of harm to national security, the sovereignty and integrity of India, India’s foreign relations, or public order. 

SSMIs have to follow substantially more onerous “additional due diligence” requirements to claim the intermediary safe harbour (including mandatory traceability of message originators and proactive automated screening, as discussed below). These new requirements raise privacy and data security concerns: they extend beyond the traditional ideas of platform “due diligence”, potentially expose the content of private communications, and in doing so create new privacy risks for users in India. 

2.2 Additional requirements for SSMIs: resident employees, mandated message traceability, automated content screening 

Extensive new requirements are set out in the new Rule 4 for SSMIs. 

Provisions mandating modifications to the technical design of encrypted platforms to enable traceability seem to go beyond merely requiring intermediary due diligence. Instead, they appear to draw on separate Government powers relating to interception and decryption of information (under Section 69 of the IT Act). In addition, separate stand-alone rules laying out procedures and safeguards for such interception and decryption orders already exist in the Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009. Rule 4(2) even acknowledges these provisions, raising the question of whether these Rules (relating to intermediaries and their safe harbours) can be used to expand the scope of Section 69 or the rules thereunder. 

Proceedings initiated by Whatsapp LLC in the Delhi High Court and by Free and Open Source Software (FOSS) developer Praveen Arimbrathodiyil in the Kerala High Court have both challenged the legality and validity of Rule 4(2), on grounds including that it is ultra vires, going beyond the scope of its parent statutory provisions (Sections 79 and 69A) and the intent of the IT Act itself. Substantively, the provision is also challenged on the basis that it would violate users’ fundamental rights, including the right to privacy and the right to free speech and expression, due to the chilling effect that the stripping back of encryption would have.

Though the objective of the provision is laudable (i.e. to limit the circulation of violent or previously removed content), the move towards proactive automated monitoring has raised serious concerns regarding censorship on social media platforms. Rule 4(4) appears to acknowledge the deep tensions this requirement creates with privacy and free speech, as seen in the provisions that require these screening measures to be proportionate to the free speech and privacy interests of users, to be subject to human oversight, and to undergo reviews of the automated tools for fairness, accuracy, propensity for bias or discrimination, and impact on privacy and security. However, given the vagueness of this wording compared to the trade-off of losing intermediary immunity, scholars and commentators have noted the obvious potential for ‘over-compliance’ and excessive screening out of content. Many (including the petitioner in the Praveen Arimbrathodiyil matter) have also noted that automated filters are not sophisticated enough to differentiate between violent unlawful images and legitimate journalistic material. The concern is that such measures could lead to large-scale screening out of ‘valid’ speech and expression, with serious consequences for constitutional rights to free speech and expression, which also protect ‘the rights of individuals to listen, read and receive the said speech’ (Tata Press Ltd v. Mahanagar Telephone Nigam Ltd, (1995) 5 SCC 139).

Such requirements appear to be aimed at creating more user-friendly networks of intermediaries. However, imposing a single set of requirements is especially onerous for smaller or volunteer-run intermediary platforms, which may not have the income streams or staff to provide such a mechanism. Indeed, the petition in the Praveen Arimbrathodiyil matter has challenged certain of these requirements as a threat to the future of the volunteer-led FOSS movement in India, since they place the same requirements on small FOSS initiatives as on large proprietary Big Tech intermediaries.

Other obligations that stipulate turn-around times for intermediaries include (i) a requirement to remove or disable access to content within 36 hours of receiving a Government or court order relating to unlawful information on the intermediary’s computer resources (under Rule 3(1)(d)), and (ii) a requirement to provide information within 72 hours of receiving an order from an authorised Government agency undertaking investigative activity (under Rule 3(1)(j)).

Similar to the concerns with automated screening, there are concerns that the new grievance process could lead to private entities becoming the arbiters of appropriate content and free speech, a position that was specifically reversed in a seminal 2015 Supreme Court decision which clarified that a Government or court order was needed for content takedowns.

3. Key issues for the new ‘publishers’ subject to the Rules, including OTT players

3.1 New Codes of Ethics and three-tier redress and oversight system for digital news media and OTT players 

Digital news media and OTT players have been designated as ‘publishers of news and current affairs content’ and ‘publishers of online curated content’ respectively in Part III of the Rules. Each category is then subject to a separate Code of Ethics. In the case of digital news media, the Codes applicable to newspapers and cable television have been applied. For OTT players, the Appendix sets out principles regarding the content that can be created, as well as display classifications. To enforce these codes and to address grievances from the public about their content, publishers are now mandated to set up a grievance system, which forms the first tier of a three-tier “appellate” system culminating in an oversight mechanism run by the Central Government with extensive powers of sanction.

At least five legal challenges have been filed in various High Courts challenging the competence and authority of the Ministry of Electronics & Information Technology (MeitY) to pass the Rules, as well as their validity, namely: (i) in the Kerala High Court, LiveLaw Media Private Limited vs Union of India WP(C) 6272/2021; in the Delhi High Court, three petitions tagged together, being (ii) Foundation for Independent Journalism vs Union of India WP(C) 3125/2021, (iii) Quint Digital Media Limited vs Union of India WP(C) 11097/2021, and (iv) Sanjay Kumar Singh vs Union of India and others WP(C) 3483/2021; and (v) in the Karnataka High Court, Truth Pro Foundation of India vs Union of India and others, W.P. 6491/2021. This is in addition to a fresh petition filed on 10 June 2021, TM Krishna vs Union of India, challenging the entirety of the Rules (both Parts II and III) on the basis that they violate the rights of free speech (Article 19 of the Constitution) and privacy (including under Article 21 of the Constitution), and that they fail the test of arbitrariness (under Article 14) as they are manifestly arbitrary and fall foul of principles of delegation of powers.

Some of the key issues emerging from these Rules in Part III and the challenges to them are highlighted below. 

3.2 Lack of legal authority and competence to create these Rules

There has been substantial debate on the lack of clarity regarding the legal authority of the Ministry of Electronics & Information Technology (MeitY) under the IT Act. These concerns arise at various levels. 

First, there is a concern that Levels I and II result in a privatisation of adjudications relating to the free speech and expression of creative content producers, which would otherwise be litigated in Courts and Tribunals as matters of free speech. As noted by many (including the LiveLaw petition at page 33), this could have the effect of overturning the judicial precedent in Shreya Singhal v. Union of India ((2013) 12 S.C.C. 73), which specifically read down Section 79 of the IT Act to avoid a situation where private entities were the arbiters determining the legitimacy of takedown orders. Second, despite referring to “self-regulation”, this system is subject to executive oversight (unlike the existing models for offline newspapers and broadcasting).

The Inter-Departmental Committee is composed entirely of Central Government bureaucrats, and it may review complaints escalated through the three-tier system or referred directly by the Ministry, following which it can deploy a range of sanctions, from warnings, to mandated apologies, to deleting, modifying or blocking content. This raises the question of whether the Committee meets the legal requirements for an administrative body undertaking a ‘quasi-judicial’ function, especially one that may adjudicate on rights relating to free speech and privacy. Finally, while the objective of creating standards and codes for such content creators may be laudable, it is unclear whether such an extensive oversight mechanism, with powers of sanction over online publishers, can validly be created under the rubric of intermediary liability provisions.

4. New powers to delete, modify or block information for public access 

As described at the start of this blog, the Rules add new powers for the deletion, modification and blocking of content from intermediaries and publishers. While Section 69A of the IT Act (and the Rules thereunder) does include blocking powers for the Government, these exist only vis-à-vis intermediaries. Rule 15 extends this power to ‘publishers’. It also provides a new avenue for such orders to intermediaries, outside of the existing rules for blocking information under the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009.

Graver concerns arise from Rule 16, which allows emergency orders blocking information to be passed without giving publishers or intermediaries an opportunity of hearing. There is a provision for such an order to be reviewed by the Inter-Departmental Committee within two days of its issue.

Both Rules 15 and 16 apply to all entities contemplated in the Rules. Accordingly, they greatly expand executive power and oversight over digital media services in India, including social media, digital news media and OTT on-demand services.

5. Conclusions and future implications

The new Rules in India have opened up deep questions for online intermediaries and providers of digital media services serving the Indian market. 

For intermediaries, this creates a difficult and even existential choice: the requirements (especially those relating to traceability and automated screening) appear to set an improbably high bar given the reality of their technical systems. However, failure to comply not only results in the loss of the safe harbour from liability but, as seen in the new Rule 7, also opens them up to punishment under the IT Act and criminal law in India.

For digital news and OTT players, the consequences of non-compliance and the level of enforcement remain to be understood, especially given the open questions regarding the validity of the legal basis for creating these Rules. Given the numerous petitions filed against them, there is also substantial uncertainty regarding their future, although the Rules themselves have the full force of law at present.

Overall, it does appear that attempts to create a ‘digital media’ watchdog would be better dealt with in standalone legislation, potentially sponsored by the Ministry of Information and Broadcasting (MIB), which has the traditional remit over such areas. Indeed, MeitY has delegated the administration of Part III of the Rules to MIB, pointing to the genuine split in competence between these Ministries.

Finally, potential overlaps with India’s proposed Personal Data Protection Bill (if passed) create further tensions. It remains to be seen whether the provisions on traceability will survive the test of constitutional validity set out in India’s privacy judgement (Justice K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1). Irrespective of this determination, the Rules appear to have some dissonance with the data retention and data minimisation requirements in the last draft of the Personal Data Protection Bill, not to mention other obligations relating to Privacy by Design and data security safeguards. Interestingly, the definition of ‘social media intermediary’ included in an explanatory clause to section 26(4) of the Bill (released in December 2019) closely tracks the definition in Rule 2(w), but departs from it by carving out certain intermediaries. This is already resulting in moves such as Google’s plea of 2 June 2021 in the Delhi High Court asking for protection from being declared a social media intermediary.

These new Rules have laid bare the inherent tensions within digital regulation between the goals of freedom of speech and expression and the right to privacy on the one hand, and competing governance objectives of law enforcement (such as limiting the circulation of violent, harmful or criminal content online) and national security on the other. The ultimate legal effect of these Rules will be determined as much by the outcome of the various petitions challenging their validity as by the enforcement challenges of casting such a wide net over millions of users and thousands of entities, all engaged in creating India’s growing digital public sphere.

Photo credit: Gerd Altmann from Pixabay

Read more Global Privacy thought leadership:

South Korea: The First Case where the Personal Information Protection Act was Applied to an AI System

China: New Draft Car Privacy and Security Regulation is Open for Public Consultation

A New Era for Japanese Data Protection: 2020 Amendments to the APPI

New FPF Report Highlights Privacy Tech Sector Evolving from Compliance Tools to Platforms for Risk Management and Data Utilization

As we enter the third phase of development of the privacy tech market, purchasers are demanding more integrated solutions, product offerings are more comprehensive, and startup valuations are higher than ever, according to a new report from the Future of Privacy Forum and Privacy Tech Alliance. These factors are leading to companies providing a wider range of services, acting as risk management platforms, and focusing on support of business outcomes.

“The privacy tech sector is at an inflection point, as its offerings have expanded beyond assisting with regulatory compliance,” said FPF CEO Jules Polonetsky. “Increasingly, companies want privacy tech to help businesses maximize the utility of data while managing ethics and data protection compliance.”

According to the report, “Privacy Tech’s Third Generation: A Review of the Emerging Privacy Tech Sector,” regulations are often the biggest driver for buyers’ initial privacy tech purchases. Organizations also are deploying tools to mitigate potential harms from the use of data. However, buyers serving global markets increasingly need privacy tech that offers data availability and control and supports its utility, in addition to regulatory compliance. 

The report finds the COVID-19 pandemic has accelerated global marketplace adoption of privacy tech as dependence on digital technologies grows. Privacy is becoming a competitive differentiator in some sectors, and TechCrunch reports that 200+ privacy startups have together raised more than $3.5 billion over hundreds of individual rounds of funding. 

“The customers buying privacy-enhancing tech used to be primarily Chief Privacy Officers,” said report lead author Tim Sparapani. “Now it’s also Chief Marketing Officers, Chief Data Scientists, and Strategy Officers who value the insights they can glean from de-identified customer data.”

The report highlights five trends in the privacy enhancing tech market:

The report also draws seven implications for competition in the market:

The report makes a series of recommendations, including that the industry define as a priority a common vernacular for privacy tech; set standards for technologies in the “privacy stack” such as differential privacy, homomorphic encryption, and federated learning; and explore the needs of companies for privacy tech based upon their size, sector, and structure. It calls on vendors to recognize the need to provide adequate support to customers to increase uptake and speed time from contract signing to successful integration.

The Future of Privacy Forum launched the Privacy Tech Alliance (PTA) as a global initiative with a mission to define, enhance and promote the market for privacy technologies. The PTA brings together innovators in privacy tech with customers and key stakeholders.

Members of the PTA Advisory Board, which includes Anonos, BigID, D-ID, Duality, Ethyca, Immuta, OneTrust, Privacy Analytics, Privitar, SAP, Truata, TrustArc, Wirewheel, and ZL Tech, have formed a working group to address impediments to growth identified in the report. The PTA working group will define a common vernacular and typology for privacy tech as a priority project with chief privacy officers and other industry leaders who are members of FPF. Other work will seek to develop common definitions and standards for privacy-enhancing technologies such as differential privacy, homomorphic encryption, and federated learning and identify emerging trends for venture capitalists and other equity investors in this space. Privacy Tech companies can apply to join the PTA by emailing [email protected].


Perspectives on the Privacy Tech Market

Quotes from Members of the Privacy Tech Alliance Advisory Board on the Release of the “Privacy Tech’s Third Generation” Report

“The ‘Privacy Tech Stack’ outlined by the FPF is a great way for organizations to view their obligations and opportunities to assess and reconcile business and privacy objectives. The Schrems II decision by the Court of Justice of the European Union highlights that skipping the second ‘Process’ layer can result in desired ‘Outcomes’ in the third layer (e.g., cloud processing of, or remote access to, cleartext data) being unlawful – despite their global popularity – without adequate risk management controls for decentralized processing.” — Gary LaFever, CEO & General Counsel, Anonos

“As a founding member of this global initiative, we are excited by the conclusions drawn from this foundational report – we’ve seen parallels in our customer base, from needing an enterprise-wide solution to the rich opportunity for collaboration and integration. The privacy tech sector continues to mature as does the imperative for organizations of all sizes to achieve compliance in light of the increasingly complicated data protection landscape.” — Heather Federman, VP Privacy and Policy at BigID

“There is no doubt of the massive importance of the privacy sector, an area which is experiencing huge growth. We couldn’t be more proud to be part of the Privacy Tech Alliance Advisory Board and absolutely support the work they are doing to create alignment in the industry and help it face the current set of challenges. In fact we are now working on a similar initiative in the synthetic media space to ensure that ethical considerations are at the forefront of that industry too.” — Gil Perry, Co-Founder & CEO, D-ID

“We congratulate the Future of Privacy Forum and the Privacy Tech Alliance on the publication of this highly comprehensive study, which analyzes key trends within the rapidly expanding privacy tech sector. Enterprises today are increasingly reliant on privacy tech, not only as a means of ensuring regulatory compliance but also in order to drive business value by facilitating secure collaborations on their valuable and often sensitive data. We are proud to be part of the PTA Advisory Board, and look forward to contributing further to its efforts to educate the market on the importance of privacy-tech, the various tools available and their best utilization, ultimately removing barriers to successful deployments of privacy-tech by enterprises in all industry sectors” — Rina Shainski, Chairwoman, Co-founder, Duality

“Since the birth of the privacy tech sector, we’ve been helping companies find and understand the data they have, compare it against applicable global laws and regulations, and remediate any gaps in compliance. But as the industry continues to evolve, privacy tech also is helping show business value beyond just compliance. Companies are becoming more transparent, differentiating on ethics and ESG, and building businesses that differentiate on trust. The privacy tech industry is growing quickly because we’re able to show value for compliance as well as actionable business insights and valuable business outcomes.” — Kabir Barday, CEO, OneTrust

“Leading organizations realize that to be truly competitive in a rapidly evolving marketplace, they need to have a solid defensive footing. Turnkey privacy technologies enable them to move onto the offense by safely leveraging their data assets rapidly at scale.” — Luk Arbuckle, Chief Methodologist, Privacy Analytics

“We appreciate FPF’s analysis of the privacy tech marketplace and we’re looking forward to further research, analysis, and educational efforts by the Privacy Tech Alliance. Customers and consumers alike will benefit from a shared understanding and common definitions for the elements of the privacy stack.” — Corinna Schulze, Director, EU Government Relations, Global Corporate Affairs, SAP

“The report shines a light on the evolving sophistication of the privacy tech market and the critical need for businesses to harness emerging technologies that can tackle the multitude of operational challenges presented by the big data economy. Businesses are no longer simply turning to privacy tech vendors to overcome complexities with compliance and regulation; they are now mapping out ROI-focused data strategies that view privacy as a key commercial differentiator. In terms of market maturity, the report highlights a need to overcome ambiguities surrounding new privacy tech terminology, as well as discrepancies in the mapping of technical capabilities to actual business needs. Moving forward, the advantage will sit with those who can offer the right blend of technical and legal expertise to provide the privacy stack assurances and safeguards that buyers are seeking – from a risk, deployment and speed-to-value perspective. It’s worth noting that the growing importance of data privacy to businesses sits in direct correlation with the growing importance of data privacy to consumers. Trūata’s Global Consumer State of Mind Report 2021 found that 62% of global consumers would feel more reassured and would be more likely to spend with companies if they were officially certified to a data privacy standard. Therefore, in order to manage big data in a privacy-conscious world, the opportunity lies with responsive businesses that move with agility and understand the return on privacy investment. The shift from manual, restrictive data processes towards hyper automation and privacy-enhancing computation is where the competitive advantage can be gained and long-term consumer loyalty—and trust— can be retained.” — Aoife Sexton, Chief Privacy Officer and Chief of Product Innovation, Trūata

“As early pioneers in this space, we’ve had a unique lens on the evolving challenges organizations have faced in trying to integrate technology solutions to address dynamic, changing privacy issues in their organizations, and we believe the Privacy Technology Stack introduced in this report will drive better organizational decision-making related to how technology can be used to sustainably address the relationships among the data, processes, and outcomes.” — Chris Babel, CEO, TrustArc

“It’s important for companies that use data to do so ethically and in compliance with the law, but those are not the only reasons why the privacy tech sector is booming. In fact, companies with exceptional privacy operations gain a competitive advantage, strengthen customer relationships, and accelerate sales.” — Justin Antonipillai, Founder & CEO, Wirewheel

The right to be forgotten is not compatible with the Brazilian Constitution. Or is it?

Brazilian Supreme Federal Court

Author: Dr. Luca Belli

Dr. Luca Belli is Professor at FGV Law School, Rio de Janeiro, where he leads the CyberBRICS Project and the Latin American edition of the Computers, Privacy and Data Protection (CPDP) conference. The opinions expressed in his articles are strictly personal. The author can be contacted at [email protected].

The Brazilian Supreme Federal Court, or “STF” in its Brazilian acronym, recently took a landmark decision concerning the right to be forgotten (RTBF), finding that it is incompatible with the Brazilian Constitution. This attracted international attention to Brazil for a topic quite distant from the sadly frequent environmental, health, and political crises.

Readers should be warned that, while reading this piece, they might experience disappointment, perhaps even frustration, then renewed interest and curiosity, and finally (and hopefully) an increased open-mindedness, understanding a new facet of the RTBF debate and how it is playing out at the constitutional level in Brazil.

This might happen because although the STF relies on the “RTBF” label, the content behind such label is quite different from what one might expect after following the same debate in Europe. From a comparative law perspective, this landmark judgment tellingly shows how similar constitutional rights play out in different legal cultures and may lead to heterogeneous outcomes based on the constitutional frameworks of reference.   

How it started: insolvency seasoned with personal data

As is well known, the first global debate on what it means to be “forgotten” in the digital environment arose in Europe, thanks to Mario Costeja Gonzalez, a Spaniard who, paradoxically, will never be forgotten by anyone due to his key role in the construction of the RTBF.

Costeja famously requested the deindexation from Google Search of information about himself that he considered to be no longer relevant. Indeed, when anyone “googled” his name, the search engine provided as top results links to articles reporting Costeja’s past insolvency as a debtor. Costeja argued that, despite having been convicted for insolvency, he had already paid his debt to justice and society many years before, and it was therefore unfair that his name should continue to be associated ad aeternum with a mistake he made in the past.

The follow-up is well known in data protection circles. The case reached the Court of Justice of the European Union (CJEU), which, in its landmark Google Spain judgment (C-131/12), established that search engines shall be considered data controllers and, therefore, have an obligation to de-index information that is inappropriate, excessive, not relevant, or no longer relevant, when a data subject to whom such data refer requests it. Such an obligation was a consequence of Article 12(b) of Directive 95/46 on the protection of personal data, a pre-GDPR provision that set the basis for the European conception of the RTBF, providing for the “rectification, erasure or blocking of data the processing of which does not comply with the provisions of [the] Directive, in particular because of the incomplete or inaccurate nature of the data.”

The indirect consequence of this historic decision, and the debate it generated, is that we have all come to consider the RTBF in the terms set by the CJEU. However, what is essential to emphasize is that the CJEU approach is only one possible conception and, importantly, it was possible because of the specific characteristics of the EU legal and institutional framework. We have come to think that RTBF means the establishment of a mechanism like the one resulting from the Google Spain case, but this is the result of a particular conception of the RTBF and of how this particular conception should – or could – be implemented.

The fact that the RTBF has been predominantly analyzed and discussed through European lenses does not mean that this is the only possible perspective, nor that this approach is necessarily the best. In fact, the Brazilian conception of the RTBF is remarkably different from a conceptual, constitutional, and institutional standpoint. The main concern of the Brazilian RTBF is not how a data controller might process personal data (this is the part where frustration and disappointment might arise in the reader), but the STF itself leaves the door open to such a possibility (this is the point where renewed interest and curiosity may arise).

The Brazilian conception of the right to be forgotten

Although the RTBF has acquired a fundamental relevance in digital policy circles, it is important to emphasize that, until recently, Brazilian jurisprudence had mainly focused on the juridical need for “forgetting” only in the analogue sphere. Indeed, before the CJEU Google Spain decision, the Brazilian Superior Court of Justice, or “STJ” (the other Brazilian high court, which deals with the interpretation of federal law, as distinct from the previously mentioned STF, which deals with constitutional matters), had already considered the RTBF as a right not to be remembered, affirmed by the individual vis-à-vis traditional media outlets.

This interpretation first emerged in the “Candelaria massacre” case, a gloomy page of Brazilian history, featuring a multiple homicide perpetrated in 1993 in front of the Candelaria Church, a beautiful colonial Baroque building in Rio de Janeiro’s downtown. The gravity and the particularly picturesque stage of the massacre led Globo TV, a leading Brazilian broadcaster, to feature the massacre in a TV show called Linha Direta. Importantly, the show included in the narration some details about a man suspected of being one of the perpetrators of the massacre but later discharged.

Understandably, the man filed a complaint arguing that the inclusion of his personal information in the TV show was causing him severe emotional distress, while also reviving suspicions against him for a crime of which he had already been discharged many years before. In September 2013, further to Special Appeal No. 1,334,097, the STJ agreed with the plaintiff, establishing the man’s “right not to be remembered against his will, specifically with regard to discrediting facts.” This is how the RTBF was born in Brazil.

Importantly for our present discussion, this interpretation was not born out of digital technology and does not impinge upon the delisting of specific types of information from search engine results. In Brazilian jurisprudence, the RTBF has been conceived as a general right to effectively limit the publication of certain information. The man included in the Globo reportage had been discharged many years before, hence he had a right to be “let alone,” as Warren and Brandeis would argue, and not to be remembered for something he had not even committed. The STJ therefore constructed its vision of the RTBF based on Article 5(X) of the Brazilian Constitution, which enshrines the fundamental right to intimacy and preservation of image, two fundamental features of privacy.

Hence, although they utilize the same label, the STJ and CJEU conceptualize two remarkably different rights, when they refer to the RTBF. While both conceptions aim at limiting access to specific types of personal information, the Brazilian conception differs from the EU one on at least three different levels.

First, their constitutional foundations. While both conceptions are intimately intertwined with individuals’ informational self-determination, the STJ built the RTBF on the protection of privacy, honour and image, whereas the CJEU built it upon the fundamental right to data protection, which in the EU framework is a standalone fundamental right. Conspicuously, an explicit right to data protection did not exist in the Brazilian constitutional framework at the time of the Candelaria case, and it has only been in the process of being recognized since 2020.

Secondly, and consequently, the original goal of the Brazilian conception of the RTBF was not to regulate how a controller should process personal data but rather to protect the private sphere of the individual. In this perspective, the goal of the STJ was not, and could not have been, to regulate the deindexation of specific incorrect or outdated information, but rather to regulate the deletion of “discrediting facts” so that the private life, honour and image of an individual would not be illegitimately violated.

Finally, yet extremely importantly, the fact that an institutional framework dedicated to data protection was simply absent in Brazil at the time of the decision did not allow the STJ the same leeway as the CJEU. The EU Justices enjoyed the privilege of delegating the implementation of the RTBF to search engines because such implementation would receive guidance from, and be subject to the review of, a well-consolidated system of European Data Protection Authorities. At the EU level, DPAs are expected to guarantee a harmonious and consistent interpretation and application of data protection law. At the Brazilian level, a DPA was only established in late 2020 and announced its first regulatory agenda only in late January 2021.

This latter point is far from trivial and, in the opinion of this author, an essential preoccupation that might have driven the subsequent RTBF conceptualization of the STJ.

The stress-test

The soundness of the Brazilian definition of the RTBF, however, was going to be tested again by the STJ, in the context of another grim and unfortunate page of Brazilian history, the Aida Curi case. This case originated with the sexual assault and subsequent homicide of the young Aida Curi in Copacabana, Rio de Janeiro, on the evening of 14 July 1958. At the time, the case attracted considerable media attention, not only because of its mysterious circumstances and the young age of the victim, but also because the perpetrators of the sexual assault tried to dissimulate it by throwing the body of the victim from the rooftop of a very tall building on the Avenida Atlantica, the fancy avenue right in front of the Copacabana beach.

Needless to say, Globo TV considered the case as a perfect story for yet another Linha Direta episode. Aida Curi’s relatives, far from enjoying the TV show, sued the broadcaster for moral damages and demanded the full enjoyment of their RTBF – in the Brazilian conception, of course. According to the plaintiffs, it was indeed not conceivable that, almost 50 years after the murder, Globo TV could publicly broadcast personal information about the victim – and her family – including the victim’s name and address, in addition to unauthorized images, thus bringing back a long-closed and extremely traumatic set of events.

The brothers of Aida Curi claimed reparation against Rede Globo, but the STJ decided that the time passed was enough to mitigate the effects of anguish and pain on the dignity of Aida Curi’s relatives, while arguing that it was impossible to report the events without mentioning the victim. This decision was appealed by Ms Curi’s family members, who demanded, by means of Extraordinary Appeal No. 1,010,606, that the STF recognize “their right to forget the tragedy.” It is interesting to note that the way the demand is constructed in this Appeal tellingly exemplifies the Brazilian conception of “forgetting” as erasure and prohibition of divulgation.

At this point, the STF identified in the Appeal an interest in debating the issue “with general repercussion”, a peculiar judicial procedure that the Court can utilize when it recognizes that a given case has particular relevance and transcendence for the Brazilian legal and judicial system. Indeed, the decision of a case with general repercussion does not only bind the parties but establishes jurisprudence that must be followed by all lower courts.

In February 2021, the STF finally deliberated on the Aida Curi case, establishing that “the idea of a right to be forgotten is incompatible with the Constitution, thus understood as the power to prevent, due to the passage of time, the disclosure of facts or data that are true and lawfully obtained and published in analogue or digital media” and that “any excesses or abuses in the exercise of freedom of expression and information must be analyzed on a case-by-case basis, based on constitutional parameters – especially those relating to the protection of honor, image, privacy and personality in general – and the explicit and specific legal provisions existing in the criminal and civil spheres.”

In other words, what the STF has deemed incompatible with the Federal Constitution is a specific interpretation of the Brazilian version of the RTBF. What is not compatible with the Constitution is to argue that the RTBF allows one to prohibit the publication of true facts, lawfully obtained. At the same time, however, the STF clearly states that it remains possible for any court of law to evaluate, on a case-by-case basis and according to constitutional parameters and existing legal provisions, whether a specific episode allows the use of the RTBF to prohibit the divulgation of information that undermines the dignity, honour, privacy, or other fundamental interests of the individual.

Hence, while explicitly prohibiting the use of the RTBF as a general right to censorship, the STF leaves room for the use of the RTBF to delist specific personal data in an EU-like fashion, while specifying that this must be done with guidance from the Constitution and the law.

What next?

Given the core differences between the Brazilian and EU conception of the RTBF, as highlighted above, it is understandable in the opinion of this author that the STF adopted a less proactive and more conservative approach. This must be especially considered in light of the very recent establishment of a data protection institutional system in Brazil.

It is understandable that the STF might have preferred to de facto delegate to the Courts the interpretation of when and how the RTBF can rightfully be invoked, according to constitutional and legal parameters. First, in the Brazilian interpretation, the RTBF fundamentally rests on the protection of privacy, i.e. the private sphere of an individual, and, while data protection concerns are acknowledged, they are not the main ground on which the Brazilian conception of the RTBF relies.

This approach is also understandable in a country and a region where the social need to remember and shed light on a recent history marked by dictatorships, well-hidden atrocities, and opacity outweighs the legitimate individual interest in prohibiting the circulation of truthful and legally obtained information. In the digital sphere, however, the RTBF quintessentially translates into an extension of informational self-determination, which the Brazilian General Data Protection Law, better known as the “LGPD” (Law No. 13.709/2018), enshrines in its Article 2 as one of the “foundations” of data protection in the country, and whose fundamental character was recently recognized by the STF itself.

In this perspective, it is useful to recall the dissenting opinion of Justice Luiz Edson Fachin in the Aida Curi case, stressing that “although it does not expressly name it, the Constitution of the Republic, in its text, contains the pillars of the right to be forgotten, as it celebrates the dignity of the human person (article 1, III), the right to privacy (article 5, X) and the right to informational self-determination – which was recognized, for example, in the disposal of the precautionary measures of the Direct Unconstitutionality Actions No. 6,387, 6,388, 6,389, 6,390 and 6,393, under the rapporteurship of Justice Rosa Weber (article 5, XII).”

It is the opinion of this author that the Brazilian debate on the RTBF in the digital sphere would be clearer if its dimension as a right to deindexation of search engine results were clearly regulated. It is understandable that the STF did not dare to regulate this, given its interpretation of the RTBF and the very embryonic data protection institutional framework in Brazil. However, given the increasing datafication we are currently witnessing, it would be naïve not to expect that further RTBF claims concerning the digital environment and, specifically, the way search engines process personal data will keep emerging.

The fact that the STF has left the door open to applying the RTBF in the case-by-case analysis of individual claims may reassure the reader regarding the primacy of constitutional and legal arguments in such analysis. It may also lead the reader to wonder, very legitimately, whether such a choice is de facto the most efficient and coherent way to deal with the potentially enormous number of claims, given the margin of appreciation and interpretation that each Court may have.

An informed debate that clearly highlights the existing options and the most efficient and just ways to implement them, considering the Brazilian context, would be beneficial. This will likely be one of the goals of the upcoming Latin American edition of the Computers, Privacy and Data Protection conference (CPDP LatAm), which will take place in July, entirely online, and will explore the most pressing privacy and data protection issues for Latin American countries.

Photo Credit: “Brasilia – The Supreme Court” by Christoph Diewald is licensed under CC BY-NC-ND 2.0

If you have any questions about engaging with The Future of Privacy Forum on Global Privacy and Digital Policymaking contact Dr. Gabriela Zanfir-Fortuna, Senior Counsel, at [email protected].

FPF announces appointment of Malavika Raghavan as Senior Fellow for India

The Future of Privacy Forum announces the appointment of Malavika Raghavan as Senior Fellow for India, expanding our Global Privacy team to one of the key jurisdictions for the future of privacy and data protection law. 

Malavika is a thought leader and a lawyer working on interdisciplinary research focusing on the impacts of digitisation on the lives of lower-income individuals. Her work since 2016 has focused on the regulation and use of personal data in service delivery by the Indian State and private sector actors. She founded and led the Future of Finance Initiative at Dvara Research (an Indian think tank) in partnership with the Gates Foundation from 2016 until 2020, anchoring its research agenda and policy advocacy on emerging issues at the intersection of technology, finance and inclusion. Research that she led at Dvara Research was cited by India’s Data Protection Committee in its White Paper as well as in its final report with proposals for India’s draft Personal Data Protection Bill, with specific reliance placed on that research for aspects of regulatory design and enforcement. See Malavika’s full bio here.

“We are delighted to welcome Malavika to our Global Privacy team. For the following year, she will be our adviser to understand the most significant developments in privacy and data protection in India, from following the debate and legislative process of the Data Protection Bill and the processing of non-personal data initiatives, to understanding the consequences of the publication of the new IT Guidelines. India is one of the most interesting jurisdictions to follow in the world, for many reasons: the innovative thinking on data protection regulation, the potentially groundbreaking regulation of non-personal data and the outstanding number of individuals whose privacy and data protection rights will be envisaged by these developments, which will test the power structures of digital regulation and safeguarding fundamental rights in this new era”, said Dr. Gabriela Zanfir-Fortuna, Global Privacy lead at FPF. 

We asked Malavika to share her thoughts for FPF’s blog on the most significant developments in privacy and digital regulation in India and on India’s role in the global privacy and digital regulation debate.

FPF: What are some of the most significant developments in the past couple of years in India in terms of data protection, privacy, digital regulation?

Malavika Raghavan: “Undoubtedly, the turning point for the privacy debate in India was the 2017 judgement of the Indian Supreme Court in Justice KS Puttaswamy v Union of India. The judgment affirmed the right to privacy as a constitutional guarantee, protected by Part III (Fundamental Rights) of the Indian Constitution. It was also regenerative, bringing our constitutional jurisprudence into the 21st century by re-interpreting timeless principles for the digital age, and casting privacy as a prerequisite for accessing other rights—including the right to life and liberty, to freedom of expression and to equality—given the ubiquitous digitisation of human experience we are witnessing today.

Overnight, Puttaswamy also re-balanced conversations in favour of privacy safeguards to make these equal priorities for builders of digital systems, rather than framing these issues as obstacles to innovation and efficiency. In addition, it challenged the narrative that privacy is an elite construct that only wealthy or privileged people deserve—since many litigants in the original case that had created the Puttaswamy reference were from marginalised groups. Since then, a string of interesting developments has arisen as new cases reassess the impact of digital technology on individuals in India, e.g. the boundaries of private sector data sharing (such as between WhatsApp and Facebook), or the State’s use of personal data (as in the case concerning Aadhaar, our national identification system), among others.

Puttaswamy also provided fillip for a big legislative development, which is the creation of an omnibus data protection law in India. A bill to create this framework was proposed by a Committee of Experts under the chairmanship of Justice Srikrishna (an ex-Supreme Court judge), which has been making its way through ministerial and Parliamentary processes. There’s a large possibility that this law will be passed by the Indian parliament in 2021! Definitely a big development to watch.

FPF: How do you see India’s role in the global privacy and digital regulation debate?

Malavika Raghavan: “India’s strategy on privacy and digital regulation will undoubtedly have global impact, given that India is home to 1/7th of the world’s population! The mobile internet revolution has created a huge impact on our society with millions getting access to digital services in the last couple of decades. This has created nuanced mental models and social norms around digital technologies that are slowly being documented through research and analysis. 

The challenge for policy makers is to create regulations that match these expectations and the realities of Indian users to achieve reasonable, fair regulations. As we have already seen from sectoral regulations (such as those from our Central Bank around cross border payments data flows) such regulations also have huge consequences for global firms interacting with Indian users and their personal data.  

In this context, I think India can have the late-mover advantage in some ways when it comes to digital regulation. If we play our cards right, we can take the best lessons from the experience of other countries in the last few decades and eschew the missteps. More pragmatically, it seems inevitable that India’s approach to privacy and digital regulation will also be strongly influenced by the Government’s economic, geopolitical and national security agenda (both internationally and domestically). 

One thing is for certain: there is no path-dependence. Our legislators and courts are thinking in unique and unexpected ways that are indeed likely to result in a fourth way (as described by the Srikrishna Data Protection Committee’s final report), compared to the approach in the US, EU and China.”

If you have any questions about engaging with The Future of Privacy Forum on Global Privacy and Digital Policymaking contact Dr. Gabriela Zanfir-Fortuna, Senior Counsel, at [email protected].

India: Massive overhaul of digital regulation, with strict rules for take-down of illegal content and automated scanning of online content

On February 25, the Indian Government notified and published the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. These rules mirror the EU’s Digital Services Act (DSA) proposal to some extent: they propose a tiered approach based on the scale of the platform and touch on intermediary liability, content moderation, take-down of illegal content from online platforms, and internal accountability and oversight mechanisms. They go beyond such rules, however, by adding a Code of Ethics for digital media, similar to the Code of Ethics classic journalistic outlets must follow, and by proposing an “online content” labelling scheme for content that is safe for children.

The Code of Ethics applies to online news publishers, as well as intermediaries that “enable the transmission of news and current affairs”. This part of the Guidelines (the Code of Ethics) has already been challenged in the Delhi High Court by news publishers this week. 

The Guidelines have raised several types of concerns in India, from their impact on freedom of expression and on the right to privacy (through the automated scanning of content and the imposed traceability of even end-to-end encrypted messages so that the originator can be identified), to the Government’s choice to use executive action for such profound changes. The Government, through the two Ministries involved in the process, is scheduled to testify before the Parliament’s Standing Committee on Information Technology on March 15.

New obligations for intermediaries

“Intermediaries” include “websites, apps and portals of social media networks, media sharing websites, blogs, online discussion forums, and other such functionally similar intermediaries” (as defined in rule 2(1)(m)).

Here are some of the most important rules laid out in Part II of the Guidelines, dedicated to Due Diligence by Intermediaries:

“Significant social media intermediaries” have enhanced obligations

“Significant social media intermediaries” are social media services with a number of users above a threshold which will be defined and notified by the Central Government. This concept is similar to the DSA’s “Very Large Online Platform” (VLOP); however, the DSA includes clear criteria in the proposed act itself on how to identify a VLOP.

As for “significant social media intermediaries” in India, they will have additional obligations (similar to how the DSA proposal in the EU scales obligations):

These “Guidelines” seem to have the legal effect of a statute, and they are being adopted through executive action to replace Guidelines adopted in 2011 by the Government under powers conferred on it by the Information Technology Act 2000. The new Guidelines would enter into force immediately after publication in the Official Gazette (there is no information as to when publication is scheduled). The Code of Ethics would enter into force three months after publication in the Official Gazette. As mentioned above, there are already some challenges in Court against part of these rules.

Get smart on these issues and their impact

Check out these resources: 

Another jurisdiction to keep your eyes on: Australia

Also note that, while the European Union is starting its heavy and slow legislative machine by appointing Rapporteurs in the European Parliament and holding first discussions on the DSA proposal in the relevant working group of the Council, another country is set to adopt digital content rules soon: Australia. The Government is currently considering an Online Safety Bill, which was open to public consultation until mid-February and which would also include a “modernised online content scheme”, creating new classes of harmful online content, as well as take-down requirements for image-based abuse, cyber abuse and harmful content online, requiring removal within 24 hours of receiving a notice from the eSafety Commissioner.

If you have any questions about engaging with The Future of Privacy Forum on Global Privacy and Digital Policymaking contact Dr. Gabriela Zanfir-Fortuna, Senior Counsel, at [email protected].

Russia: New Law Requires Express Consent for Making Personal Data Available to the Public and for Any Subsequent Dissemination

Authors: Gabriela Zanfir-Fortuna and Regina Iminova

Source: Pixabay.com, by Opsa

Amendments to the Russian general data protection law (Federal Law No. 152-FZ on Personal Data) adopted at the end of 2020 enter into force today (Monday, March 1st), with some of them having their effective date postponed until July 1st. The changes are part of a legislative package that also amends the Criminal Code to criminalize disclosure of personal data about “protected persons” (several categories of government officials). The amendments to the data protection law introduce consent-based restrictions for any organization or individual that initially publishes personal data, as well as for those that collect and further disseminate personal data that has been made publicly available on the basis of consent, such as on social media, blogs or any other sources.

The amendments:

The potential impact of the amendments is broad. The new law prima facie affects social media services, online publishers, streaming services, bloggers, and any other entity that might be considered to be making personal data available to “an indefinite number of persons.” They now have to collect, and be able to prove they have, separate consent for making personal data publicly available, as well as for further publishing or disseminating personal data allowed by the data subject to be disseminated (“PDD”, defined below) which has been lawfully published by other parties originally.

Importantly, the new provisions in the Personal Data Law dedicated to PDD do not include any specific exception for processing PDD for journalistic purposes. The only exception recognized is processing PDD “in the state and public interests defined by the legislation of the Russian Federation”. The Explanatory Note accompanying the amendments confirms that consent is the exclusive lawful ground that can justify dissemination and further processing of PDD and that the only exception to this rule is the one mentioned above, for state or public interests as defined by law. It is thus expected that the amendments might create a chilling effect on freedom of expression, especially when also taking into account the corresponding changes to the Criminal Code.

The new rules seem to be part of a broader effort in Russia to regulate information shared online and available to the public. In this context, it is noteworthy that other amendments to Law 149-FZ on Information, IT and Protection of Information solely impacting social media services were also passed into law in December 2020, and already entered into force on February 1st, 2021. Social networks are now required to monitor content and “restrict access immediately” of users that post information about state secrets, justification of terrorism or calls to terrorism, pornography, promoting violence and cruelty, or obscene language, manufacturing of drugs, information on methods to commit suicide, as well as calls for mass riots. 

Below we provide a closer look at the amendments to the Personal Data Law that entered into force on March 1st, 2021. 

A new category of personal data is defined

The new law defines a category of “personal data allowed by the data subject to be disseminated” (PDD), the definition being added as paragraph 1.1 to Article 3 of the Law. This new category of personal data is defined as “personal data to which an unlimited number of persons have access, and which is provided by the data subject by giving specific consent for the dissemination of such data, in accordance with the conditions in the Personal Data Law” (unofficial translation).

The old law had a dedicated provision that referred to how this type of personal data could be lawfully processed, but it was vague and offered almost no details. In particular, Article 6(10) of the Personal Data Law (the provision corresponding to Article 6 GDPR on lawful grounds for processing) provided that processing of personal data is lawful when the data subject gives access to their personal data to an unlimited number of persons. The amendments abrogate this paragraph, before introducing an entirely new article containing a detailed list of conditions for processing PDD only on the basis of consent (the new Article 10.1).

Perhaps in order to avoid misunderstanding on how the new rules for processing PDD fit with the general conditions on lawful grounds for processing personal data, a new paragraph 2 is introduced in Article 10 of the law, which details conditions for processing special categories of personal data, to clarify that processing of PDD “shall be carried out in compliance with the prohibitions and conditions provided for in Article 10.1 of this Federal Law”.

Specific, express, unambiguous and separate consent is required

Under the new law, “data operators” that process PDD must obtain specific and express consent from data subjects to process the personal data, covering any use or dissemination of the data. Notably, under Russian law, “data operators” designate both controllers and processors in the sense of the General Data Protection Regulation (GDPR), or businesses and service providers in the sense of the California Consumer Privacy Act (CCPA).

Specifically, under Article 10.1(1), the data operator must ensure that it obtains a separate consent dedicated to dissemination, other than the general consent for processing personal data or other type of consent. Importantly, “under no circumstances” may individuals’ silence or inaction be taken to indicate their consent to the processing of their personal data for dissemination, under Article 10.1(8).

In addition, the data subject must be provided with the possibility to select the categories of personal data which they permit for dissemination. Moreover, the data subject also must be provided with the possibility to establish “prohibitions on the transfer (except for granting access) of [PDD] by the operator to an unlimited number of persons, as well as prohibitions on processing or conditions of processing (except for access) of these personal data by an unlimited number of persons”, per Article 10.1(9). It seems that these prohibitions refer to specific categories of personal data provided by the data subject to the operator (out of a set of personal data, some categories may be authorized for dissemination, while others may be prohibited from dissemination).

If the data subject discloses personal data to an unlimited number of persons without providing the operator with the specific consent required by the new law, not only the original operator but all subsequent persons or operators that processed or further disseminated the PDD bear the burden of proof to “provide evidence of the legality of subsequent dissemination or other processing”, under Article 10.1(2), which seems to imply that they must prove consent was obtained for dissemination (a probatio diabolica in this case). According to the Explanatory Note to the amendments, the intention was indeed to shift the burden of proof of the legality of processing PDD from data subjects to data operators, since the Note makes specific reference to the fact that, before the amendments, the burden of proof rested with data subjects.

If the separate consent for dissemination of personal data is not obtained by the operator, but other conditions for lawfulness of processing are met, the personal data can be processed by the operator, but without the right to distribute or disseminate them – Article 10.1(4). 

A Consent Management Platform for PDD, managed by the Roskomnadzor

The express consent to process PDD can be given directly to the operator or through a special “information system” (which seems to be a consent management platform) of the Roskomnadzor, according to Article 10.1(6). The provisions related to setting up this consent platform for PDD will enter into force on July 1st, 2021. The Roskomnadzor is expected to provide technical details about the functioning of this consent management platform, and guidelines on how it is supposed to be used, in the coming months. 

Absolute right to opt-out of dissemination of PDD

Notably, the dissemination of PDD can be halted at any time at the request of the individual, regardless of whether the dissemination is lawful or not, according to Article 10.1(12). This type of request is akin to a withdrawal of consent. The provision includes some requirements for the content of such a request; for instance, it must include the individual’s contact information and list the personal data whose dissemination should be terminated. Consent to the processing of the personal data provided is terminated once the operator receives the opt-out request – Article 10.1(13).

A request to opt out of having personal data disseminated to the public when this is done unlawfully (without the data subject’s specific, affirmative consent) can also be made through a court, as an alternative to submitting it directly to the data operator. In this case, the operator must terminate the transmission of, or access to, the personal data within three business days of receiving the demand, or within the timeframe set in a court decision that has come into effect – Article 10.1(14).

A new criminal offense: The prohibition on disclosure of personal data about protected persons

Sharing personal data or information about intelligence officers and their personal property is now a criminal offense under the new rules, which amended the Criminal Code. The law obliges any operators of personal data, including government departments and mobile operators, to ensure the confidentiality of personal information concerning protected persons, their relatives, and their property. Under the new law, “protected persons” include employees of the Investigative Committee, the FSB, the Federal Protective Service, the National Guard, the Ministry of Internal Affairs, and the Ministry of Defense, as well as judges, prosecutors, investigators, law enforcement officers, and their relatives. Moreover, the list of protected persons can be further detailed by the head of the relevant state body in which the specified persons work.

Previously, the law allowed for the temporary prohibition of the dissemination of personal data of protected persons only in the event of imminent danger in connection with official duties and activities. The new amendments make it possible to take protective measures in the absence of a threat of encroachment on their life, health and property.

What to watch next: New amendments to the general Personal Data Law are on their way in 2021

There are several developments to follow in this fast-changing environment. First, at the end of January, the Russian President gave the government until August 1 to create a set of rules for foreign tech companies operating in Russia, including a requirement to open branch offices in the country.

Second, a bill (No. 992331-7) proposing new amendments to the overall framework of the Personal Data Law (No. 152-FZ) was introduced in July 2020 and was the subject of a Resolution passed in the State Duma on February 16, which allowed amendments to be submitted until March 16. The bill is on the agenda for a potential vote in May. The changes would expand the possibility of obtaining valid consent through unique identifiers not currently accepted by the law, such as unique online IDs; modify purpose limitation rules; introduce a possible certification scheme for effective methods of erasing personal data; and grant the Roskomnadzor new competences to establish requirements for the deidentification of personal data and specific methods for effective deidentification.

If you have any questions on Global Privacy and Data Protection developments, contact Gabriela Zanfir-Fortuna at [email protected]

“Personality vs. Personalization” in AI Systems: Intersection with Evolving U.S. Law (Part 3)

This post is the third in a series on personality versus personalization in AI systems. Read Part 1 (exploring concepts) and Part 2 (concrete uses and risks).

Conversational AI technologies are hyper-personalizing. Across sectors, companies are focused on offering personalized experiences that are tailored to users’ preferences, behaviors, and virtual and physical environments. These range from general purpose LLMs, to the rapidly growing market for LLM-powered AI companions, educational aides, and corporate assistants. Behind these experiences are two distinct trends: personality and personalization. 

Evolving U.S. Law

Most conversational AI systems include aspects of both personality and personalization, sometimes intertwined in complex ways. Although there is significant overlap, we find that personality and personalization are also increasingly raising distinct legal issues.

In the United States, conversational AI systems may implicate a wide range of longstanding and emerging laws, including the following:

While Section 230 of the Communications Decency Act (CDA) has historically protected companies from liability stemming from tortious conduct online, this may not be the case for conversational AI systems when there is evidence of features that directly cause harm through the design of the system, rather than through user-generated input. Longstanding common law principles, such as the right of publicity and appropriation of name and likeness, and theories of unjust enrichment, are increasingly appearing in cases involving chatbots and AI, with varying degrees of success. 

  1. Privacy, Data Protection and Cybersecurity Laws

Processing data about individuals for personalization implicates privacy and data protection laws. In general, these laws require organizations to adhere to certain processing limitations (e.g., data minimization or retention), risk mitigation measures (e.g., DPIAs), and compliance with individual rights to exercise control over their data (e.g., correction, deletion, and access rights). 

In almost all cases, the content of text- and voice-based conversations will be considered “personal information” under general privacy and data protection laws, unless sufficiently de-linked from individuals and anonymized through technical, administrative, and organizational means. Even aside from the input and output of a system, generative AI models themselves may or may not be considered to “contain” personal information in their model weights, potentially depending on the nature of technical guardrails. Such a legal interpretation would give rise to significant operational impacts for training and fine-tuning models on conversational data. As systems become more personalized, obligations and individual rights likely also extend beyond transcripts of conversations to include information retained in the form of system prompts, memories, or other personalized knowledge retained about an individual. 
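For illustration only, the snippet below sketches the kind of technical de-linking measure referenced above, using a naive regex scrub of direct identifiers from a transcript. The pattern set and helper function are hypothetical, and a pass like this would not, on its own, satisfy legal standards for anonymization.

```python
# A naive, purely illustrative sketch of "technical de-linking" of chat transcripts.
# Regex scrubbing like this would NOT, by itself, meet legal anonymization standards;
# it only illustrates the kind of technical measure referred to above.

import re

# Hypothetical patterns; real pipelines would use far more robust detection.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}


def scrub(transcript: str) -> str:
    """Replace obvious direct identifiers with placeholder tokens."""
    for label, pattern in PII_PATTERNS.items():
        transcript = pattern.sub(f"[{label}]", transcript)
    return transcript


if __name__ == "__main__":
    raw = "Sure, email me at jane.doe@example.com or call +1 202 555 0143."
    print(scrub(raw))  # -> "Sure, email me at [EMAIL] or call [PHONE]."
```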

Conversational data can also lead to more intimate inferences that implicate heightened requirements for “profiling” or “sensitive data.” Specifically, the evaluation, analysis, or prediction of certain user characteristics (e.g., health, behavior, or economic status) by AI companions or chatbots may qualify as profiling if it produces certain effects or harms consumers (e.g., declining a loan application). This activity could trigger specific provisions in data privacy laws, such as opt-out rights and data privacy impact assessment requirements.

In addition, some conversational exchanges may reveal specific details about a user that qualify as “sensitive data,” which can trigger additional obligations under these laws, including limitations on the use and disclosure of such data. The potentially intimate nature of conversations between users and AI companions and chatbots may result in organizations processing sensitive data even if that information did not come from a child. Such details could include information about the user’s racial or ethnic origin, sex life, sexual orientation, religious beliefs, or mental or physical health condition or diagnosis. While specific requirements vary from law to law, processing such data can come with heightened requirements, including obtaining opt-in consent from the user. 

Depending on the context of the data processing, personalized chatbots and AI companions may also trigger sectoral laws like the Children’s Online Privacy Protection Act (COPPA) or the Family Educational Rights and Privacy Act (FERPA). Many users of AI companions and chatbots are under 18, meaning that processing data obtained in connection with these users may implicate specific adolescent privacy protections. For example, several states have passed or modified their existing comprehensive data privacy laws to impose new opt-in requirements, rights, and obligations on organizations processing children’s or teens’ data (e.g., imposing new impact assessment requirements and duties of care). Legislators have also advanced bills addressing the data privacy of AI companions’ youth users (e.g., CA AB 1064). 

Finally, the potential risks related to external threats and exfiltration of data can also implicate a wide range of US cybersecurity laws. In particular, this is the case as personalized systems become more agentic, including through greater access to systems to perform complex tasks. Legal frameworks may include sector-specific regulations, state breach notification laws, or consumer protections (e.g., the FTC’s application of Section 5 to security incidents).

  2. Tort, Product Liability and Section 230 

Tort claims, such as negligence for failure to warn, product liability for defective design, and wrongful death, may apply to chatbots and AI companions when these technologies harm users. Although harm can arise from the collection, processing and sharing of personal information (i.e., personalization), many of the early examples of these laws being applied to chatbots and conversational AI are related more to their companionate and human-like influence (i.e., personality).

For example, the plaintiff in Garcia v. Character Technologies, et al. raised a range of negligence, product liability, and related tort claims after a 14-year-old boy died by suicide, having formed a parasocial and romantic relationship with Character.ai chatbots that imitated characters from the Game of Thrones television series. In its May 2025 decision, the US District Court for the Middle District of Florida ruled that the First Amendment did not bar these tort claims from advancing. However, the Court left open the possibility of such a defense applying at a later stage in litigation, leaving unresolved the question of whether the First Amendment blocks these claims because they inhibit the chatbot’s speech or listeners’ rights under that amendment. 

In many cases, tort claims related to the personalized design of platforms and systems are barred by Section 230 of the Communications Decency Act (CDA), a federal law that gives websites and other online platforms legal immunity from liability for most user-posted content. However, this trend may not fully apply to conversational AI systems, particularly when there is evidence of features that directly cause harm through the design of the system, rather than through user-generated input. For example, a 2015 claim against Snap, Inc. survived Section 230 dismissal based on allegations that a specific “Speed Filter” Snapchat feature (since discontinued) promoted reckless driving. 

In other cases, the personalization of a system through demographic-based targeting that causes harm may also implicate tort and product liability law when organizations target content to users, at least in part, by actively identifying the users on whom that content will have the greatest impact. In a significant 2024 ruling, the Third Circuit determined that a social media algorithm which curated and recommended content constituted expressive activity and therefore was not protected by Section 230. 

Another recent ruling on a motion to dismiss by the Supreme Court of the State of New York may delineate the limits of this defense when applied to organizations’ design choices for content personalization. In Nazario v. ByteDance Ltd. et al., the Court determined that Section 230 of the CDA did not bar plaintiff’s product liability and negligence causes of action at the motion to dismiss phase, as plaintiff had sufficiently alleged that the personalization of user content was grounded, at least in part, in defendant’s design choice to actively target users based on certain demographic information rather than exclusively through analyzing user inputs. 

In Nazario, the Court highlighted how defendants’ activities went beyond the neutral editorial functions that Section 230 protects (e.g., selecting particular content types to promote based on the user’s past activities or expressed interests, and specifying or promoting which content types should be submitted to the platform) by targeting content to users based on their age. While discovery may undermine plaintiff’s factual allegations in this case, the Nazario court’s view that, if true, these allegations supported viable causes of action under tort and product liability theories may impact AI companions, depending on how they are personalized to users (e.g., express user indications of preference versus age, gender, and geographic location). 

  3. Rights to Publicity and Unjust Enrichment

AI companions or chatbots that impersonate real individuals by emulating aspects of their personalities may also implicate the right of publicity and appropriation of name and likeness. While some sources, such as the Second Restatement of Torts and the Third Restatement of Unfair Competition, conflate appropriation of name and likeness with the right of publicity, other commentators distinguish between them.

Generally, the “right of publicity” gives individuals (such as, but not limited to, celebrities) control over the commercial use of certain aspects of their identity (e.g., name and likeness). The majority of US states recognize this right in either their statutory codes or common law, but the right’s duration, the protected elements of a person’s identity, and other requirements can vary by state. For example, the US Courts of Appeals for the Sixth and Ninth Circuits have ruled that the right of publicity extends to aural and visual imitations, and recently enacted laws (e.g., Tennessee’s Ensuring Likeness, Voice, and Image Security (ELVIS) Act of 2024) may specifically target the use of generative AI to misappropriate a person’s identity, including sound-alikes. However, it remains unclear whether the right of publicity extends to “style” (e.g., certain slang words) and “tone” (e.g., a deep voice).

Finally, a claim increasingly appearing in cases involving chatbots and AI involves theories of unjust enrichment, a common law principle that allows plaintiffs to recover value when defendants unfairly retain benefits at the plaintiffs’ expense. The claim may be relevant to AI companions and chatbots when their operators use user data for model training and modification in order to enable personalization.

In the generative AI context, plaintiffs often file unjust enrichment claims alongside other claims against AI model developers that use the plaintiff’s or user’s data to train the model and profit from it. Unjust enrichment claims have featured in Garcia v. Character Technologies, et al. and other suits against the company. In Garcia, the Court declined to dismiss plaintiff’s unjust enrichment claim against Character Technologies after the plaintiff disputed the existence of a governing contract between Character Technologies and a user, repudiated such an agreement if it existed, and alleged that the chatbot operator received benefits from the user (i.e., the monthly subscription fee and the user’s personal data). Notably, Character Technologies’ motion failed because of these allegations and the Court’s refusal, at this stage, to conclude whether either form of consideration was adequate or whether a user agreement applied to the data processing. However, the claim may not survive later phases of the litigation if facts surface that undermine the plaintiff’s allegations, such as the existence of an applicable contract. 

  4. Consumer Protection

Under US federal and state consumer protection laws, deployers of AI companions may expose themselves to liability for systems that deceive, manipulate, or otherwise unfairly treat consumers based on their relationship with, reliance on, or trust in a chatbot in a commercial setting. 

In 2024, the Federal Trade Commission (FTC) published a blog post warning companies against exploiting the relationships users forge with chatbots that offer “companionship, romance, therapy, or portals to dead loved ones” (e.g., a chatbot that tells the user it will end its relationship with them unless they purchase goods from the chatbot’s operator). While the FTC has since removed the blog post from its website, it may reflect the views of state attorneys general, who can also enforce consumer protection laws and have expressed concerns about the parasocial relationships youth users can form with AI companions and chatbots.

The use of personal data to power personalization features may also give rise to unfair and deceptive trade practice claims if the chatbot’s operator makes inaccurate representations or omissions about how they will utilize a user’s personal data. The FTC has signaled that Section 5 of the FTC Act may apply when AI companies make misrepresentations about data processing activities, including “promises made by companies that they won’t use customer data for secret purposes, such as to train or update their models—be it directly or through workarounds.” These statements are backed up by the Commission’s history of commencing enforcement actions against organizations that falsely represent consumer control over data.

Recent enforcement actions may indicate that the FTC could be ready to engage more actively on issues of AI and consumer protection, particularly if it involves the safety of children. At the same time, however, the approach of the FTC in the current administration has been light-touch. The July 2025 “America’s AI Action Plan,” for instance, directs a review of FTC investigations initiated under the prior administration to ensure they do not advance liability theories that “unduly burden AI innovation,” and recommends that final orders, consent decrees, and injunctions be modified or vacated where appropriate. 

  5. Emerging U.S. State Laws

In 2025, several states passed new laws addressing various chatbot deployment contexts, including their role in mental health services, commercial transactions, and companionship. Many chatbot laws require some form of disclosure of the chatbot’s non-human status, but they take distinct approaches to the disclosure’s timing, format, and language. Several of these laws have user safety provisions that typically address self-harm and suicide prevention (e.g., New York S-3008C), while others contain requirements around privacy and advertisements to users (e.g., Utah HB 452); the sparser presence of these requirements across legislation reflects the specific harms certain laws aim to address (e.g., self-harm, financial harms, psychological injury, and reduced trust). 

Law’s NameDescription
Maine LD 1727Prohibits persons from using an “artificial intelligence chatbot” or other computer technology to engage in a trade practice or commercial transaction with a consumer in a way that may deceive or mislead a reasonable consumer into thinking that they are interacting with another person, unless the consumer receives a clear and conspicuous notice that the they are not engaging with a human. 
Nevada AB 406Prohibits AI providers from making an AI system available in Nevada that is specifically programmed to provide “professional mental or behavioral health care,” unless designed to be used for administrative support, or from representing to users that it can provide such care.
New York S-3008CProhibits operators from offering AI companions without implementing a protocol to detect and respond to suicidal ideation or self-harm; The system must provide a notice to the user referring them to crisis services upon detecting suicidal ideation or self-harm behaviors;Operators must provide clear and conspicuous verbal or written notifications informing users that they are not communicating with a human, which must appear at the start of any AI companion interaction and at least once every three hours during sustained use. 
Utah HB 452Requires mental health chatbot suppliers to prevent the chatbot from advertising goods or services during conversations absent certain disclosures;Prohibits suppliers from using a Utah user’s input to customize how an advertisement is presented to the user, determine whether to display an advertisement to the user, or determine a product/service to advertise to the user;Suppliers must ensure that the chatbot divulges that it is AI and not a human in certain contexts (e.g., before the user accesses the chatbot); Subject to exceptions, generally prohibits suppliers from selling to or sharing any individually identifiable health information or user input with any third party. 

Looking Ahead

Personality and personalization are increasingly associated with distinct areas of law. Processing data about individuals to personalize user interactions with AI companions and chatbots will implicate privacy and data protection laws. On the other hand, both litigation trends and emerging U.S. state laws addressing various chatbot deployment contexts generally focus more on personality-related issues, namely harms stemming from user anthropomorphization of AI systems. Practitioners should anticipate an evolving legislative and case law landscape as policymakers increasingly address interactions between users—especially youth—and AI companions and chatbots.

Read the next blog in the series: The next blog post will explore what risk management steps organizations can take to address the policy and legal considerations raised by “personalization” and “personality” in AI systems.

“Personality vs. Personalization” in AI Systems: Specific Uses and Concrete Risks (Part 2)

This post is the second in a multi-part series on personality versus personalization in AI systems, providing an overview of these concepts and their use cases, concrete risks, legal considerations, and potential risk management for each category. The previous post provided an introduction to personality versus personalization. 

In AI governance and public policy, the many trends toward “personalization” are becoming clear, but they are often discussed and debated together despite dissimilar uses, benefits, and risks. This analysis divides these trends into two broader categories: personalization and personality.

1. Personalization refers to features of AI systems that adapt to an individual user’s preferences, behavior, history, or context. 

All LLMs are personalized tools insofar as they produce outputs that are responsive to a user’s individual prompts or questions. As these tools evolve, however, they are becoming more personalized by tailoring outputs to a user’s personal information, including information that is directly provided (e.g., through system prompts) or inferred (e.g., memories built from the content of previous conversations). Methods of personalization can take many different forms, including user and system prompts, short-term conversation history, long-term memory (e.g., knowledge bases accessed through retrieval-augmented generation), settings, and post-training changes to the model (e.g., fine-tuning).

Figure 1 – A screenshot of a conversation with Meta AI, which can proactively add details about users to its memory in order to reference them in future conversations
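As a purely illustrative sketch, and not a reflection of Meta’s or any other provider’s actual implementation, the following Python example shows one of the personalization patterns described above: storing user facts as long-term “memories” and injecting them into a system prompt at inference time, leaving the underlying model unchanged. The class and function names are hypothetical.

```python
# A minimal, illustrative sketch of memory-based personalization: user facts are
# stored as long-term "memories" and injected into the system prompt at inference
# time. Class and function names are hypothetical and do not reflect any
# provider's actual implementation.

from dataclasses import dataclass, field


@dataclass
class MemoryStore:
    """Long-term facts the user has shared directly or the system has inferred."""
    facts: list[str] = field(default_factory=list)

    def remember(self, fact: str) -> None:
        self.facts.append(fact)


def build_messages(memories: MemoryStore, user_message: str) -> list[dict]:
    # Personalization lives in the prompt context; the underlying model is unchanged.
    system_prompt = (
        "You are a helpful assistant. Known details about this user:\n"
        + "\n".join(f"- {fact}" for fact in memories.facts)
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]


if __name__ == "__main__":
    memories = MemoryStore()
    memories.remember("The user is vegan.")                      # directly provided
    memories.remember("The user is planning a trip to Lisbon.")  # inferred from earlier chats

    for message in build_messages(memories, "Suggest a breakfast spot for tomorrow."):
        print(f"[{message['role']}] {message['content']}\n")
```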

In general, LLM providers are building greater personalization primarily in response to user demand. Conversational and informational AI systems are often more useful if a user can build upon earlier conversations, such as to explore an issue further or expand on a project (e.g., planning a trip). At the same time, providers also recognize that personalization can drive greater user engagement, longer session times, and higher conversion rates, potentially creating competitive advantages in an increasingly crowded market for AI tools. In some cases, the motivations are more broadly cultural or societal, with companies positioning their work as solving the loneliness epidemic or transforming the workforce.

Figure 2 – A screenshot of a conversation with Perplexity AI, which has a context window that allows it to recall information previously shared by the user to inform its answers to subsequent queries

In more specialized applications, customized approaches may be even more valuable. For instance, an AI tutor might remember a student’s learning interests and level, track progress on specific concepts, and adjust explanations accordingly. Similarly, writing and coding assistants might learn a writer’s or a developer’s preferred tone, vocabulary, frameworks, and conventions, and provide more relevant suggestions over time. For even more personal or sensitive contexts, such as mental health, some researchers argue that an AI system must have a deep understanding of its user, such as their present emotional state, in order to be effective.

The kinds of personal information (PI) that an AI system will process in order to personalize offerings to the user will depend on the use case (e.g., tailored product recommendations, travel itineraries that capture user wants, and learning experiences that are responsive to a user’s level of understanding and educational limits). Information could include names, home addresses and contact information, payment details, and user preferences. The cost of maintaining large context windows may inhibit the degree of personalization possible in today’s systems, as these context windows include all of the previous conversations containing details that systems may refer to in order to tailor outputs.

Despite the potential benefits, personalizing AI products and services involves collecting, storing, and processing user data—raising important privacy, transparency, and consent issues. Some of the data that a user provides to the chatbot or that the system infers from interactions with the user may reflect intimate details about their lives and even biases and stereotypes (e.g., the user is low-income because they live in a particular region). Depending on the system’s level of autonomy over data processing decisions, an AI system (e.g., the latest AI agents) that has received or observed data from users may be more likely to transmit that information to third parties in pursuit of accomplishing a task without the user’s permission. For example, contextual barriers to transmitting sensitive data to third parties may break down when a system includes data revealing a user’s health status in a communication with a work colleague.  

Examples of Concrete Risks Arising from AI Personalization:

Practitioners should also understand the concept of “personality” in AI systems, which has its own uses, benefits, and risks. 

2. Personality refers to an AI system’s human-like traits or character, including communication styles or even an entire backstory or persona.

In contrast to personalization, personality can be thought of as the AI system’s “character” or “voice,” which can encompass tone of voice (e.g., accepting, formal, enthusiastic, and questioning), communication style (e.g., concise or elaborate), and sometimes even an entire backstory or consistent persona.  

Long before LLMs, developers were interested in giving voice assistants, voice features, and chatbots carefully designed “personalities” in order to increase user engagement and trust. For example, consider the voice options for Apple’s Siri or Amazon’s Alexa, each of which was subject to extensive testing to determine user preferences. From the cockpits of WWII-era fighters to cars’ automated voice prompts, humans have long known that even the gender and tonality of a voice can have a powerful impact on behavior.

This trend is supercharged by rapid advances in LLM design, customization, and fine-tuning. Most general purpose AI system providers have now incorporated personality-like features, whether a specific voice mode, a consistent persona, or even a range of “AI companions.” Even if companion-like personalities are not directly promoted as features, users can build them using system prompts and customized design; a late-2023 feature of OpenAI enabled users to create custom GPTs.

Figure 3 – An excerpt from a conversation with “Monday” GPT, a custom version of ChatGPT, which embodies the snappy and moody temperament of someone who dreads the first day of the week

While LLM-based conversational AI systems remain nascent, they already vary tremendously in personality as a way of offering unique services (e.g., AI “therapists”), companionship, entertainment and gaming, or social skills development, or simply as a matter of offering choices based on a user’s personal preferences. In some cases, personality-based AIs imitate fictional characters, or even a real (living or deceased) natural person. Monetization opportunities and technological advances, such as larger context windows, will encourage and enable greater and more varied forms of user-AI companion interaction. Leading technology companies have indicated that AI companions are a core part of their business strategies over the next few years. 

Figure 4 – A screenshot of the homepage of Replika, a company that offers AI companion experiences that are “always ready to chat when you need an empathetic friend”

Organizations can design conversational AI systems to emulate human qualities and mannerisms to a greater or lesser degree: for example, laughing at a user’s jokes, using first-person pronouns or certain word choices, modulating the volume of a reply for effect, or saying “uhm” or “Mmmmm” in a way that communicates uncertainty. These qualities can be enhanced in systems that are designed to exhibit a more or less complete “identity,” such as a personal history, communication style, ethnic or cultural affinity, or consistent worldview. Many factors in an AI system’s development and deployment will impact its “personality,” including its pre-training and post-training datasets, fine-tuning and reinforcement learning, the specific design decisions of its developers, and the guardrails around the system in practice. 

The system’s traits and behaviors may flow from either a developer’s efforts at programming a system to adhere to a particular personality, but they may also stem from the expression of a user’s preferences or the result of observations about their behavior (e.g., the system dons an english accent for a user with an IP addresses corresponding with London). However, in the former case, this means that personality in chatbots and AI companions can exist independent from personalization.

Figure 5 – A screenshot from Anthropic Claude Opus 4’s system prompt, which aims to establish a consistent framework for how the system behaves in response to user queries, in this case by avoiding sycophantic tendencies
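To illustrate how a developer-defined personality can exist independently of personalization, the hypothetical sketch below expresses a fixed persona entirely through a system prompt applied identically to every user. The persona name, tone, and style rules are invented for illustration and are not drawn from Anthropic’s or any other vendor’s actual system prompts.

```python
# A hypothetical sketch of developer-defined "personality": a fixed persona expressed
# entirely through a system prompt and applied identically to every user, with no
# user data involved. Names, tone, and style rules are invented for illustration.

from dataclasses import dataclass


@dataclass(frozen=True)
class Persona:
    name: str
    tone: str                     # e.g., "direct, upbeat, and encouraging"
    style_rules: tuple[str, ...]  # fixed behavioral guardrails for every reply

    def to_system_prompt(self) -> str:
        rules = "\n".join(f"- {rule}" for rule in self.style_rules)
        return (
            f"You are {self.name}. Your tone is {self.tone}.\n"
            f"Follow these style rules in every reply:\n{rules}"
        )


# The same persona applies to every user, which is why personality can exist
# independently of personalization.
COACH = Persona(
    name="Ridge, a no-nonsense running coach",
    tone="direct, upbeat, and encouraging",
    style_rules=(
        "Keep answers under 120 words.",
        "Never open with flattery; answer the question directly.",
        "End with one concrete next step for the user.",
    ),
)

if __name__ == "__main__":
    print(COACH.to_system_prompt())
```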

Depending on the nature of a system’s human-like qualities, users have a strong tendency to anthropomorphize these systems, attributing to them human characteristics such as friendliness, compassion, and even love. Users who perceive human characteristics in AI systems may place greater trust in them and forge emotional bonds with the system. This kind of emotional connection may be especially impactful for vulnerable populations like children, the elderly, and those experiencing mental illness.

While personalities can lead to more engaging and immersive interactions between users and AI systems, the way a conversational AI system behaves with human users, including its mannerisms, style, and whether it embodies a more or less fully formed identity, can raise novel safety, ethical, and social risks, many of which intersect with evolving laws.

Examples of Concrete Risks Arising from AI Personality:

Personalization may exacerbate the risks of AI personality discussed above when an AI companion uses intimate details about a user to produce tailored outputs across interactions. Users are more likely to engage in delusional behavior when the system uses memories to give them the misimpression that it understands and cares for them. When memories are maintained across conversations, users are also more likely to retain their views rather than question them. At the same time, personality design features, such as signaling steadfast acceptance to users or expressing sadness when a user does not confide in the system after a certain period of time, may encourage this disclosure and enable organizations with access to the data to construct detailed portraits of users’ lives.

3. Going Forward

Personalization and personality features can drive AI experiences that are more useful, engaging, and immersive, but they can also pose a range of concrete risks to individuals (e.g., delusional behavior and the access to, use, and transfer of highly sensitive data and inferences). Practitioners should therefore be mindful of personalization’s and personality’s distinct uses, benefits, and risks to individuals during the development and deployment of AI systems.

Read the next blog in the series: The next blog post will explore how “personalization” and “personality” risks intersect with US law.

“Personality vs. Personalization” in AI Systems: An Introduction (Part 1)

Conversational AI technologies are hyper-personalizing. Across sectors, companies are focused on offering personalized experiences that are tailored to users’ preferences, behaviors, and virtual and physical environments. These range from general purpose LLMs, to the rapidly growing market for LLM-powered AI companions, educational aides, and corporate assistants.

There are clear trends within this overall focus: toward systems with greater personalization to individual users through the collection and inference of personal information, the expansion of short- and long-term “memory,” and greater access to systems; and toward systems that have more and more distinct “personalities.” Each of these trends is implicating US law in novel ways, pushing on the bounds of tort, product liability, consumer protection, and data protection laws. 

In this first post of a multi-part blog post series, we introduce the distinction between two trends: “personalization” and “personality.” Both have real-world uses, and subsequent blog posts will unpack them in greater detail, exploring concrete risks and potential risk management for each category. 

In general:

How are companies incorporating personalization and personality into their offerings?

Both concepts can be found among recent public releases by leading general purpose large language model (LLM) providers, which are incorporating elements of both into their offerings:

Anthropic
Personalization: “A larger context window allows the model to understand and respond to more complex and lengthy prompts, while a smaller context window may limit the model’s ability to handle longer prompts or maintain coherence over extended conversations.” (“Learn About Claude – Context Windows,” Accessed July 29, 2025, Anthropic)
Personality: “Claude never starts its response by saying a question or idea or observation was good, great, fascinating, profound, excellent, or any other positive adjective. It skips the flattery and responds directly.” (“Release Notes – System Prompts – Claude Opus 4,” May 22, 2025, Anthropic)

Google
Personalization: “[P]ersonalization allows Gemini to connect with your Google apps and services, starting with Search, to provide responses that are uniquely insightful and directly address your needs.” (“Gemini gets personal, with tailored help from your Google apps,” Mar. 13, 2025, Google)
Personality: “. . . Gemini Advanced subscribers will soon be able to create Gems — customized versions of Gemini. You can create any Gem you dream up: a gym buddy, sous chef, coding partner or creative writing guide. They’re easy to set up, too. Simply describe what you want your Gem to do and how you want it to respond — like ‘you’re my running coach, give me a daily running plan and be positive, upbeat and motivating.’ Gemini will take those instructions and, with one click, enhance them to create a Gem that meets your specific needs.” (“Get more done with Gemini: Try 1.5 Pro and more intelligent features,” May 14, 2024, Google)

Meta
Personalization: “You can tell Meta AI to remember certain things about you (like that you love to travel and learn new language), and it can also pick up important details based on context. For example, let’s say you’re hungry for breakfast and ask Meta AI for some ideas. It suggests an omelette or a fancy frittata, and you respond in the chat to let Meta AI know that you’re a vegan. Meta AI can remember that information and use it to inform future recipe recommendations.” (“Building Toward a Smarter, More Personalized Assistant,” Jan. 27, 2025, Meta)
Personality: “We’ve been creating AIs that have more personality, opinions, and interests, and are a bit more fun to interact with. Along with Meta AI, there are 28 more AIs that you can message on WhatsApp, Messenger, and Instagram. You can think of these AIs as a new cast of characters – all with unique backstories.” (“Introducing New AI Experiences Across Our Family of Apps and Devices,” Sept. 27, 2023, Meta)

Microsoft
Personalization: “Memory in Copilot is a new feature that allows Microsoft 365 Copilot to remember key facts about you—like your preferences, working style, and recurring topics—so it can personalize its responses and recommendations over time.” (“Introducing Copilot Memory: A More Productive and Personalized AI for the Way You Work,” July 14, 2025, Microsoft)
Personality: “Copilot Appearance infuses your voice chats with dynamic visuals. Now, Copilot can communicate with animated cues and expressions, making every voice conversation feel more vibrant and engaging.” (“Copilot Appearance,” Accessed Aug. 4, 2024, Microsoft)

OpenAI
Personalization: “In addition to the saved memories that were there before, ChatGPT now references your recent conversations to deliver responses that feel more relevant and tailored to you.” (“Memory FAQ,” June 4, 2025, OpenAI)
Personality: “Choose from nine lifelike output voices for ChatGPT, each with its own distinct tone and character: Arbor – Easygoing and versatile . . . Breeze – Animated and earnest . . . Cove – Composed and direct . . . Ember – Confident and optimistic . . . Juniper – Open and upbeat . . . Maple – Cheerful and candid . . . Sol – Savvy and relaxed . . . .” (“Voice Mode FAQ,” June 3, 2025, OpenAI)

There is significant overlap between these two concepts, and specific uses may employ both. We analyze them as distinct trends because they are potentially shaping the direction of law and policy in the US in different ways. As AI systems become more personalized, they are pushing the boundaries of privacy, data protection, and consumer protection law. Meanwhile, as AI systems become more human-like, companionate, and anthropomorphized, they push the boundaries of our social constructs and relationships. Both could have a powerful impact on our fundamental social and legal frameworks.

Read the next blog in the series: In our next blog post, we will explore the concepts of “personalization” and “personality” in more detail, including specific uses and the concrete risks these technologies may pose to individuals.

AI Regulation in Latin America: Overview and Emerging Trends in Key Proposals

The widespread adoption of artificial intelligence (AI) continues to impact societies and economies around the world. Policymakers worldwide have begun pushing for normative frameworks to regulate the design, deployment, and use of AI according to their specific ethical and legal standards. In Latin America, some countries have joined these efforts by introducing legislative proposals and establishing other AI governance frameworks, such as national strategies and regulatory guidance. 

This blog post provides an overview of AI bills in Latin America through a comparative analysis of proposals from six key jurisdictions: Argentina, Brazil, Mexico, Colombia, Chile, and Peru. Except for Peru, which has already approved the first AI law in the region and is set to approve secondary regulations, these countries have several legislative proposals with varying levels of maturity, some still in a nascent stage and others more advanced. Some of these countries have had simultaneous AI-related proposals under consideration in recent years; for example, Colombia and Mexico currently have, respectively, three and two AI bills under review1 and both countries have archived at least four AI bills from previous legislative periods. 

While it is unclear which bills may ultimately be enacted, this analysis provides an overview of the most relevant bills in the selected jurisdictions and identifies emerging trends and divergences in the region. Accordingly, the analysis is based on at least one active proposal from each country that either (i) targets AI regulation in general, rather than providing technology-specific or sector-specific regulation; (ii) has similar provisions and scope to those found in other, more advanced proposals in the region; or (iii) appears to have more political support or is considered the ‘official’ proposal by the current administration in that country – this is particularly the case for Colombia, for which the present analysis considered the proposal introduced by the Executive. Most of these proposals share a similar objective of regulating AI comprehensively through a risk-tiered approach. However, they differ in key elements, such as the design of institutional frameworks and the specific obligations for “AI operators.”

Overall, AI bills in Latin America: 

(i) have a broad scope and application, covering AI systems introduced or producing legal effects in national territory; 

(ii) rely on an ethical and principle-based framework, with a heavy focus on the protection of fundamental rights and using AI for economic and societal progress; 

(iii) have a strong preference for ex ante, risk-based regulation; 

(iv) introduce institutional multistakeholder frameworks for AI governance, either by creating new agencies or assigning responsibility to existing ones, and 

(v) have specific provisions for responsible innovation and controlled testing of AI technologies.  

1. Principles-Based and Human Rights-Centered Approaches are a Common Theme Across LatAm AI Bills

Most bills under consideration are heavily grounded in a similar set of guiding principles for the development and use of AI, focused on the protection of human dignity and autonomy, transparency and explainability, non-discrimination, safety, robustness, and accountability. Some proposals explicitly refer to the OECD’s AI Principles, focused on transparency, security, and responsibility of AI systems, and to UNESCO’s AI Ethics Recommendation, which emphasizes the need for a human-centered approach, promoting social justice and environmental sustainability in AI systems. 

All bills reviewed include privacy or data protection as a guiding principle for the development of AI, indicating that AI systems must be developed in line with existing privacy obligations and comply with regulations on data quality, confidentiality, security, and integrity. Notably, the Mexican bill and the Peruvian proposal – the draft implementing regulations for its framework AI law – also include privacy-by-design as a guiding principle for the design and development of AI.

A principles-based approach is flexible and leaves room for future regulations and standards as AI technologies evolve. Based on these guiding principles, most bills authorize secondary regulation by a competent authority to expand on the provisions related to AI user rights and obligations. 

In addition, most bills converge on key elements of the definitions of “AI system” and “AI operators.” Brazil’s and Chile’s proposals define an AI system similarly to the European Union’s Artificial Intelligence Act (EU AI Act), describing it as a ‘machine-based system’ with varying levels of autonomy that, with implicit or explicit objectives, can generate outputs such as recommendations, decisions, predictions, and content. Both countries’ bills also define AI operators as the “supplier, implementer, authorized representative, importer, and distributor” of an AI system. 

Other bills include a more general definition of AI as a ‘software’ or ‘scientific discipline’ that can perform operations similar to human intelligence, such as learning and logical reasoning – an approach reminiscent of the definition of AI in Japan’s new law. Peru’s regulation lacks a definition of AI operators but includes one for AI developers and implementers, and Colombia refers to “AI operators” in terms similar to those found in Brazil and Peru, though it also includes users within its definition of “AI operators”. 

A common feature of the bills covered is their grounding in the protection of fundamental rights, particularly the rights to human dignity and autonomy, the protection of personal data, privacy, non-discrimination, and access to information. Some bills go as far as to introduce a new set of AI-related rights to specifically protect users from harmful interactions and impacts created by AI systems. 

Brazil’s proposal offers a salient example for this structure, introducing a chapter for the rights of individuals and groups affected by AI systems, regardless of their risk classification. For AI systems in general, Brazil’s proposal includes: 

  1. The right to prior information about an interaction with an AI system, in an accessible, free-of-charge, and understandable format;
  2. The right to privacy and the protection of personal data, following the Lei Geral de Proteção de Dados Pessoais (LGPD) and relevant legislation;
  3. The right to human determination and participation in decisions made by AI systems, taking into account the context, level of risk, and state-of-the-art technological development;
  4. The right to non-discrimination and the correction of direct, indirect, unlawful, or abusive discriminatory bias.

Concerning “high-risk” systems or systems that produce “relevant legal effects” to individuals and groups, Brazil’s proposal includes:

  1. The right to an explanation of a decision, recommendation, or prediction made by an AI system. Subject to commercial and industrial secrecy, the required explanation must contain sufficient information on the operating characteristics; the degree and level of contribution of the AI to decision-making; the data processed and its source; the criteria for decision-making, considering the situation of the individual affected; the mechanisms through which the person can challenge the decision; and the level of human supervision;
  2. The right to challenge and review the decision, recommendation, or prediction made by the system;
  3. The right to human intervention or review of decisions, taking into account the context, risk, and state-of-the-art technological development. Human intervention will not be required if it is demonstrably impossible or involves a disproportionate effort, in which case the AI operator will implement effective alternative measures to ensure the re-examination of a contested decision.

Brazil’s proposal also includes an obligation that AI operators must provide “clear and accessible information” on the procedures to exercise user rights, and establishes that the defense of individual or collective interests may be brought before the competent authority or the courts. 

Mexico’s bill also introduces a chapter on “digital rights”. While these are not as detailed as the Brazilian proposal, the chapter includes innovative ideas, such as the “right to interact and communicate through AI systems”. The proposed set of rights also incorporates the right to access one’s data processed by AI; the right to be treated equally; and the right to data protection. The inclusion of these rights in the AI bill arguably does not make a significant difference, considering most of these rights are already explicitly recognized at a constitutional and legal level. Furthermore, the Mexican bill appears to introduce a catalog of rights and principles, but it lacks specific safeguards or mechanisms for their exercise in the context of AI. However, their inclusion signals the intention of policymakers to govern and regulate AI primarily through a human-rights-based perspective. 

2. Most Countries in LatAm Already Have Comprehensive Data Protection Laws, Which Include AI-relevant Provisions 

All countries analyzed have adopted comprehensive data protection laws applying to any processing of personal data regardless of the technology involved – some for decades, like Argentina, and some more recently, like Brazil and Chile. Except for Colombia, the data protection laws in these countries include an individual’s right not to be subject to decisions based solely on automated processing. Argentina, Peru, Mexico, and Chile recognize rights related to automated decision-making, prohibiting such activity without human intervention where it produces unwanted legal effects or significantly impacts individuals’ interests, rights, and freedoms, and is intended for profiling. These laws focus on the potential for profiling through automation: the data protection laws in Peru, Mexico, and Colombia include a specific right prohibiting such activity, while Argentina prohibits profiling by courts or administrative authorities. 

In contrast, Brazil’s LGPD recognizes the right to request the review of decisions made solely on automated processing that affect an individual’s interests, including profiling. While the intended purpose may be similar, the right under the Brazilian framework appears to be more limited, where individuals have the right to request review after the profiling occurs, but not necessarily to prevent or oppose this type of processing. Nonetheless, a significant aspect of the right proposed under Brazil’s AI bill is the explicit reference to human intervention in the review, an element absent from the same right under the LGPD. 

While AI can enable different and additional outcomes beyond profiling, it is noteworthy that most of the data protection laws in these countries already include some level of regulation of AI-powered automated decision-making (ADM) and profiling, whether or not the AI bills under consideration in the region are ultimately adopted. 

3. Risk-Based Regulation is Gaining Traction

All of the reviewed proposals adopt a risk-based approach to regulating AI, seemingly drawing at least some influence from the EU AI Act. These frameworks generally classify AI systems along a gradient of risk, from minimal to unacceptable, and introduce obligations proportional to the level of risk. While the specific definitions and regulatory mechanisms vary, the proposals articulate similar goals of ensuring safe, ethical, and trustworthy development and use of AI.

Brazil’s proposal is one of the most detailed in this respect, mandating a preliminary risk assessment for all systems before their introduction to the market, deployment, or use. The initial assessment must evaluate the system’s purpose, context, and operational impacts to determine its risk level. Similarly, Argentina’s bill requires a pre-market assessment to identify ‘potential biases, risks of discrimination, transparency, and other relevant factors to ensure compliance’. 

Notably, most proposals converge in the definition and classification of AI systems with “unacceptable” or “excessive” risk and prohibit their development, commercialization, or deployment. Except for Mexico, whose proposal does not contain an explicit ban, most of the bills expressly prohibit AI systems posing “unacceptable” (Argentina, Chile, Colombia, and Peru) or “excessive” (Brazil) risks. The proposals examined generally consider systems under this classification as being “incompatible with the exercise of fundamental rights” or those posing a “threat to the safety, life, and integrity” of individuals. 

For instance, Mexico’s bill defines AI systems with “unacceptable” risk as those that pose a “real, possible, and imminent threat” and involve “cognitive manipulation of behavior” or “classification of individuals based on their behavior and socioeconomic status, or personal characteristics”. Similarly, Colombia’s bill further defines these systems as those “capable of overriding human capacity, designed to control or suppress a person’s physical or mental will, or used to discriminate based on characteristics such as race, gender, orientation, language, political opinion, or disability”.

Brazil’s proposal also prohibits AI systems with “excessive” risk, and sets similar criteria to those found in other proposals in the region and the EU AI Act. In that sense, the proposal refers to AI systems posing “excessive” risk as any with the following purposes: 

Concerning the classification of “high-risk” systems, some AI bills define them based on certain domains or sectors, while others have a more general or principle-based approach. Generally, high-risk systems are left to be classified by a competent authority, allowing flexibility and discretion from regulators, but subject to specific criteria, such as evaluating a system’s likelihood and severity of creating adverse consequences. 

For instance, Brazil’s bill includes at least ten criteria2 for the classification of high-risk systems, such as whether the system unlawfully or abusively produces legal effects that impair access to public or essential services, whether it lacks transparency, explainability, auditability which would impair oversight, or whether it endangers human health –physical, mental or social, either individually or collectively. 

Meanwhile, the Peruvian draft regulations include a list of specific uses or sectors in which the deployment of any AI system is automatically considered high-risk, such as biometric identification and categorization; the security of critical national infrastructure; educational admissions and student evaluations; or employment decisions.3 Under the draft regulations, the classification of “high-risk” systems and their corresponding obligations may be evaluated and reassessed by the competent authority, consistent with the “risk-based security standards principle” under the country’s brief AI law, which mandates the adoption of ‘security safeguards in proportion to a system’s level of risk’. 

Colombia’s bill incorporates a mixed approach to high-risk classification. It includes general criteria, covering systems that may “significantly impact fundamental rights,” particularly the rights to privacy, freedom of expression, or access to public information, while also listing sensitive or domain-based applications, such as any system “enabling automated decision-making without human oversight that operate in the sectors of healthcare, justice, public security, or financial and social services”.

Mexico’s proposal defines “high-risk” systems as those with the potential to significantly affect public safety, human rights, legality, or legal certainty, but omits additional criteria for their classification. A striking feature of Mexico’s proposal is that it appears to restrict the use and deployment of these systems to public security entities and the Armed Forces (see Article 48 of the bill).

The Brazilian bill and Peruvian draft implementing regulations have chapters covering governance measures, describing specific obligations for developers, deployers, and distributors of all AI systems, regardless of their risk level. In addition, most bills include specific obligations for entities operating “high-risk” systems, such as performing comprehensive risk assessments and ethical evaluations; assuring data quality and bias detection; extensive documentation and record-keeping obligations; and guiding users on the intended use, accuracy, and robustness of these systems. Brazil’s bill indicates the competent authority will have discretion to determine cases under which some obligations may be relaxed or waived, according to the context in which the AI operator acts within the value chain of the system. 

Under Brazil’s AI bill, entities deploying high-risk systems must also submit an Algorithmic Impact Assessment (AIA) along with the preliminary assessment, which must be conducted following best practices. In certain regulated sectors, the Brazilian authority may require the AIA to be independently verified by an external auditor.

Chile’s proposal outlines mandatory requirements for high-risk systems, which must implement a risk management system grounded in a “continuous and iterative process”. This process must span the entire lifecycle of the system and be subject to periodic review, ensuring failures, malfunctions, and deviations from intended purpose are detected and minimized.

Argentina’s proposal requires all public and private entities that develop or use AI systems to register in a National Registry of Artificial Intelligence Systems, regardless of the level of risk. The registration must include detailed information on the system’s purpose, intended use, field of application, algorithmic structure, and implemented security safeguards. Similarly, Colombia’s bill includes an obligation to conduct fundamental rights impact assessments and create a national registry for high-risk AI systems.

Fewer proposals have specific, targeted provisions for “limited-risk” systems. For instance, Colombia’s bill defines these systems as those that, ‘without posing a significant threat to rights or safety, may have indirect effects or significant consequences on individuals’ personal or economic decisions’. Examples of these systems include AI commonly used for personal assistance, recommendation engines, synthetic content generation, or systems that simulate human interaction. Under Mexico’s proposal, “limited-risk” systems are those that ‘allow users to make informed decisions; require explicit user consent; and allow users to opt out under any circumstances’. 

In addition, the Colombian proposal explicitly indicates that AI operators employing these systems must meet transparency obligations, including disclosure of interaction with an AI tool; provide clear information about the system to users; and allow for opt-out or deactivation. Similarly, under the Chilean proposal, a transparency obligation for “limited-risk” AI systems includes informing users exposed to the system in a timely, clear, and intelligible manner that they are interacting with an AI, except in situations where this is “obvious” due to the circumstances and context of use. 

Finally, Colombia’s bill describes low-risk systems as those that pose minimal risk to the safety or rights of individuals and thus are subject to general ethical principles, transparency requirements, and best practices. Such systems may include those used for administrative or recreational purposes without ‘direct influence on personal or collective decisions’; systems used by educational institutions and public entities to facilitate activities which do not fall within the scope of any of the other risk levels; and systems used in video games, productivity tools, or simple task automation. 

4. Pluri-institutional and Multistakeholder Governance Frameworks are Preferred

A key element shared across the AI legislative proposals reviewed is the establishment of multistakeholder AI governance structures aimed at ensuring responsible oversight, regulatory clarity, and policy coordination. 

Notably, Brazil, Chile, and Colombia reflect a shared commitment to institutionalize AI governance frameworks that engage public authorities, sectoral regulators, academia, and civil society. However, they differ in the level of institutional development, the distribution of oversight functions, and the legal authority vested in enforcement bodies. All three countries envision coordination mechanisms that integrate diverse actors to promote coherence in national AI strategies. For instance, Brazil proposes the creation of the National Artificial Intelligence Regulation and Governance System (SIA). This system would be coordinated by the National Data Protection Authority (ANPD) and composed of sectoral regulators, a Permanent Council for AI Cooperation, and a Committee of AI Specialists. The SIA would be tasked with issuing binding rules on transparency obligations, defining general principles for AI development, and supporting sectoral bodies in developing industry-specific regulations.

Chile outlines a governance model centered around a proposed AI Technical Advisory Council, responsible for identifying “high-risk” and “limited-risk” AI systems and advising the Ministry of Science, Technology, Knowledge, and Innovation (MCTIC) on compliance obligations. While the Council’s role is essentially advisory, regulatory oversight and enforcement are delegated to the future Data Protection Authority (DPA), whose establishment is pending under Chile’s recently enacted personal data protection law. 

Colombia’s bill designates the Ministry of Science, Technology, and Innovation as the lead authority responsible for regulatory implementation and inter-institutional coordination. The Ministry is tasked with aligning the law’s execution with national AI strategies and developing supporting regulations. Additionally, the bill grants the Superintendency of Industry and Commerce (SIC) specific powers to inspect and enforce AI-related obligations, particularly concerning the processing of personal data, through audits, investigations, and preventive measures. 

5. Fostering Responsible Innovation Through Sandboxes, Innovation Ecosystems, and Support for SMEs 

Some proposals emphasize the dual objectives of regulatory oversight and the promotion of innovation. A notable commonality is their inclusion of controlled testing environments and regulatory sandboxes for AI systems aimed at facilitating innovation, promoting responsible experimentation, and supporting market access, particularly for startups and small-scale developers.

The bills generally empower competent and sectoral authorities to operate AI regulatory sandboxes, on their own initiative or through public-private partnerships. The sandboxes operate under pre-agreed testing plans; some offer temporary exemptions from administrative sanctions, while others maintain liability for harms resulting from sandbox-based experimentation.

Proposals in Brazil, Chile, Colombia, and Peru also include relevant provisions to support small-to-medium enterprises (SMEs) and mandate the operation of “innovation ecosystems.” For instance, Brazil’s bill requires sectoral authorities to follow differentiated regulatory criteria for AI systems developed by micro-enterprises, small businesses, and startups, including their market impact, user base, and sectoral relevance. 

Similarly, Chile complements its proposed sandbox regime with priority access for smaller companies, capacity-building initiatives, and their representation in the AI Technical Advisory Council. This inclusive approach aims to reduce entry barriers and ensure that small-scale innovators have both voice and access within the AI regulatory ecosystem.

Colombia’s bill includes public funding programs to support AI-related research, technological development, and innovation, with a focus on inclusion and accessibility. Although not explicitly targeted at SMEs, these incentives create indirect benefits for emerging actors and academia-led startups. 

Lastly, Peru promotes the development of open-source AI technologies to reduce systemic entry barriers and foster ecosystem efficiency. The regulation also mandates the promotion and financing of AI research and development through national programs, universities, and public administration programs that directly benefit small developers and innovators.

6. The Road Ahead for Responsible AI Governance in LatAm

Latin America is experiencing a wave of proposed legislation to govern AI. While some countries have several proposals under consideration, with some seemingly making more progress towards their adoption than others,4 a comparative review shows they share common elements and objectives. The proposed legislative landscape reveals a shared regional commitment to regulate AI in a manner that is ethical, human-centered, and aligned with fundamental rights. Most of the bills examined lay the groundwork for comprehensive AI governance frameworks based on principles and new AI-related rights. 

In addition, all proposals classify AI systems based on their level of risk – with all countries proposing a scaled risk system that ranges from minimal or low risk up to systems posing “unacceptable” or “excessive” risk – and introduce concrete mechanisms and obligations proportional to that classification, with varying but similar requirements to perform risk and impact assessments and to meet transparency obligations. Most bills also designate an enforcement authority to act in coordination with sectoral agencies to issue further regulations, especially to extend the criteria for, or designate the types of, systems considered “high-risk”.

Alongside this normative and institutional framework, most AI bills in Latin America also reflect a growing recognition of the need to balance regulatory oversight with flexibility, evidenced by the adoption of controlled testing environments and tailored provisions for startups and SMEs.

Except for Brazil and Peru, much of the legislative activity in the countries covered remains at an early stage. However, the AI bills reviewed offer insight into how key jurisdictions in the region are approaching AI governance, framing it as both a regulatory challenge and an opportunity for inclusive digital development. As these initiatives evolve, key questions around institutional capacity, enforcement, and stakeholder participation will shape how effectively Latin America can build trusted and responsible AI frameworks.

  1. In Mexico, two proposals concerning AI regulation have been introduced, one in the Senate and another in the Chamber of Deputies. Both were put forth by representatives of MORENA, the political party holding a supermajority in Congress. Additionally, the Senate is considering five proposals to amend the Federal Constitution, aiming to grant Congress the authority to legislate on AI matters. Similarly, in Colombia, there are two proposals under the Senate’s consideration and one recently introduced in the Chamber of Deputies. ↩︎
  2. 1) The system unlawfully or abusively produces legal effects that impair access to public or essential services; 2) It has a high potential for material or moral harm or for unlawful discriminatory bias; 3) It significantly affects individuals from vulnerable groups; 4) The harm it causes is difficult to reverse; 5) There is a history of damage linked to the system or its context of use; 6) The system lacks transparency, explainability, or auditability, impairing oversight; 7) It poses systemic risks, such as to cybersecurity or safety of vulnerable groups; 8) It presents elevated risks despite mitigation measures, especially in light of anticipated benefits; 9) It endangers integral human health — physical, mental, or social — either individually or collectively; 10) It may negatively affect the development or integrity of children and adolescents. ↩︎
  3. Other uses or sectors included in the high-risk category are: access to and prioritization within social programs and emergency services; credit scoring; judicial assistance; health diagnostics and patient care; criminal profiling, victimization risk analysis, emotional state detection, evidence verification, or criminal investigation by law enforcement. ↩︎
  4. Proposals from Brazil and Chile, for example, have gone through more extensive debate and are considered the most advanced in the region. See El País, “América Latina ante la IA: ¿regulación o dependencia tecnológica?”, March 2025. ↩︎

Highlights from FPF’s July 2025 Technologist Roundtable: AI Unlearning and Technical Guardrails

On July 17, 2025, the Future of Privacy Forum (FPF) hosted the second in a series of Technologist Roundtables with the goal of convening an open dialogue on complex technical questions that impact law and policy, and assisting global data protection and privacy policymakers in understanding the relevant technical basics of large language models (LLMs). In this event, we invited a range of academic technical experts and data protection regulators from around the world to explore machine unlearning and technical guardrails. 

We were joined by the following experts:

In emerging literature, the topic of “machine unlearning” and its related technical guardrails concerns the extent to which information can be “removed” or “forgotten” from an LLM or similar generative AI model or from an overall generative AI system. The topic is relevant to a range of policy goals, including complying with individual data subject deletion requests, respecting copyrighted information, building safety and related content protections, and overall performance. Depending on the goal at hand, different technical guardrails and means of operationalizing “unlearning” have different levels of effectiveness.
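
To make the distinction between “exact” and “approximate” unlearning more concrete, below is a minimal sketch of one commonly discussed approximate approach: a pass of gradient ascent over a “forget set.” This is an illustration only, not a method endorsed at the Roundtable; it assumes a small PyTorch classifier, and the names model, forget_loader, and the learning rate are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def approximate_unlearning_step(model, forget_loader, lr=1e-5):
    """One pass of gradient *ascent* over a forget set of examples.

    Approximate unlearning nudges an already-trained model away from
    specific examples; exact unlearning would instead retrain from
    scratch on only the retained data.
    """
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for inputs, targets in forget_loader:
        optimizer.zero_grad()
        loss = F.cross_entropy(model(inputs), targets)
        # Negating the loss turns the usual descent step into ascent,
        # degrading the model's recall of the forgotten examples.
        (-loss).backward()
        optimizer.step()
    return model
```

In practice, approaches like this trade completeness for cost: they avoid full retraining but offer weaker guarantees that the targeted information is actually gone, which is part of why the exact vs. approximate distinction matters for policy.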

In this post-event summary, we highlight the key takeaways from three parts of the Roundtable on July 17: 

  1. Machine Unlearning: Overview and Policy Considerations 
  2. Core “Unlearning” Methods: Exact vs. Approximate
  3. Technical Guardrails and Risk Mitigation

If you have any questions, comments, or wish to discuss any of the topics related to the Roundtable and Post-Event Summary, please do not hesitate to reach out to FPF’s Center for AI at [email protected].

Take a look at last year’s Technologist Roundtable: Key Issues in AI and Data Protection Post-Event Summary and Takeaways.

A Price to Pay: U.S. Lawmaker Efforts to Regulate Algorithmic and Data-Driven Pricing

“Algorithmic pricing,” “surveillance pricing,” “dynamic pricing”: in states across the U.S., lawmakers are introducing legislation to regulate a range of practices that use large amounts of data and algorithms to routinely inform decisions about the prices and products offered to consumers. These bills—targeting what this analysis collectively calls “data-driven pricing”—follow the Federal Trade Commission (FTC)’s 2024 announcement that it was conducting a 6(b) investigation to study how firms are engaging in so-called “surveillance pricing,” and the release of preliminary insights from this study in early 2025. With new FTC leadership signaling that continuing the study is not a priority, state lawmakers have stepped in to scrutinize certain pricing schemes involving algorithms and personal data.

The practice of vendors changing their prices based on data about consumers and market conditions is by no means a new phenomenon. In fact, “price discrimination”—the term in economics literature for charging different buyers different prices for largely the same product—has been documented for at least a century, and has likely played a role since the earliest forms of commerce.1 What is unique, however, about more recent forms of data-driven pricing is the granularity of data available, the ability to more easily target individual consumers at scale, and the speed at which prices can be changed. This ecosystem is enabled by the development of tools for collecting large amounts of data, algorithms that analyze this data, and digital and physical infrastructure for easily adjusting prices. 
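
To illustrate how these pieces fit together, the toy sketch below combines a market-level demand signal with person-level data to produce an individualized price. Every input, field, and multiplier here is invented for explanatory purposes and does not describe any actual retailer’s practice or any specific bill’s target.

```python
# Toy illustration of individualized, data-driven pricing.
# All signals and weights are invented; real systems are far more complex.
from dataclasses import dataclass

@dataclass
class ShopperProfile:
    loyalty_member: bool          # first-party account data
    recent_views: int             # behavioral signal, e.g. product page visits
    inferred_willingness: float   # 0.0-1.0 score inferred from personal data

def quote_price(base_price: float, demand_index: float,
                shopper: ShopperProfile) -> float:
    """Return an individualized price from market- and person-level signals."""
    price = base_price * (1 + 0.2 * demand_index)        # dynamic (market) component
    if shopper.recent_views > 3:
        price *= 1.05                                     # personalized markup
    if shopper.loyalty_member:
        price *= 0.95                                     # loyalty discount
    price *= 1 + 0.1 * shopper.inferred_willingness       # "surveillance"-style signal
    return round(price, 2)

print(quote_price(100.0, demand_index=0.5,
                  shopper=ShopperProfile(True, 5, 0.8)))  # prints 118.5
```

Even a rule this simple shows why definitions matter: the first adjustment is classic dynamic pricing, while the later ones depend on data about a specific consumer, which is the element most of the bills discussed below single out.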

Key takeaways

Trends in data-driven pricing legislation

As discussed in the FPF issue brief Data-Driven Pricing: Key Technologies, Business Practices, and Policy Implications, policymakers are generally concerned with a few particular aspects of data-driven pricing strategies: the potential for unfair discrimination, a lack of transparency around pricing practices, the processing and sharing of personal data, and possible anti-competitive behavior or other market distortions. While these policy issues may also be the domain of existing consumer protection, competition, and civil rights laws, lawmakers have made a concerted effort to proactively address them explicitly with new legislation. Crucially, these bills implicate three elements of data-driven pricing practices, raising a series of distinct but related questions for each:

These elements generally correspond to the different terms used in legislation to refer to data-driven pricing practices. For example, a number of bills use terms such as “algorithmic pricing,” including New York S 3008, an enacted law requiring a disclosure when “personalized algorithmic pricing” is used to set prices,2 and California SB 384, which would prohibit the use of “price-setting algorithms” under certain market conditions. A number of other bills use terms like “surveillance pricing,” such as California AB 446, which would prohibit setting prices based on personal information obtained through “electronic surveillance technology,” and Colorado HB 25-1264, which would make it an unfair trade practice to use “surveillance data” to set individualized prices or workers’ wages. Finally, some bills seek to place limits on the use of “dynamic pricing” in certain circumstances, including Maine LD 1597 and New York A 3437, which would prohibit the practice in the context of groceries and other food establishments. Each of these framings, while distinct, often covers similar kinds of practices.

Given that certain purchases such as housing and food are necessary for survival, the use of data-driven pricing strategies in these contexts is of particular concern to lawmakers. Many states already have laws banning or restricting price gouging, which typically focus on products that are necessities, and specifically during emergencies or disasters. Data-driven pricing bills, on the other hand, are less prescriptive in regards to the amount sellers are allowed to change prices, but apply beyond just emergency situations. While many apply uniformly across the economy, some are focused on particular sectors, including:

In addition to bills focused on data-driven pricing, legislation regulating artificial intelligence (AI) and automated decision-making more generally often applies specifically to “high-risk AI” and AI used to make “consequential decisions,” including decisions about educational opportunities, employment, finance or lending, healthcare, housing, insurance, and other critical services. The use of a pricing algorithm in one of these contexts may therefore trigger the requirements of certain AI regulations. For example, the Colorado AI Act defines “consequential decision” to mean “a decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of…” the aforementioned categories.

Because certain data-driven pricing strategies are widespread and appeal to many consumers, there is some concern—particularly among retailers and advertisers—that overly-broad restrictions could actually end up harming consumers and businesses alike. For example, widely popular and commonplace happy hours could, under certain definitions, be considered “dynamic pricing.” As such, data-driven pricing legislation often contains exemptions, which generally fall into a few categories:

Key remaining questions

A number of policy and legal issues will be important to keep an eye on as policymakers continue to learn about the range of existing data-driven pricing strategies and consider potential regulatory approaches.

The importance of definitions

As policymakers attempt to articulate the contours of what they consider to be fair pricing strategies, the definitions they adopt play a major role in the scope of practices that are allowed. Crafting rules that prohibit certain undesirable practices without eliminating others that consumers and businesses rely on and enjoy is challenging, requiring policymakers to identify what specific acts or market conditions they’re trying to prevent. For example, Maine LD 1597, which is intended to stop the use of most dynamic pricing by food establishments, includes an incredibly broad definition of “dynamic pricing”:

“Dynamic pricing” means the practice of causing a price for a good or a product to fluctuate based upon demand, the weather, consumer data or other similar factors including an artificial intelligence-enabled pricing adjustment.

While the bill would exempt discounts, time-limited special prices such as happy hours, and goods that “traditionally [have] been priced based upon market conditions, such as seafood,” prohibiting price changes based on “demand” could undermine a fundamental principle of the market economy. Even with exceptions that carve out sales and other discounts—and not all bills contain such exemptions—legislation might still inadvertently capture other accepted practices such as specials aligned with seasonal changes, bulk purchase discounts, deals on goods nearing expiration, or promotions to clear inventory.

Lawmakers must also consider how any new definitions interact with definitions in existing law. For example, an early version of California AB 446, which would prohibit “surveillance pricing” based on personally identifiable information, included “deidentified or aggregated consumer information” within the definition of “personally identifiable information.” However, deidentified and aggregated information is not considered “personal information” as defined by the California Consumer Privacy Act (CCPA). In later versions, the bill authors aligned the definition in AB 446 with the text of the CCPA.

The role of AI

In line with policymakers’ increased focus on AI, and a shift towards industry use of algorithms in setting prices, a significant amount of data-driven pricing legislation applies explicitly to algorithmic pricing. Some bills, such as California SB 52 and California SB 384, are intended to address potential algorithmically-driven anticompetitive practices, while many others are geared towards protecting consumers from discriminatory practices. Though consumer protection may be the goal, some bills focus not on preventing specific impacts, but on eliminating the use of AI in pricing altogether, at least in real time. For example, Minnesota HF 2452 / SF 3098 states:

A person is prohibited from using artificial intelligence to adjust, fix, or control product prices in real time based on market demands, competitor prices, inventory levels, customer behavior, or other factors a person may use to determine or set prices for a product.

This bill would prohibit all use of AI for price setting, even when based on typical product pricing data and applied equally to all consumers. Such a ban would have a significant impact on the practice of surge pricing, and any sector that is highly reactive to market fluctuations. On the other hand, other bills focus on the use of personal data—including sensitive data like biometrics—to set prices that are personalized to each consumer. For example, Colorado HB 25-1264 would prohibit the practice of “surveillance-based price discrimination,” defined as:

Using an automated decision system to inform individualized prices based on surveillance data regarding a consumer.

“Surveillance data” means data obtained through observation, inference, or surveillance of a consumer or worker that is related to personal characteristics, behaviors, or biometrics of the individual or a group, band, class, or tier in which the individual belongs.

These bills are concerned not necessarily with the use of AI in pricing per se, but how the use of AI in conjunction with personal data could have a detrimental effect on individual consumers. 

The impact on consumers

While data-driven pricing legislation is generally intended to protect consumers, some approaches may unintentionally block practices that consumers enjoy and rely on. There is a large delta between common and beneficial price-adjusting practices like sales on one hand, and exploitative practices like price gouging on the other, and writing a law that draws the proper cut-off point between the two is difficult. For example, Illinois SB 2255 contains the following prohibition:

A person shall not use surveillance data as part of an automated decision system to inform the individualized price assessed to a consumer for goods or services.

The bill would exempt persons assessing price based on the cost of providing a good or service, insurers in compliance with state law, and credit-extending entities in compliance with the Fair Credit Reporting Act. However, it would not exempt bona fide loyalty programs, a popular consumer benefit that is excluded from other similar legislation (such as the enacted New York S 3008, which carves out deals provided under certain “subscription-based agreements”). While lawmakers likely intended just to prevent exploitative pricing schemes that disempower consumers, they may inadvertently restrict some favorable practices as well. As a result, if statutes aren’t clear, some businesses may forgo offering discounts for fear of noncompliance.

Legal challenges to legislation

When New York S 3008 went into effect on July 8, 2025, the National Retail Federation filed a lawsuit to block the law, alleging that it would violate the First Amendment by including the following requirement, amounting to compelled speech:

Any entity that sets the price of a specific good or service using personalized algorithmic pricing … shall include with such statement, display, image, offer or announcement, a clear and conspicuous disclosure that states: “THIS PRICE WAS SET BY AN ALGORITHM USING YOUR PERSONAL DATA”.

The New York Office of the Attorney General, in response, said it would pause enforcement until 30 days after the judge in the case makes a decision on whether to grant a preliminary injunction. Other data-driven pricing bills would not face this challenge, as they don’t contain specific language requirements, instead focusing on prohibiting certain practices.

Beyond legislation

Regulators have also been scrutinizing certain data-driven pricing strategies, particularly for potentially anticompetitive conduct. While the FTC has seemingly deprioritized the 6(b) study of “surveillance pricing” it announced in July 2024—canceling public comments after releasing preliminary insights from the report in January 2025—it could still take action on algorithmic pricing in the future under its competition authority. In fact, the FTC’s new leadership has not retracted a joint statement the Commission made in 2024 along with the Department of Justice (DOJ), European Commission, and UK Competition and Markets Authority, which affirmed “a commitment to protecting competition across the artificial intelligence (AI) ecosystem.” The FTC, along with 17 state attorneys general (AGs), also still has a pending lawsuit against Amazon, accusing the company of using algorithms to deter other sellers from offering lower prices.

Even if the FTC refrains from regulating data-driven pricing, other regulators may be interested in addressing the issue. In particular, in 2024 the DOJ, alongside eight state AGs, used its antitrust authority to sue the property management software company RealPage for allegedly using an algorithmic pricing model and nonpublic housing rental data to collude with other landlords. Anticompetitive use of algorithmic pricing tools is also a DOJ priority under new leadership, with the agency filing a statement of interest regarding the “application of the antitrust laws to claims alleging algorithmic collusion and information exchange” in a March 2025 case, and the agency’s Antitrust Division head promising an increase in probes of algorithmic pricing. Additionally, in response to reports claiming that Delta Air Lines planned to institute algorithmic pricing for tickets—and a letter to the company from Senators Gallego (D-AZ), Blumenthal (D-CT), and Warner (D-VA)—the Department of Transportation Secretary signaled that the agency would investigate such practices.

Conclusion

Policymakers are turning their attention towards certain data-driven pricing strategies, concerned about the impact—on consumers and markets—of practices that use large amounts of data and algorithms to set and adjust prices. Focused on practices such as “algorithmic,” “surveillance,” and “dynamic” pricing, these bills generally address pricing that involves the use of personal data, the deployment of AI, and/or frequent changes, particularly in critical sectors like food and housing. As access to consumer data grows, and algorithms are implemented in more domains, industry may increasingly rely on data-driven pricing tools to set prices. As such, legislators and regulators will likely continue to scrutinize their potential harmful impacts.

  1. While some forms of price discrimination are illegal, many are not. The term “discrimination” as used in this context is distinct from how it’s used in the context of civil rights. ↩︎
  2. The New York Attorney General’s office said, as of July 14, 2025, that it would pause enforcement of the law while a federal judge decides on a motion for preliminary injunction, following a lawsuit brought by the National Retail Federation. ↩︎

The “Neural Data” Goldilocks Problem: Defining “Neural Data” in U.S. State Privacy Laws

Co-authored by Chris Victory, FPF Intern

As of halfway through 2025, four U.S. states have enacted laws regarding “neural data” or “neurotechnology data.” These laws, all of which amend existing state privacy laws, signify growing lawmaker interest in regulating what’s being considered a distinct, particularly sensitive kind of data: information about people’s thoughts, feelings, and mental activity. Created in response to the burgeoning neurotechnology industry, neural data laws in the U.S. seek to extend existing protections for the most sensitive of personal data to the newly-conceived legal category of “neural data.”

Each of these laws defines “neural data” in related but distinct ways, raising a number of important questions: just how broad should this new data type be? How can lawmakers draw clear boundaries for a data type that, in theory, could apply to anything that reveals an individual’s mental activity? Is mental privacy actually separate from all other kinds of privacy? This blog post explores how Montana, California, Connecticut, and Colorado define “neural data,” how these varying definitions might apply to real-world scenarios, and some challenges with regulating at the level of neural data.

“Neural” and “neurotechnology” data definitions vary by state.

While just four states (Montana, California, Connecticut, and Colorado) currently have neural data laws on the books, legislation has rapidly expanded over the past couple years. Following the emergence of sophisticated deep learning models and other AI systems, which gave a significant boost to the neurotechnology industry, media and policymaker attention turned to the nascent technology’s privacy, safety, and other ethical considerations. Proposed regulation—both in the U.S. and globally—varies in its approach to neural data, with some strategies creating new “neurorights” or mandating entities minimize the neural data they collect or process.

In the U.S., however, laws have coalesced around an approach in which covered entities must treat neural data as “sensitive data” or other data with heightened protections under existing privacy law, above and beyond the protections granted by virtue of being personal information. The requirements that attach to neural data by virtue of being “sensitive” vary by underlying statute, as illustrated in the accompanying comparison chart. In fact, even the way that “neural data” is defined varies by law, placing different data types within scope depending on the state. The following definitions are organized roughly from the broadest conception of neural data to the narrowest.

  1. California

Generally speaking, the broadest conception of “neural data” in the U.S. laws is California SB 1223, which amends the state’s existing consumer privacy law, the California Consumer Privacy Act (CCPA), to clarify that “sensitive personal information” includes “neural data.” The law, which went into effect January 1, 2025, defines “neural data” as:

Information that is generated by measuring the activity of a consumer’s central or peripheral nervous system, and that is not inferred from nonneural information.

Notably, however, the CCPA as amended by the California Privacy Rights Act (CPRA) treats “sensitive personal information” no differently than personal information except when it’s used for “the purpose of inferring characteristics about a consumer”—in which case it is subject to heightened protections. As such, the stricter standard for sensitive information will only apply when neural data is collected or processed for making inferences.

  2. Montana

Montana SB 163 takes a slightly different approach than the other laws in two ways: one, it applies to “neurotechnology data,” an even broader category of data that includes the measurement of neural activity; and two, it amends Montana’s Genetic Information Privacy Act (GIPA) rather than a comprehensive consumer privacy law. The law, which goes into effect October 1, 2025, will define “neurotechnology data” as:

Information that is captured by neurotechnologies, is generated by measuring the activity of an individual’s central or peripheral nervous systems, or is data associated with neural activity, which means the activity of neurons or glial cells in the central or peripheral nervous system, and that is not nonneural information. The term does not include nonneural information, which means information about the downstream physical effects of neural activity, including but not limited to pupil dilation, motor activity, and breathing rate.

The law will define “neurotechnology” as:

Devices capable of recording, interpreting, or altering the response of an individual’s central or peripheral nervous system to its internal or external environment and includes mental augmentation, which means improving human cognition and behavior through direct recording or manipulation of neural activity by neurotechnology.

However, the law’s affirmative requirements will only apply to “entities” handling genetic or neurotechnology data, with “entities” defined narrowly—as in the original GIPA—as:

…a partnership, corporation, association, or public or private organization of any character that: (a) offers consumer genetic testing products or services directly to a consumer; or (b) collects, uses, or analyzes genetic data.

While lawmakers may not have intended to limit the law’s application to consumer genetic testing companies, and may have inadvertently carried over GIPA’s definition of “entities,” the text of the statute may significantly narrow the range of companies subject to it.

  3. Connecticut

Similarly, Connecticut SB 1295, most of which goes into effect July 1, 2026, will amend the Connecticut Data Privacy Act to clarify that “sensitive data” includes “neural data,” defined as:

Any information that is generated by measuring the activity of an individual’s central nervous system.

In contrast to other definitions, the Connecticut law will apply only to central nervous system activity, rather than central and peripheral nervous system activity. However, it also does not explicitly exempt inferred data or nonneural information as California and Montana do, respectively.

  4. Colorado

Colorado HB 24-1058, which went into effect August 7, 2024, amends the Colorado Privacy Act to clarify that “sensitive data” includes “biological data,” which itself includes “neural data.” “Biological data” is defined as:

Data generated by the technological processing, measurement, or analysis of an individual’s biological, genetic, biochemical, physiological, or neural properties, compositions, or activities or of an individual’s body or bodily functions, which data is used or intended to be used, singly or in combination with other personal data, for identification purposes.

The law defines “neural data” as:

Information that is generated by the measurement of the activity of an individual’s central or peripheral nervous systems and that can be processed by or with the assistance of a device.

Notably, “biological data” only applies to such data when used or intended to be used for identification, significantly narrowing the potential scope. 

[Chart: comparison of how the four state laws define “neural data” and related terms]

* While only Montana explicitly covers data captured by neurotechnologies, and excludes nonneural information, the other laws may implicitly do so as well.

The Goldilocks Problem: The nature of “neural data” makes it challenging to get the definition just right.

Given that each state law defines neural data differently, there may be significant variance in what kinds of data are covered. Generally, these differences cut across three elements:

Central vs. peripheral nervous system data

The nervous system comprises the central nervous system (CNS) and the peripheral nervous system (PNS). The CNS—made up of the brain and spinal cord—carries out higher-level functions including thinking, emotions, and coordinating motor activity. The PNS—the network of nerves that connects the CNS to the rest of the body—receives signals from the CNS and transmits this information to the rest of the body instructing it on how to function, and transfers sensory information back to the CNS in a cyclical process. Some of this activity is conscious and deliberate on the part of the individual (voluntary nervous system), while some involves unconscious, involuntary functions like digestion and heart rate (autonomic nervous system).

What this means practically is that the nervous system is involved in just about every human bodily function. Some of this data is undoubtedly particularly sensitive, as it can reveal information about an individual’s health, sexuality, emotions, identity, and more. It may also provide insight into an individual’s “thoughts,” either by accessing brain activity directly or by measuring other bodily data that in effect reveals what the individual is thinking (e.g., increased heart and breathing rate at a particular time can reveal stress or arousal). It also means that an incredibly broad swath of data could be considered neural data: the movement of a computer mouse or use of a smartwatch may technically constitute, under certain definitions, neural data.

As such, there is a significant difference between laws that cover both CNS and PNS data, and those that only cover CNS data. Connecticut SB 1295 is the lone current law that applies solely to CNS data, which narrows its scope considerably and likely limits it to data collected from tools such as brain-computer interfaces (BCIs), electroencephalograms (EEGs), and other similar devices. However, other data types that would be excluded by virtue of not relating to the CNS could, in theory, provide the same or similar information. For example, signals from the PNS—such as pupillometry (pupil dilation), respiration (breathing patterns), and heart rate—could also indicate the nervous system’s response to stimuli, despite not technically being a direct measurement of the CNS.

Treatment of inferred and nonneural data

Defining “neural data” in a way that covers particular data of concern without being overinclusive is challenging, and lawmakers have added carveouts in an attempt to make their legislation more workable. However, focusing regulation on the nervous system in the first place raises a few potential issues. First, it reinforces neuroessentialism, the idea that the nervous system and neural data are unique and separate from other types of sensitive data, as well as neurohype, the inflation or exaggeration of neurotechnologies’ capabilities. There is not currently—and may never be, as such—a technology for “reading a person’s mind.” What may be possible are tools that measure neural activity to provide clues about what an individual might be thinking or feeling, much the same as measuring their other bodily functions, or even just gaining access to their browsing history. This doesn’t make the data less sensitive, but it challenges the idea that “neural data” itself—whether referring to the central, peripheral, or both nervous systems—is the most appropriate level for regulation.

This leaves lawmakers with one of two problems. On one hand, defining “neural data” too broadly could create a scenario in which all bodily data is covered. Typing on a keyboard involves neural data, as the central nervous system sends signals through the peripheral nervous system to the hands in order to type. Yet, regulating all data related to typing as sensitive neural data could be unworkable. On the other hand, defining “neural data” too narrowly could result in regulations that don’t actually provide the protections that lawmakers are seeking. For example, if legislation only applies to neural data that is used for identification purposes, it may cover very few situations, as this is not a way that neural data is typically used. Similarly, only covering CNS data, rather than both CNS and PNS data, may be difficult to implement because it’s not clear that it’s possible to truly separate the data from these two systems, as they are interlinked.

One way lawmakers seek to get around the first problem is by narrowing the scope, clarifying that the legislation doesn’t apply to “nonneural information” such as downstream physical bodily effects, or to neural data that is “inferred from nonneural information.” For example, Montana SB 163 excludes “nonneural information” such as pupil dilation, motor activity, and breathing rate. However, if the concern is that certain information is particularly sensitive and should be protected (e.g., data potentially revealing an individual’s thoughts or feelings), then scoping out this information just because it’s obtained in a different way doesn’t address the underlying issue. For example, if data about an individual’s heart rate, breathing, perspiration, and speech pattern is used to infer their emotional state, this is functionally no different—and potentially even more revealing—than data collected “directly” from the nervous system. Similarly, California SB 1223 carves out data that is “inferred from nonneural information,” leaving open the possibility for the same kind of information to be inferred through other bodily data.

Identification

Another way lawmakers, specifically in Colorado, have sought to avoid an unmanageably broad conception of neural data is to only cover such data when used for identification. Colorado HB 24-1058, which regulates “biological data”—of which “neural data” is one component—only applies when the data “is used or intended to be used, singly or in combination with other personal data, for identification purposes.” Given that neural data, at least currently, is not used for identification, it’s not clear that such a definition would cover many, if any, instances of consumer neural data.

Conclusion

Each of the four U.S. states currently regulating “neural data” defines the term differently, varying around elements such as the treatment of central and peripheral nervous system data, exclusions for inferred or nonneural data, and the use of neural data for identification. As a result, the scope of data covered under each law differs depending on how “neural data” is defined. At the same time, attempting to define “neural data” reveals more fundamental challenges with regulating at the level of nervous system activity. The nervous system is involved in nearly all bodily functions, from innocuous movements to sensitive activities. Legislating around all nervous system activity may render physical technologies unworkable, while certain carveouts may, conversely, scope out information that lawmakers want to protect. While many are concerned about technologies that can “read minds,” such a tool does not currently exist per se, and in many cases nonneural data can reveal the same information. As such, focusing too narrowly on “thoughts” or “brain activity” could exclude some of the most sensitive and intimate personal characteristics that people want to protect. In finding the right balance, lawmakers should be clear about which potential uses or outcomes they want to focus on.

FPF at PDP Week 2025: Generative AI, Digital Trust, and the Future of Cross-Border Data Transfers in APAC

Authors: Darren Ang Wei Cheng and James Jerin Akash (FPF APAC Interns)

From July 7 to 10, 2025, the Future of Privacy Forum (FPF)’s Asia-Pacific (APAC) office was actively engaged in Singapore’s Personal Data Protection Week 2025 (PDP Week) – a week of events hosted by the Personal Data Protection Commission of Singapore (PDPC) at the Marina Bay Sands Expo and Convention Centre in Singapore. 

Alongside the PDPC’s events, PDP Week also included a two-day industry conference organized by the International Association of Privacy Professionals (IAPP) – the IAPP Asia Privacy Forum and AI Governance Global.

This blog post presents key takeaways from the wide range of events and engagements that FPF APAC led and participated in throughout the week. Key themes that emerged from the week’s discussions included:

In the paragraphs below, we elaborate on some of these themes, as well as other interesting observations that came up over the course of FPF’s involvement in PDP Week.

1. FPF’s and IMDA’s co-hosted workshop shared practical perspectives for companies navigating the waters of generative AI governance. 

On Monday, July 7, 2025, FPF joined the Infocomm Media Development Authority of Singapore (IMDA) in hosting a workshop for Singapore’s data protection community, titled “AI, AI, Captain!: Steering your organisation in the waters of Gen AI by IMDA and FPF.” The highly-anticipated event provided participants with practical knowledge about AI governance at the organizational level.  

The event was hosted by Josh Lee Kok Thong, Managing Director of FPF APAC, and was attended by around 200 representatives from industry, including data protection officers (DPOs) and chief technology officers (CTOs). FPF’s segment of the workshop had two parts: an informational segment featuring presentations from FPF and IMDA, followed by a multi-stakeholder, practice-focused panel discussion.

FPF at “AI, AI, Captain! – Steering your organisation in the waters of Gen AI by IMDA and FPF”, July 8, 2025.

1.1 AI governance in APAC is neither unguided nor ungoverned, as policymakers are actively working to develop both soft and hard regulations for AI and to clarify how existing data protection laws apply to its use.

Josh presented on global AI governance, highlighting the rapid legislative changes in the APAC region over the past six months, and comparing developments in South Korea, Japan, and Vietnam with those in the EU, US, and Latin America. He then discussed how data protection laws – especially provisions on consent, data subject rights, and breach management – impact AI governance and how data protection regulators in Japan, South Korea, and Hong Kong (among others) have provided guidance on this.

Josh’s presentation was followed by one from Darshini Ramiah, Senior Manager of AI Governance and Safety at IMDA. Darshini provided an overview of Singapore’s approach to AI governance, which is built on three key pillars: 

  1. Creating practical tools, such as the AI Verify toolkit and Project Moonshot, which enable benchmarking and red teaming of both traditional AI systems and large language models (LLMs) respectively;
  2. Engaging closely with international partners, such as through the ASEAN Working Group on AI Governance and the publication of the AI Playbook for Small States under the Digital Forum of Small States; and
  3. Collaborating with industry in the development of principles and tools around AI governance.

FPF presenting at “AI AI Captain – Steering your organisation in the waters of Gen AI by IMDA and FPF”, July 8, 2025.

1.2 FPF moderated a panel session that focused on key aspects of AI governance and featured industry experts and regulators.

The panel session of the workshop, moderated by Josh, included the following experts:

  1. Darshini Ramiah, Senior Manager, AI Governance and Safety at IMDA;
  2. Derek Ho, Deputy Chief Privacy, AI and Data Responsibility Officer at Mastercard; and
  3. Patrick Chua, Senior Principal Digital Strategist at Singapore Airlines (SIA).

The experts discussed AI governance from both an industry and regulatory perspective.

FPF moderating the panel session at “AI AI Captain – Steering your organisation in the waters of Gen AI by IMDA and FPF”, July 8, 2025.

2. FPF facilitated deep conversations at PDPC’s PETs Summit, including on the use of PETs in cross-border data transfers and within SMEs.

2.1 FPF moderated a fireside chat on PETs use cases during the opening Plenary Session. 

On Tuesday, July 8, 2025, FPF APAC participated in a day-long PETs Summit, organized by the PDPC and IMDA. During the opening plenary session, Josh moderated a fireside chat with Fabio Bruno, Assistant Director of Applied Innovation at INTERPOL, titled “Solving Big Problems with PETs.” Following panels that covered use cases for PETs and policies that could increase their adoption, this fireside chat looked at how PETs could present fresh solutions to long-standing data protection issues (such as cross-border data transfers).

In this regard, Fabio shared how law enforcement bodies around the world have been exploring PETs to streamline investigations. He highlighted ongoing exploration of certain PETs, such as zero-knowledge proofs (a cryptographic method that allows one party to prove to another party that a particular piece of information is true without revealing any additional information beyond the validity of the claim) and homomorphic encryption (a family of encryption schemes allowing for computations to be performed directly on encrypted data without having to first decrypt it). In a law enforcement context, these PETs enable preliminary validation that can help to reduce delays and lower the cost of investigations, while also helping to protect individuals’ privacy. 
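
To give a sense of what “computing on encrypted data” means in practice, here is a minimal sketch using the open-source python-paillier (phe) library, which implements the Paillier cryptosystem, a partially homomorphic scheme supporting addition on ciphertexts. The two-party scenario and values are invented for illustration and are not drawn from the INTERPOL discussion; fully homomorphic schemes, which support richer computations, work differently and are more costly.

```python
# Minimal illustration of additively homomorphic encryption using the
# python-paillier ("phe") library. Invented values; illustration only.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# Two data holders encrypt their counts; an aggregator can sum the
# ciphertexts without ever seeing the underlying plaintexts.
enc_a = public_key.encrypt(17)
enc_b = public_key.encrypt(25)
enc_total = enc_a + enc_b        # computation happens on encrypted data

# Only the private-key holder can recover the aggregate result.
print(private_key.decrypt(enc_total))  # 42
```

The same additive property underpins privacy-preserving aggregation across borders: parties can contribute encrypted figures while only an agreed key holder learns the combined result.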

Notwithstanding the potential of PETs for cross-border data transfers (even for commercial, non-law enforcement contexts), challenges exist. These include: (1) enhancing and harmonizing the understanding and acceptability of PETs among data protection regulators globally; and (2) obtaining higher management support to invest in PETs. Nevertheless, the fireside chat concluded with optimism about the prospect of the greater use of PETs for data transfers, and left the audience with plenty of food for thought. 

FPF moderating the fireside chat at PETs Summit Plenary Session, July 8, 2025

2.2 FPF Members facilitated an engaging PETs Deep Dive Session that explored business use cases for PETs.

After the plenary session, FPF APAC teammates Dominic Paulger, Sakshi Shivhare, and Bilal Mohamed facilitated a practical workshop organized by the IMDA, titled the “PETs Deep Dive Session.” Drawing on the IMDA’s draft PETs Adoption Guide, the workshop aimed to help Chief Data Officers, DPOs, and AI and data product teams understand which PETs best fit their business use cases.

FPF APAC Team at PETs Summit, July 8, 2025

3. On Wednesday, FPF joined a discussion at IAPP Asia Privacy Forum on how regulators and major tech companies in the APAC region are fostering “digital trust” in AI by aligning technology with societal expectations.

On Wednesday, July 9, 2025, FPF APAC participated in an IAPP Asia Privacy Forum panel titled “Building Digital Trust in AI: Perspectives from APAC.” Josh joined Lanah Kammourieh Donnelly, Global Head of Privacy Policy at Google, and Lee Wan Sie, Cluster Director for AI Governance and Safety at the IMDA, for a panel moderated by Justin B. Weiss, Senior Director at Crowell Global Advisors.

A key theme from the panel was that, given the opacity of many digital technologies, the concept of digital trust is essential to ensure that these technologies work in ways that protect important societal interests. Accordingly, the panel discussed strategies that could foster digital trust.

Wan Sie provided the regulator’s perspective and acknowledged that given the rapid pace of AI development, regulation would always be “playing catch-up.” Thus, instead of implementing a horizontal AI law, she shared how Singapore is focusing on making the industry more capable of using AI responsibly. Wan Sie pointed to AI Verify, Singapore’s AI governance testing framework and toolkit, and the IMDA’s new Global AI Assurance Sandbox, as mechanisms that help organizations ensure their AI systems could demonstrate greater trustworthiness to users.

Josh focused on trends from across the APAC region, sharing how regulators in Japan and South Korea have been actively considering amendments to their data protection laws to expand the legal bases for processing personal data, in order to facilitate greater availability of data for training high-quality AI systems. 

Lanah highlighted Google’s approach of developing AI responsibly in accordance with certain core privacy values, such as those in the Fair Information Practice Principles (FIPPs). For example, she shared how Google is actively researching technological solutions like training its models on synthetic data instead of using publicly-available datasets from the Internet which may contain large amounts of personal data. 

Overall, the panel noted that APAC is taking its own distinct approach to AI governance – one in which industry and regulators collaborate actively to ensure principled development of technology. 

FPF and the “Building Digital Trust in AI: Perspectives from APAC” panel at IAPP, 9 July 2025.

4. On Thursday, FPF staff moderated two panels at IAPP AI Governance Global on cross-border data transfers and regulatory developments in Australia.

4.1 While cross-border data transfers are fragmented and restrictive, there is cautious optimism that APAC will pursue interoperability. 

On Thursday, July 10, 2025, FPF organized a panel titled “Shifting Sands: The Outlook for Cross Border Data Transfers in APAC,” which featured Emily Hancock, Vice President and Chief Privacy Officer at Cloudflare; Arianne Jimenez, Head of Privacy and Data Policy and Engagement for APAC at Meta; and Zee Kin Yeong, Chief Executive of the Singapore Academy of Law and FPF Senior Fellow. Moderated by Josh, the panel discussed evolving regulatory frameworks for cross-border data transfers in APAC.

The panel first observed that the landscape for cross-border data transfers across APAC remains fragmented. Emily elaborated that restrictions on data transfer were a global phenomenon and attributable to how data is increasingly viewed as a national security matter, making governments less willing to lower restrictions and pursue interoperability.

Despite this challenging landscape, the panel members were cautiously optimistic that transfer restrictions could be managed effectively. Zee Kin highlighted how the increasing integration of economies through supranational organizations like ASEAN is driving a push in APAC towards recognizing more business-friendly data transfer mechanisms, such as the ASEAN MCCs. He also noted that regulators often relax restrictions once local businesses start to expand operations overseas and need to transfer data across borders.

Arianne suggested that businesses communicate to regulators the challenges they face with restrictive data transfer frameworks. She acknowledged that SMEs are often not as well-resourced as multinational corporations (MNCs) and thus face difficulties in navigating the complex patchwork of regulations across the region. She explained that since regulators in APAC are generally open to consultation, businesses should take the opportunity to advocate for greater interoperability.

The panel concluded by highlighting the importance of data transfers to AI development. Cross-border data transfers are crucial to fostering diverse datasets, accessing advanced computing infrastructure, combating global cyber-threats by enabling worldwide threat sharing, and reducing the environmental impact by limiting the need for additional data centers. Overall, the panel expressed hope that despite the legal fragmentation and complicated state of play, the clear benefits of cross-border data transfers would encourage jurisdictions to pursue greater interoperability. 

FPF and the “Shifting Sands: The Outlook for Cross Border Data Transfers in APAC” panel at IAPP, July 10, 2025.

4.2 With updates to Australia’s Privacy Act, privacy is non-negotiable, and businesses can benefit from improving their privacy compliance processes and systems ahead of increased enforcement. 

FPF’s APAC Deputy Director Dominic Paulger moderated a panel titled “Navigating the Impact of Australia’s Privacy Act Amendments in the Asia-Pacific.” The panelists included Dora Amoah, Global Privacy Office Lead at the Boeing Company; Rachel Baker, Senior Corporate Counsel for Privacy, JAPAC, at Salesforce; and Annelies Moens, the former Managing Director of Privcore. The panel discussed the enactment of the Privacy and Other Legislation Amendment Bill 2024 following a multiyear review of Australia’s Privacy Act, and the potential impact of these reforms on businesses.

Annelies shared an overview of the reforms, including: 

She mentioned that more changes could be coming, but some proposals – such as removing the small business exception – were facing resistance in Australia. However, irrespective of how the law develops, businesses can expect enforcement to increase.

The industry panelists shared their insights and experiences complying with the new amendments. Dora explained that, despite the increased litigation risk from the new statutory tort for serious invasions of privacy, the threshold for liability is relatively high, as the tort requires intent. She also noted that companies could avoid liability by implementing proper processes that prevent intentional or reckless misconduct.

Rachel noted that the Privacy Act’s new automated decision-making (ADM) provisions would improve consumer rights in Australia. She observed that Australians have been facing serious privacy intrusions that have drawn the OAIC’s attention, such as the Cambridge Analytica scandal and the misuse of facial recognition technology. She considered that, since data subjects in Australia increasingly expect more rights, such as the right to deletion, businesses should go beyond compliance and actively adopt best practices.

Overall, the panel expressed the view that, in this new reality, the role of the privacy professional in Australia, much like in the rest of the world, is evolving: not just to interpret and comply with the law, but also to build robust systems through privacy by design.

FPF and the panelists of “Navigating the Impact of Australia’s Privacy Act Amendments in the Asia-Pacific” at IAPP, July 10, 2025.

5. FPF organized exclusive side events to foster deeper engagement with key stakeholders.

A key theme of FPF’s annual PDP Week experience has always been bringing our global FPF community – members, fellows, and friends – together for deep and meaningful conversations about the latest developments. This year, FPF APAC organized two events for its members: a Privacy Leaders’ Luncheon (an annual staple) and, for the first time, an India Luncheon co-organized with Khaitan & Co.

5.1 On July 8, 2025, FPF hosted an invite-only Privacy Leaders’ Luncheon

This closed-door event provided a platform for senior stakeholders of FPF APAC to discuss pressing challenges at the intersection of AI and privacy, with a particular focus on the APAC region. During the session, attendees discussed key topics such as emerging developments in data protection laws, AI governance, and children’s privacy.

FPF’s Privacy Leaders’ Luncheon, July 8, 2025.

5.2 On July 10, FPF co-hosted an India Roundtable Luncheon with Khaitan & Co.

FPF APAC also collaborated with an Indian law firm, Khaitan & Co, to co-host a lunch roundtable focusing on pressing challenges in India, such as the development of implementing rules for the Digital Personal Data Protection Act, 2023 (DPDPA). The event brought together experts from both India and Singapore for fruitful discussions around the DPDPA and the draft Digital Personal Data Protection Rules. FPF APAC is grateful to have partnered with Khaitan & Co for the Luncheon, which saw active discussion amongst attendees on key issues in India’s emerging data protection regime. 

FPF’s India Luncheon co-hosted with Khaitan & Co, July 10, 2025.

6. Conclusion

In all, it has been another deeply fruitful and meaningful year for FPF at Singapore’s PDP Week 2025. Through our panels, engagements, and curated roundtable sessions, FPF is proud to have continued driving thoughtful and earnest dialogue on data protection, AI, and responsible innovation across the APAC region. These engagements reflect our ongoing commitment to fostering greater collaboration and understanding among regulators, industry, academia, and civil society.

Looking ahead, FPF remains focused on shaping thoughtful approaches to privacy and emerging technologies. We are grateful for the continued support of the IMDA and IAPP, as well as our members, partners, and participants, who helped make these events a memorable success.

Balancing Innovation and Oversight: Regulatory Sandboxes as a Tool for AI Governance

Thanks to Marlene Smith for her research contributions.

As policymakers worldwide seek to support beneficial uses of artificial intelligence (AI), many are exploring the concept of “regulatory sandboxes.” Broadly speaking, regulatory sandboxes are legal oversight frameworks that offer participating organizations the opportunity to experiment with emerging technologies within a controlled environment, usually combining regulatory oversight with reduced enforcement. Sandboxes often encourage organizations to use real-world data in novel ways, with companies and regulators learning how new data practices are aligned – or misaligned – with existing governance frameworks. The lessons learned can inform future data practices and potential regulatory revisions.

In recent years, regulatory sandboxes have gained traction, in part due to a requirement under the EU AI Act that regulators in the European Union adopt national sandboxes for AI. Jurisdictions across the world, such as Brazil, France, Kenya, Singapore, and the United States (Utah), have introduced AI-focused regulatory sandboxes, offering current, real-life lessons on the role they can play in supporting beneficial use of AI while enhancing clarity about how legal frameworks apply to nascent AI technologies. More recently, in July 2025, the United States’ AI Action Plan recommended that federal agencies in the U.S. establish regulatory sandboxes or “AI Centers of Excellence” for organizations to “rapidly deploy and test AI tools while committing to open sharing of data and results.”

As AI systems grow more advanced and widespread, their complexity poses significant challenges for legal compliance and effective oversight. Regulatory sandboxes can potentially address these challenges. The probabilistic nature of advanced AI systems, especially generative AI, can make AI outputs less certain, and legal compliance therefore less predictable. Simultaneously, the rapid global expansion of AI technologies and the desire to “scale up” AI use within organizations has outpaced the development of traditional legal frameworks. Finally, the global regulatory landscape is increasingly fragmented, which can cause significant compliance burdens for organizations. Depending on how they are structured and implemented, regulatory sandboxes can address or mitigate some of these issues by providing a controlled and flexible environment for AI testing and experimentation, under the guidance and oversight of policymakers. This framework can help ensure responsible development, reduce legal uncertainty, and inform more adaptive and forward-looking AI regulations.

1. Key Characteristics of a Regulatory Sandbox 

A regulatory sandbox is an adaptable framework that allows organizations to test innovative products, services, or business models under reduced regulatory requirements. Typically supervised by a regulatory body, these “testbeds” encourage experimentation and innovation in a real-world setting while managing potential risks.

The concept of a regulatory sandbox was first introduced in the financial technology (fintech) sector, with the United Kingdom launching the first one in 2015. Since then, the concept has gained global traction, especially in sectors with rapid technological advancement, such as healthcare. According to a 2025 report by the Datasphere Initiative, there are over 60 sandboxes related to data, AI, or technology in the world. Of those, 31 are national sandboxes that focus on AI innovation, including areas such as machine learning, AI development, and data-driven solutions. Over a dozen sandboxes are currently in development and expected to launch in the coming years.

Generally, a regulatory sandbox includes the following characteristics: 

Depending on their design, regulatory sandboxes can offer a number of benefits to different stakeholders: 

2. Notable Jurisdictions with AI-Focused Regulatory Sandboxes

Across the globe, a growing number of governments are exploring AI-focused regulatory sandboxes. In the European Union, this growth has been partly driven by a requirement in the EU Artificial Intelligence Act (EU AI Act), passed in 2024 as part of the EU digital strategy. The EU AI Act requires all EU Member States to establish a national or regional regulatory sandbox for AI, with a particular emphasis on annual reporting, tailored training, and priority access for startups and SMEs. In doing so, Member States have taken a variety of approaches to how they develop, structure, and implement regulatory sandboxes. Beyond the EU, global jurisdictions have similarly taken a broad range of approaches.

Among the approximately thirty jurisdictions with AI-related sandboxes, a few notable examples can offer a useful review of the landscape. In this section, we describe five jurisdictions from a cross-section of global geographies, representing a range of goals and legal approaches: Brazil, France, Kenya, Singapore, and the United States (Utah). Each offers unique lessons for the timing of sandboxes relative to regulation, regulatory requirements for participants, and policy goals.

Brazil is one of the few countries that launched a national AI regulatory sandbox before enacting an AI law. Brazil’s sandbox focuses on machine learning-driven technologies, including generative AI, and the Brazilian Data Protection Authority (ANPD) will oversee selected projects with the involvement of a variety of stakeholders, including academics and civil society organizations. In recent years, regulators have emphasized several goals for the sandbox, including nurturing innovation while implementing best practices “to ensure compliance with personal data protection rules and principles.” Brazil’s AI bill establishes sandboxes as a tool in its compliance regime: organizations that violate the proposed Act may be barred from participating in the AI sandbox program for up to five years.

In France, the French Data Protection Authority (La Commission nationale de l’informatique et des libertés, or CNIL) has run an annual regulatory sandbox for the last three years, with each year focused on a different national digital policy goal. This past year, the sandbox focused on “AI and public services,” exploring how AI can be responsibly deployed in sectors such as employment, utilities, and transportation. CNIL provided advice on issues such as automated decision-making, data minimization, and bias mitigation. This year, the sandbox will focus on the “silver [elderly] economy,” exploring AI solutions to support aging populations. Out of over fifteen applications, CNIL selected six projects, including a data-sharing system to improve home care (O₂), an AI-based acoustic monitoring tool for care homes (OSO-AI), and a mobile app that tracks seniors’ autonomy and alerts families or caregivers (Neural Vision).

Kenya operates two regulatory sandboxes in AI: (1) the Communications Authority of Kenya (CA) oversees a sandbox that focuses on Information and Communications Technology (ICT), including e-learning and e-health platforms that deploy AI. Participants may be local or international, and must submit regular reports that detail performance indicators and other metrics; and (2) the Capital Markets Authority (CMA) oversees a second regulatory sandbox that focuses on innovative technologies in the finance and capital markets sector. Participants can receive feedback and guidance from the CMA and other stakeholders on AI products such as robo-advisory services, blockchain applications, and crowdfunding platforms.

Singapore’s “Generative AI Evaluation Sandbox” brings together key stakeholders, including model developers, app deployers, and third-party “testers,” to evaluate generative AI products and develop common, standardized evaluation approaches. Participants collaboratively assess generative AI technologies through an “Evaluation Catalogue,” which compiles common technical testing tools and recommends a baseline set of evaluation tests for generative AI products. The Generative AI Evaluation Sandbox is overseen by the Infocomm Media Development Authority (IMDA), a statutory board that regulates Singapore’s infocommunications, media, and data sectors and oversees private-sector AI governance in Singapore, and by the AI Verify Foundation, a not-for-profit subsidiary wholly owned by the IMDA that drives Singapore’s AI governance testing efforts, including an AI governance testing framework and toolkit. More recently, in July 2025, Singapore announced the launch of another sandbox, the “Global AI Assurance Sandbox,” to address agentic AI and risks such as data leakage and vulnerability to prompt injection.

In the United States, Utah is the first state to operate an AI-focused regulatory sandbox (although it may not be the last, with the enactment of the 2025 Texas Responsible AI Governance Act and Delaware’s House Joint Resolution 7). In 2024, Utah passed the Utah AI Policy Act (UAIP), which established the Office of Artificial Intelligence Policy to oversee the Utah AI laboratory program (AI Lab). Utah’s office has broad authority to grant entities up to two years of “regulatory mitigation” while they develop pilot AI programs and receive feedback from key stakeholders, including industry experts, academics, regulators, and community members. Mitigation measures include exemptions from applicable state regulations and laws, capped penalties for civil fines, and cure periods to address compliance issues. The AI Lab’s first half-year focused on mental health, and resulted in a bill that regulates AI mental health chatbot use (HB 452).

3. Policy Considerations for AI

Modern AI systems, particularly generative AI systems, can behave unpredictably or in ways that can be challenging to explain. This can lead to uncertain outcomes and make legal compliance for data protection laws, such as the California Consumer Privacy Act (CCPA) and the General Data Protection Regulation (GDPR), harder to assess in advance of the system being deployed. Scalability is also a distinct issue for AI, as it presents both technical and legal hurdles, requiring organizations to manage evolving data, outdated models, and regulatory risks. Finally, the fragmented legal landscape for global AI regulation increases compliance burdens and uncertainty for organizations, especially for startups and SMEs. While regulatory sandboxes are not a panacea for AI governance, each of these issues can be potentially mitigated or addressed by sandboxes.

Machine Learning and Generative AI Can Create Unpredictable Results

As AI systems become increasingly advanced, they can present a challenge for legal compliance due to their lack of deterministic outcomes.1 Modern AI systems, particularly those powered by machine learning or transformer architectures, involve vast numbers of parameters and are trained on very large, sometimes poorly documented, datasets. When deployed in real-world settings, these systems can exhibit behaviors that are difficult to predict, explain, or control. This can include issues like data shift (when the data or conditions a model encounters in the real world no longer match those it was trained on) or underspecification (when models pass internal tests but fail to perform as well in the real world). This unpredictability can arise from many factors, including the scale and complexity of AI systems, reliance on opaque training data, and the accelerating pace of AI development. Generative AI, in particular, relies on transformer architectures that sample outputs probabilistically.
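To make this concrete, the minimal sketch below (with an invented three-option “vocabulary” and made-up model scores) illustrates temperature-based sampling, the kind of probabilistic decoding step generative models rely on: repeated runs on an identical input can produce different outputs.

```python
import numpy as np

# Toy stand-in for a model's output layer: fixed scores ("logits") for the
# same prompt, over an invented three-option vocabulary.
vocab = ["approve", "deny", "refer to a human reviewer"]
logits = np.array([2.0, 1.4, 0.6])

def sample(logits: np.ndarray, temperature: float, rng: np.random.Generator) -> str:
    """Temperature-scaled softmax sampling: higher temperature means more variation."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs /= probs.sum()
    return vocab[rng.choice(len(vocab), p=probs)]

rng = np.random.default_rng()
# The input never changes, yet the sampled outputs typically do.
print([sample(logits, temperature=1.0, rng=rng) for _ in range(10)])
```

A deterministic decoder (for example, always taking the highest-scoring option) would return the same answer every time; it is the sampling step that makes outputs vary across runs.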

As a result, the non-deterministic nature of such AI systems can make it difficult to align them with existing legal frameworks and compliance obligations. For example, under the CCPA, consumers have the right to know what personal information is collected and how it is used, and to access, delete, or correct their personal information. Similarly, the GDPR provides individuals with rights regarding automated decision-making, including the right to an explanation of decisions made solely by automated processes. Under both the CCPA and the GDPR, it can be difficult to apply rules that assume deterministic outcomes to AI-driven decisions because some AI outputs can vary even with the same or similar inputs.

In the face of these challenges, regulatory sandboxes can offer a structured solution by allowing AI systems to be tested in real-world environments under regulatory supervision. This enables regulators to observe how AI behaves with unforeseen variables and to identify and address those risks early; it also provides information the organization can use to update or iterate its model. For example, in France, CNIL worked with the company France Travail as part of its 2024 regulatory sandbox program to assess how its generative AI tool for jobseekers could provide effective results while ensuring adherence to the GDPR’s data minimization principles. Because the tool is based on a large language model (LLM), it carries the inherent risk of generating results that are unpredictable or challenging to explain. Following the sandbox program, CNIL issued recommendations for generative AI systems to implement “harmonized and standardized prompts” directing users to enter as little personal data as possible, and filters to block terms related to sensitive personal data. Through this iterative process, France’s regulators were able to refine their legal approach to a complex emerging technology, while organizations (including France Travail) benefited from early guidance, increasing legal certainty and reducing the likelihood of harmful outcomes or regulatory violations.
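As a purely hypothetical illustration of the kind of filter such recommendations describe (the categories, patterns, and redaction behavior below are our own sketch, not CNIL’s specification or France Travail’s implementation), a pre-processing step might screen user prompts for terms associated with sensitive personal data before they reach the model:

```python
import re

# Hypothetical categories of sensitive terms; a real filter would be far
# more comprehensive and language-aware.
SENSITIVE_PATTERNS = {
    "health": re.compile(r"\b(diagnos\w*|disability|illness)\b", re.IGNORECASE),
    "id_number": re.compile(r"\b\d{13,15}\b"),  # long digit runs that resemble ID numbers
}

def screen_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact flagged terms and report which categories were triggered."""
    flagged = []
    for category, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            flagged.append(category)
            prompt = pattern.sub("[REDACTED]", prompt)
    return prompt, flagged

redacted, hits = screen_prompt(
    "I have a disability and my ID is 1234567890123 - what jobs suit me?"
)
print(redacted)  # sensitive terms replaced with [REDACTED]
print(hits)      # ['health', 'id_number']
```

In practice, such filters would be paired with standardized prompts that discourage users from entering personal data in the first place, in line with the data minimization aims described above.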

AI Scalability Poses Technical and Legal Challenges

AI scalability, or expanding the use of AI technologies to match the pace of business demand, has emerged as both a driver of innovation and a costly business challenge. Organizations must navigate a range of technical issues, such as evolving complex data sets, obsolete models, and security issues, which can delay product delivery timelines or result in financial penalties for non-compliance with an applicable law. Beyond the technical issues, scaling AI also requires the organization to regularly review and maintain internal standards for security, legal and regulatory compliance, and ethics. 

By participating in a regulatory sandbox, organizations can address these challenges and stay aligned with the global patchwork of AI governance by testing AI products under regular oversight, minimizing the risks of market delays, product recalls, or regulatory fines. Kenya is an example of how many organizations and governments seek to harness AI’s potential with the specific goal of enabling scalability. The Kenya National Artificial Intelligence Strategy 2025-2030 seeks to align the country’s policy ambitions with broader digital policy trends across sub-Saharan Africa and beyond, while staying grounded in local data and market ecosystems. Kenya’s two AI sandboxes reflect its desire to take advantage of domestic priority AI markets and global trends in AI scalability.

The AI Regulatory Landscape Continues to Rapidly Evolve

Global AI regulation is constantly evolving, with jurisdictions taking diverse approaches that reflect different regions’ unique priorities and challenges. In Europe, the EU AI Act has multiple compliance deadlines through 2030; African countries are testing a phased implementation approach to AI; Latin America is launching a variety of strategies and sandboxes; and in the Asia-Pacific region, several key jurisdictions have adopted regulatory frameworks that are generally limited to voluntary ethical principles and guidelines.

In the United States, the absence of a comprehensive federal AI or privacy framework has led to a patchwork of state-level efforts. In 2024, nearly 700 AI or AI-adjacent bills were introduced in state legislatures. These efforts vary widely in scope and focus. Some states have proposed relatively broad laws aimed at consumer protection and high-impact areas, while others have proposed more targeted rules or sector-specific regulation (e.g., legislation that would protect children, regulate AI hiring tools, or address deepfakes).

As a result, navigating the evolving landscape without regulatory certainty has become a practical challenge for organizations. Innovation typically outpaces law, and as differing legal standards emerge and evolve, organizations must navigate conflicting or overlapping requirements. This can increase compliance costs and delay product development, especially in situations where regulations remain ambiguous or are still under consideration. Startups and SMEs are particularly impacted by compliance costs, as they may not have the financial support or infrastructure to weather a long period of legal uncertainty. 

Depending on the relevant jurisdiction, regulatory sandboxes can offer greater legal certainty by providing a degree of immunity from liability or penalties, similar to a “safe harbor.” In doing so, they can reduce time to market and the costs associated with uncertainty. Some jurisdictions, such as France (under the EU AI Act), explicitly require sandboxes to support and accelerate market access for SMEs and start-ups.

In many cases, a sandbox can lead to stronger relationships between lawmakers and other stakeholders, and an opportunity for experts to shape policymaking directly while organizations await regulatory guidance. For example, Utah’s sandbox, the “AI Lab,” focused on mental health in its first year, and state legislators subsequently passed a law that regulates mental health AI chatbots in Utah. In a similar vein, Brazil launched a national AI regulatory sandbox before enacting an AI law, and findings from the sandbox could inform the final version of legislation. Many other sandboxes, most notably in Singapore, take a “light touch” approach that prioritizes iterative guidance, rather than hard law.

At the same time, regulatory sandboxes can offer legal protections only within their own jurisdictional scope of authority, so they vary in their practical ability to provide legal certainty. In other words, a company that receives a regulatory waiver from laws in one jurisdiction (such as Utah) is not protected against liability arising under other jurisdictions (such as California, federal, or global laws). Consequently, regulator collaboration across jurisdictions can have significant impact, with many opportunities for legal reciprocity and knowledge sharing.

4. Looking Ahead

The use of regulatory sandboxes continues to expand as global policymakers recognize their value in fostering innovation while ensuring responsible AI governance. Just recently, in July 2025, Singapore launched a new sandbox to address emerging challenges in AI, including the deployment of AI agents. Lessons learned from each of the five jurisdictions discussed above show that sandboxes can stimulate AI development, enhance consumer protections, and help regulators develop more effective policies.

As policymakers consider different approaches to regulating AI, it is crucial to integrate the lessons learned from these sandboxes. By offering flexible regulatory frameworks that prioritize real-world testing, multi-stakeholder cooperation, and iterative feedback, sandboxes can help balance the need for AI innovation with safeguarding the public interest. 

1. These non-deterministic outcomes, in which an AI system produces different results under the same conditions, make it difficult to assign responsibility when AI-driven decisions lead to unintended results. For example, a deterministic AI would make the same chess move every time it is given the same board setup, whereas a probabilistic (or non-deterministic) model would learn from previous experiences and may adapt its move accordingly. ↩︎

Practical Takeaways from FPF’s Privacy Enhancing Technologies Workshop

In April, the Future of Privacy Forum and the Mozilla Foundation hosted an all-day workshop with technology, legal, and policy experts to explore Privacy Enhancing Technologies (PETs). During the workshop, multiple companies presented technologies they developed and implemented to preserve individuals’ privacy. In addition, the participants discussed steps for broadening the adoption of these technologies and their intersection with data protection laws. 

Mastercard’s Chief Privacy Officer, Caroline Louveaux, presented the first PET: a privacy-preserving technique tested in a new cross-border fraud detection system. Louveaux explained how the system employs Fully Homomorphic Encryption (FHE), a technique that enables analysis of encrypted data without decrypting it, and the participants discussed the privacy benefits and broader compliance advantages this technique offers.
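FHE itself is mathematically involved, but the underlying idea, performing computations directly on ciphertexts so that sensitive values are never exposed, can be sketched with a much simpler additively homomorphic scheme. The toy Python example below uses the classic Paillier construction with deliberately tiny, insecure parameters; it is only an illustration of the concept, not FHE (which also supports multiplication on encrypted data) and certainly not Mastercard’s production system.

```python
# Toy Paillier cryptosystem (additively homomorphic): a simplified stand-in
# for the idea behind FHE of computing on ciphertexts without decrypting them.
# The tiny hard-coded primes are for illustration only and are NOT secure.
from math import gcd

p, q = 61, 53                 # toy primes (real deployments use ~2048-bit moduli)
n = p * q                     # public modulus
n_sq = n * n
g = n + 1                     # standard generator choice
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1), private
mu = pow(lam, -1, n)          # modular inverse of lambda mod n, private

def encrypt(m: int, r: int) -> int:
    """Encrypt plaintext m (0 <= m < n) using randomness r coprime to n."""
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    """Recover the plaintext from ciphertext c using the private values."""
    x = pow(c, lam, n_sq)
    return ((x - 1) // n) * mu % n

# Two parties' transaction amounts, encrypted independently.
c1 = encrypt(150, 17)
c2 = encrypt(275, 23)

# Anyone holding only the ciphertexts can compute the encrypted sum:
c_sum = (c1 * c2) % n_sq

assert decrypt(c_sum) == 150 + 275   # 425, computed without seeing the inputs
print(decrypt(c_sum))
```

The key property is that the party combining the ciphertexts never needs the decryption key, so in a cross-border analysis it can contribute to a joint computation without ever seeing the underlying personal data.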

To learn more about the presentation and discussion, download FPF’s new Issue Brief, PETs Use Case: Preventing Financial Fraud Across Different Jurisdictions with Fully Homomorphic Encryption.

The second PET presentation, by Robert Pisarczyk, CEO and Co-Founder of Oblivious, provided an overview of how Oblivious implemented a privacy-preserving technology in partnership with an insurance company to tackle a common tension between data privacy and utility. The companies applied differential privacy techniques to retain information derived from personal data while complying with legal requirements to delete it. By anonymizing data before deletion, differential privacy allows businesses to generate summaries, trends, and patterns that do not compromise individual privacy. The participants discussed this technique through the lens of data deletion and whether differential privacy meets the deletion requirements under existing data protection laws like the GDPR.
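As a rough illustration of the underlying idea (not Oblivious’s actual implementation), the sketch below applies the Laplace mechanism, a standard differential privacy building block: a noisy aggregate is released and can be retained even after the underlying per-customer records are deleted. The dataset, epsilon, and sensitivity values are hypothetical.

```python
# Minimal sketch of the Laplace mechanism: publish a noisy aggregate so that
# any single record's presence or absence is statistically masked.
import numpy as np

rng = np.random.default_rng(seed=7)

# Per-customer claim counts that must eventually be deleted.
claim_counts = np.array([2, 0, 1, 3, 1, 0, 2, 4, 1, 0])

def dp_sum(values: np.ndarray, epsilon: float, sensitivity: float) -> float:
    """Return the sum plus Laplace noise scaled to sensitivity / epsilon."""
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return float(values.sum() + noise)

# Assume each customer contributes at most 4 claims, so sensitivity = 4 here.
noisy_total = dp_sum(claim_counts, epsilon=1.0, sensitivity=4.0)
print(f"true total = {claim_counts.sum()}, released total = {noisy_total:.1f}")

# The noisy aggregate can be retained and reused after the underlying
# per-customer records are deleted.
```

Smaller epsilon values add more noise and give stronger privacy guarantees; choosing epsilon and bounding each individual’s contribution are the central design decisions in practice.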

To learn more about the presentation and discussion, download FPF’s new Issue Brief, PETs Use Case: Differential Privacy for End-of-Life Data.

Common themes that arose during the workshop included:

Read more details about the workshop in the new FPF publication, PETs Workshop Proceedings.

The Research Coordination Network (RCN) for Privacy-Preserving Data Sharing and Analytics is supported by the U.S. National Science Foundation under Award #2413978 and the U.S. Department of Energy, Office of Science under Award #DE-SC0024884.