Future of Privacy Forum Launches the FPF Center for Artificial Intelligence

The FPF Center for Artificial Intelligence will serve as a catalyst for AI policy and compliance leadership globally, advancing responsible data and AI practices for public and private stakeholders

Today, the Future of Privacy Forum (FPF) launched the FPF Center for Artificial Intelligence, established to better serve policymakers, companies, non-profit organizations, civil society, and academics as they navigate the challenges of AI policy and governance. The Center will expand FPF’s long-standing AI work, introduce large-scale novel research projects, and serve as a source for trusted, nuanced, nonpartisan, and practical expertise. 

The Center's work will be international in scope, as AI continues to be deployed rapidly around the globe and cities, states, countries, and international bodies grapple with implementing laws and policies to manage the risks. “Data, privacy, and AI are intrinsically interconnected issues that we have been working on at FPF for more than 15 years, and we remain dedicated to collaborating across the public and private sectors to promote their ethical, responsible, and human-centered use,” said Jules Polonetsky, FPF’s Chief Executive Officer. “But we have reached a tipping point in the development of the technology that will affect future generations for decades to come. At FPF, the word Forum is a core part of our identity. We are a trusted convener positioned to build bridges between stakeholders globally, and we will continue to do so under the new Center for AI, which will sit within FPF.”

The Center will help the organization’s 220+ members navigate AI through the development of best practices, research, legislative tracking, thought leadership, and public-facing resources. It will be a trusted evidence-based source of information for policymakers, and it will collaborate with academia and civil society to amplify relevant research and resources. 

“Although AI is not new, we have reached an unprecedented moment in the development of the technology that marks a true inflection point. The complexity, speed and scale of data processing that we are seeing in AI systems can be used to improve people’s lives and spur a potential leapfrogging of societal development, but with that increased capability comes associated risks to individuals and to institutions,” said Anne J. Flanagan, Vice President for Artificial Intelligence at FPF. “The FPF Center for AI will act as a collaborative force for shared knowledge between stakeholders to support the responsible development of AI, including its fair, safe, and equitable use.”

The Center will officially launch at FPF’s inaugural summit, DC Privacy Forum: AI Forward. The in-person, public-facing summit will feature high-profile representatives from the public and private sectors in the world of privacy, data, and AI.

FPF’s new Center for Artificial Intelligence will be supported by a Leadership Council of leading experts from around the globe. The Council will consist of members from industry, academia, civil society, and current and former policymakers. 

See the full list of founding FPF Center for AI Leadership Council members here.

I am excited about the launch of the Future of Privacy Forum’s new Center for Artificial Intelligence and honored to be part of its leadership council. This announcement builds on many years of partnership and collaboration between Workday and FPF to develop privacy best practices and advance responsible AI, which has already generated meaningful outcomes, including last year’s launch of best practices to foster trust in this technology in the workplace.  I look forward to working alongside fellow members of the Council to support the Center’s mission to build trust in AI and am hopeful that together we can map a path forward to fully harness the power of this technology to unlock human potential.

Barbara Cosgrove, Vice President, Chief Privacy Officer, Workday

I’m honored to be a founding member of the Leadership Council of the Future of Privacy Forum’s new Center for Artificial Intelligence. AI’s impact transcends borders, and I’m excited to collaborate with a diverse group of experts around the world to inform companies, civil society, policymakers, and academics as they navigate the challenges and opportunities of AI governance, policy, and existing data protection regulations.

Dr. Gianclaudio Malgieri, Associate Professor of Law & Technology at eLaw, University of Leiden

“As we enter this era of AI, we must require the right balance between allowing innovation to flourish and keeping enterprises accountable for the technologies they create and put on the market. IBM believes it will be crucial that organizations such as the Future of Privacy Forum help advance responsible data and AI policies, and we are proud to join others in industry and academia as part of the Leadership Council.”

Learn more about the FPF Center for AI here.

About Future of Privacy Forum (FPF)

The Future of Privacy Forum (FPF) is a global non-profit organization that brings together academics, civil society, government officials, and industry to evaluate the societal, policy, and legal implications of data use, identify the risks, and develop appropriate protections. 

FPF believes technology and data can benefit society and improve lives if the right laws, policies, and rules are in place. FPF has offices in Washington, D.C., Brussels, Singapore, and Tel Aviv. Learn more at fpf.org.

FPF Develops Checklist & Guide to Help Schools Vet AI Tools for Legal Compliance

FPF’s Youth and Education team has developed a checklist and accompanying policy brief to help schools vet generative AI tools for compliance with student privacy laws. Vetting Generative AI Tools for Use in Schools is a crucial resource as the use of generative AI tools continues to increase in educational settings. It’s critical for school leaders to understand how existing federal and state student privacy laws, such as the Family Educational Rights and Privacy Act (FERPA), apply to the complexities of machine learning systems in order to protect student privacy. With these resources, FPF aims to provide much-needed clarity and guidance to educational institutions grappling with these issues.

Click here to access the checklist and policy brief.

“AI technology holds immense promise in enhancing educational experiences for students, but it must be implemented responsibly and ethically,” said David Sallay, the Director for Youth & Education Privacy at the Future of Privacy Forum. “With our new checklist, we aim to empower educators and administrators with the knowledge and tools necessary to make informed decisions when selecting generative AI tools for classroom use while safeguarding student privacy.”

The checklist, designed specifically for K-12 schools, outlines key considerations for incorporating generative AI into a school or district’s edtech vetting process.

These include: 

By prioritizing these steps, educational institutions can promote transparency and protect student privacy while maximizing the benefits of technology-driven learning experiences for students. 

The in-depth policy brief outlines the relevant laws and policies a school should consider, the unique compliance considerations of generative AI tools (including data collection, transparency and explainability, product improvement, and high-risk decision-making), and their most likely use cases (student, teacher, and institution-focused).

The brief also encourages schools and districts to update their existing edtech vetting policies to address the unique considerations of AI technologies (or to create a comprehensive policy if one does not already exist) instead of creating a separate vetting process for AI. It also highlights the role that state legislatures can play in ensuring the efficiency of school edtech vetting and oversight and calls on vendors to be proactively transparent with schools about their use of AI.


Check out the LinkedIn Live with CEO Jules Polonetsky and Youth & Education Director David Sallay about the Checklist and Policy Brief.

To read more of the Future of Privacy Forum’s youth and student privacy resources, visit www.StudentPrivacyCompass.org.

FPF Releases “The Playbook: Data Sharing for Research” Report and Infographic

Today, the Future of Privacy Forum (FPF) published “The Playbook: Data Sharing for Research,” a report on best practices for instituting research data-sharing programs between corporations and research institutions. FPF also developed a summary of recommendations from the full report.

Facilitating data sharing for research purposes between corporate data holders and academia can unlock new scientific insights and drive progress in public health, education, social science, and a myriad of other fields for the betterment of broader society. Academic researchers use this data to consider consumer, commercial, and scientific questions at a scale they cannot reach using conventional research data-gathering techniques alone. Such data has also helped researchers answer questions on topics ranging from bias in targeted advertising and the influence of misinformation on election outcomes to early diagnosis of diseases through data collected by fitness and health apps.

The Playbook addresses vital steps for data management, sharing, and program execution between companies and researchers. Creating a data-sharing ecosystem that positively advances scientific research requires a better understanding of the established risks, opportunities to address challenges, and the diverse stakeholders involved in data-sharing decisions. This report aims to encourage safe, responsible data-sharing between industries and researchers.

“Corporate data sharing connects companies with research institutions, by extension increasing the quantity and quality of research for social good,” said Shea Swauger, Senior Researcher for Data Sharing and Ethics. “This Playbook showcases the importance, and advantages, of having appropriate protocols in place to create safe and simple data sharing processes.”

In addition to the Playbook, FPF created a companion infographic summarizing the benefits, challenges, and opportunities of data sharing for research outlined in the larger report.


As a longtime advocate for facilitating the privacy-protective sharing of data by industry to the research community, FPF is proud to have created this set of best practices for researchers, institutions, policymakers, and data-holding companies. In addition to the Playbook, the Future of Privacy Forum has also opened nominations for its annual Award for Research Data Stewardship.

“Our goal with these initiatives is to celebrate the successful research partnerships transforming how corporations and researchers interact with each other,” Swauger said. “Hopefully, we can continue to engage more audiences and encourage others to model their own programs with solid privacy safeguards.”

Shea Swauger, Senior Researcher for Data Sharing and Ethics, Future of Privacy Forum

Established by FPF in 2020 with support from The Alfred P. Sloan Foundation, the Award for Research Data Stewardship recognizes excellence in the privacy-protective stewardship of corporate data shared with academic researchers. The call for nominations is open and closes on Tuesday, January 17, 2023. To submit a nomination, visit the FPF site.

FPF has also launched a newly formed Ethics and Data in Research Working Group. This group receives late-breaking analyses of emerging US legislation affecting research and data, meets to discuss the ethical and technological challenges of conducting research, and collaborates to create best practices that protect privacy, decrease risk, and increase data sharing for research, partnerships, and infrastructure. Learn more and join here.

FPF Testifies Before House Energy and Commerce Subcommittee, Supporting Congress’s Efforts on the “American Data Privacy and Protection Act”

This week, FPF’s Senior Policy Counsel Bertram Lee testified before the U.S. House Energy and Commerce Subcommittee on Consumer Protection and Commerce at its hearing, “Protecting America’s Consumers: Bipartisan Legislation to Strengthen Data Privacy and Security,” regarding the bipartisan, bicameral privacy discussion draft, the “American Data Privacy and Protection Act” (ADPPA). FPF has a history of supporting the passage of a comprehensive federal consumer privacy law, which would provide businesses and consumers alike with the benefit of clear national standards and protections.

Lee’s testimony opened by applauding the Committee for its efforts toward comprehensive federal privacy legislation and emphasized that the “time is now” for its passage. As written, the ADPPA would address gaps in the sectoral approach to consumer privacy, establish strong national civil rights protections, and create new rights and safeguards for the protection of sensitive personal information.

“The ADPPA is more comprehensive in scope, inclusive of civil rights protections, and provides individuals with more varied enforcement mechanisms in comparison to some states’ current privacy regimes,” Lee said in his testimony. “It also includes corporate accountability mechanisms, such as requiring privacy designations, data security officers, and executive certifications showing compliance, which are missing from current state laws. Notably, the ADPPA also requires ‘short-form’ privacy notices to aid consumers in understanding how their data will be used by companies and their rights — a provision that is not found in any state law.”

Lee’s testimony also provided four recommendations to strengthen the bill, which include: 

Many of the recommendations would ensure that the legislation gives individuals meaningful privacy rights and places clear obligations on businesses and other organizations that collect, use, and share personal data. The legislation would expand civil rights protections for individuals and communities harmed by algorithmic discrimination, as well as require algorithmic assessments and evaluations to better understand how these technologies can impact communities.

The submitted testimony and a video of the hearing can be found on the House Committee on Energy & Commerce site.

Reading the Signs: the Political Agreement on the New Transatlantic Data Privacy Framework

The President of the United States, Joe Biden, and the President of the European Commission, Ursula von der Leyen, announced last Friday, in Brussels, a political agreement on a new Transatlantic framework to replace the Privacy Shield. 

This is a significant elevation of the topic within Transatlantic affairs compared to the 2016 announcement of a new deal to replace the Safe Harbor framework. Back then, it was Commission Vice-President Andrus Ansip and Commissioner Vera Jourova who announced, at the beginning of February 2016, that a deal had been reached.

The draft adequacy decision was only published a month after the announcement, and the adequacy decision was adopted six months later, in July 2016. Therefore, it should not be at all surprising if another six months (or more!) pass before the adequacy decision for the new Framework produces legal effects and can actually support transfers from the EU to the US, especially since the US side still has to pass at least one Executive Order to provide for the agreed-upon new safeguards.

This means that transfers of personal data from the EU to the US may still be blocked in the coming months, possibly without a lawful alternative to continue them, as a consequence of Data Protection Authorities (DPAs) enforcing Chapter V of the General Data Protection Regulation in light of the Schrems II judgment of the Court of Justice of the EU. That enforcement may come as part of the 101 noyb complaints submitted in August 2020, which are slowly starting to be resolved, or as part of other individual complaints and court cases.

After the agreement “in principle” was announced at the highest possible political level, EU Justice Commissioner Didier Reynders doubled down on the point that this agreement was reached “on the principles” for a new framework, rather than on its details. Later on, he also gave credit to Commerce Secretary Gina Raimondo and US Attorney General Merrick Garland for their hands-on involvement in working towards this agreement.

In fact, “in principle” became the leitmotif of the announcement: the first EU Data Protection Authority to react was the European Data Protection Supervisor, who wrote that he “welcomes, in principle” the announcement of a new EU-US transfers deal – “The details of the new agreement remain to be seen. However, EDPS stresses that a new framework for transatlantic data flows must be sustainable in light of requirements identified by the Court of Justice of the EU”.

Of note, there is no catchy name for the new transfers agreement, which was referred to as the “Trans-Atlantic Data Privacy Framework”. Nonetheless, FPF’s CEO Jules Polonetsky submits the “TA DA!” Agreement, and he has my vote. For his full statement on the political agreement being reached, see our release here.

Some details of the “principles” agreed on were published hours after the announcement, both by the White House and by the European Commission. Below are a couple of things that caught my attention from the two brief Factsheets.

The US has committed to “implement new safeguards” to ensure that SIGINT activities are “necessary and proportionate” (a legal standard under EU law – see Article 52 of the EU Charter on how the exercise of fundamental rights can be limited) in the pursuit of defined national security objectives. Therefore, the new agreement is expected to address the lack of safeguards for government access to personal data specifically outlined by the CJEU in the Schrems II judgment.

The US also committed to creating a “new mechanism for the EU individuals to seek redress if they believe they are unlawfully targeted by signals intelligence activities”. This new mechanism was characterized by the White House as having “independent and binding authority”. Per the White House, this redress mechanism includes “a new multi-layer redress mechanism that includes an independent Data Protection Review Court that would consist of individuals chosen from outside the US Government who would have full authority to adjudicate claims and direct remedial measures as needed”. The EU Commission mentioned in its own Factsheet that this would be a “two-tier redress system”. 

Importantly, the White House mentioned in the Factsheet that oversight of intelligence activities will also be boosted – “intelligence agencies will adopt procedures to ensure effective oversight of new privacy and civil liberties standards”. Oversight and redress are different issues and are both equally important – for details, see this piece by Christopher Docksey. However, they tend to be thought of as one and the same, so the fact that they are addressed separately in this announcement is significant.

One of the remarkable things about the White House announcement is that it includes several EU law-specific concepts: “necessary and proportionate”, “privacy, data protection” mentioned separately, “legal basis” for data flows. In another nod to the European approach to data protection, the entire issue of ensuring safeguards for data flows is framed as more than a trade or commerce issue – with references to a “shared commitment to privacy, data protection, the rule of law, and our collective security as well as our mutual recognition of the importance of trans-Atlantic data flows to our respective citizens, economies, and societies”.

Last, but not least, Europeans have always framed their concerns related to surveillance and data protection as fundamental rights concerns. The US also gives a nod to this approach by referring a couple of times to “privacy and civil liberties” safeguards (thus adding the “civil liberties” dimension) that will be “strengthened”. All of these are positive signs for a “rapprochement” of the two legal systems and are certainly an improvement over the “commerce”-focused approach of the past on the US side.

It should also be noted that the new framework will continue to be a self-certification scheme managed by the US Department of Commerce.

What does all of this mean in practice? As the White House details, this means that the Biden Administration will have to adopt (at least) an Executive Order (EO) that includes all these commitments and on the basis of which the European Commission will draft an adequacy decision.

Thus, there are great expectations in sight following the White House and European Commission Factsheets, and the entire privacy and data protection community is waiting to see further details.

In the meantime, I’ll leave you with an observation made by my colleague Amie Stepanovich, VP for US Policy at FPF, who highlighted that Section 702 of the Foreign Intelligence Surveillance Act (FISA) is set to expire on December 31, 2023. This presents Congress with an opportunity to act, building on the extensive work done by the US Government in the context of the Transatlantic Data Transfers debate.

Privacy Best Practices for Rideshare Drivers Using Dashcams

FPF & Uber Publish Guide Highlighting Privacy Best Practices for Drivers who Record Video and Audio on Rideshare Journeys

FPF and Uber have created a guide for US-based rideshare drivers who install “dashcams” – video cameras mounted on a vehicle’s dashboard or windshield. Many drivers install dashcams to improve safety, security, and accountability; the cameras can capture crashes or other safety-related incidents outside and inside cars. Dashcam footage can be helpful to drivers, passengers, insurance companies, and others when adjudicating legal claims. At the same time, dashcams can pose substantial privacy risks if appropriate safeguards are not in place to limit the collection, use, and disclosure of personal data. 

Dashcams typically record video outside a vehicle. Many dashcams also record in-vehicle audio and some record in-vehicle video. Regardless of the particular device used, ride-hail drivers who use dashcams must comply with applicable audio and video recording laws.

The guide explains relevant laws and provides practical tips to help drivers be transparent, limit data use and sharing, retain video and audio only for practical purposes, and use strict security controls. The guide highlights ways that drivers can employ physical signs, in-app notices, and other means to ensure passengers are informed about dashcam use and can make meaningful choices about whether to travel in a dashcam-equipped vehicle. Drivers seeking advice concerning specific legal obligations or incidents should consult legal counsel.

Privacy best practices for dashcams include: 

  1. Give individuals notice that they are being recorded
    • Place recording notices inside and on the vehicle.
    • Mount the dashcam in a visible location.
    • Consider, in some situations, giving an oral notification that recording is taking place.
    • Determine whether the ride sharing service provides recording notifications in the app, and utilize those in-app notices.
  2. Only record audio and video for defined, reasonable purposes
    • Only keep recordings for as long as needed for the original purpose.
    • Inform passengers as to why video and/or audio is being recorded.
  3. Limit sharing and use of recorded footage
    • Only share video and audio with third parties for relevant reasons that align with the original reason for recording.
    • Thoroughly review the rideshare service’s privacy policy and community guidelines if using an app-based rideshare service, and be aware that many rideshare companies maintain policies against widely disseminating recordings.
  4. Safeguard and encrypt recordings and delete unused footage
    • Identify dashcam vendors that provide the highest privacy and security safeguards.
    • Carefully read the terms and conditions when buying dashcams to understand the data flows.

Uber will make these best practices available to drivers in its app and on its website.

Many ride-hail drivers use dashcams in their cars, and the best practices published today provide practical guidance to help drivers implement privacy protections. But driver guidance is only one aspect of ensuring individuals’ privacy and security when traveling. Dashcam manufacturers must implement privacy-protective practices by default and provide easy-to-use privacy options. At the same time, ride-hail platforms must provide drivers with the appropriate tools to notify riders, and carmakers must safeguard drivers’ and passengers’ data collected by OEM devices.

In addition, dashcams are only one example of increasingly sophisticated sensors appearing in passenger vehicles as part of driver monitoring systems and related technologies. Further work is needed to apply comprehensive privacy safeguards to emerging technologies across the connected vehicle sector, from carmakers and rideshare services to mobility services providers and platforms. Comprehensive federal privacy legislation would be a good start. And in the absence of Congressional action, FPF is doing further work to identify key privacy risks and mitigation strategies for the broader class of driver monitoring systems that raise questions about technologies beyond the scope of this dashcam guide.

12th Annual Privacy Papers for Policymakers Awardees Explore the Nature of Privacy Rights & Harms

The winners of the 12th annual Future of Privacy Forum (FPF) Privacy Papers for Policymakers Award ask big questions about what should be the foundational elements of data privacy and protection and who will make key decisions about the application of privacy rights. Their scholarship will inform policy discussions around the world about privacy harms, corporate responsibilities, oversight of algorithms, and biometric data, among other topics.

“Policymakers and regulators in many countries are working to advance data protection laws, often seeking in particular to combat discrimination and unfairness,” said FPF CEO Jules Polonetsky. “FPF is proud to highlight independent researchers tackling big questions about how individuals and society relate to technology and data.”

This year’s papers also explore smartphone platforms as privacy regulators, the concept of data loyalty, and global privacy regulation. The award recognizes leading privacy scholarship that is relevant to policymakers in the U.S. Congress, at U.S. federal agencies, and among international data protection authorities. The winning papers will be presented at a virtual event on February 10, 2022. 

The winners of the 2022 Privacy Papers for Policymakers Award are:

From the record number of papers nominated this year, these six papers were selected by a diverse team of academics, advocates, and industry privacy professionals from FPF’s Advisory Board. The winning papers were chosen for research and proposed solutions that are relevant to policymakers and regulators in the U.S. and abroad.

In addition to the winning papers, FPF has selected two papers for Honorable Mention: Verification Dilemmas and the Promise of Zero-Knowledge Proofs by Kenneth Bamberger, University of California, Berkeley – School of Law; Ran Canetti, Boston University (Department of Computer Science; Faculty of Computing and Data Science; Center for Reliable Information Systems and Cybersecurity); Shafi Goldwasser, University of California, Berkeley – Simons Institute for the Theory of Computing; Rebecca Wexler, University of California, Berkeley – School of Law; and Evan Zimmerman, University of California, Berkeley – School of Law; and A Taxonomy of Police Technology’s Racial Inequity Problems by Laura Moy, Georgetown University Law Center.

FPF also selected a paper for the Student Paper Award, A Fait Accompli? An Empirical Study into the Absence of Consent to Third Party Tracking in Android Apps by Konrad Kollnig and Reuben Binns, University of Oxford; Pierre Dewitte, KU Leuven; Max van Kleek, Ge Wang, Daniel Omeiza, Helena Webb, and Nigel Shadbolt, University of Oxford. The Student Paper Award Honorable Mention was awarded to Yeji Kim, University of California, Berkeley – School of Law, for her paper, Virtual Reality Data and Its Privacy Regulatory Challenges: A Call to Move Beyond Text-Based Informed Consent.

The winning authors will join FPF staff to present their work at a virtual event with policymakers from around the world, academics, and industry privacy professionals. The event will be held on February 10, 2022, from 1:00 – 3:00 PM EST. The event is free and open to the general public. To register for the event, visit https://bit.ly/3qmJdL2.

Organizations must lead with privacy and ethics when researching and implementing neurotechnology: FPF and IBM Live event and report release

The Future of Privacy Forum (FPF) and the IBM Policy Lab released recommendations for promoting privacy and mitigating risks associated with neurotechnology, specifically brain-computer interfaces (BCIs). The new report provides developers and policymakers with actionable ways this technology can be implemented while protecting the privacy and rights of its users.

“We have a prime opportunity now to implement strong privacy and human rights protections as brain-computer interfaces become more widely used,” said Jeremy Greenberg, Policy Counsel at the Future of Privacy Forum. “Among other uses, these technologies have tremendous potential to treat people with diseases and conditions like epilepsy or paralysis and make it easier for people with disabilities to communicate, but these benefits can only be fully realized if meaningful privacy and ethical safeguards are in place.”

Brain-computer interfaces are computer-based systems that are capable of directly recording, processing, analyzing, or modulating human brain activity. The sensitivity of data that BCIs collect and the capabilities of the technology raise concerns over consent, as well as the transparency, security, and accuracy of the data. The report offers a number of policy and technical solutions to mitigate the risks of BCIs and highlights their positive uses.

“Emerging innovations like neurotechnology hold great promise to transform healthcare, education, transportation, and more, but they need the right guardrails in place to protect individuals’ privacy,” said IBM Chief Privacy Officer Christina Montgomery. “Working together with the Future of Privacy Forum, the IBM Policy Lab is pleased to release a new framework to help policymakers and businesses navigate the future of neurotechnology while safeguarding human rights.”

FPF and IBM have outlined several key policy recommendations to mitigate the privacy risks associated with BCIs, including:

FPF and IBM have also included several technical recommendations for BCI devices, including:

FPF-curated educational resources, policy & regulatory documents, academic papers, thought pieces, and technical analyses regarding brain-computer interfaces are available here.

Read FPF’s four-part series on Brain-Computer Interfaces (BCIs), providing an overview of the technology, use cases, privacy risks, and proposed recommendations for promoting privacy and mitigating risks associated with BCIs.

FPF Launches Asia-Pacific Region Office, Global Data Protection Expert Clarisse Girot Leads Team

The Future of Privacy Forum (FPF) has appointed Clarisse Girot, PhD, LLM, an expert on Asian and European privacy legislation, as Director of its new FPF Asia-Pacific office, based in Singapore. This new office expands FPF’s international reach in Asia and complements FPF’s offices in the U.S., Europe, and Israel, as well as partnerships around the globe.
 
Dr. Clarisse Girot is a privacy professional with over twenty years of experience in the privacy and data protection fields. Since 2017, Clarisse has been leading the Asian Business Law Institute’s (ABLI) Data Privacy Project, focusing on the regulation of cross-border data transfers in 14 Asian jurisdictions. Prior to her time at ABLI, Clarisse served as Counsellor to the President of the French Data Protection Authority (CNIL), who chaired the Article 29 Working Party. She previously served as head of CNIL’s Department of European and International Affairs, where she sat on the Article 29 Working Party, the group of EU Data Protection Authorities, and was involved in major international cases in data protection and privacy.
 
“Clarisse is joining FPF at an important time for data protection in the Asia-Pacific region. The two most populous countries in the world, India and China, are introducing general privacy laws, and established data protection jurisdictions, like Singapore, Japan, South Korea, and New Zealand, have recently updated their laws,” said FPF CEO Jules Polonetsky. “Her extensive knowledge of privacy law will provide vital insights for those interested in compliance with regional privacy frameworks and their evolution over time.”
 
FPF Asia-Pacific will focus on several priorities through the end of the year, including hosting an event at this year’s Singapore Data Protection Week. The office will provide expertise in digital data flows and discuss emerging data protection issues in a way that is useful for regulators, policymakers, and legal professionals. Rajah & Tann Singapore LLP is supporting the work of the FPF Asia-Pacific office.
 
“The FPF global team will greatly benefit from the addition of Clarisse. She will advise FPF staff, advisory board members, and the public on the most significant privacy developments in the Asia-Pacific region, including data protection bills and cross-border data flows,” said Gabriela Zanfir-Fortuna, Director for Global Privacy at FPF. “Her past experience in both Asia and Europe gives her a unique ability to confront the most complex issues dealing with cross-border data protection.”
 
As over 140 countries have now enacted a privacy or data protection law, FPF continues to expand its international presence to help data protection experts grapple with the challenges of ensuring responsible uses of data. Following the appointment of Malavika Raghavan as Senior Fellow for India in 2020, the launch of the FPF Asia-Pacific office further expands FPF’s international reach.
 
Dr. Gabriela Zanfir-Fortuna leads FPF’s international efforts and works on global privacy developments and European data protection law and policy. The FPF Europe office is led by Dr. Rob van Eijk, who, prior to joining FPF, worked at the Dutch Data Protection Authority as a Senior Supervision Officer and Technologist for nearly ten years. FPF has created thriving partnerships with leading privacy research organizations in the European Union, such as Dublin City University and the Brussels Privacy Hub of the Vrije Universiteit Brussel (VUB). FPF continues to serve as a leading voice in Europe on issues of international data flows, the ethics of AI, and emerging privacy issues. FPF Europe recently published a report comparing the 2021-2022 regulatory strategies of 15 Data Protection Authorities to provide insights into the future of enforcement and regulatory action in the EU.
 
Outside of Europe, FPF has launched a variety of projects to advance tech policy leadership and scholarship in regions around the world, including Israel and Latin America. The work of the Israel Tech Policy Institute (ITPI), led by Managing Director Limor Shmerling Magazanik, includes publishing a report on AI Ethics in Government Services and organizing an OECD workshop with the Israeli Ministry of Health on access to health data for research.
 
In Latin America, FPF has partnered with the leading research association Data Privacy Brasil and provided in-depth analysis of Brazil’s LGPD privacy legislation and of various data privacy cases decided by the Brazilian Supreme Court. FPF recently organized a panel during the CPDP LatAm Conference that explored the state of Latin American data protection laws alongside experts from Uber, the University of Brasilia, and the Interamerican Institute of Human Rights.
 

Read Dr. Girot’s Q&A on the FPF blog. Stay updated: Sign up for FPF Asia-Pacific email alerts.
 

FPF and Leading Health & Equity Organizations Issue Principles for Privacy & Equity in Digital Contact Tracing Technologies

With support from the Robert Wood Johnson Foundation, FPF engaged leaders within the privacy and equity communities to develop actionable guiding principles and a framework to help bolster the responsible implementation of digital contact tracing technologies (DCTT). Today, seven privacy, civil rights, and health equity organizations signed on to these guiding principles for organizations implementing DCTT.

“We learned early in our Privacy and Pandemics initiative that unresolved ethical, legal, social, and equity issues may challenge the responsible implementation of digital contact tracing technologies,” said Jules Polonetsky, CEO of the Future of Privacy Forum. “So we engaged leaders within the civil rights, health equity, and privacy communities to create a set of actionable principles to help guide organizations implementing digital contact tracing that respects individual rights.”

Contact tracing has long been used to monitor the spread of various infectious diseases. In light of COVID-19, governments and companies began deploying digital exposure notification using Bluetooth and geolocation data on mobile devices to boost contact tracing efforts and quickly identify individuals who may have been exposed to the virus. However, as DCTT begins to play an important role in public health, it is important to take necessary steps to ensure equity in access to DCTT and understand the societal risks and tradeoffs that might accompany its implementation today and in the future. Governance efforts that seek to better understand these risks will be better able to bolster public trust in DCTT technologies. 

“LGBT Tech is proud to have participated in the development of the Principles and Framework alongside FPF and other organizations. We are heartened to see that the focus of these principles is on historically underserved and under-resourced communities everywhere, like the LGBTQ+ community. We believe the Principles and Framework will help ensure that the needs and vulnerabilities of these populations are at the forefront during today’s pandemic and future pandemics.”

Carlos Gutierrez, Deputy Director and General Counsel, LGBT Tech

“If we establish practices that protect individual privacy and equity, digital contact tracing technologies could play a pivotal role in tracking infectious diseases,” said Dr. Rachele Hendricks-Sturrup, Research Director at the Duke-Margolis Center for Health Policy. “These principles allow organizations implementing digital contact tracing to take ethical and responsible approaches to how their technology collects, tracks, and shares personal information.”

FPF, together with Dialogue on Diversity, the National Alliance Against Disparities in Patient Health (NADPH), BrightHive, and LGBT Tech, developed the principles, which advise organizations implementing DCTT to commit to the following actions:

  1. Be Transparent About How Data Is Used and Shared.
  2. Apply Strong De-Identification Techniques and Solutions.
  3. Empower Users Through Tiered Opt-in/Opt-out Features and Data Minimization.
  4. Acknowledge and Address Privacy, Security, and Nondiscrimination Protection Gaps.
  5. Create Equitable Access to DCTT.
  6. Acknowledge and Address Implicit Bias Within and Across Public and Private Settings.
  7. Democratize Data for Public Good While Employing Appropriate Privacy Safeguards.
  8. Adopt Privacy-By-Design Standards That Make DCTT Broadly Accessible.

Additional supporters of these principles include the Center for Democracy and Technology and Human Rights First.

To learn more and sign on to the DCTT Principles visit fpf.org/DCTT.

Support for this program was provided by the Robert Wood Johnson Foundation. The views expressed here do not necessarily reflect the views of the Foundation.

Navigating Preemption through the Lens of Existing State Privacy Laws

This post is the second of two posts on federal preemption and enforcement in United States federal privacy legislation. See Preemption in US Privacy Laws (June 14, 2021).

In drafting a federal baseline privacy law in the United States, lawmakers must decide to what extent the law will override state and local privacy laws. In a previous post, we discussed a survey of 12 existing federal privacy laws passed between 1968 and 2003, and the extent to which they are preemptive of similar state laws.

Another way to approach the same question, however, is to examine the hundreds of existing state privacy laws currently on the books in the United States. Conversations around federal preemption inevitably focus on comprehensive laws like the California Consumer Privacy Act or the Virginia Consumer Data Protection Act, but there are hundreds of other state privacy laws that regulate commercial and government uses of data.

In reviewing existing state laws, we find that they can be categorized usefully into: laws that complement heavily regulated sectors (such as health and finance); laws of general applicability; common law; laws governing state government activities (such as schools and law enforcement); comprehensive laws; longstanding or narrowly applicable privacy laws; and emerging sectoral laws (such as biometrics or drones regulations). As a resource, we recommend: Robert Ellis Smith, Compilation of State and Federal Privacy Laws (last supplemented in 2018). 

  1. Heavily Regulated Sectoral Silos. Most federal proposals for a comprehensive privacy law would not supersede other existing federal laws that contain privacy requirements for businesses, such as the Health Insurance Portability and Accountability Act (HIPAA) or the Gramm-Leach-Bliley Act (GLBA). As a result, a new privacy law should probably not preempt state sectoral laws that: (1) supplement their federal counterparts and (2) were intentionally not preempted by those federal regimes. In many cases, robust compliance regimes have been built around federal and state parallel requirements, creating entrenched privacy expectations, privacy tools, and compliance practices for organizations (“lock in”).
  2. Laws of General Applicability. All 50 states have laws barring unfair and deceptive commercial and trade practices (UDAP), as well as generally applicable laws against fraud, unconscionable contracts, and other consumer protections. In cases where violations involve the misuse of personal information, such claims could be inadvertently preempted by a national privacy law.
  3. State Common Law. Privacy claims have been evolving in US common law over the last hundred years, and claims vary from state to state. A federal privacy law might preempt (or not preempt) claims brought under theories of negligence, breach of contract, product liability, invasions of privacy, or other “privacy torts.”
  4. State Laws Governing State Government Activities. In general, states retain the right to regulate their own government entities, and a commercial baseline privacy law is unlikely to affect such state privacy laws. These include, for example, state “mini Privacy Acts” applying to state government agencies’ collection of records, state privacy laws applicable to public schools and school districts, and state regulations involving law enforcement — such as government facial recognition bans.
  5. Comprehensive or Non-Sectoral State Laws. Lawmakers considering the extent of federal preemption should take extra care to consider the effect on different aspects of omnibus or comprehensive consumer privacy laws, such as the California Consumer Privacy Act (CCPA), the Colorado Privacy Act, and the Virginia Consumer Data Protection Act. In addition, however, there are a number of other state privacy laws that can be considered “non-sectoral” because they apply broadly to businesses that collect or use personal information. These include, for example, CalOPPA (requiring commercial privacy policies), the California “Shine the Light” law (requiring disclosures from companies that share personal information for direct marketing), data breach notification laws, and data disposal laws.
  6. Longstanding, Narrowly Applicable State Privacy Laws. Many states have relatively long-standing privacy statutes on the books that govern narrow use cases, such as: state laws governing library records, social media password laws, mugshot laws, anti-paparazzi laws, state laws governing audio surveillance between private parties, and laws governing digital assets of decedents. In many cases, such laws could be expressly preserved or incorporated into a federal law.
  7. Emerging Sectoral and Future-Looking Privacy Laws. New state laws have emerged in recent years in response to novel concerns, including for: biometric data; drones; connected and autonomous vehicles; the Internet of Things; data broker registration; and disclosure of intimate images. This trend is likely to continue, particularly in the absence of a federal law.

Congressional intent is the “ultimate touchstone” of preemption. Lawmakers should consider long-term effects on current and future state laws, including how they will be impacted by a preemption provision, as well as how they might be expressly preserved through a Savings Clause. In order to help build consensus, lawmakers should work with stakeholders and experts in the numerous categories of laws discussed above, to consider how they might be impacted by federal preemption.

ICYMI: Read the first blog in this series, Preemption in US Privacy Laws.

Manipulative Design: Defining Areas of Focus for Consumer Privacy

In consumer privacy, the phrase “dark patterns” is everywhere. Emerging from a wide range of technical and academic literature, it now appears in at least two US privacy laws: the California Privacy Rights Act and the Colorado Privacy Act (which, if signed by the Governor, will come into effect in 2025).

Under both laws, companies will be prohibited from using “dark patterns,” or “user interface[s] designed or manipulated with the substantial effect of subverting or impairing user autonomy, decision‐making, or choice,” to obtain user consent in certain situations–for example, for the collection of sensitive data.

When organizations give individuals choices, some forms of manipulation have long been barred by consumer protection laws, with the Federal Trade Commission and state Attorneys General prohibiting companies from deceiving or coercing consumers into taking actions they did not intend or striking bargains they did not want. But consumer protection law does not typically prohibit organizations from persuading consumers to make a particular choice. And it is often unclear where the lines fall between cajoling, persuading, pressuring, nagging, annoying, or bullying consumers. The California and Colorado laws seek to do more than merely bar deceptive practices; they prohibit design that “subverts or impairs user autonomy.”

What does it mean to subvert user autonomy, if a design does not already run afoul of traditional consumer protection law? Just as in the physical world, the design of digital platforms and services always influences behavior — what to pay attention to, what to read and in what order, how much time to spend, what to buy, and so on. To paraphrase Harry Brignull (credited with coining the term), not everything “annoying” can be a dark pattern. Some examples of dark patterns are both clear and harmful, such as a design that tricks users into making recurring payments, or a service that offers a “free trial” and then makes it difficult or impossible to cancel. In other cases, the presence of “nudging” may be clear, but the harms may be less so, such as when testing which color shades are most effective at encouraging sales. Still others fall in a legal grey area: for example, is it ever appropriate for a company to repeatedly “nag” users to make a choice that benefits the company, with little or no accompanying benefit to the user?

In Fall 2021, the Future of Privacy Forum will host a series of workshops with technical, academic, and legal experts to help define clear areas of focus for consumer privacy and guidance for policymakers and legislators. These workshops will feature experts on manipulative design in at least three contexts of consumer privacy: (1) Youth & Education; (2) Online Advertising and US Law; and (3) GDPR and European Law.

As lawmakers address this issue, we identify at least four distinct areas of concern:

This week, at the inaugural Dublin Privacy Symposium, FPF will join other experts to discuss principles for transparency and trust. The design of user interfaces for digital products and services pervades modern life and directly shapes the choices people make about sharing their personal information.

India’s new Intermediary & Digital Media Rules: Expanding the Boundaries of Executive Power in Digital Regulation


Author: Malavika Raghavan

India’s new rules on intermediary liability and regulation of publishers of digital content have generated significant debate since their release in February 2021. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (the Rules) have:

The majority of these provisions were unanticipated, resulting in a raft of petitions filed in High Courts across the country challenging the validity of various aspects of the Rules, including their constitutionality. On 25 May 2021, the three-month compliance period for some new requirements for significant social media intermediaries (so designated by the Rules) expired, with many intermediaries not yet in compliance, opening them up to liability under the Information Technology Act as well as wider civil and criminal laws. This has reignited debates about the impact of the Rules on business continuity and liability, citizens’ access to online services, privacy, and security.

Following on from FPF’s previous blog post highlighting some aspects of these Rules, this article presents an overview of the Rules before diving into critical issues regarding their interpretation and application in India. It concludes by taking stock of some of the emerging effects of these new regulations, which have major implications for millions of Indian users as well as for digital service providers serving the Indian market.

1. Brief overview of the Rules: Two new regimes for ‘intermediaries’ and ‘publishers’ 

The new Rules create two regimes for two different categories of entities: ‘intermediaries’ and ‘publishers’. Intermediaries have been the subject of prior regulations – the Information Technology (Intermediaries guidelines) Rules, 2011 (the 2011 Rules), now superseded by these Rules. However, the category of “publishers” and the related regime created by these Rules did not previously exist.

The Rules begin with commencement provisions and definitions in Part I. Part II of the Rules applies to intermediaries (as defined in the Information Technology Act 2000 (IT Act)) that transmit electronic records on behalf of others, and includes online intermediary platforms (like YouTube, WhatsApp, and Facebook). The rules in this Part primarily flesh out the protections offered in Section 79 of the IT Act, which gives passive intermediaries the benefit of a ‘safe harbour’ from liability for objectionable information shared by third parties using their services — somewhat akin to protections under Section 230 of the US Communications Decency Act. To claim this protection from liability, intermediaries need to undertake certain ‘due diligence’ measures, including informing users of the types of content that cannot be shared and following content take-down procedures (for which safeguards evolved over time through important case law). The new Rules supersede the 2011 Rules and also significantly expand on them, introducing new provisions and additional due diligence requirements that are detailed further in this blog.

Part III of the Rules applies to a new, previously non-existent category of entities designated as ‘publishers’. This category is further divided into ‘publishers of news and current affairs content’ and ‘publishers of online curated content’. Part III then sets up extensive requirements for publishers: adherence to specific codes of ethics, onerous content take-down requirements, and a three-tier grievance process with appeals lying to an Executive Inter-Departmental Committee of Central Government bureaucrats.

Finally, the Rules contain two provisions relating to content-blocking orders that apply to all entities (i.e. both intermediaries and publishers). They lay out a new process by which Central Government officials can issue directions to intermediaries and publishers to delete, modify, or block content, either following a grievance process (Rule 15) or through procedures for “emergency” blocking orders, which may be passed ex parte. These provisions stem from powers to issue directions to intermediaries to block public access to any information through any computer resource (Section 69A of the IT Act). Interestingly, they have been introduced separately from the existing rules for blocking, the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009.

2. Key issues for intermediaries under the Rules

2.1 A new class of ‘social media intermediaries’

The term ‘intermediary’ is broadly defined in the IT Act, covering a range of entities involved in the transmission of electronic records. The Rules introduce two new sub-categories:

Given that a popular messaging app like WhatsApp has over 400 million users in India, the threshold appears to be fairly conservative. The Government may order any intermediary to comply with the same obligations as significant social media intermediaries (SSMIs) under Rule 6 if its services are adjudged to pose a risk of harm to national security, the sovereignty and integrity of India, India’s foreign relations, or public order.

SSMIs have to follow substantially more onerous “additional due diligence” requirements to claim the intermediary safe harbour (including mandatory traceability of message originators and proactive automated screening, as discussed below). These new requirements raise privacy and data security concerns: they extend beyond traditional ideas of platform “due diligence” and potentially expose the content of private communications, creating new privacy risks for users in India.

2.2 Additional requirements for SSMIs: resident employees, mandated message traceability, automated content screening

Extensive new requirements are set out in the new Rule 4 for SSMIs. 

Provisions mandating modifications to the technical design of encrypted platforms to enable traceability seem to go beyond merely requiring intermediary due diligence. Instead, they appear to draw on separate Government powers relating to the interception and decryption of information (under Section 69 of the IT Act). In addition, separate stand-alone rules laying out procedures and safeguards for such interception and decryption orders already exist in the Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009. Rule 4(2) even acknowledges these provisions, raising the question of whether these Rules (relating to intermediaries and their safe harbours) can be used to expand the scope of Section 69 or the rules thereunder.

Proceedings initiated by WhatsApp LLC in the Delhi High Court and by Free and Open Source Software (FOSS) developer Praveen Arimbrathodiyil in the Kerala High Court have both challenged the legality and validity of Rule 4(2), on grounds including that it is ultra vires, going beyond the scope of its parent statutory provisions (Sections 79 and 69A) and the intent of the IT Act itself. Substantively, the provision is also challenged on the basis that it would violate users’ fundamental rights, including the right to privacy and the right to free speech and expression, due to the chilling effect that the stripping back of encryption would have.

Though the objective of the provision is laudable (i.e. to limit the circulation of violent or previously removed content), the move towards proactive automated monitoring has raised serious concerns regarding censorship on social media platforms. Rule 4(4) appears to acknowledge the deep tensions that this requirement creates with privacy and free speech, as seen in the provisions requiring these screening measures to be proportionate to the free speech and privacy interests of users, to be subject to human oversight, and to undergo reviews of the automated tools to assess fairness, accuracy, propensity for bias or discrimination, and impact on privacy and security. However, given the vagueness of this wording compared with the trade-off of losing intermediary immunity, scholars and commentators have noted the obvious potential for ‘over-compliance’ and excessive screening out of content. Many (including the petitioner in the Praveen Arimbrathodiyil matter) have also noted that automated filters are not sophisticated enough to differentiate between violent unlawful images and legitimate journalistic material. The concern is that such measures could lead to large-scale screening out of ‘valid’ speech and expression, with serious consequences for constitutional rights to free speech and expression, which also protect ‘the rights of individuals to listen, read and receive the said speech‘ (Tata Press Ltd v. Mahanagar Telephone Nigam Ltd, (1995) 5 SCC 139).

Other requirements, such as the appointment of resident compliance and grievance officers, appear to be aimed at creating more user-friendly and accountable intermediaries. However, the imposition of a single set of requirements is especially onerous for smaller or volunteer-run intermediary platforms which may not have the income streams or staff to provide for such mechanisms. Indeed, the petition in the Praveen Arimbrathodiyil matter has challenged certain of these requirements as a threat to the future of the volunteer-led Free and Open Source Software (FOSS) movement in India, as they place similar requirements on small FOSS initiatives as on large proprietary Big Tech intermediaries.

Other obligations that stipulate turn-around times for intermediaries include (i) a requirement to remove or disable access to content within 36 hours of receipt of a Government or court order relating to unlawful information on the intermediary’s computer resources (under Rule 3(1)(d)), and (ii) a requirement to provide information within 72 hours of receiving an order from an authorised Government agency undertaking investigative activity (under Rule 3(1)(j)).

Similar to the concerns with automated screening, there are concerns that the new grievance process could lead to private entities becoming the arbiters of appropriate content and free speech, a position that was specifically reversed in a seminal 2015 Supreme Court decision, which clarified that a Government or court order was needed for content takedowns.

3. Key issues for the new ‘publishers’ subject to the Rules, including OTT players

3.1 New Codes of Ethics and three-tier redress and oversight system for digital news media and OTT players 

Digital news media and OTT players have been designated as ‘publishers of news and current affairs content’ and ‘publishers of online curated content’ respectively in Part III of the Rules. Each category has then been subjected to a separate Code of Ethics. In the case of digital news media, the codes applicable to newspapers and cable television have been applied. For OTT players, the Appendix sets out principles regarding content that can be created and display classifications. To enforce these codes and to address grievances from the public about their content, publishers are now mandated to set up a grievance system, which will be the first tier of a three-tier “appellate” system culminating in an oversight mechanism run by the Central Government with extensive powers of sanction.

At least five legal challenges have been filed in various High Courts challenging the competence and authority of the Ministry of Electronics & Information Technology (MeitY) to pass the Rules, as well as their validity, namely: (i) in the Kerala High Court, LiveLaw Media Private Limited vs Union of India WP(C) 6272/2021; in the Delhi High Court, three petitions tagged together, being (ii) Foundation for Independent Journalism vs Union of India WP(C) 3125/2021, (iii) Quint Digital Media Limited vs Union of India WP(C) 11097/2021, and (iv) Sanjay Kumar Singh vs Union of India and others WP(C) 3483/2021; and (v) in the Karnataka High Court, Truth Pro Foundation of India vs Union of India and others, W.P. 6491/2021. This is in addition to a fresh petition filed on 10 June 2021, TM Krishna vs Union of India, challenging the entirety of the Rules (both Parts II and III) on the basis that they violate the rights to free speech (Article 19 of the Constitution) and privacy (including under Article 21 of the Constitution), that they are manifestly arbitrary (failing the test under Article 14), and that they fall foul of principles of delegation of powers.

Some of the key issues emerging from these Rules in Part III and the challenges to them are highlighted below. 

3.2 Lack of legal authority and competence to create these Rules

There has been substantial debate on the lack of clarity regarding the legal authority of the Ministry of Electronics & Information Technology (MeitY) under the IT Act to create these Rules. These concerns arise at various levels.

First, there is a concern that Levels I and II of the three-tier system result in a privatisation of adjudications relating to the free speech and expression of creative content producers, which would otherwise be litigated in Courts and Tribunals as matters of free speech. As noted by many (including the LiveLaw petition at page 33), this could have the effect of overturning judicial precedent in Shreya Singhal v. Union of India ((2013) 12 S.C.C. 73), which specifically read down section 79 of the IT Act to avoid a situation where private entities were the arbiters determining the legitimacy of takedown orders. Second, despite referring to “self-regulation”, this system is subject to executive oversight (unlike the existing models for offline newspapers and broadcasting).

The Inter-Departmental Committee is composed entirely of Central Government bureaucrats, and it may review complaints escalated through the three-tier system or referred directly by the Ministry, following which it can deploy a range of sanctions, from warnings, to mandating apologies, to deleting, modifying or blocking content. This also raises the question of whether the Committee meets the legal requirements for an administrative body undertaking a ‘quasi-judicial’ function, especially one that may adjudicate on matters relating to free speech and privacy rights. Finally, while the objective of creating some standards and codes for such content creators may be laudable, it is unclear whether such an extensive oversight mechanism with powers of sanction over online publishers can be validly created under the rubric of intermediary liability provisions.

4. New powers to delete, modify or block information for public access 

As described at the start of this blog, the Rules add new powers for the deletion, modification and blocking of content from intermediaries and publishers. While section 69A of the IT Act (and the Rules thereunder) does include blocking powers for the Government, they exist only vis-à-vis intermediaries. Rule 15 expands this power to ‘publishers’. It also provides a new avenue for such orders to intermediaries, outside of the existing rules for blocking information under the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009.

Graver concerns arise from Rule 16, which allows for the passing of emergency orders blocking information, including without giving publishers or intermediaries an opportunity to be heard. There is a provision for such an order to be reviewed by the Inter-Departmental Committee within two days of its issue.

Both Rule 15 and 16 apply to all entities contemplated in the Rules. Accordingly, they greatly expand executive power and oversight over digital media services in India, including social media, digital news media and OTT on-demand services. 

5. Conclusions and future implications

The new Rules in India have opened up deep questions for online intermediaries and providers of digital media services serving the Indian market. 

For intermediaries, this creates a difficult and even existential choice: the requirements (especially those relating to traceability and automated screening) appear to set an improbably high bar given the reality of their technical systems. However, failure to comply results not only in the loss of the safe harbour from liability but, as seen in the new Rule 7, also opens them up to punishment under the IT Act and criminal law in India.

For digital news and OTT players, the consequences of non-compliance and the level of enforcement remain to be understood, especially given open questions regarding the validity of the legal basis for creating these rules. Given the numerous petitions filed against the Rules, there is also substantial uncertainty regarding the future, although the Rules themselves have the full force of law at present.

Overall, it does appear that attempts to create a ‘digital media’ watchdog would be better dealt with in standalone legislation, potentially sponsored by the Ministry of Information and Broadcasting (MIB), which has the traditional remit over such areas. Indeed, the administration of Part III of the Rules has been delegated by MeitY to the MIB, pointing to the genuine split in competence between these Ministries.

Finally, potential overlaps with India’s proposed Personal Data Protection Bill (if passed) also create tensions for the future. It remains to be seen whether the provisions on traceability will survive the test of constitutional validity set out in India’s privacy judgement (Justice K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1). Irrespective of this determination, the Rules appear to have some dissonance with the data retention and data minimisation requirements in the last draft of the Personal Data Protection Bill, not to mention other obligations relating to Privacy by Design and data security safeguards. Interestingly, although the Bill was released in December 2019, the definition of ‘social media intermediary’ included in an explanatory clause to its section 26(4) closely tracks the definition in Rule 2(w), while departing from it by carving out certain intermediaries from the definition. This is already resulting in moves such as Google’s plea of 2 June 2021 in the Delhi High Court asking for protection from being declared a social media intermediary.

These new Rules have laid bare the inherent tensions within digital regulation between the goals of freedom of speech and expression and the right to privacy, and the competing governance objectives of law enforcement (such as limiting the circulation of violent, harmful or criminal content online) and national security. The ultimate legal effect of these Rules will be determined as much by the outcome of the various petitions challenging their validity as by the enforcement challenges raised by casting such a wide net, one that covers millions of users and thousands of entities, all engaged in creating India’s growing digital public sphere.

Photo credit: Gerd Altmann from Pixabay

Read more Global Privacy thought leadership:

South Korea: The First Case where the Personal Information Protection Act was Applied to an AI System

China: New Draft Car Privacy and Security Regulation is Open for Public Consultation

A New Era for Japanese Data Protection: 2020 Amendments to the APPI

New FPF Report Highlights Privacy Tech Sector Evolving from Compliance Tools to Platforms for Risk Management and Data Utilization

As we enter the third phase of development of the privacy tech market, purchasers are demanding more integrated solutions, product offerings are more comprehensive, and startup valuations are higher than ever, according to a new report from the Future of Privacy Forum and Privacy Tech Alliance. These factors are leading to companies providing a wider range of services, acting as risk management platforms, and focusing on support of business outcomes.

“The privacy tech sector is at an inflection point, as its offerings have expanded beyond assisting with regulatory compliance,” said FPF CEO Jules Polonetsky. “Increasingly, companies want privacy tech to help businesses maximize the utility of data while managing ethics and data protection compliance.”

According to the report, “Privacy Tech’s Third Generation: A Review of the Emerging Privacy Tech Sector,” regulations are often the biggest driver for buyers’ initial privacy tech purchases. Organizations also are deploying tools to mitigate potential harms from the use of data. However, buyers serving global markets increasingly need privacy tech that offers data availability and control and supports its utility, in addition to regulatory compliance. 

The report finds the COVID-19 pandemic has accelerated global marketplace adoption of privacy tech as dependence on digital technologies grows. Privacy is becoming a competitive differentiator in some sectors, and TechCrunch reports that 200+ privacy startups have together raised more than $3.5 billion over hundreds of individual rounds of funding. 

“The customers buying privacy-enhancing tech used to be primarily Chief Privacy Officers,” said report lead author Tim Sparapani. “Now it’s also Chief Marketing Officers, Chief Data Scientists, and Strategy Officers who value the insights they can glean from de-identified customer data.”

The report highlights five trends in the privacy enhancing tech market:

The report also draws seven implications for competition in the market:

The report makes a series of recommendations, including that the industry define as a priority a common vernacular for privacy tech; set standards for technologies in the “privacy stack” such as differential privacy, homomorphic encryption, and federated learning; and explore the needs of companies for privacy tech based upon their size, sector, and structure. It calls on vendors to recognize the need to provide adequate support to customers to increase uptake and speed time from contract signing to successful integration.

The Future of Privacy Forum launched the Privacy Tech Alliance (PTA) as a global initiative with a mission to define, enhance and promote the market for privacy technologies. The PTA brings together innovators in privacy tech with customers and key stakeholders.

Members of the PTA Advisory Board, which includes Anonos, BigID, D-ID, Duality, Ethyca, Immuta, OneTrust, Privacy Analytics, Privitar, SAP, Truata, TrustArc, Wirewheel, and ZL Tech, have formed a working group to address impediments to growth identified in the report. The PTA working group will define a common vernacular and typology for privacy tech as a priority project with chief privacy officers and other industry leaders who are members of FPF. Other work will seek to develop common definitions and standards for privacy-enhancing technologies such as differential privacy, homomorphic encryption, and federated learning and identify emerging trends for venture capitalists and other equity investors in this space. Privacy Tech companies can apply to join the PTA by emailing [email protected].


Perspectives on the Privacy Tech Market

Quotes from Members of the Privacy Tech Alliance Advisory Board on the Release of the “Privacy Tech’s Third Generation” Report


“The ‘Privacy Tech Stack’ outlined by the FPF is a great way for organizations to view their obligations and opportunities to assess and reconcile business and privacy objectives. The Schrems II decision by the Court of Justice of the European Union highlights that skipping the second ‘Process’ layer can result in desired ‘Outcomes’ in the third layer (e.g., cloud processing of, or remote access to, cleartext data) being unlawful – despite their global popularity – without adequate risk management controls for decentralized processing.” — Gary LaFever, CEO & General Counsel, Anonos


“As a founding member of this global initiative, we are excited by the conclusions drawn from this foundational report – we’ve seen parallels in our customer base, from needing an enterprise-wide solution to the rich opportunity for collaboration and integration. The privacy tech sector continues to mature as does the imperative for organizations of all sizes to achieve compliance in light of the increasingly complicated data protection landscape.” — Heather Federman, VP Privacy and Policy at BigID


“There is no doubt of the massive importance of the privacy sector, an area which is experiencing huge growth. We couldn’t be more proud to be part of the Privacy Tech Alliance Advisory Board and absolutely support the work they are doing to create alignment in the industry and help it face the current set of challenges. In fact we are now working on a similar initiative in the synthetic media space to ensure that ethical considerations are at the forefront of that industry too.” — Gil Perry, Co-Founder & CEO, D-ID


“We congratulate the Future of Privacy Forum and the Privacy Tech Alliance on the publication of this highly comprehensive study, which analyzes key trends within the rapidly expanding privacy tech sector. Enterprises today are increasingly reliant on privacy tech, not only as a means of ensuring regulatory compliance but also in order to drive business value by facilitating secure collaborations on their valuable and often sensitive data. We are proud to be part of the PTA Advisory Board, and look forward to contributing further to its efforts to educate the market on the importance of privacy-tech, the various tools available and their best utilization, ultimately removing barriers to successful deployments of privacy-tech by enterprises in all industry sectors” — Rina Shainski, Chairwoman, Co-founder, Duality


“Since the birth of the privacy tech sector, we’ve been helping companies find and understand the data they have, compare it against applicable global laws and regulations, and remediate any gaps in compliance. But as the industry continues to evolve, privacy tech also is helping show business value beyond just compliance. Companies are becoming more transparent, differentiating on ethics and ESG, and building businesses that differentiate on trust. The privacy tech industry is growing quickly because we’re able to show value for compliance as well as actionable business insights and valuable business outcomes.” — Kabir Barday, CEO, OneTrust


“Leading organizations realize that to be truly competitive in a rapidly evolving marketplace, they need to have a solid defensive footing. Turnkey privacy technologies enable them to move onto the offense by safely leveraging their data assets rapidly at scale.” — Luk Arbuckle, Chief Methodologist, Privacy Analytics


“We appreciate FPF’s analysis of the privacy tech marketplace and we’re looking forward to further research, analysis, and educational efforts by the Privacy Tech Alliance. Customers and consumers alike will benefit from a shared understanding and common definitions for the elements of the privacy stack.” — Corinna Schulze, Director, EU Government Relations, Global Corporate Affairs, SAP


“The report shines a light on the evolving sophistication of the privacy tech market and the critical need for businesses to harness emerging technologies that can tackle the multitude of operational challenges presented by the big data economy. Businesses are no longer simply turning to privacy tech vendors to overcome complexities with compliance and regulation; they are now mapping out ROI-focused data strategies that view privacy as a key commercial differentiator. In terms of market maturity, the report highlights a need to overcome ambiguities surrounding new privacy tech terminology, as well as discrepancies in the mapping of technical capabilities to actual business needs. Moving forward, the advantage will sit with those who can offer the right blend of technical and legal expertise to provide the privacy stack assurances and safeguards that buyers are seeking – from a risk, deployment and speed-to-value perspective. It’s worth noting that the growing importance of data privacy to businesses sits in direct correlation with the growing importance of data privacy to consumers. Trūata’s Global Consumer State of Mind Report 2021 found that 62% of global consumers would feel more reassured and would be more likely to spend with companies if they were officially certified to a data privacy standard. Therefore, in order to manage big data in a privacy-conscious world, the opportunity lies with responsive businesses that move with agility and understand the return on privacy investment. The shift from manual, restrictive data processes towards hyper automation and privacy-enhancing computation is where the competitive advantage can be gained and long-term consumer loyalty—and trust— can be retained.” — Aoife Sexton, Chief Privacy Officer and Chief of Product Innovation, Trūata


“As early pioneers in this space, we’ve had a unique lens on the evolving challenges organizations have faced in trying to integrate technology solutions to address dynamic, changing privacy issues in their organizations, and we believe the Privacy Technology Stack introduced in this report will drive better organizational decision-making related to how technology can be used to sustainably address the relationships among the data, processes, and outcomes.” — Chris Babel, CEO, TrustArc


“It’s important for companies that use data to do so ethically and in compliance with the law, but those are not the only reasons why the privacy tech sector is booming. In fact, companies with exceptional privacy operations gain a competitive advantage, strengthen customer relationships, and accelerate sales.” — Justin Antonipillai, Founder & CEO, Wirewheel

The right to be forgotten is not compatible with the Brazilian Constitution. Or is it?

Brazilian Supreme Federal Court

Author: Dr. Luca Belli

Dr. Luca Belli is Professor at FGV Law School, Rio de Janeiro, where he leads the CyberBRICS Project and the Latin American edition of the Computers, Privacy and Data Protection (CPDP) conference. The opinions expressed in his articles are strictly personal. The author can be contacted at [email protected].

The Brazilian Supreme Federal Court, or “STF” in its Brazilian acronym, recently took a landmark decision concerning the right to be forgotten (RTBF), finding that it is incompatible with the Brazilian Constitution. This attracted international attention to Brazil for a topic quite distant from the sadly frequent environmental, health, and political crises.

Readers should be warned that while reading this piece they might experience disappointment, perhaps even frustration, then renewed interest and curiosity and finally – and hopefully – an increased open-mindedness, understanding a new facet of the RTBF debate, and how this is playing out at constitutional level in Brazil.

This might happen because although the STF relies on the “RTBF” label, the content behind such label is quite different from what one might expect after following the same debate in Europe. From a comparative law perspective, this landmark judgment tellingly shows how similar constitutional rights play out in different legal cultures and may lead to heterogeneous outcomes based on the constitutional frameworks of reference.   

How it started: insolvency seasoned with personal data

As it is well-known, the first global debate on what it means to be “forgotten” in the digital environment arose in Europe, thanks to Mario Costeja Gonzalez, a Spaniard who, paradoxically, will never be forgotten by anyone due to his key role in the construction of the RTBF.

Costeja famously requested to deindex from Google Search information about himself that he considered to be no longer relevant. Indeed, when anyone “googled” his name, the search engine returned as top results links to articles reporting Costeja’s past insolvency as a debtor. Costeja argued that, despite having been found insolvent, he had already paid his debt to justice and society many years before, and that it was therefore unfair for his name to continue to be associated ad aeternum with a mistake he made in the past.

The follow up is well known in data protection circles. The case reached the Court of Justice of the European Union (CJEU), which, in its landmark Google Spain Judgment (C-131/12), established that search engines shall be considered as data controllers and, therefore, they have an obligation to de-index information that is inappropriate, excessive, not relevant, or no longer relevant, when a data subject to whom such data refer requests it. Such an obligation was a consequence of Article 12.b of Directive 95/46 on the protection of personal data, a pre-GDPR provision that set the basis for the European conception of the RTBF, providing for the “rectification, erasure or blocking of data the processing of which does not comply with the provisions of [the] Directive, in particular because of the incomplete or inaccurate nature of the data.”

The indirect consequence of this historic decision, and the debate it generated, is that we have all come to consider the RTBF in the terms set by the CJEU. However, what is essential to emphasize is that the CJEU approach is only one possible conception and, importantly, it was possible because of the specific characteristics of the EU legal and institutional framework. We have come to think that RTBF means the establishment of a mechanism like the one resulting from the Google Spain case, but this is the result of a particular conception of the RTBF and of how this particular conception should – or could – be implemented.

The fact that the RTBF has been predominantly analyzed and discussed through European lenses does not mean that this is the only possible perspective, nor that this approach is necessarily the best. In fact, the Brazilian conception of the RTBF is remarkably different from a conceptual, constitutional, and institutional standpoint. The main concern of the Brazilian RTBF is not how a data controller might process personal data (this is the part where frustration and disappointment might arise in the reader), but the STF itself leaves the door open to such a possibility (this is the point where renewed interest and curiosity may arise).

The Brazilian conception of the right to be forgotten

Although the RTBF has acquired a fundamental relevance in digital policy circles, it is important to emphasize that, until recently, Brazilian jurisprudence had mainly focused on the juridical need for “forgetting” only in the analogue sphere. Indeed, before the CJEU Google Spain decision, the Brazilian Superior Court of Justice or “STJ” – the other Brazilian high court, which deals with the interpretation of the law, as distinct from the previously mentioned STF, which deals with constitutional matters – had already considered the RTBF as a right not to be remembered, affirmed by the individual vis-à-vis traditional media outlets.

This interpretation first emerged in the “Candelaria massacre” case, a gloomy page of Brazilian history, featuring a multiple homicide perpetrated in 1993 in front of the Candelaria Church, a beautiful colonial Baroque building in Rio de Janeiro’s downtown. The gravity and the particularly picturesque stage of the massacre led Globo TV, a leading Brazilian broadcaster, to feature the massacre in a TV show called Linha Direta. Importantly, the show included in the narration some details about a man suspected of being one of the perpetrators of the massacre but later discharged.

Understandably, the man filed a complaint arguing that the inclusion of his personal information in the TV show was causing him severe emotional distress, while also reviving suspicions against him for a crime he had already been discharged of many years before. In September 2013, further to Special Appeal No. 1,334,097, the STJ agreed with the plaintiff, establishing the man’s “right not to be remembered against his will, specifically with regard to discrediting facts.” This is how the RTBF was born in Brazil.

Importantly for our present discussion, this interpretation was not born out of digital technology and does not impinge upon the delisting of specific types of information from search engine results. In Brazilian jurisprudence the RTBF has been conceived as a general right to effectively limit the publication of certain information. The man included in the Globo reportage had been discharged many years before; hence he had a right to be “let alone,” as Warren and Brandeis would argue, and not to be remembered for something he had not even committed. The STJ therefore constructed its vision of the RTBF based on article 5.X of the Brazilian Constitution, which enshrines the fundamental rights to intimacy and preservation of image, two fundamental features of privacy.

Hence, although they utilize the same label, the STJ and CJEU conceptualize two remarkably different rights, when they refer to the RTBF. While both conceptions aim at limiting access to specific types of personal information, the Brazilian conception differs from the EU one on at least three different levels.

First, their constitutional foundations. While both conceptions are intimately intertwined with individuals’ informational self-determination, the STJ built the RTBF on the protection of privacy, honour and image, whereas the CJEU built it upon the fundamental right to data protection, which in the EU framework is a standalone fundamental right. Conspicuously, in the Brazilian constitutional framework an explicit right to data protection did not exist at the time of the Candelaria case, and only since 2020 has it been in the process of being recognized.

Secondly, and consequently, the original goal of the Brazilian conception of the RTBF was not to regulate how a controller should process personal data but rather to protect the private sphere of the individual. In this perspective, the goal of the STJ was not, and could not have been, to regulate the deindexation of specific incorrect or outdated information, but rather to regulate the deletion of “discrediting facts” so that the private life, honour and image of an individual would not be illegitimately violated.

Finally, yet extremely importantly, the fact that an institutional framework dedicated to data protection was simply absent in Brazil at the time of the decision did not allow the STJ the same leeway as the CJEU. The EU Justices enjoyed the privilege of delegating the implementation of the RTBF to search engines because such implementation would receive guidance from, and be subject to the review of, a well-consolidated system of European Data Protection Authorities. At the EU level, DPAs are expected to guarantee a harmonious and consistent interpretation and application of data protection law. In Brazil, a DPA was only established in late 2020 and announced its first regulatory agenda only in late January 2021.

This latter point is far from trivial and, in the opinion of this author, an essential preoccupation that might have driven the subsequent RTBF conceptualization of the STJ.

The stress-test

The soundness of the Brazilian definition of the RTBF, however, was going to be tested again by the STJ, in the context of another grim and unfortunate page of Brazilian history, the Aida Curi case. This case originated with the sexual assault and subsequent homicide of the young Aida Curi, in Copacabana, Rio de Janeiro, on the evening of 14 July 1958. At the time, the case attracted considerable media attention, not only because of its mysterious circumstances and the young age of the victim, but also because the perpetrators tried to conceal the assault by throwing the victim’s body from the rooftop of a very tall building on Avenida Atlantica, the fancy avenue right in front of Copacabana beach.

Needless to say, Globo TV considered the case as a perfect story for yet another Linha Direta episode. Aida Curi’s relatives, far from enjoying the TV show, sued the broadcaster for moral damages and demanded the full enjoyment of their RTBF – in the Brazilian conception, of course. According to the plaintiffs, it was indeed not conceivable that, almost 50 years after the murder, Globo TV could publicly broadcast personal information about the victim – and her family – including the victim’s name and address, in addition to unauthorized images, thus bringing back a long-closed and extremely traumatic set of events.

The brothers of Aida Curi claimed reparations against Rede Globo, but the STJ decided that the time passed was enough to mitigate the effects of anguish and pain on the dignity of Aida Curi’s relatives, while arguing that it was impossible to report the events without mentioning the victim. This decision was appealed by Ms Curi’s family members, who demanded, by means of Extraordinary Appeal No. 1,010,606, that the STF recognize “their right to forget the tragedy.” It is interesting to note that the way the demand is framed in this Appeal tellingly exemplifies the Brazilian conception of “forgetting” as erasure and prohibition of divulgation.

At this point, the STF identified in the Appeal an issue of “general repercussion”, a peculiar judicial mechanism that the Court can use when it recognizes that a given case has particular relevance and transcendence for the Brazilian legal and judicial system. Indeed, the decision of a case with general repercussion does not only bind the parties but establishes jurisprudence that must be followed by all lower courts.

In February 2021, the STF finally deliberated on the Aida Curi case, establishing that “the idea of a right to be forgotten is incompatible with the Constitution, thus understood as the power to prevent, due to the passage of time, the disclosure of facts or data that are true and lawfully obtained and published in analogue or digital media” and that “any excesses or abuses in the exercise of freedom of expression and information must be analyzed on a case-by-case basis, based on constitutional parameters – especially those relating to the protection of honor, image, privacy and personality in general – and the explicit and specific legal provisions existing in the criminal and civil spheres.”

In other words, what the STF has deemed incompatible with the Federal Constitution is a specific interpretation of the Brazilian version of the RTBF. What is not compatible with the Constitution is to argue that the RTBF allows one to prohibit the publication of true facts, lawfully obtained. At the same time, however, the STF clearly states that it remains possible for any court of law to evaluate, on a case-by-case basis and according to constitutional parameters and existing legal provisions, whether a specific episode allows the use of the RTBF to prohibit the divulgation of information that undermines the dignity, honour, privacy, or other fundamental interests of the individual.

Hence, while explicitly prohibiting the use of the RTBF as a general right to censorship, the STF leaves room for the use of the RTBF for delisting specific personal data in an EU-like fashion, while specifying that this must be done finding guidance in the Constitution and the Law.

What next?

Given the core differences between the Brazilian and EU conception of the RTBF, as highlighted above, it is understandable in the opinion of this author that the STF adopted a less proactive and more conservative approach. This must be especially considered in light of the very recent establishment of a data protection institutional system in Brazil.

It is understandable that the STF might have preferred to de facto delegate to the courts the interpretation of when and how the RTBF can be rightfully invoked, according to constitutional and legal parameters. First, in the Brazilian interpretation, the RTBF fundamentally rests on the protection of privacy, i.e. the private sphere of an individual; while data protection concerns are acknowledged, they are not the main ground on which the Brazilian RTBF conception relies.

It is also understandable that, in a country and a region whose recent history is marked by dictatorships, well-hidden atrocities, and opacity, the social need to remember and shed light on what happened outweighs the legitimate individual interest in prohibiting the circulation of truthful and legally obtained information. In the digital sphere, however, the RTBF quintessentially translates into an extension of informational self-determination, which the Brazilian General Data Protection Law, better known as the “LGPD” (Law No. 13.709/2018), enshrines in its article 2 as one of the “foundations” of data protection in the country, and whose fundamental character was recently recognized by the STF itself.

In this perspective, it is useful to recall the dissenting opinion of Justice Luiz Edson Fachin in the Aida Curi case, stressing that “although it does not expressly name it, the Constitution of the Republic, in its text, contains the pillars of the right to be forgotten, as it celebrates the dignity of the human person (article 1, III), the right to privacy (article 5, X) and the right to informational self-determination – which was recognized, for example, in the disposal of the precautionary measures of the Direct Unconstitutionality Actions No. 6,387, 6,388, 6,389, 6,390 and 6,393, under the rapporteurship of Justice Rosa Weber (article 5, XII).”

It is the opinion of this author that the Brazilian debate on the RTBF in the digital sphere would be clearer if its dimension as a right to deindexation of search engine results were clearly regulated. It is understandable that the STF did not dare to regulate this, given its interpretation of the RTBF and the very embryonic data protection institutional framework in Brazil. However, given the increasing datafication we are currently witnessing, it would be naïve not to expect that further RTBF claims concerning the digital environment, and specifically the way search engines process personal data, will keep emerging.

The fact that the STF has left the door open to applying the RTBF in the case-by-case analysis of individual claims may reassure the reader regarding the primacy of constitutional and legal arguments in such analysis. It may also lead the reader to wonder, very legitimately, whether such a choice is de facto the most efficient and coherent way to deal with the potentially enormous number of claims, given the margin of appreciation and interpretation that each court may have.

An informed debate able to clearly highlight what are the existing options and what might be the most efficient and just ways to implement them, considering the Brazilian context, would be beneficial. This will likely be one of the goals of the upcoming Latin American edition of the Computers, Privacy and Data Protection conference (CPDP LatAm) that will take place in July, entirely online, and will aim at exploring the most pressing issues for Latin American countries regarding privacy and data protection.

Photo Credit: “Brasilia – The Supreme Court” by Christoph Diewald is licensed under CC BY-NC-ND 2.0

If you have any questions about engaging with The Future of Privacy Forum on Global Privacy and Digital Policymaking contact Dr. Gabriela Zanfir-Fortuna, Senior Counsel, at [email protected].

FPF announces appointment of Malavika Raghavan as Senior Fellow for India

The Future of Privacy Forum announces the appointment of Malavika Raghavan as Senior Fellow for India, expanding our Global Privacy team to one of the key jurisdictions for the future of privacy and data protection law. 

Malavika is a thought leader and a lawyer working on interdisciplinary research focusing on the impacts of digitisation on the lives of lower-income individuals. Her work since 2016 has focused on the regulation and use of personal data in service delivery by the Indian State and private sector actors. She founded and led the Future of Finance Initiative at Dvara Research (an Indian think tank), in partnership with the Gates Foundation, from 2016 until 2020, anchoring its research agenda and policy advocacy on emerging issues at the intersection of technology, finance and inclusion. Research that she led at Dvara Research was cited by India’s Data Protection Committee in its White Paper as well as in its final report with proposals for India’s draft Personal Data Protection Bill, with specific reliance placed on that research on aspects of regulatory design and enforcement. See Malavika’s full bio here.

“We are delighted to welcome Malavika to our Global Privacy team. For the following year, she will be our adviser to understand the most significant developments in privacy and data protection in India, from following the debate and legislative process of the Data Protection Bill and the processing of non-personal data initiatives, to understanding the consequences of the publication of the new IT Guidelines. India is one of the most interesting jurisdictions to follow in the world, for many reasons: the innovative thinking on data protection regulation, the potentially groundbreaking regulation of non-personal data and the outstanding number of individuals whose privacy and data protection rights will be envisaged by these developments, which will test the power structures of digital regulation and safeguarding fundamental rights in this new era”, said Dr. Gabriela Zanfir-Fortuna, Global Privacy lead at FPF. 

We asked Malavika to share her thoughts for FPF’s blog on the most significant developments in privacy and digital regulation in India, and on India’s role in the global privacy and digital regulation debate.

FPF: What are some of the most significant developments in the past couple of years in India in terms of data protection, privacy, digital regulation?

Malavika Raghavan: “Undoubtedly, the turning point for the privacy debate in India was the 2017 judgement of the Indian Supreme Court in Justice KS Puttaswamy v Union of India. The judgment affirmed the right to privacy as a constitutional guarantee, protected by Part III (Fundamental Rights) of the Indian Constitution. It was also regenerative, bringing our constitutional jurisprudence into the 21st century by re-interpreting timeless principles for the digital age, and casting privacy as a prerequisite for accessing other rights—including the right to life and liberty, to freedom of expression and to equality—given the ubiquitous digitisation of human experience we are witnessing today.

Overnight, Puttaswamy also re-balanced conversations in favour of privacy safeguards to make these equal priorities for builders of digital systems, rather than framing these issues as obstacles to innovation and efficiency. In addition, it challenged the narrative that privacy is an elite construct that only wealthy or privileged people deserve— since many litigants in the original case that had created the Puttaswamy reference were from marginalised groups. Since then, a string of interesting developments has arisen as new cases reassess the impact of digital technology on individuals in India, e.g. cases on the boundaries of private sector data sharing (such as between WhatsApp and Facebook), or on the State’s use of personal data (as in the case concerning Aadhaar, our national identification system), among others.

Puttaswamy also provided a fillip for a big legislative development, which is the creation of an omnibus data protection law in India. A bill to create this framework was proposed by a Committee of Experts under the chairmanship of Justice Srikrishna (an ex-Supreme Court judge), and it has been making its way through ministerial and Parliamentary processes. There’s a strong possibility that this law will be passed by the Indian parliament in 2021! Definitely a big development to watch.

FPF: How do you see India’s role in the global privacy and digital regulation debate?

Malavika Raghavan: “India’s strategy on privacy and digital regulation will undoubtedly have global impact, given that India is home to 1/7th of the world’s population! The mobile internet revolution has created a huge impact on our society with millions getting access to digital services in the last couple of decades. This has created nuanced mental models and social norms around digital technologies that are slowly being documented through research and analysis. 

The challenge for policy makers is to create regulations that match these expectations and the realities of Indian users to achieve reasonable, fair regulations. As we have already seen from sectoral regulations (such as those from our Central Bank around cross border payments data flows) such regulations also have huge consequences for global firms interacting with Indian users and their personal data.  

In this context, I think India can have the late-mover advantage in some ways when it comes to digital regulation. If we play our cards right, we can take the best lessons from the experience of other countries in the last few decades and eschew the missteps. More pragmatically, it seems inevitable that India’s approach to privacy and digital regulation will also be strongly influenced by the Government’s economic, geopolitical and national security agenda (both internationally and domestically). 

One thing is for certain: there is no path-dependence. Our legislators and courts are thinking in unique and unexpected ways that are indeed likely to result in a fourth way (as described by the Srikrishna Data Protection Committee’s final report), compared to the approach in the US, EU and China.”

If you have any questions about engaging with The Future of Privacy Forum on Global Privacy and Digital Policymaking contact Dr. Gabriela Zanfir-Fortuna, Senior Counsel, at [email protected].

India: Massive overhaul of digital regulation, with strict rules for take-down of illegal content and automated scanning of online content


On February 25, the Indian Government notified and published the Information Technology (Guidelines for Intermediaries and Digital Media Ethics Code) Rules 2021. These rules mirror the EU’s Digital Services Act (DSA) proposal to some extent: they propose a tiered approach based on the scale of the platform and touch on intermediary liability, content moderation, take-down of illegal content from online platforms, and internal accountability and oversight mechanisms. However, they go beyond such rules by adding a Code of Ethics for digital media, similar to the Code of Ethics classic journalistic outlets must follow, and by proposing an “online content” labelling scheme for content that is safe for children.

The Code of Ethics applies to online news publishers, as well as intermediaries that “enable the transmission of news and current affairs”. This part of the Guidelines (the Code of Ethics) has already been challenged in the Delhi High Court by news publishers this week. 

The Guidelines have raised several types of concerns in India, from their impact on freedom of expression, to their impact on the right to privacy through the automated scanning of content and the imposed traceability of even end-to-end encrypted messages so that the originator can be identified, to the Government’s choice to use executive action for such profound changes. The Government, through the two Ministries involved in the process, is scheduled to testify before the Standing Committee on Information Technology of the Parliament on March 15.

New obligations for intermediaries

“Intermediaries” include “websites, apps and portals of social media networks, media sharing websites, blogs, online discussion forums, and other such functionally similar intermediaries” (as defined in rule 2(1)(m)).

Here are some of the most important rules laid out in Part II of the Guidelines, dedicated to Due Diligence by Intermediaries:

“Significant social media intermediaries” have enhanced obligations

“Significant social media intermediaries” are social media services with a number of users above a threshold which will be defined and notified by the Central Government. This concept is similar to the DSA’s “Very Large Online Platform” (VLOP); however, the DSA includes clear criteria in the proposed act itself on how to identify a VLOP.

As for “Significant Social Media Intermediaries” in India, they will have additional obligations (similar to how the DSA proposal in the EU scales obligations):

These “Guidelines” seem to have the legal effect of a statute, and they are being adopted through executive action to replace guidelines adopted by the Government in 2011, under powers conferred on it by the Information Technology Act 2000. The new Guidelines would enter into force immediately after publication in the Official Gazette (there is no information as to when publication is scheduled). The Code of Ethics would enter into force three months after publication in the Official Gazette. As mentioned above, there are already some challenges in court against part of these rules.

Get smart on these issues and their impact

Check out these resources: 

Another jurisdiction to keep your eyes on: Australia

Also note that, while the European Union is starting its heavy and slow legislative machine by appointing Rapporteurs in the European Parliament and holding first discussions on the DSA proposal in the relevant working group of the Council, another country is set to adopt digital content rules soon: Australia. The Government is currently considering an Online Safety Bill, which was open to public consultation until mid-February and which would also include a “modernised online content scheme”, creating new classes of harmful online content, as well as take-down requirements for image-based abuse, cyber abuse and harmful content online, requiring removal within 24 hours of receiving a notice from the eSafety Commissioner.

If you have any questions about engaging with The Future of Privacy Forum on Global Privacy and Digital Policymaking contact Dr. Gabriela Zanfir-Fortuna, Senior Counsel, at [email protected].

Russia: New Law Requires Express Consent for Making Personal Data Available to the Public and for Any Subsequent Dissemination

Authors: Gabriela Zanfir-Fortuna and Regina Iminova

Source: Pixabay.Com, by Opsa

Amendments to the Russian general data protection law (Federal Law No. 152-FZ on Personal Data) adopted at the end of 2020 enter into force today (Monday, March 1st), with some provisions having their effective date postponed until July 1st. The changes are part of a legislative package that also amends the Criminal Code to criminalize the disclosure of personal data about “protected persons” (several categories of government officials). The amendments to the data protection law introduce consent-based restrictions both for any organization or individual that initially publishes personal data, and for those that collect and further disseminate personal data that has been made publicly available on the basis of consent, such as on social media, blogs or any other sources.

The amendments:

The potential impact of the amendments is broad. The new law prima facie affects social media services, online publishers, streaming services, bloggers, or any other entity that might be considered as making personal data available to “an indefinite number of persons.” They now have to collect, and prove they have, separate consent for making personal data publicly available, as well as for further publishing or disseminating personal data allowed by the data subject to be disseminated (PDD, defined below) which was originally lawfully published by other parties.

Importantly, the new provisions in the Personal Data Law dedicated to PDD do not include any specific exception for processing PDD for journalistic purposes. The only exception recognized is processing PDD “in the state and public interests defined by the legislation of the Russian Federation”. The Explanatory Note accompanying the amendments confirms that consent is the exclusive lawful ground that can justify dissemination and further processing of PDD and that the only exception to this rule is the one mentioned above, for state or public interests as defined by law. It is thus expected that the amendments might create a chilling effect on freedom of expression, especially when also taking into account the corresponding changes to the Criminal Code.

The new rules seem to be part of a broader effort in Russia to regulate information shared online and available to the public. In this context, it is noteworthy that other amendments to Law 149-FZ on Information, IT and Protection of Information solely impacting social media services were also passed into law in December 2020, and already entered into force on February 1st, 2021. Social networks are now required to monitor content and “restrict access immediately” of users that post information about state secrets, justification of terrorism or calls to terrorism, pornography, promoting violence and cruelty, or obscene language, manufacturing of drugs, information on methods to commit suicide, as well as calls for mass riots. 

Below we provide a closer look at the amendments to the Personal Data Law that entered into force on March 1st, 2021. 

A new category of personal data is defined

The new law defines a category of “personal data allowed by the data subject to be disseminated” (PDD), the definition being added as paragraph 1.1 to Article 3 of the Law. This new category of personal data is defined as “personal data to which an unlimited number of persons have access, and which is provided by the data subject by giving specific consent for the dissemination of such data, in accordance with the conditions in the Personal Data Law” (unofficial translation).

The old law had a dedicated provision that referred to how this type of personal data could be lawfully processed, but it was vague and offered almost no details. In particular, Article 6(10) of the Personal Data Law (the provision corresponding to Article 6 GDPR on lawful grounds for processing) provided that processing of personal data is lawful when the data subject gives access to their personal data to an unlimited number of persons. The amendments abrogate this paragraph, before introducing an entirely new article containing a detailed list of conditions for processing PDD only on the basis of consent (the new Article 10.1).

Perhaps in order to avoid misunderstanding on how the new rules for processing PDD fit with the general conditions on lawful grounds for processing personal data, a new paragraph 2 is introduced in Article 10 of the law, which details conditions for processing special categories of personal data, to clarify that processing of PDD “shall be carried out in compliance with the prohibitions and conditions provided for in Article 10.1 of this Federal Law”.

Specific, express, unambiguous and separate consent is required

Under the new law, “data operators” that process PDD must obtain specific and express consent from data subjects to process personal data, which includes any use or dissemination of the data. Notably, under Russian law, “data operators” designate both controllers and processors in the sense of the General Data Protection Regulation (GDPR), or businesses and service providers in the sense of the California Consumer Privacy Act (CCPA).

Specifically, under Article 10.1(1), the data operator must ensure that it obtains a separate consent dedicated to dissemination, other than the general consent for processing personal data or other type of consent. Importantly, “under no circumstances” may individuals’ silence or inaction be taken to indicate their consent to the processing of their personal data for dissemination, under Article 10.1(8).

In addition, the data subject must be provided with the possibility to select the categories of personal data which they permit for dissemination. Moreover, the data subject also must be provided with the possibility to establish “prohibitions on the transfer (except for granting access) of [PDD] by the operator to an unlimited number of persons, as well as prohibitions on processing or conditions of processing (except for access) of these personal data by an unlimited number of persons”, per Article 10.1(9). It seems that these prohibitions refer to specific categories of personal data provided by the data subject to the operator (out of a set of personal data, some categories may be authorized for dissemination, while others may be prohibited from dissemination).

If the data subject discloses personal data to an unlimited number of persons without providing the operator the specific consent required by the new law, then not only the original operator, but all subsequent persons or operators that processed or further disseminated the PDD, bear the burden of “provid[ing] evidence of the legality of subsequent dissemination or other processing”, under Article 10.1(2). This seems to imply that they must prove consent was obtained for dissemination (a probatio diabolica in this case). The Explanatory Note to the amendments suggests that the intention was indeed to shift the burden of proving the legality of processing PDD from data subjects to data operators, since the Note specifically observes that before the amendments the burden of proof rested with data subjects.

If the separate consent for dissemination of personal data is not obtained by the operator, but other conditions for lawfulness of processing are met, the personal data can be processed by the operator, but without the right to distribute or disseminate them – Article 10.1(4).

A Consent Management Platform for PDD, managed by the Roskomnadzor

The express consent to process PDD can be given directly to the operator or through a special “information system” (which appears to be a consent management platform) run by the Roskomnadzor, according to Article 10.1(6). The provisions related to setting up this consent platform for PDD will enter into force on July 1st, 2021. The Roskomnadzor is expected to publish technical details about the functioning of this consent management platform, and guidelines on how it is to be used, in the coming months.

Absolute right to opt-out of dissemination of PDD

Notably, the dissemination of PDD can be halted at any time on request of the individual, regardless of whether the dissemination is lawful, according to Article 10.1(12). This type of request is akin to a withdrawal of consent. The provision includes some requirements for the content of such a request: for instance, it must contain the individual’s contact information and list the personal data whose dissemination should be terminated. Consent to the processing of the personal data concerned is terminated once the operator receives the opt-out request – Article 10.1(13).

A request to opt out of having personal data disseminated to the public, when this is done unlawfully (without the data subject’s specific, affirmative consent), can also be made through a court, as an alternative to submitting it directly to the data operator. In this case, the operator must terminate the transmission of, or access to, the personal data within three business days of receiving the demand, or within the timeframe set in a court decision that has entered into effect – Article 10.1(14).

A new criminal offense: The prohibition on disclosure of personal data about protected persons

Sharing personal data or information about intelligence officers and their personal property is now a criminal offense under the new rules, which amended the Criminal Code. The law obliges all operators of personal data, including government departments and mobile operators, to ensure the confidentiality of personal information concerning protected persons, their relatives, and their property. Under the new law, “protected persons” include employees of the Investigative Committee, the FSB, the Federal Protective Service, the National Guard, the Ministry of Internal Affairs, and the Ministry of Defense, as well as judges, prosecutors, investigators, law enforcement officers, and their relatives. Moreover, the list of protected persons can be further detailed by the head of the relevant state body in which the specified persons work.

Previously, the law allowed for the temporary prohibition of the dissemination of personal data of protected persons only in the event of imminent danger in connection with official duties and activities. The new amendments make it possible to take protective measures in the absence of a threat of encroachment on their life, health and property.

What to watch next: New amendments to the general Personal Data Law are on their way in 2021

There are several developments to follow in this fast-changing environment. First, at the end of January, the Russian President gave the government until August 1 to create a set of rules for foreign tech companies operating in Russia, including a requirement to open branch offices in the country.

Second, a bill (No. 992331-7) proposing new amendments to the overall framework of the Personal Data Law (No. 152-FZ) was introduced in July 2020 and was the subject of a Resolution passed in the State Duma on February 16, which allowed amendments to be submitted until March 16. The bill is on the agenda for a potential vote in May. The changes would expand the possibility to obtain valid consent through unique identifiers not currently accepted by the law, such as unique online IDs; modify purpose limitation; introduce a possible certification scheme for effective methods to erase personal data; and grant the Roskomnadzor new competences to establish requirements for deidentification of personal data and specific methods for effective deidentification.

If you have any questions on Global Privacy and Data Protection developments, contact Gabriela Zanfir-Fortuna at [email protected]

Red Lines under the EU AI Act: Understanding the prohibition of biometric categorization for certain sensitive characteristics 

Blog 7 | Red Lines under the EU AI Act Series

This blog is the seventh of a series that explores prohibited AI practices under the EU AI Act and their interplay with existing EU law. You can find the whole series here.

The EU AI Act provides for rules on prohibited AI practices that the legislature considers incompatible with fundamental rights and European Union values. Article 5(1)(g) introduces a prohibition on the biometric categorization for “certain sensitive characteristics”, focusing on systems used to categorize individuals “based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex-life or sexual orientation”. 

The European Commission guidelines on prohibited AI practices (hereinafter, “the Guidelines”) note that information, including sensitive data, can be extracted, deduced, or inferred from biometric data with or without the individual’s knowledge, leading to unfair or discriminatory treatment that undermines human dignity, privacy, and the principle of non-discrimination protected under the EU acquis. This provision also reflects longstanding concerns with regard to the risks associated with processing sensitive personal data, particularly where such processing may take place without the knowledge of the individual. 

With this in mind, Section 1 unpacks the (limited) scope and key definitions of the prohibition, including the cumulative conditions required for the provision to apply. Section 2 takes a look at the situations that fall outside the scope of the prohibition, and, finally, Section 3 explores the interaction between the biometric categorization prohibition and the existing EU legal framework. 

Several key takeaways emerge from this analysis; they are gathered in the closing reflections at the end of this post.

1. (Limited) Scope and key definitions

To trigger the prohibition under Article 5(1)(g) AI Act, five cumulative conditions must be met simultaneously (a schematic sketch of this test follows the list):

  1. The AI system must be placed on the market, put into service, or used.
  2. The AI system must be a biometric categorization system. 
  3. The AI system must categorize individuals.
  4. The AI system must categorize individuals based on their biometric data.
  5. The AI system must infer sensitive characteristics (e.g., race, political opinions, religious beliefs, and so on). 
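
Because the provision operates as a logical AND over these conditions, the test can be sketched schematically. The Python snippet below is a simplified illustration with hypothetical field names, not a substitute for legal analysis:

```python
SENSITIVE_TRAITS = {
    "race", "political opinions", "trade union membership",
    "religious or philosophical beliefs", "sex life", "sexual orientation",
}

def article_5_1_g_applies(system: dict) -> bool:
    """Illustrative sketch: all five cumulative conditions must hold."""
    return (
        system["placed_on_market_put_into_service_or_used"]          # condition 1
        and system["is_biometric_categorization_system"]             # condition 2
        and system["categorizes_individuals"]                        # condition 3
        and system["uses_biometric_data"]                            # condition 4
        and bool(SENSITIVE_TRAITS & set(system["inferred_traits"]))  # condition 5
    )
```

If any single condition fails, the function returns False, mirroring how the cumulative requirement narrows the prohibition's practical reach.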

The first condition, relating to the placing on the market, putting into service or use of an AI system, applies to both providers and deployers within their respective responsibilities. The Guidelines also clarify that the prohibition does not cover the labelling or filtering of lawfully acquired biometric datasets, including for law enforcement purposes.

The requirement that all five conditions be fulfilled simultaneously is likely to be significant in practice. It may limit the scope of the prohibition and it raises questions about how it will be applied in specific cases, particularly where systems are designed to avoid explicit inference of sensitive traits while still enabling similar outcomes.

1.1 Defining biometric categorization

Biometric categorization refers to assigning individuals to predefined groups based on their biometric data, rather than identifying or verifying their identity. Such categorization may be used, for example, to display targeted advertising or for statistical purposes, without necessarily identifying the individual. 

Article 3(40) AI Act defines a biometric categorization system as an AI system that assigns natural persons to specific categories based on their biometric data, unless this function is ancillary to another commercial service and strictly necessary for objective technical reasons. Biometric data, defined in Article 3(34) AI Act, includes behavioural characteristics based on biometric features. As discussed in a previous blog, this definition is broader than the definition of biometric data in the GDPR. Categorization based on clothing, accessories, or social media activity falls outside the scope of biometric categorization under the AI Act.

The Guidelines further clarify that biometric categorization may involve categories based on physical characteristics such as facial structure or skin colour, some of which may correspond to sensitive characteristics protected under EU non-discrimination law. At the same time, the AI Act definition contains an important limitation: a system will not fall within the definition where the categorization is ancillary to another commercial service and strictly necessary for objective technical reasons. According to Recital 16 AI Act, an ancillary feature is one that is intrinsically linked to another commercial service and cannot be used independently of that service.

The Guidelines provide several examples to illustrate this distinction. For instance, filters that categorize facial or bodily features on online marketplaces, allowing consumers to preview a product on themselves, may constitute an ancillary feature because they are linked to the principal service of selling a product. Similarly, filters integrated into social media platforms that allow users to modify images or videos may also be considered ancillary features because they cannot be used independently of the platform’s content-sharing service.

The Guidelines also identify examples of systems that would fall within the prohibition. These include AI systems that analyse biometric data from photographs uploaded to social media platforms to categorize individuals by their assumed political orientation and send them targeted political messages. Another example concerns AI systems that analyse biometric data from photos to infer a person’s sexual orientation and use that information to serve targeted advertising. In both cases, the categorization would not be strictly necessary for objective technical reasons and therefore would fall within the definition of biometric categorization under the AI Act. Importantly, for the prohibition to apply, the systems that perform such categorization must fall under the definition of “AI system” pursuant to the AI Act.

The risks associated with biometric categorization also reflect broader concerns under EU data protection law. The EDPB has clarified that inferences about sensitive characteristics may themselves constitute special categories of personal data under Article 9 GDPR. Also, the Court of Justice of the European Union has held that processing which allows information falling within Article 9(1) GDPR categories to be revealed must be regarded as processing of special categories of personal data (Meta Platforms and Others, C-252/21). However, the GDPR’s prohibition on processing sensitive data is subject to several exceptions, such as explicit consent.

The EDPB and the European Data Protection Supervisor (EDPS) have taken a similar position in their Joint Opinion 5/2021 on the Proposal for the AI Act. They called for a broader prohibition of certain biometric AI practices. In particular, they called for a general ban on the use of AI for automated recognition of human features in publicly accessible spaces, including faces, gait, fingerprints, DNA, voice, and other biometric and behavioural signals.

1.2 For the prohibition to apply, categorization must take place at the level of the individual

Another essential condition for the prohibition to apply is that the system must categorize individual natural persons based on their biometric data. Importantly, the categorization must take place at the level of the individual: if biometric analysis is performed without categorizing specific individuals, the prohibition does not apply. For example, the prohibition would not be triggered where a system analyzes biometric information only to categorize an entire group, without identifying or singling out individual persons. Examples include AI systems that conduct “attribute estimation”, sometimes referred to as demographic analysis, by assigning characteristics such as age, gender or ethnicity based on biometric features such as facial characteristics, height, or skin, eye or hair colour, or other features such as a visible scar or distinctive tattoo.

1.3 “Sensitive characteristics” under the AI Act

The prohibition under Article 5(1)(g) AI Act applies only when a biometric categorization system is used to deduce or infer specific sensitive characteristics, such as: race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation.

This means that not all biometric categorization systems fall within the scope of the prohibition. Rather, the prohibition targets systems that attempt to derive particularly sensitive characteristics from biometric data.

For example, a system that claims to infer an individual’s race from their voice would fall within the scope of the prohibition. By contrast, a system that categorizes individuals according to physical traits such as skin or eye colour, or a system analysing the DNA of crime victims to determine their origin, would not be prohibited under Article 5(1)(g). As another example from the Guidelines, a biometric categorization system that claims to infer a person’s religious orientation from tattoos or facial characteristics would fall within the prohibition.

2. Biometric categorization for bias detection: What falls outside the scope of the prohibition?

The prohibition in Article 5(1)(g) AI Act does not apply to all uses of biometric categorization. In particular, it does not cover AI systems used for the labelling or filtering of lawfully acquired biometric datasets, including in law enforcement contexts. As explained in Recital 30 AI Act, such uses may include sorting images by biometric characteristics, such as hair or eye colour.

The Guidelines note that labelling or filtering biometric datasets may be necessary to ensure that datasets used to train AI systems are representative across demographic groups. Where training data contains systematic differences between groups, for example, due to historical bias in data collection, algorithms may replicate those biases and potentially lead to discriminatory outcomes. In such cases, labelling data according to certain characteristics may be necessary to improve data quality and prevent discrimination. In some circumstances, the AI Act may even require such labelling operations in order to comply with the requirements applicable to high-risk AI systems (see Article 10 AI Act).

The Guidelines provide several examples of permissible uses. One example concerns the labelling of biometric data to prevent recruitment algorithms from disadvantaging individuals from certain ethnic groups, where historical training data reflects biased outcomes. Another example involves categorizing patients’ images by skin or eye colour, which may be relevant to medical diagnosis, including certain cancer diagnoses.

The exception also applies in law enforcement contexts where biometric datasets have been lawfully acquired. For example, law enforcement authorities may use AI systems to label or filter datasets suspected of containing child sexual abuse material. Such systems may help detect and redact sensitive information in images or assist investigations by labelling biometric features such as gender, age, eye or hair colour, scars, markings, or tattoos in order to identify victims or establish links between cases. Similarly, filtering and labelling features such as hand characteristics or distinctive tattoos may help identify possible suspects in law enforcement contexts.

3. Interplay with other EU laws

This prohibition must be understood in the context of the existing EU data protection framework. 

It is interesting to note that the Guidelines refer to an earlier explanation provided by the Article 29 Working Party (the precursor to the EDPB) when describing “biometric categorization” in its Opinion on developments in biometric technologies. Article 3(40) AI Act provides a legal definition, describing a biometric categorization system as an AI system that assigns natural persons to specific categories on the basis of their biometric data, while also specifying an exclusion where such categorization is ancillary to another commercial service and strictly necessary for objective technical reasons.

By contrast, the Article 29 Working Party explains biometric categorization as the process of determining whether the biometric data of an individual belongs to a group with predefined characteristics, emphasizing that the objective is not to identify or verify the individual but to assign them automatically to a category, for example, to display different advertisements depending on the perceived age or gender of the person. While both definitions describe categorization based on biometric data rather than identification, the AI Act establishes a regulatory definition determining the scope of the prohibition, whereas the Article 29 Working Party description provides a conceptual explanation of how biometric categorization systems operate in practice.

Furthermore, Article 9(1) GDPR establishes a general prohibition on the processing of special categories of personal data, subject to exceptions, meaning that some processing of biometric data in the context of biometric categorization may be lawful under the GDPR, as long as its strict conditions are respected. The AI Act introduces an additional layer of restriction, which raises important conflict-of-law questions with the GDPR. As analyzed in the first blog of this series, the GDPR takes priority in application (the AI Act “shall not affect” the GDPR). Further guidance on the intersection of the GDPR and the AI Act in this respect is needed.

The Guidelines clarify that AI systems intended to categorize individuals based on biometric data to infer attributes protected under Article 9(1) GDPR are classified as high-risk AI systems, provided they are not already prohibited under Article 5 AI Act. At the same time, Article 5(1)(g) further limits the possibilities for lawful processing of personal data under EU data protection law, including the GDPR, the Law Enforcement Directive (LED), and Regulation (EU) 2018/1725 (EUDPR). In particular, the provision excludes the use of biometric data to categorize natural persons in order to infer sensitive characteristics such as race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation, subject to the limited exception for the labelling or filtering of lawfully acquired biometric datasets.

The prohibition is also consistent with Article 11(3) LED, which explicitly prohibits profiling that results in discrimination on the basis of special categories of personal data, including race, ethnic origin, political opinions, religious beliefs or sexual orientation.

4. Closing reflections and key takeaways

The AI Act prohibits specific biometric inference practices, not biometric categorization as such

Article 5(1)(g) AI Act does not prohibit biometric categorization in general. It prohibits the placing on the market, putting into service, or use of AI systems that categorize individuals based on biometric data for the purpose of inferring certain sensitive characteristics, such as race, political opinions, religious beliefs, trade union membership, sex life or sexual orientation. The prohibition applies only where all cumulative conditions of Article 5(1)(g) are met. This means that many forms of biometric categorization, such as categorization based on non-sensitive physical traits or for purposes that do not involve inferring the listed characteristics, do not fall within the prohibition.

The objective and design of the system are central to determining whether the prohibition applies

The Guidelines place significant emphasis on the purpose and functionality of the AI system, in particular, whether the system is designed to deduce or infer one of the sensitive characteristics listed in the provision. This means that the prohibition is not triggered merely by the presence of biometric analysis, but by the intended inference of protected attributes from biometric data. The examples provided in the Guidelines illustrate this distinction: systems that claim to infer race from voice or religious beliefs from facial features would fall within the prohibition, whereas systems categorizing individuals based on traits such as eye or hair colour would not.

Context and use matter for determining the scope of the prohibition

The prohibition applies only where individuals are individually categorized based on their biometric data, and where the categorization results in the inference of the listed sensitive characteristics. Systems that analyse biometric data at an aggregated level without singling out individuals would not meet this condition. Similarly, the AI Act explicitly excludes certain practices from the scope of the prohibition, including the labelling or filtering of lawfully acquired biometric datasets, for example, where such operations are carried out to improve dataset quality, mitigate bias in AI training data, support medical diagnosis or assist law enforcement investigations.

The relationship between this prohibition and EU data protection law needs further clarification

Finally, the prohibition must be understood in the broader context of EU data protection and non-discrimination law. The GDPR already restricts the processing of special categories of personal data under Article 9(1), while the AI Act introduces an additional regulatory layer by prohibiting certain biometric inference practices altogether. Given that the AI Act itself establishes that it does not affect the GDPR, further guidance is needed for those cases where processing of biometric data would be lawful under Article 9(2) GDPR, but prohibited under the AI Act. 

2026 Chatbot Legislation Tracker

Co-authored by Rafal Fryc

With nearly 100 chatbot-specific bills introduced across states in 2026, a complex and increasingly fragmented compliance landscape is quickly emerging. This tracker helps stakeholders understand that landscape by highlighting chatbot legislation advancing through initial chambers in state legislatures and Congress, and organizing key provisions across proposals to show what is coming and how requirements may vary across jurisdictions. The tracker is updated on Thursdays to reflect legislative movement and amendments.

The tracker includes bills that have passed at least one legislative chamber and reflects a subset of FPF’s broader legislative tracking work. FPF members receive access to comprehensive tracking across the full AI policy landscape, including all chatbot and AI-related legislation. To learn more about corporate membership, visit FPF’s Become a Member page.

Red Lines under the EU AI Act: Unpacking the prohibition of emotion recognition in the workplace and education institutions

Blog 6 | Red Lines under the EU AI Act Series

This blog is the sixth of a series that explores prohibited AI practices under the EU AI Act and their interplay with existing EU law. You can find the whole series here.

The sixth blog in the “Red lines under the EU AI Act” series focuses on unpacking the prohibition on emotion recognition in the workplace and educational institutions, as contained in Article 5(1)(f) AI Act and explored in the Commission’s Guidelines on the topic. This analysis revealed a number of key takeaways:

Article 5(1)(f) AI Act prohibits AI systems that infer the emotions of a natural person in the workplace and education institutions based on biometric data, with specific exceptions for medical and safety purposes. Recital 44 AI Act invokes the lack of scientific basis for the functioning of such systems and their key shortcomings, such as limited reliability, lack of specificity and limited generalisability, which may lead to discriminatory outcomes and can be intrusive to the rights and freedoms of the persons concerned.

Acknowledging the power imbalances in these environments, which, combined with the intrusive nature of these systems, could lead to detrimental or unfavorable treatment of certain natural persons or whole groups, this prohibition aims to protect individuals from potentially invasive emotional surveillance. It is important to note here that AI systems for emotion recognition that are not put into use in the areas of the workplace or educational institutions do not fall within the scope of this prohibition and instead qualify as ‘high risk’ under Annex III, paragraph 1, subparagraph c of the AI Act.

It is unclear whether AI systems that do not have as a primary aim the identification or inference of emotions, but have emotion identification or inference as a secondary functionality, are covered by the prohibition. For example, an AI system primarily intended for transcribing meetings that can also infer emotions or intentions, or an AI system that monitors students during a test, but at the same time also identifies emotions.  

1. Limited scope: The provision does not prohibit ‘emotion recognition systems’, but only ‘AI systems to infer emotions’

The Guidelines highlight that the prohibition in Article 5(1)(f) AI Act does not refer to emotion recognition systems more generally, but only to “AI systems to infer emotions of a natural person”. 

Article 3(39) AI Act defines an ‘emotion recognition system’ as “an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data”. Three elements can be identified in this definition:

  1. The AI system must be used for identifying or inferring;
  2. Emotions or intentions of natural persons;
  3. Based on the biometric data of natural persons.

Hence, the targets of these AI systems are the emotions and intentions of individuals, which might be identified or inferred (a lower threshold). The identification or inference must be based on the biometric data of the individuals. This definition seems to cover the use of emotion recognition on individuals, leaving out groups.

The prohibition in Article 5(1)(f) AI Act does not refer to ‘emotion recognition systems’, but only to ‘AI systems to infer emotions of a natural person’. Recital 44 further clarifies that the prohibition covers AI systems ‘to identify or infer emotions.’ ‘Intentions’ are mentioned neither in Article 5(1)(f) nor in Recital 44. Hence, it appears that while the definition of ‘emotion recognition systems’ provided in Article 3(39) can serve as a reference point for this prohibition, it does not equate to what the prohibition covers, the prohibition being narrower in scope.

Certain cumulative conditions must be fulfilled for the prohibition to apply:

  1. The practice must constitute the “placing on the market”, “putting into service for this specific purpose”, or the “use” of an AI system;
  2. The AI system must be used specifically to infer emotions;
  3. The AI system must be used in the area of the workplace or education and training institutions;
  4. The AI system must not be intended for medical or safety reasons (such systems are excluded from the prohibition).

All the conditions listed above must be met simultaneously to trigger the prohibition, an approach consistent across the AI Act’s full set of prohibited practices, and one which narrows the scope of the prohibition to very specific use cases (a schematic sketch of this test follows).
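
The sketch below is a simplified illustration with hypothetical parameter names, folding in the medical and safety carve-out discussed in Section 5 below; it is not a compliance tool:

```python
def article_5_1_f_applies(
    placed_or_used: bool,             # placed on the market, put into service, or used
    infers_emotions: bool,            # used specifically to infer emotions from biometric data
    workplace_or_education: bool,     # broadly interpreted contexts (see Section 4)
    medical_or_safety_purpose: bool,  # narrowly interpreted exception (see Section 5)
) -> bool:
    """Illustrative sketch of the Article 5(1)(f) cumulative test."""
    return (placed_or_used
            and infers_emotions
            and workplace_or_education
            and not medical_or_safety_purpose)
```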

Recital 18 AI Act provides a non-exhaustive list of emotions referred to in this definition, including happiness, sadness, anger, surprise, disgust, embarrassment, excitement, shame, contempt, satisfaction, and amusement. The Recital clarifies that physical states, such as pain or fatigue, are not included in this definition; for example, systems used to detect fatigue in professional pilots or drivers for the purpose of preventing accidents are explicitly excluded. The mere detection of “readily apparent expressions, gestures, or movements”, such as basic facial expressions (a frown or a smile), gestures (the movement of hands, arms, or head), or characteristics of a person’s voice (a raised voice or whispering), is not covered either, unless used for identifying or inferring emotions. The Guidelines further clarify that inferring emotions from a written text does not fall within the scope of the prohibition.

A remaining ambiguity in the prohibition’s scope is that there is a thin line between emotions, other readily apparent expressions, and physical states such as pain or fatigue, which can also manifest in expressions that are mistaken for emotions. The distinction between mood and emotion (both of which can manifest in different ways) is similarly not drawn, making it unclear whether mood detection falls within this prohibition. Distinguishing these features would require a detailed analysis of many factors and circumstances on a case-by-case basis, rather than of biometric data alone, making the application of this prohibition in practice challenging and complex.

2. It is unclear whether the inference of intentions is also prohibited

The AI Act does not provide clarification on what it means by ‘intentions’ and how to differentiate them from ‘emotions’ in cases when the same system can identify both. This distinction is important because the prohibition applies only to emotions and does not refer to intentions, whereas the definition of ‘emotion recognition systems’ applies to both. 

None of the examples provided in Recital 18 seems to fall within the notion of intentions. While the emotions listed in this Recital represent reactions to situations or environments, the notion of ‘intention’ has a predictive quality about the future. Additionally, the Commission Guidelines seem to focus solely on emotions, without providing any clarification or definition of ‘intentions’. However, in a non-exhaustive list of examples of emotion recognition, they include “[s]ystems inferring from voice or body gestures, that a student is furious and about to become violent”. While ‘furious’ seems to fall within the notion of ‘emotion’, ‘about to become violent’ makes a prediction about a future action based on the (automatically) detected emotion. This prediction might fall within the notion of an ‘intention’ to commit an action in the future, but it might also be considered an identification of the transition from a passive state (emotions) to an active one (a combination of emotions and intentions), making it difficult to understand whether such a prediction falls within the prohibition. The Guidelines, however, do not seem to make such a distinction.

An example of ‘intentions’ in the workplace could be the detection of an employee’s intention to resign from the job based on their facial expressions during meetings or video calls. A wide range of intentions, such as intentions to commit a crime, intentions to drop out of school, or even suicidal intentions, could fall within this prohibition when detected in the area of the workplace or education. The Guidelines also note that the concept of emotions or intentions should be understood in a broad sense, treating attitude and emotion as equivalent for the purposes of this prohibition, thus preventing circumvention through changes in terminology.

Another distinction of the prohibition from the definition of ‘emotion recognition systems’ is that the prohibition refers only to the ‘inference of emotions’ as a prohibited practice, whereas the definition of ‘emotion recognition systems’ includes both ‘identification’ and ‘inference’ of emotions.

The Dutch Data Protection Authority (AP) interprets the prohibition as covering both the inference and the identification of emotions and intentions. The Guidelines distinguish between “identification” and “inferring”, clarifying that identification occurs where the processing of the biometric data of a natural person (for example, of the voice or a facial expression) allows an emotion to be directly compared with, and matched to, one that has been pre-programmed in the emotion recognition system. “Inferring” involves deduction through analytical processes, including machine learning approaches that learn from data how to detect emotions.

3. The prohibition refers only to emotions deriving strictly from biometric data

According to the Guidelines, this prohibition has a similar scope to the rules applicable to other emotion recognition systems1, while being limited to inferences based on a person’s biometric data, as defined in Article 3(34) AI Act. Hence, the prohibition in Article 5(1)(f) refers only to emotions derived strictly from biometric data.

Biometric data is defined in Article 3(34) AI Act as “personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, such as facial images or dactyloscopic data”. The relationship between the AI Act definition of biometric data and that provided in the GDPR is explored below in section 6 of this blog.

The limited scope of this ban excludes AI systems that perform emotion recognition through inferences not based on biometric data, such as AI systems for crowd control, as well as AI systems inferring physical states such as pain and fatigue.

4. A broad interpretation of the ‘workplace’ and ‘education institutions’ contexts

The prohibition in Article 5(1)(f) AI Act is limited to emotion recognition systems specifically in the “areas of workplace and educational institutions”. The Guidelines submit that the workplace context should be interpreted broadly, covering any physical or virtual space where the work is performed. The Guidelines also specifically mention training institutions as covered by this prohibition, although training institutions are not mentioned in the AI Act’s provision or any of the related Recitals. Beyond simply listing training institutions alongside educational institutions, the Guidelines do not provide any further detail as to what constitutes a training institution.

Interestingly, the Guidelines clarify that hiring processes also fall within the workplace context for the purpose of this prohibition. Similarly, the Guidelines clarify that educational institutions should encompass all types and levels of education, including admissions procedures, and should also be interpreted broadly, without any limit, in terms of the types or ages of pupils or students or of a specific environment.

Based on the text of Recital 44, this prohibition also covers AI systems for emotion recognition in situations related to the workplace and education. This broadens the prohibition’s area of applicability while leaving space for interpretation as to what “related to the workplace” might consist of; such an interpretation may require a case-by-case analysis. The Dutch AP has interpreted this broad notion as including, for example, home working environments, online or distance learning, and the application of emotion recognition in recruitment and selection or in applications for education. Further clarification with clearer guidelines might be necessary here to ensure legal certainty, while keeping in mind the evolving nature of the ‘workplace’.

It is important to note that emotion recognition systems installed in a work environment can also be used for emotion detection of customers rather than employees, for example to detect suspicious customers. The AI Act prohibition does not apply to such cases. However, the same system might detect the emotions of employees simultaneously with those of customers. The Guidelines only superficially touch upon this scenario, in an example stating that in such cases it should be ensured that “no employees are being tracked and there are sufficient safeguards”.

5. Exception(s) to the prohibition: medical and safety reasons

In a Joint Opinion on the AI Act, the European Data Protection Board (EDPB) and European Data Protection Supervisor (EDPS) state that the “use of AI to infer emotions of a natural person is highly undesirable and should be prohibited.” In this statement, and in a later EDPS Opinion, they further note that exceptions should be made for “certain well-specified use-cases, namely for health or research purposes”.

The Guidelines note that the exception granted in this prohibition should be narrowly interpreted, limited to what is strictly necessary and proportionate (including limits in time, personal application, and scale), and accompanied by sufficient safeguards to ensure a high level of fundamental rights protection. The recitals of the AI Act stress that the exception applies to AI systems used strictly for medical or safety reasons, for example a system intended for therapeutic use. As such, the therapeutic uses mentioned in Recital 44 AI Act as an exception should only apply to CE-marked medical devices. The Guidelines note that this exception does not extend to general well-being monitoring, such as stress or burnout detection.

Additionally, the Guidelines note that safety exceptions should be limited to protecting life and health, excluding other interests such as property protection or fraud prevention. “Explicit need” is also mentioned as a requirement for the exception to apply. The Guidelines also highlight that data collected and processed in this context cannot be used for any other purpose, hence linking to the GDPR’s purpose limitation principle.

6. Interplay with the GDPR

The Guidelines mention, in a footnote, that there is a distinction between the definition of biometric data in the AI Act and the definition in the GDPR. The definition provided in the AI Act does not include the wording “which allow or confirm the unique identification” (the functional use of biometric data), contrary to the definition of biometric data in the GDPR, which includes this requirement. As such, the Guidelines conclude that “the GDPR definition of biometric data will apply under data protection rules with regard to the processing of personal data (and when for example Article 9(1) and 9(2) GDPR would be applicable)”. This would mean that the AI Act definition applies in AI contexts, whereas the GDPR definition applies in data protection contexts.

When reaching this conclusion, the Commission appears to have disregarded Recital 14 of the AI Act, according to which the concept of biometric data in the AI Act must be interpreted “in the light of” the concept of biometric data in the GDPR. Clarifying this gap is crucial given the high bar that needs to be met for unique identification.

If we stick to the AI Act definition, the notion of ‘biometric data’ becomes broader, covering most emotion recognition systems that use biometric data without necessarily having the ability to identify the individual. These systems would be left out of the prohibition if the GDPR definition of biometric data were applied. This raises the question of whether a categorization of biometric data is needed for the purposes of Article 5(1)(f), to further clarify the notion of biometric data and determine beyond doubt which practices count as emotion recognition under this prohibition.

Data protection regulators had already been treating emotion recognition systems, including those in the workplace, as high-risk under data protection law, even before the AI Act became applicable. In the Budapest Bank case, the Hungarian DPA submitted that AI emotion recognition in the workplace poses fundamental rights risks and ordered the Bank to modify its data processing practices to comply with the GDPR, specifically by refraining from analyzing emotions during voice analysis. With regard to the emotion recognition of employees based on voice analysis, the DPA stressed the need for a separate balancing of interests, taking into account employees’ vulnerable position resulting from their subordinate status. The reasoning behind the ban imposed by Article 5(1)(f) of the AI Act is reminiscent of the reasoning in this case.

However, it is interesting to note that in the Budapest Bank case, the DPA did not explicitly classify the voice-derived data as biometric data under Article 9 GDPR. The DPA treated the data as ordinary personal data, attracting heightened scrutiny due to the AI risk, rather than as special category biometric data triggering the application of Article 9 outright. Nevertheless, the Hungarian DPA specifically referenced the EDPB-EDPS Joint Opinion 5/2021 on the AI Act proposal, highlighting that AI-based emotion recognition systems pose a high risk to the fundamental rights of data subjects.

7. Concluding Reflections

The AI Act’s prohibition is consistent with previous DPA case law under the GDPR

The prohibition’s premise regarding the power imbalance and the vulnerable position of employees and students is reminiscent of a DPA’s fine in a similar case: in the Budapest Bank case of 2022, the Hungarian DPA found that the use of AI for emotion recognition of employees and consumers breached several of the GDPR’s key principles and obligations.

The definition of ‘biometric data’ in the AI Act does not seem to be aligned with the definition of the same concept in the GDPR, creating confusion as to which definition applies in this prohibition.

The Guidelines give precedence to the definition of biometric data in the AI Act for the purpose of Article 5(1)(f) prohibition, whereas the AI Act itself, in its Recital 14, seems to give precedence to the concept of biometric data in the GDPR.

The prohibition expressly differentiates between emotions derived from biometric data and those derived from other types of analysis not involving biometric data, the latter practice being excluded from the prohibition.

Biometric data is defined as “personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, such as facial images or dactyloscopic data”. The narrow scope of this ban excludes AI systems that perform emotion recognition through inferences not based on biometric data.

  1. See Annex III, point 1(c), and Article 50 AI Act.

Privacy Protections Coming Sooner Rather Than Later to the Sooner State

Oklahoma has become the latest U.S. state to enact a comprehensive consumer privacy law after Governor Stitt signed SB 546 into law on March 20. This ends two long legislative droughts: first, this is the long-awaited 20th state comprehensive privacy law, and the first since the Rhode Island Data Transparency and Privacy Protection Act was enacted in June 2024. Second, as the bill’s sponsor Rep. West (R) noted in House floor debate, it concludes Oklahoma’s multi-year journey to enacting a comprehensive consumer privacy law.

SB 546 is a Virginia-style law with few deviations from that model, and it will go into effect on January 1, 2027. This resource provides an overview of the law’s scope, consumer rights, business obligations, and enforcement provisions.

Definitions and Scope

Covered Entities: This law applies to controllers and processors who conduct business in Oklahoma (or produce a product or service targeted to Oklahoma residents) and annually either (1) control or process the personal data of at least 100K consumers, or (2) control or process the personal data of at least 25K consumers and derive over 50% of gross revenue from selling personal data. (Section 15.) These numbers are consistent with the thresholds established in other states.
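
As a rough illustration of how these thresholds combine, the applicability test can be sketched in a few lines of Python (parameter names are hypothetical, and the sketch assumes the entity already conducts business in Oklahoma or targets Oklahoma residents):

```python
def sb546_applies(consumers_processed: int,
                  sells_personal_data: bool,
                  revenue_share_from_sales: float) -> bool:
    """Illustrative sketch of SB 546's applicability thresholds (Section 15)."""
    volume_threshold = consumers_processed >= 100_000          # prong (1)
    sale_threshold = (consumers_processed >= 25_000            # prong (2)
                      and sells_personal_data
                      and revenue_share_from_sales > 0.50)
    return volume_threshold or sale_threshold
```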

Definitions: The law’s definitions are generally consistent with the Virginia model. For example, this law includes the narrower definition of “sale,” which is limited to exchanges of personal data only for monetary consideration (not other valuable consideration). One divergence from the Virginia model is that the definition of “biometric data” includes data generated from a physical or digital photograph or a video or audio recording if such data is generated to identify a specific individual. (Section 1.)

Entity and Data-Level Exemptions: The law includes entity-level exemptions for state agencies and political subdivisions (and service providers acting on their behalf), financial institutions subject to Title V of GLBA, covered entities and business associates governed by HIPAA, nonprofits, and institutions of higher education. The law also includes data-level exemptions for data subject to GLBA, HIPAA, FCRA, and FERPA, personal data processed in the course of a purely personal or household activity, personal data collected and used for purposes of the federal policy under the Controlled Substances Act, and more. (Sections 15–16.)

Exceptions for Common Business Activities: The law includes many exceptions which are consistent with existing state comprehensive privacy laws, including: legal compliance (local, state, or federal laws, rules, or regulations, and government subpoenas, summons, inquiries, or investigations); providing a specifically requested product or service; preventing, detecting, protecting against or responding to security incidents, deceptive activities, or any illegal activity; engaging in public or peer-reviewed scientific or statistical research in the public interest; conducting “internal operations” that are reasonably aligned with the consumer’s expectations, existing relationship with the controller, or are otherwise compatible with processing data in furtherance of the provision of a specifically requested product or service; and more. (Section 19.)

Consumer Rights

Consumers have the standard rights to confirm whether a controller is processing their personal data and access that data, correct inaccuracies in their personal data, delete their personal data, obtain a copy of their personal data in a portable format (if technically feasible), and to opt-out of the processing of their personal data for targeted advertising, the sale of personal data, or profiling in furtherance of a decision that produces a legal or similarly significant effect concerning the consumer. Controllers must notify consumers within 45 days if they are declining to take action on a rights request and provide instructions on how to appeal that decision. The law does not include any provisions regarding authorized agents or opt-out preference signals. (Sections 2-3.)

Consistent with most other state laws, the rights of access, correction, deletion, and portability do not apply to pseudonymous data in cases where the controller can demonstrate that any information necessary to identify the consumer is kept separately and is subject to effective technical and organizational controls preventing the controller from accessing the information. (Section 11.)

Business Obligations

Controllers and processors have enumerated responsibilities under the law, including transparency, data minimization, data security, oversight of processors, and data protection assessments. 

Transparency: Controllers must provide consumers with a “reasonably accessible and clear” privacy notice including information such as categories of data processed, processing purposes, how to exercise rights and appeal decisions, and categories of personal data shared with third parties. (Section 8.)

Data Minimization: The law includes procedural data minimization and purpose limitation requirements: A controller must limit the collection of personal data to what is “adequate, relevant, and reasonably necessary” for the purposes disclosed to the individual, obtain opt-in consent to process personal data for purposes that are “neither reasonably necessary to nor compatible with the disclosed purposes for which the personal data is processed,” and must obtain opt-in consent to process a consumer’s sensitive data. (Section 7.)

Data Security: Controllers must maintain “reasonable administrative, technical, and physical measures to protect the confidentiality, integrity, and accessibility” of personal data. (Section 7.)

Processors: Controllers must engage in oversight of processors by entering into a contract that meets statutory criteria, such as setting forth instructions for processing data, the nature and purpose of the processing, and confidentiality obligations, and obligating the processor to cooperate with “reasonable assessments” by the controller or the controller’s designated assessor. (Section 9.)

DPIAs: Controllers must conduct and document a data protection assessment for certain processing activities: processing personal data for targeted advertising; selling personal data; processing personal data for profiling that presents a reasonably foreseeable risk of substantial injury to consumers (e.g., unfair or deceptive treatment; financial, physical, or reputational injury; intrusion on solitude, seclusion, or private affairs); processing sensitive data; or other processing activities involving personal data that present a heightened risk of harm to consumers. (Section 10.)
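
In effect, a DPIA is required whenever any one of the enumerated triggers is present. A minimal sketch of that checklist logic follows (the trigger labels are paraphrases of Section 10, not statutory text):

```python
DPIA_TRIGGERS = {
    "targeted advertising",
    "sale of personal data",
    "high-risk profiling",
    "sensitive data processing",
    "other heightened-risk processing",
}

def requires_dpia(activities: set[str]) -> bool:
    """Illustrative: any single triggering activity requires an assessment."""
    return bool(activities & DPIA_TRIGGERS)
```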

Enforcement

The law will go into effect on January 1, 2027 and will be enforced exclusively by the attorney general. (Section 22.) The law includes a mandatory cure period, requiring the AG to notify controllers or processors of alleged violations and allowing 30 days for them to resolve violations. (Sections 13–14.) The civil penalty for each violation is $7,500. (Section 14.)

* * *

At long last, Oklahoma takes its place on the privacy patchwork. Looking to get up to speed on the existing state comprehensive consumer privacy laws? Check out FPF’s 2025 report, Anatomy of a State Comprehensive Privacy Law: Charting the Legislative Landscape.


Pictured: Oklahoma receiving its star on the FPF “Privacy Patchwork” quilt.

Navigating Autonomy and Privacy in Emerging AgeTech: Insights from the FPF Roundtable

As AgeTech expands into homes across the country—seeking to enable older adults to live independently longer—fundamental questions about autonomy, privacy, and trust are coming into sharper focus. How do we balance caregiver support with individual privacy? Should data pertaining to older adults be treated as “sensitive?” And in a fragmented privacy and consumer protection landscape, how do we build the trustworthiness necessary for adoption?

The Future of Privacy Forum recently convened a pivotal AgeTech Roundtable, supported by the Alfred P. Sloan Foundation, to delve into the ethical, legal, and policy challenges presented by the rapidly expanding field of AgeTech. The core discussions centered on balancing the growth of these tools without compromising the fundamental autonomy and privacy of the older adults they are intended to serve. This post highlights key themes from the roundtable and outlines next steps in FPF’s ongoing AgeTech initiative.

The Core Tension: Privacy vs. Autonomy

AgeTech devices, ranging from smart home sensors to AI-enabled companion devices, collect highly sensitive personal data, including health, location, voice, and behavioral patterns. A critical nuance explored during the roundtable was the unique privacy calculation older adults face: the willingness to accept continuous monitoring in exchange for the ability to live independently at home for longer.

However, this trade-off is complicated by several factors, including a growing crisis of trust.

The Crisis of Trust: Fraud and AI

Scams targeting older adults are one of the top consumer protection concerns, and the rising phenomenon of fraud and financial theft severely undermines trust in AgeTech products.

Conclusion and Next Steps

The insights gathered at the roundtable provide a robust foundation for FPF’s continued work to ensure that AgeTech serves to enhance, not diminish, the dignity and independence of older adults.

Roundtable attendees closed by offering recommendations and next steps that will shape FPF’s ongoing AgeTech initiative.

Incentives or Obligations? The U.S. Regulatory Approach to Voluntary AI Governance Standards

By FPF Legal Intern Rafal Fryc

As artificial intelligence is increasingly deployed across every sector of the economy, regulators find themselves grappling with a fundamental challenge: how to govern a technology that defies traditional regulatory frameworks and changes faster than legislation can keep pace. One increasingly common approach can be found outside the text of statutes, where state legislatures are pointing developers and deployers toward established voluntary governance frameworks like NIST’s AI Risk Management Framework (AI RMF) or ISO 42001. This shift toward incorporating non-binding technical standards into legal requirements represents more than regulatory convenience – it is creating a new legal regime in which voluntary industry guidelines influence everything from negligence determinations and punitive damage calculations to affirmative defenses against regulatory actions. Understanding how these soft law approaches are shaping legal expectations has become essential for anyone building, deploying, or governing AI systems.

This blog post highlights the growth of AI laws utilizing voluntary standards, as well as proposed laws that would extend this approach.

The Growth of AI Laws Utilizing Voluntary Standards

Colorado’s AI Act (SB 205), prior to the revised policy framework, was the first law in the U.S. to require deployers to implement a “risk management policy and program” that aligns with NIST’s AI RMF, ISO 42001, or another “nationally or internationally recognized risk management framework for artificial intelligence systems.”1 On top of required implementation, Colorado offered an affirmative defense to deployers and developers for compliance with these frameworks. Although Colorado was the first to introduce such provisions in the AI space, mentions of external standards in the Act were removed by the Colorado AI Policy Working Group in its latest proposed revisions. Texas later entered the scene with the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), also creating an affirmative defense for developers or deployers that comply with NIST’s AI RMF or another nationally or internationally recognized risk management framework for AI systems.2 Most recently, California’s Transparency in Frontier Artificial Intelligence Act (TFAIA) requires developers to disclose whether and to what extent they incorporate “national standards, international standards, and industry-consensus best practices.”3 New York’s RAISE Act (A 9449) takes a similar approach to California, requiring developers to disclose how they “handle” incorporating external standards. Montana (SB 212) also passed a narrow law requiring deployers to implement a risk management framework that considers external standards when AI is deployed in critical infrastructure.

While Texas takes an incentive-based approach, offering compliance with a non-binding framework as an affirmative defense, California requires that deployers or developers consider and publish their approach to risk management frameworks that align with, or are substantially similar to, nationally or internationally recognized standards. Colorado originally occupied a complicated middle ground, mandating adherence for deployers’ risk management while simultaneously offering an affirmative defense from state AG actions for developers, deployers, and “other persons.” As with many laws surrounding emerging technologies, the approach is fragmented: entities utilizing voluntary AI governance frameworks are subject to varying degrees of liability and protection.


The Proposed Laws and Their Application

Nonetheless, the approach appears to be gaining momentum, particularly in proposed legislation regarding frontier models, liability, and automated decisionmaking. These proposals borrow language from existing bills to require or encourage developers and deployers to maintain a written policy that takes into account NIST’s AI RMF or a similar framework.4

Frontier Model Bills

Bills focusing on frontier models either copy or expand California’s approach. While California only required developers to publish their approach to external standards, some bills require developers and deployers to implement a framework that incorporates those standards. Notably, bills in this category do not refer to specific standards like NIST or ISO, instead opting for the more general terms “industry-consensus best practices” and “national/international standards.” Within this category, the bills take two approaches: either mandating a single framework that incorporates or considers external standards, or mandating both a public safety plan and a separate child protection plan that do the same. Bills in the first category include Illinois SB 3312 and Illinois HB 4799. Interestingly, both bills require any amendments or rulemakings made pursuant to the statute to also consider the same external standards. Bills in the second category include Illinois SB 3261, Utah HB 286, Tennessee SB 2171, and Nebraska LB 1083. These bills focus not only on frontier model developers but also target chatbot providers.

Liability Bills

Bills focusing on liability typically follow Texas’ approach, giving developers and deployers a safe harbor from product liability litigation if they implement external standards at various points in the AI lifecycle. Bills in this category include Illinois SB 3502/SB 3590, Maryland HB 712, and Vermont H 792. The requirements to achieve safe harbor differ for developers and deployers: developers must conduct “testing, evaluation, verification, validation, and auditing of that system consistent with industry best practices” and also submit a data sheet to the state Attorney General that includes:

  1. “Information on the intended contexts and uses of the artificial intelligence system in accordance with industry best practices;
  2. Information regarding the datasets upon which the artificial intelligence system was trained, including sources, volume, whether the dataset is proprietary, and how the datasets further the intended purpose of the product; 
  3. Accounting of foreseeable risks identified and steps taken to manage them consistent with industry best practices; and 
  4. Results of red-teaming testing and steps taken to mitigate identified risks, consistent with industry best practices.”

Deployer requirements for safe harbor are comparatively relaxed: the bills mandate a risk management framework that incorporates external standards.
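As a purely illustrative aid, the sketch below shows one way a developer might structure the required data sheet internally. It is a hypothetical Python rendering of the four statutory content categories quoted above: the bills prescribe the categories but no particular format, and every field name here is our own invention.

```python
# Hypothetical sketch of the "data sheet" the liability bills would require
# developers to submit to a state Attorney General. Field names are
# illustrative, not statutory.
from dataclasses import dataclass, field


@dataclass
class AIDataSheet:
    # 1. Intended contexts and uses, per industry best practices
    intended_uses: list[str]
    # 2. Training data: sources, volume, proprietary status, and how the
    #    datasets further the product's intended purpose
    dataset_sources: list[str]
    dataset_volume: str
    dataset_is_proprietary: bool
    dataset_purpose_rationale: str
    # 3. Foreseeable risks identified, mapped to mitigation steps taken
    foreseeable_risks: dict[str, str] = field(default_factory=dict)
    # 4. Red-teaming findings, mapped to steps taken to mitigate them
    red_team_findings: dict[str, str] = field(default_factory=dict)
```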

ADMT Bills

Bills focusing on ADMT follow Colorado’s original approach, adopted prior to the revised policy framework, pairing requirements to implement these standards with safe harbors from litigation to encourage doing so. Washington’s HB 2157 presumes conformity with the statute if the developer or deployer follows NIST’s AI RMF or ISO 42001, and also offers deployers a rebuttable presumption if they implement a risk management framework that conforms to NIST, ISO, or another standard of similar rigor. New York’s S 1169, on the other hand, requires developers and deployers to implement a risk management policy that conforms with NIST’s AI RMF or another standard designated by the state’s Attorney General. This bill diverges from the others by granting the state Attorney General the power to name which standards qualify under the statute, whereas most other bills leave that determination unanswered.

The Real Effects on Litigation

Industry standards like NIST’s AI RMF can also be used by courts to determine whether companies exercised reasonable care, even when no statute requires their adoption. This judicial reliance on voluntary standards follows established patterns from product liability and negligence cases, where courts have long looked to industry practices to define standards of care. In AI litigation, these frameworks can emerge as critical evidence in three areas: establishing the duty of care in negligence claims, determining defects in strict liability cases, and assessing good faith conduct when calculating punitive damages. The result is that compliance with non-binding standards can determine liability regardless of whether a jurisdiction has AI-specific legislation.

Product liability in AI litigation can be split into two categories: negligence cases and strict liability cases. Negligence cases depend on whether the defendant owed a duty of care to the plaintiff and whether that duty was breached. A duty of care’s existence depends on many factors, including industry standards. Many courts have recognized that “[e]vidence of industry standards, customs, and practices is ‘often highly probative when defining a standard of care.’”5 Other courts have concurred that “advisory guidelines and recommendations, while not conclusive, are admissible as bearing on the standard of care in determining negligence.”6 Openly complying with industry standards can provide an objective input into an otherwise subjective determination.

Strict liability takes a different approach, automatically imposing liability in cases involving dangerous animals, ultrahazardous activities, and product defects. Although not yet established, harm caused by AI would most likely be analyzed under the product defect theory, which would make an AI developer liable if the product is “defective” or the developer failed to provide adequate warning.7 The current wave of chatbot litigation points to this approach, with various plaintiffs pursuing this avenue.8 In determining what qualifies as “adequate warning” or whether a product is “defective,” courts often look to industry standards. Although a minority of states do not consider industry standards for strict liability,9 the majority of states and federal courts do.10 Following widely adopted standards, like those from NIST and ISO, can therefore prove pivotal in strict liability cases relating to AI.

In terms of punitive damages, both state and federal courts have looked favorably on adherence to industry standards when determining whether the defendant acted in good faith; however, there have been exceptions:

  1. A manufacturer followed industry standards but actively resisted safer designs based on economic considerations.11
  2. A company followed industry standards but knew about a remaining risk and failed to warn and remedy the risk.12
  3. A manufacturer followed industry standards but knowingly engaged in conduct that endangered people.13

While following industry standards has been beneficial for establishing “good faith” when courts assess punitive damages, it mostly serves as the baseline for expected behavior. Being able to readily demonstrate adherence to external standards is necessary, though not always sufficient, to avoid hefty punitive awards.

Whether through statutory requirements in Colorado and California, affirmative defenses in Texas, or judicial interpretation of “reasonable care” and “good faith” in liability cases, frameworks like NIST’s AI RMF and ISO 42001 are transitioning from voluntary best practices to de facto legal requirements. For AI developers and deployers, the message is clear: adopting these standards is becoming less optional and more essential for managing legal risk. Companies that wait for explicit regulatory mandates may find themselves behind the curve; organizations should begin implementing recognized risk management frameworks now, not because the law explicitly requires it everywhere, but because the legal system is already treating these standards as the baseline for reasonable conduct.
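What might documented adherence look like in practice? The sketch below is one hedged illustration: a machine-readable record mapping internal controls to the four core functions of NIST’s AI RMF (Govern, Map, Measure, Manage). The control names and evidence paths are hypothetical; the point is simply that an auditable artifact of framework alignment is the kind of evidence the courts and statutes discussed above would credit.

```python
# Illustrative only: an auditable record mapping hypothetical internal
# controls to the four core functions of NIST's AI RMF. Control names
# and evidence paths are invented for this example.
NIST_AI_RMF_ALIGNMENT = {
    "GOVERN": [{"control": "Board-approved AI risk policy",
                "evidence": "policies/ai-risk.md"}],
    "MAP": [{"control": "Documented intended uses and deployment contexts",
             "evidence": "docs/use-cases.md"}],
    "MEASURE": [{"control": "Pre-deployment bias and accuracy evaluation",
                 "evidence": "reports/eval-2026-q1.pdf"}],
    "MANAGE": [{"control": "Incident response and model rollback runbook",
                "evidence": "runbooks/rollback.md"}],
}


def unaddressed_functions(alignment: dict) -> list[str]:
    """Return RMF functions with no documented controls, i.e., gaps."""
    return [fn for fn, controls in alignment.items() if not controls]
```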

Beyond the defensive calculus of avoiding penalties and litigation exposure, there is an equally compelling case for adopting external standards: they are simply good governance. These frameworks are road-tested guides for building AI systems that produce accurate, ethical, and trustworthy outcomes. Organizations that internalize them are better positioned to earn customer trust, regardless of what any particular state legislature has done. Given the near impossibility of creating compliance programs that anticipate every nuance of every emerging AI law across fifty states, demonstrating a genuine, documented commitment to robust governance gives regulators reason to extend good faith when questions arise. An organization that can point to systematic, principled governance processes is better positioned in a regulatory conversation than one scrambling to reverse engineer compliance after the fact. The growing statutory references to NIST and ISO standards are a development worth watching closely, but the stronger argument for adoption may be the proactive one: these frameworks represent a genuine commitment to getting AI governance right, not merely a hedge against enforcement risk.


  1. Colo. Rev. Stat. Ann. § 6-1-1703. ↩︎
  2. 2025 Tex. Sess. Law Serv. Ch. 1174, Sec. 552.105(e)(2)(D) (H.B. 149). ↩︎
  3. Cal. Bus. & Prof. Code § 22757.12. ↩︎
  4. H.B. 286, 13-72b-102(1)(a) (Utah 2026); H.B. 4705, Sec. 15(a)(3)(A), 104th Gen. Assem. (Ill. 2026); S.B. 2171, 68-107-103(a)(1)(G), 114th Gen. Assem. (Tenn. 2026); N.Y. S8828 § 1421(a) (2026). ↩︎
  5. Elledge v. Richland/Lexington Sch. Dist. Five, 341 S.C. 473 (Ct. App. 2000). ↩︎
  6. Cook v. Royal Caribbean Cruises, Ltd., No. 11-20723-CIV, 2012 WL 1792628, at *3 (S.D. Fla. May 15, 2012). ↩︎
  7. Restatement (Second) of Torts § 402A (Am. L. Inst. 1965). ↩︎
  8. Garcia v. Character Techs., Inc., No. 6:24-cv-1903-ACC-DCI, 2025 U.S. Dist. LEXIS 215157 (M.D. Fla. Oct. 31, 2025). ↩︎
  9. Sullivan v. Werner Co., 253 A.3d 730 (2021). ↩︎
  10. Joshua D. Kalanic et al., AI Soft Law and the Mitigation of Product Liability Risk, Ctr. for L., Sci. & Innovation, Ariz. State Univ. (July 2021), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3909089 (“Yet, many states still allow industry standards to be presented as evidence because such standards can help show the reasonableness and adequacy of the design.”). ↩︎
  11. Gen. Motors Corp. v. Moseley, 447 S.E.2d 302 (1994). ↩︎
  12. Flax v. DaimlerChrysler Corp., 272 S.W.3d 521 (Tenn. 2008). ↩︎
  13. Uniroyal Goodrich Tire Co. v. Ford, 461 S.E.2d 877, 880 (1995). ↩︎

Red Lines under the EU AI Act: Understanding the ban of the untargeted scraping of facial images and facial recognition databases 

Blog 5 | Red Lines under the EU AI Act Series

This blog is the fifth of a series that explores prohibited AI practices under the EU AI Act and their interplay with existing EU law. You can find the whole series here.

1. Introduction

The fifth blog in the “Red lines under the EU AI Act” series unpacks the Article 5(1)(e) prohibition on placing on the market, putting into service, or using AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the Internet or CCTV footage. Notably, this provision specifically targets the acts that precede facial recognition itself, which is tackled separately under a different provision of the AI Act, Article 5(1)(h). A number of key takeaways emerge from our analysis:

Following this brief introduction, Section 2 outlines the rationale behind the prohibition, while Section 3 notes its specific scope as defined in the differentiation between “targeted” and “untargeted” scraping. Section 4 outlines what falls outside the scope of the prohibition, potentially including use-cases of AI-driven deepfakes, while Section 5 explores the AI Act’s interplay with other relevant areas of EU law, including the GDPR and Law Enforcement Directive (LED). After noting significant cases on facial recognition by DPAs, Section 6 includes concluding reflections and key takeaways. 

2. Context and rationale: untargeted scraping of facial images as a particularly intrusive practice posing “unacceptable risk”, consistent with past case law under the GDPR 

Article 5(1)(e) AI Act prohibits the creation or expansion of facial recognition databases through the untargeted scraping of internet or CCTV footage. The European Commission’s Guidelines on Prohibited Artificial Intelligence Practices under the AI Act recognize that the untargeted scraping of facial images “seriously interferes with individuals’ right to privacy and data protection and deny those individuals the right to remain anonymous”. This is further supported by Recital 43 AI Act, which recognizes that the untargeted scraping of facial images can add to the feeling of mass surveillance and lead to gross violations of fundamental rights, including the right to privacy. 

The context and rationale of the AI Act’s prohibition is consistent with past case law by DPAs across the EU on the basis of the GDPR. Indeed, the expansion and creation of facial recognition databases on the basis of the untargeted scraping of data, including biometric data such as facial images, has been a continuous area of serious concern for DPAs. From 2022 to 2024, several DPAs imposed large fines on Clearview AI for GDPR violations due to practices related to facial recognition, as highlighted in Section 5 of this blog. 

3. Defining facial recognition databases and (targeted vs.) untargeted scraping

    Article 5(1)(e) AI Act states that the following practice shall be prohibited: “the placing on the market, the putting into service for this specific purpose, or the use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage” (emphasis added).

    Four cumulative conditions must be met for the prohibition to apply: 

    1. The practice must constitute market placement, putting into service for this specific purpose, or usage of the AI system; 
    2. Aim to create or expand facial recognition databases; 
    3. Employ AI tools for untargeted scraping methods; and 
    4. Source images from either the internet or CCTV footage.

    The Guidelines clarify that, similarly to the other Article 5 prohibitions, all four cumulative conditions above must be met simultaneously to trigger the prohibition. This approach, which is a consistent element of the AI Act’s full set of prohibited practices, seems to ensure a targeted approach to banning very specific uses of AI technologies. The prohibition applies to both providers and deployers who, in accordance with their responsibilities and placement in the value chain, have a responsibility not to place on the market, put into service, or use AI systems for this specific purpose. 
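    Because the conditions are cumulative, the prohibition operates like a logical AND: if any single condition fails, Article 5(1)(e) does not apply (although other law, such as the GDPR, may still). The schematic sketch below is a reading aid only, with self-explanatory hypothetical flags, not a compliance tool.

```python
# Schematic illustration of Article 5(1)(e)'s four cumulative conditions.
# Each flag would require careful legal analysis in practice.
def article_5_1_e_applies(
    placed_on_market_put_into_service_or_used: bool,
    creates_or_expands_facial_recognition_database: bool,
    uses_ai_for_untargeted_scraping: bool,
    sources_images_from_internet_or_cctv: bool,
) -> bool:
    # All four conditions must be met simultaneously to trigger the ban.
    return all([
        placed_on_market_put_into_service_or_used,
        creates_or_expands_facial_recognition_database,
        uses_ai_for_untargeted_scraping,
        sources_images_from_internet_or_cctv,
    ])
```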

    The Guidelines stress that Article 5(1)(e) AI Act does not require that the sole purpose of the database is to be used for facial recognition; it is sufficient that the database can be used for facial recognition. The Guidelines define a “database” in this context as any collection of data or information that is specially organized for rapid search and retrieval by a computer, and may be temporary, centralized or decentralized. 

    An important distinction in the application of this provision is between targeted and untargeted scraping – the prohibition does not apply to any scraping tool with which a facial recognition database may be constructed or expanded, but only to tools for untargeted scraping. In this context, untargeted scraping is defined as a technique that absorbs as much data and information as possible from different sources, without a specific focus on a given individual or group of individuals. This may be done using a variety of scraping tools and techniques, including web crawlers, bots, or other means that automatically extract data from a variety of sources, including CCTV footage, social media, and other websites.

    It is crucial to determine the precise scope of the scraping, since the Guidelines further note that the prohibition does not cover “targeted” scraping, such as the collection of images or videos of specific individuals or pre-defined groups of persons for law enforcement purposes. Furthermore, in more complex systems combining targeted with untargeted searches, only the untargeted scraping is prohibited. 

    Notably, the Guidelines highlight that the publication of images on social media by natural persons does not constitute consent for inclusion in facial recognition databases, aligning with the GDPR notion of (valid) consent as a legal basis for processing personal data. 

    4. What falls outside the scope of the prohibition?

    While specifically targeted scraping is in some cases allowed, several other practices fall outside the prohibition’s scope, including the untargeted scraping of biometric data other than facial images (such as voice samples) and, importantly, non-AI scraping methods. The Guidelines also note that AI systems that harvest large amounts of facial images from the internet to build AI models that generate new images of fictitious persons similarly fall outside the scope of the prohibition.

    While the logic behind this last use-case is seemingly to permit the effective training of AI models, and while it explicitly falls outside the scope of the prohibition, attention should be paid to the practice’s compliance with both copyright and data protection laws. Indeed, AI systems that scrape large amounts of facial images to build AI models may trigger the dual application of EU copyright rules, which protect the images themselves to the extent they are under copyright protection, and of the GDPR, which protects facial images as personal data, or even as special category biometric data where they are processed for the purpose of uniquely identifying a person. And while the scope of this prohibition was agreed upon by co-legislators during final negotiations for the AI Act, this particular use-case may not account for the increasing sophistication of AI-driven deepfakes.

    In fact, at the time of writing, the European Parliament has reportedly reached a political agreement on the AI Act Omnibus, whose latest compromise text includes a new addition to the list of prohibited practices: once adopted, non-consensual sexual deepfakes will be banned under the revised AI Act.

    It is also worth noting that while this new ban will allow for further protection, it will not cover all use-cases of AI-driven deepfakes, potentially marking an area of continuous review by regulators and legislators alike. For this purpose, outside of the Omnibus procedure, Article 112 of the AI Act empowers the Commission to assess and review the list of prohibited practices on a yearly basis and to submit the resulting assessment report to the EU Parliament and Council.

    5. Interplay with other EU laws: From the GDPR to the LED

    5.1. Facial recognition as a long-standing regulatory priority for DPAs across the EU

    The creation and expansion of facial recognition databases on the basis of untargeted scraping of facial images has been a prominent area of regulatory intervention on the basis of the GDPR. In February 2022, the Italian DPA (Garante) fined Clearview AI €20 million and imposed a ban on the company’s further collection and processing of data, including biometric data, and ordered the erasure of such data relating to citizens on Italian territory. 

    In October 2022, the French DPA (CNIL) similarly imposed a fine of €20 million on Clearview AI, recognizing the very serious risk to individuals’ fundamental rights posed by their facial recognition software. In September 2024, in an ex officio investigation, the Dutch DPA (AP) fined Clearview AI €30 million for “illegal data collection for facial recognition.” 

    In their investigations, DPAs found breaches of the GDPR’s Article 6 (lawfulness of processing) and Article 9 (processing of special categories of personal data), as well as a failure to fulfil data subject rights, particularly those found in Article 15 (right of access) and Article 17 (right to erasure). The Garante also found breaches of key principles of data protection, in particular lawfulness, fairness and transparency (Article 5(1)(a) GDPR), the purpose limitation principle (Article 5(1)(b) GDPR), and the storage limitation principle (Article 5(1)(e) GDPR). As such, in addition to constituting a prohibited practice under the AI Act, the untargeted scraping of facial images for the purposes of creating or expanding a facial recognition database also contravenes several obligations found in the GDPR.

    The Guidelines themselves similarly note that, in general, the processing of personal data via the untargeted scraping of the Internet or CCTV material to build up or expand face recognition databases is unlawful, and there is no legal basis under the GDPR for such activity. 

    5.2. Law Enforcement use of facial recognition databases

    Law Enforcement Authorities (LEAs) use facial recognition databases for identification purposes, allowing for the automated identification of individuals who may in some way be related to criminal events, such as suspects, wanted persons, victims, or witnesses. The databases LEAs use for face matching include ones built from surveillance footage, private data sources, and open-source data from the internet.

    While the AI Act prohibits the creation or expansion of facial recognition databases through untargeted image scraping from the internet or CCTV footage, the provision does not seem to prohibit LEAs from using, for face matching and identification purposes, databases that already existed before the prohibition. Hence, there may be a legal gap between the prohibition on creating new databases or expanding existing ones through image scraping, and the use of databases created prior to the entry into force of the AI Act prohibition.

    The AI Act’s Article 5(1)(e) prohibition admits no exceptions for law enforcement use, unlike Article 5(1)(h) on real-time remote biometric identification (to be explored in the final instalment of this blog series), which has a carve-out for competent authorities in public spaces under strict conditions. The AI Act’s blanket ban seems intentional to prevent circumvention through law enforcement justifications. 

    The LED, the specific legal framework for data protection in law enforcement, takes a more balanced approach: it may permit particularly intrusive practices if proportionate, necessary, and legally grounded. Hence, if a biometric database is strictly necessary, sufficiently targeted (e.g., limited to footage related to a specific investigation), and proportionate for law enforcement purposes, it passes the LED test.

    Article 10 LED governs the processing of special categories of data, including biometric data processed for the purpose of uniquely identifying a natural person, and permits such processing only where it is strictly necessary, subject to appropriate safeguards, and authorized by Union or Member State law. Untargeted scraping does not seem to satisfy Article 10 conditions. 

    Hence, even though the LED does not explicitly prohibit the use of databases built from untargeted scraping, its strict requirements implicitly reach the same normative position as the AI Act. The primary difference is that the AI Act’s prohibition does not engage with that balancing at all: untargeted scraping is simply prohibited. The two legal instruments thus create overlapping and mutually reinforcing layers of prohibition. One question that remains is whether a database created outside of the EU can be used by LEAs in the EU in accordance with the LED or the AI Act.

    6. Concluding Reflections and Key Takeaways

    The AI Act’s prohibition is consistent with previous DPA decisions under the GDPR, which remains the most comprehensive protection in facial recognition use-cases

    The prohibition’s differentiation between the targeted and untargeted scraping of facial images, and the subsequent ban of untargeted scraping, is reminiscent of several DPAs’ fines, particularly in the line of Clearview AI cases between 2022 and 2024. DPAs, including the Italian Garante, the Dutch AP, and the CNIL, found that Clearview AI’s facial recognition software breached several of the GDPR’s key principles and obligations.

    The prohibition expressly differentiates between “targeted” and “untargeted” scraping, thereby limiting the scope of its application and excluding qualified “targeted” scraping from its scope 

    The differentiation between targeted and untargeted scraping is also significant because the AI Act does not include a blanket ban on all scraping of facial images. Indeed, it acknowledges that in some cases, such as in law enforcement contexts, targeted scraping may be lawful when strictly necessary and proportionate. The LED sets out specific conditions for such use-cases, which are tightly regulated across the EU. An analysis of the interplay between the LED and the AI Act shows an alignment between the two instruments, creating mutually reinforcing layers of prohibition.

    Some use-cases, such as the harvesting of facial images for training AI models that generate new images of fictitious persons, may lead to increasingly complex compliance scenarios 

    When analyzing the practices or use cases that fall outside the scope of the prohibition, we also found that certain AI-driven deepfakes have so far not been captured by Article 5 AI Act. This seems to have been recognized by legislators as well: on 11 March 2026, it was reported that the European Parliament reached a political agreement on the AI Act Omnibus, which aims to include a new ban on non-consensual sexual deepfakes. While this development will allow for further protection, the new ban does not cover all AI-driven deepfakes.

    FPF Privacy Papers for Policymakers: Impactful Privacy and AI Scholarship for a Digital Future

    FPF recently concluded its 16th Annual Privacy Papers for Policymakers (PPPM) events, hosting two dynamic virtual ceremonies on March 4 and March 11, 2026. This year’s program centered on the most pressing areas in privacy and AI governance, bringing together global awardees to discuss their research with leading discussants from industry, academia, and civil society.

    The awards highlighted important work that analyzes current and emerging privacy and AI issues and proposes achievable short-term solutions or analytical approaches that could lead to real-world policy outcomes. Seven winning papers, two honorable mentions, and one student submission were chosen by a group of FPF staff members and advisors based on originality, applicability to policymaking, and overall quality of writing.

    Papers were presented in two separate virtual sessions. 

    The first session featured awardees:

    With discussants Ed Britan (Salesforce), Christa Laser (Cleveland State University), Gabriel Nicholas (Anthropic), and Jevan Hutson (Grindr). 

    The second session featured awardees:

    With discussants Janis Kestenbaum (Perkins Coie), Theodore Christakis (Université Grenoble Alpes), and Hilary Wandall (Dun & Bradstreet).

    Africa’s Data Protection Reforms: A Continental Perspective on the Drivers of Change in Legal Frameworks

    1. Introduction

    Within an evolving digital landscape, several African jurisdictions have proposed a variety of reforms to existing and novel legal frameworks that regulate the processing of personal data, and the development and deployment of new technologies. Across the continent, there is a growing consensus among legislators on the need to create a regulatory environment that is responsive and adaptable to a changing technological landscape and a growing digital economy.

    This blog traces data protection legal and policy reforms across seven African countries – Nigeria, Kenya, Angola, Ghana, Mauritius, Botswana, and Seychelles – to identify their scope, rationale, and common and diverging themes. The blog also briefly looks at regional and sub-regional legal reforms to note the potential implications for other countries that might consider similar reforms and eventual harmonization. 

    While these developments are unfolding across Africa, they are occurring alongside broader global efforts to rethink data protection frameworks. Discussions around data protection policy reforms are intensifying in jurisdictions such as the European Union, which introduced a simplification package to reduce regulatory burdens and boost competitiveness; the UK, which finalized reforms to its data protection framework through the Data Use and Access Act (2025); and South Korea, which continues to explore legal reforms to its data protection law to facilitate the development of AI. Against this backdrop, data protection reforms across the African continent bring a distinctly local flavor: legislators across African jurisdictions agree that any reforms or amendments must first and foremost be reflective of local realities. 

    In the closing section, the blog considers the future of legal reforms on the continent by drawing from ongoing discussions and lessons learned in other key jurisdictions. In doing so, the following takeaways emerge: 

    Overall, most reforms are, for the time being, confined to national borders. However, legal reforms have also been proposed both continentally and at the sub-regional level, for example, through the Economic Community of West African States’ (ECOWAS) reform of its Supplementary Act on Personal Data Protection. While these regional reforms have not gained much traction compared to national efforts, they are nonetheless crucial as they can continue to inform ongoing debates for legal reforms within their respective Member States.

    2. A new task for data protection law? New obligations for digital platforms and developer accountability

    Although the Nigeria Data Protection Act 2023 (NDPA) is a relatively new law, proposals to amend it have already emerged through two separate legislative initiatives. The first, SB.650: Nigeria Data Protection Act (Amendment) Bill, 2024, seeks to amend the NDPA by introducing requirements for social media companies to establish physical offices in the country. At present, no other substantive changes to the NDPA have been outlined, making this the central focus of the reform proposal. The Bill, which is in its second reading, notes that while major social media platforms have significant Nigerian user engagement, they have yet to set up a physical presence in Nigeria as they have done in other countries. 

    According to the sponsor of the Bill, Senator Ned Nwoko, requiring companies to establish a physical presence will contribute to the economy as well as ensure their compliance with the country’s legal framework. The Bill was referred to the Senate Committee on ICT & Cybersecurity, with a report expected within two months. 

    The second legislative proposal, HB.2436: Nigeria Data Protection (Amendment) Bill, 2025, focuses on strengthening accountability in the digital ecosystem by introducing obligations for application developers, regulating third-party data sharing, and expanding the enforcement powers of the Nigeria Data Protection Commission. Among other provisions, the bill proposes requirements for application developers to register with the Commission, maintain data processing registers, implement consent interfaces, and conduct annual data protection impact assessments, while also introducing stricter rules governing third-party data sharing and related enforcement measures.

    However, while updates regarding the progress of both Bills have been limited, the decision to amend the NDPA to cover social media companies has been criticized by civil society groups on the grounds that requiring social media companies to establish physical offices in the country may extend beyond the initial objectives of the country’s data protection framework. Indeed, the NDPA is a principles-based data protection law that focuses on regulating the processing of data across all sectors, rather than regulating specific entities such as social media companies. 

    A brief look at the history of social media regulation in Nigeria shows that it is intricately connected with state regulation of the freedom of expression. While past attempts to regulate the use of social media platforms have largely taken the form of ad-hoc bans on the basis of national security concerns, the proposed Bill to amend the NDPA signals a new approach: one that aims to progressively embed social media oversight within broader data governance frameworks, starting with data protection law. In this sense, Nigeria’s approach to amending its NDPA highlights how national-level priorities and new technological realities converge under the umbrella of data protection.

    3. Processing of sensitive personal data by third parties in Kenya 

    Calls for amendments to Kenya’s Data Protection Act of 2019 (KDPA) began informally, largely owing to implementation challenges and gaps observed by controllers and processors. They were further solidified by the Parliamentary Report on the inquiry into the activities and operations of WorldCoin in Kenya, completed in 2023. The inquiry, conducted by an ad hoc Parliamentary Committee, was set up to establish the legality of WorldCoin’s processing of sensitive personal data. The resulting Report included considerations on legal and regulatory gaps to provide safeguards for this type of data processing activity. While not legally binding, the Parliamentary Committee’s findings have nevertheless informed the push to amend the KDPA on grounds including:

    Negotiations on the amendment of the KDPA are ongoing, and public consultations are expected to happen soon. Early contributions have been made by organizations such as the Data Protection and Governance Society of Kenya, which has proposed amendments such as the creation of a data protection appeals tribunal to hear appeals from the Office of the Data Protection Commissioner (ODPC). This would reduce the burden of appeals at the High Court, which have been numerous. It also suggests repealing Section 54 of the KDPA, which gives the Data Protection Commissioner the power to exempt compliance with certain provisions of the Act unless such exemptions are provided for under other regulations; repeal would provide more certainty on the conditions for exemption. Overall, Kenya’s approach to amending its data protection framework is driven by a growing interest in addressing specific procedural challenges related to enforcement. 

    4. Angola leads the way in amending its data protection law to address the need for regulating AI

    Unlike in Kenya and Nigeria, the discourse on data protection reform in Angola is driven by the need to regulate emerging technologies, including AI. As African countries continue to carve out policy and legislative proposals aimed at regulating the development and deployment of AI, mostly in the form of national AI strategies, some countries are considering more specific legislation. South Africa, for instance, has proposed standalone AI legislation under its National AI Framework, while others such as Angola have opted to revise existing data protection laws to address privacy challenges posed by AI systems already in use. 

    Angola’s move to regulate AI began with the recognition of privacy risks posed by AI. In March 2025, Angola’s data protection agency released a public consultation on the revision of its 2011 data protection law. Besides introducing numerous new sections, the draft revised law notably contains a section dedicated to AI. Its robust provisions on AI differentiate the law from other data protection laws in Africa, whose automated decision-making provisions mostly mimic Article 22 of the GDPR. Noteworthy aspects of the regulation of AI in the revised law include:

    What stands out about Angola’s approach to reforming its data protection law is the explicit specification of rules with regard to the use of AI for credit scoring. Article 23 provides nuance to the proposed legal reforms by identifying country-specific challenges introduced by the use of AI, and specifically the use of AI-enabled systems for credit scoring, thus moving away from the more general automated decision-making provisions seen continentally.

    The use of AI in credit scoring remains one of the earliest uses of AI continentally and has generated considerable data protection concerns, leading to several landmark enforcement decisions in some countries and necessitating specific guidelines on the use of personal data by digital lenders. Kenya’s body of enforcement decisions, for example, includes numerous such cases, some involving repeat offenders. The decision to specifically regulate the use of AI within the credit scoring industry points to the need to address subject-specific issues relating to the processing of personal data in Angola. Notably, Angola’s proposed reforms parallel the EU AI Act’s approach by specifically regulating AI-enabled credit scoring as a high-risk application, recognizing its widespread use and potential for harm. Like the EU AI Act’s Annex III(5)(b), which classifies credit scoring as high-risk, Angola moves beyond general provisions on automated decision-making to address country-specific risks to data subjects.

    5. Mauritius seeks to boost its growing business process outsourcing industry

    National economic considerations such as Mauritius’ vision of becoming a preferred destination for business process outsourcing (BPO) and knowledge-based services have been central to its data protection reforms. The recently released National ICT Blueprint views legal and regulatory reforms, including to the data protection framework, as enablers of Mauritius’ goals for economic growth. According to the Blueprint, Mauritius intends to align its national frameworks with the AU Data Policy Framework as well as create regulatory conditions for pursuing an EU adequacy decision. These ongoing reforms aim to position Mauritius as a leader for outsourced services.


    Such economic considerations have been a major factor in the repeated amendments of Mauritius’ data protection law to date. Its first data protection law, enacted in 2004, was heavily influenced by the EU Data Protection Directive of 1995. The 2004 law was amended twice to bring its text into closer alignment with the Directive and improve Mauritius’ chances of an adequacy decision from the European Commission, thus facilitating personal data transfers at a time when the country sought investment in its BPO sector, with the EU as the primary source of that business. In 2017, the current data protection law of Mauritius was enacted, repealing the 2004 law but maintaining the initial aspiration of being a leader in outsourced BPO services. Mauritius subsequently ratified the Council of Europe’s Convention 108 and, in 2020, Convention 108+.

    6. Botswana’s path to filling in practical implementation gaps

    Botswana enacted its new data protection law in 2024, repealing the 2018 law and introducing new provisions to address that law’s implementation gaps. The 2018 framework, which had been in transition since 2021, did not provide sufficient clarity on certain matters, including the institutional independence of the Information and Data Protection Commission, the scope of its enforcement powers, and the practical obligations of data controllers and processors. 

    For example, compared to most data protection laws on the continent, Botswana’s 2018 data protection law did not provide modalities for responding to data subject rights, and its limited focus on data controllers, with processors treated merely as agents, created ambiguity in shared compliance responsibilities. It also lacked provisions on accountability, joint controllership, and clear rules governing relationships between controllers and processors, including the use of sub-processors. Similarly, there were no requirements for data protection impact assessments (DPIAs) or structured procedures for breach notification beyond informing the Commission, and sanctions were limited to fixed fines and criminal penalties rather than risk-based administrative measures.

    The 2024 Act responds to such uncertainty by clearly defining the Commission’s authority, strengthening accountability mechanisms, and introducing risk-based tools such as DPIAs. It distinguishes between controllers and processors as separate entities with direct statutory obligations, introduces concepts of joint controllership and data protection by design and default, and requires formalised contractual arrangements for processor relationships, including restrictions on the use of sub-processors. The Act further mandates breach notifications to both the Commission and affected data subjects, introduces proportionate administrative fines, and establishes structured compliance roles such as Data Protection Officers (DPOs). These reforms, alongside an expanded territorial scope and refined definitions of sensitive data, collectively close the significant regulatory and operational gaps left by the 2018 framework.

    7. Seychelles’ reforms reflect clearer provisions and expanded transfer mechanisms while retaining limited extraterritorial application

    Continuing the shift from theoretical legal frameworks to practical and clearer provisions, Seychelles replaced its 2002 Data Protection Act, which had never been implemented, with the 2023 Data Protection Act. The overhaul of Seychelles’ data protection regime marked a move from a largely symbolic framework to one grounded in enforcement, accountability, and operational clarity. Unlike the earlier law, which relied on formal registration of “data users” and “computer bureaux” but imposed few operational duties, the 2023 Act abandons registration in favour of an accountability-based model requiring data controllers and processors to maintain internal records, demonstrate compliance, and cooperate with regulatory audits. Security obligations have also evolved from a general duty to prevent unauthorized disclosure to a detailed mandate for technical, organisational, and physical safeguards, including breach notification duties.

    Equally, the 2023 Act introduced explicit obligations for data processors – including acting only on a controller’s instructions, maintaining security measures, and being jointly liable for breaches – supported by mandatory written contracts between controllers and processors that define purpose, scope, and safeguards. The law also embeds governance mechanisms by requiring DPOs and DPIAs for high-risk processing, neither of which existed in the 2002 text. 

    With regard to cross-border data transfers, the 2023 regime replaces the earlier “transfer prohibition notice” system with a more flexible approach permitting international data flows where adequate protection or recognised safeguards exist. Notably, the 2023 Act expressly recognises participation in frameworks such as the Global Cross-Border Privacy Rules (CBPR) System, signalling Seychelles’ intention to align its transfer mechanisms with interoperable international privacy standards and expanding the mechanisms available for transfers. 

    Finally, enforcement capacity has been strengthened with the 2023 Act empowering the Information Commission to conduct audits and inspections independently, issue enforcement notices, and impose administrative fines, enhancing oversight compared to the limited, warrant-based powers of the 2002 law. 

    While its territorial scope remains modest compared to broader extraterritorial models, these reforms collectively transform Seychelles’ data protection law into a more operational, risk-based, and globally interoperable framework.

    8. Ghana seeks to introduce a new Bill to strengthen enforcement and oversight, including broader data subject protections

    Ghana first enacted its data protection law in 2012, which also established the Data Protection Commission. However, implementation challenges soon emerged, including the absence of a clear framework for cross-border data transfers, limited protection for vulnerable groups such as children, and a scope of application that, unlike newer-generation data protection laws, did not extend to foreign entities offering goods or services in Ghana. These gaps created practical and regulatory difficulties. On 17 October 2025, the new Data Protection Bill, 2025, spearheaded by the Ministry of Communication, Digital Technology, and Innovations, was therefore introduced with the aim of addressing these shortcomings and modernizing the country’s data governance framework.

    Overall, the Bill aims to strengthen oversight by introducing clearer obligations, enhanced data subject rights, and a more robust regulatory structure. Particularly, it introduces key reforms by addressing emerging privacy challenges associated with new technologies, introducing data ownership rights, and refining exemptions for the processing of personal data.

    In contrast to Angola’s targeted approach to addressing privacy concerns in AI systems, Ghana seeks to adopt a broader stance by regulating all emerging technologies, including AI systems, insofar as they process personal data. For automated decision-making (ADM) systems, the Bill would require outcomes to be explainable, contestable, and subject to human oversight – obligations that were absent from the 2012 Act, which required only notification when decisions involved ADM. The Bill also aims to introduce explicit requirements for the use of privacy-enhancing technologies in ADM systems, a novel provision not contained in the earlier law.

    On data ownership, the Bill would introduce a framework that recognises personal data as the property of the data subject and establishes a fiduciary-style relationship between data subjects and controllers. Under this model, controllers and processors are deemed custodians of personal data with a duty of care, and no form of processing confers ownership rights, including on public authorities. If passed, Ghana would become one of the few jurisdictions globally to recognise the proprietary nature of personal data, with significant implications for secondary data use, AI development, and the application of rights such as the right to object to processing. The Bill was open for public consultation until 28 November 2025 and could be adopted as early as 2026.

    Regarding exemptions, the Bill aims to retain the broad exemption themes found in the 2012 Act, but significantly expand and refine them. While both instruments include exemptions for national security, the 2012 Act required a ministerial certificate to validate the exemption. The 2025 Bill removes this safeguard, a notable development given the increasing reliance on public-interest grounds to limit privacy protections across the continent.

    Crucially, the Bill would introduce a comprehensive regime for cross-border data transfers, which was absent from the 2012 Act. The new framework emphasizes data localization, unless such localization would impair business operations. Where transfers are necessary, the Bill would require data subject consent, approval from the Data Protection Authority, and compliance with additional conditions designed to safeguard personal data before it leaves Ghana.

    9. The patchwork challenge: emerging regional frameworks

    Even as countries unilaterally consider legal reforms, there are regional plans, led by the AU and the respective Regional Economic Communities (RECs), to amend or create new data protection frameworks for their Member States. Regional initiatives must navigate a complex landscape where many States already have distinct data protection regimes. At the continental level, the AU announced plans to revise the Malabo Convention. At the sub-regional level, ECOWAS is expected to revise the Supplementary Act on Data Protection, the East African Community (EAC) is developing its data governance framework, and the Southern African Development Community (SADC) has plans to revise its Model Law. Despite their minimal influence on national laws so far, legal reforms at the REC level could spur similar actions by Member States, especially in the ECOWAS region, where the Supplementary Act on Personal Data Protection is legally binding on member states.

    As legal reforms continue, bigger questions remain about what will drive such reforms, especially considering that some African countries still maintain legal frameworks influenced by the now-defunct 1995 EU Data Protection Directive. 

    9.1. Development and deployment of AI in Africa

    Strongly tied to responsible data use is the development of local AI systems, as well as the general adoption of AI across the continent. Discussions of the former largely revolve around the lack of local datasets for training AI models, hence the emergence of targeted initiatives seeking to address this issue. The theory that effective data protection regimes can enable responsible local data collection and use has gained ground, as seen in continental data governance frameworks such as the AU Data Policy Framework.

    Additionally, the risks posed by the general adoption of AI have been highlighted on the continent as drivers of legal reforms in countries such as Angola, as explored above. Data protection frameworks have been put forward as useful instruments for ensuring the responsible development and deployment of AI, as seen in the text of numerous national AI strategies – some of which note, however, that ADM provisions alone may not be sufficient for addressing AI harms. For example, South Africa’s AI Policy Framework contemplates a standalone AI Act to complement its national data protection law.

    While there is growing regulatory momentum toward comprehensive AI-specific laws, there are currently none on the continent that provide guidance on the development and deployment of high-risk AI systems. Nonetheless, some DPAs on the continent are grappling with the foundational question of what privacy risks are unfolding in the use of AI systems. DPA activities related to regulating AI have included Senegal’s CDP rejecting an application for the use of facial recognition systems in the workplace, requiring the controller to use less intrusive means of registering work attendance, and the Mauritian data protection authority’s Guide on Data Protection for Health Data and Artificial Intelligence. Such approaches signal that even though considerations of stand-alone AI regulation on the continent are in their nascent stages, DPAs are nevertheless addressing new AI technologies on the basis of national data protection law, whether in the form of guidance or through enforcement.

    10. Concluding reflections: The future of data protection legal reforms in Africa

    The EU, whose data protection legal framework has been relied on by many African countries, is currently considering amendments to its existing data protection framework through an Omnibus initiative. Amendments to laws that have largely informed legal frameworks across Africa could provide a moment of reflection for the “recipient” countries, some of which have already registered the challenges of implementing current data protection frameworks, especially for SMEs, and questioned the impact of the “Brussels effect” for their own national data protection laws.

    In addition to the shifts noted in the EU, legal reforms in Africa are also increasingly influenced by the growing recognition of data as a national asset and the subsequent need for autonomy on its protection and governance. There are already new sector-specific regulations that place emphasis on balancing data use and protection, as well as explicitly designating governments as custodians of such data. Implementation of these sector-specific laws has revealed gaps in foundational data protection frameworks, prompting legal reforms towards frameworks that not only safeguard rights but also enable responsible data access and re-use.

    As data protection reforms take shape across the continent, the question is not whether change will come but, rather, what form it will take. 

    The Chatbot Moment: Mapping the Emerging 2026 U.S. Chatbot Legislative Landscape

    Special thanks to Rafal Fryc, U.S. Legislation Intern, for his research and development of the resources referenced.

    If there is one area of AI policy that lawmakers seem particularly eager to regulate in 2026, it’s chatbots. As state legislative sessions ramp up across the country, policymakers at both the state and federal levels have introduced dozens of bills aimed at chatbots, from so-called “AI companions” to “mental health chatbots.” The Future of Privacy Forum (FPF) is currently tracking 98 chatbot-specific bills across 34 states, as well as three federal proposals.

    Yet despite the shared concern driving these proposals (often tied to safety risks, youth protections, and several high-profile incidents involving chatbots and self-harm) the bills themselves look very different from one another. Definitions of “chatbot” vary widely across legislation. The result is the early contours of a potential regulatory patchwork, where different tools may fall within the scope of different state laws and where compliance obligations, like disclosures or safety protocols, could vary broadly across jurisdictions. As states including Oregon and Washington prepare to imminently enact new chatbot legislation, it remains to be seen how closely 2026 frameworks will ultimately align.

    To help make sense of this rapidly evolving landscape, FPF developed two one-pager resources summarizing key trends in chatbot legislation. The first highlights some of the definitional patterns beginning to appear, identifying eleven legislative frameworks used to define chatbots. The second maps the six most common regulatory provisions appearing across chatbot bills.

    With these resources, we explore two central questions shaping the emerging chatbot policy debate: how lawmakers are defining chatbots and what regulatory approaches are beginning to emerge across states.

    Chatbot Legislation in 2025

    Chatbots began attracting legislative attention in 2025. Last year, five states enacted chatbot-specific legislation: California (SB 243), New York (S-3008C), New Hampshire (HB 143), Utah (HB 452), and Maine (LD 1727). Other laws enacted in 2025, such as Illinois (HB 1806) and Nevada (AB 406), did not define chatbots directly but regulate the use of AI systems, including chatbots, in the delivery of licensed mental or behavioral health services.1

    Even with this activity in 2025, the scale of legislative interest in 2026 represents a significant expansion. In both volume and policy focus, chatbot legislation is emerging as one of the most active areas of AI policymaking. Interest is also broadly bipartisan, with 53 percent of the chatbot bills tracked by FPF introduced by Democrats and 46 percent introduced by Republicans.

    Defining Chatbots: Why It’s Harder Than It Looks

    One of the central challenges lawmakers face is definitional: what exactly counts as a chatbot?

    In practice, chatbots appear in a wide range of contexts, from customer service assistants and tutoring tools to wellness apps, voice assistants, and AI companions designed for social interaction. Small shifts in how legislation defines these systems can dramatically impact which technologies fall inside or outside a bill’s scope.

    Three Terms Scope Chatbot Legislation

    As detailed in the FPF one-pager resources, three primary terms are emerging to scope chatbot legislation in 2026: “chatbots,” “companion chatbots,” and “mental health chatbots.” Each term attempts to capture a different category of system and a different type of risk. For example, mental health chatbot bills typically focus on preventing AI systems from providing therapy without licensed professional oversight. Meanwhile, roughly half of the chatbot-related bills introduced this year focus specifically on companion chatbots, reflecting concern about systems designed to simulate interpersonal relationships with users, especially minors.

    At the same time, states are experimenting with a wide range of definitional approaches. Some definitions focus on technology, such as limiting scope to systems that use generative AI (NE LB 939). Others define chatbots based on capabilities or interaction behaviors, like whether the tool sustains dialogue about personal matters (MI SB 760, TN HB 1455). Still others define chatbots based on deployment context, such as whether a system is publicly accessible or marketed as a companion (U.S. S 3062, HI SB 2788). Legislators are not converging on a single definition but rather exploring multiple models simultaneously.

    Three Models for Companion Chatbot Definitions

    In the case of companion chatbots, three definitional approaches are beginning to emerge as particularly influential, as detailed in the FPF one-pager on chatbot definitions.

    Why Definitions Matter for Regulatory Scope

    These definitional differences matter because they determine who must comply with a law and who does not. For example, a definition that focuses on conversational capability may capture general-purpose assistants, tutoring tools, or wellness applications even when companionship is not their primary function.

    To narrow scope, many bills, like California’s SB 243 (enacted last year), include carveouts for tools such as customer service systems, research assistants, or workplace productivity tools. While these exclusions may reduce the likelihood that certain tools fall in scope, they also introduce interpretive questions. Chatbots often serve multiple purposes simultaneously: a chatbot might act as a customer support tool but also answer general informational questions or engage in broader dialogue. In these cases, it may be unclear when a system is “used for customer service” and when it becomes a more general conversational chatbot. Many proposals leave the issue unresolved, for example, by not specifying whether exclusions apply only when a system’s primary purpose falls within those categories.

    Themes Within Chatbot Legislation

    Chatbot bills vary widely not only in how they define chatbots, but also in the substantive regulatory requirements they propose. Still, several common policy themes are beginning to emerge. Across proposals introduced in 2026, six broad regulatory themes appear: transparency; age assurance and minors’ access controls; content safety and harm prevention; professional licensure and regulated services; data protection; and liability and enforcement.

    These provisions reflect a notable shift from the first generation of chatbot legislation. Early laws such as California SB 243 and New York S-3008C focused primarily on disclosure that users were interacting with AI and basic safety protocols, such as providing crisis hotline resources when users express suicidal ideation. (FPF previously analyzed SB 243 in an earlier blog.)

    In 2026, however, lawmakers appear to be treating chatbot legislation as a broader regulatory vehicle. Bills now frequently incorporate issues beyond disclosure and companion AI safety, including restrictions on engagement optimization, data governance provisions, and even regulatory sandbox programs (CT SB 86). In some cases, this expansion has prompted debate about whether chatbot bills implicate speech and raise potential First Amendment questions. For instance, many chatbot proposals would require recurring disclosures during conversations or mandate reporting about specific categories of user speech (e.g., statements of suicidal ideation), raising questions about compelled speech and editorial discretion. As detailed in the chatbot provisions chart, the six most common regulatory provisions appearing across proposed chatbot legislation include:

    1. Transparency: Nearly every chatbot bill includes a non-human disclosure requirement, mandating that operators inform users they are interacting with an AI system rather than a human. Most proposals require “clear and conspicuous” disclosure, though timing and format vary. Some require disclosure only at the start of an interaction or every three hours (PA SB 1090), while others require persistent reminders during conversations every hour or every thirty minutes (SC HB 5138). A smaller subset goes further by requiring transparency reporting, such as public disclosures about safety protocols or incidents (WA HB 2225, UT HB 438).
    2. Age Assurance and Minors’ Access Controls: Youth safety has become a central focus of chatbot legislation in 2026. Several proposals require operators to determine whether a user is a minor and impose additional safeguards. Approaches vary widely: some bills require age verification (GA SB 540, SD SB 168), others restrict or prohibit minors’ access to certain content (IA SF 2417), and some require parental consent or monitoring tools (AZ HB 2311). Notably, none of the chatbot laws enacted in 2025 (and few bills advancing in 2026) establish robust, standalone age verification systems.
    3. Content Safety and Harm Prevention: Almost every chatbot bill advancing in 2026 incorporates harm detection and response protocols. Like the California and New York laws, many require operators to provide crisis resources, such as suicide hotline referrals, when detecting indicators of self-harm. A growing number also address anthropomorphic or manipulative interactions, including restrictions on emotional deception or features designed to foster dependency, like rewarding prolonged interaction (HI HB 2502, OR SB 1546).
    4. Professional Licensure and Regulated Services: Another category of provisions addresses the use of chatbots to deliver services traditionally regulated through professional licensure. Several laws prohibit AI systems from diagnosing, treating, or representing themselves as licensed professionals, particularly in mental or behavioral healthcare (VA HB 669, TN HB 1470). Others allow AI tools to assist licensed professionals but require human oversight of, or transparency about, recommendations or treatment plans (VA HB 668, RI HB 7538).
    5. Data Protection: Chatbot legislation is also beginning to incorporate data governance requirements, particularly around conversational logs and sensitive user information. These proposals include restrictions on collecting, sharing, or selling chatbot input data, along with requirements for data minimization or deletion (MD SB 827). Some bills also restrict the use of minors’ data for AI training or advertising (UT HB 438).
    6. Liability and Enforcement: Most proposals grant enforcement authority to state attorneys general and establish civil penalties for violations. Some also introduce private rights of action (OR SB 1546), allowing individuals to bring lawsuits directly, as seen in California SB 243. A smaller number go further, establishing non-disclaimable liability for certain harms involving minors or creating criminal penalties for specific chatbot behaviors (TN SB 1493).

    Litigation Is Shaping the Legislative Agenda

    One notable feature of the 2026 chatbot landscape is how closely these proposed provisions mirror themes emerging from recent chatbot litigation and enforcement, such as Commonwealth of Kentucky v. Character Technologies Inc. and Garcia v. Character Technologies Inc. et al.

    Across lawsuits and investigations2, several recurring concerns appear.

    Many of these concerns are now directly reflected in legislative proposals, particularly those targeting engagement optimization and emotional manipulation (WA HB 2225, OR SB 1546, IA SF 2417, GA SB 540, among others). At the same time, most proposed safety interventions remain reactive rather than preventative. Many bills require chatbots to provide crisis resources once a user expresses distress, but none mandate features that would automatically terminate risky conversations or set session limits.

    Where Chatbot Legislation May Go Next

    As more states move from proposal to enactment in 2026, the coming months will provide an early signal of which legislative approaches to chatbot governance will ultimately prevail.

    Much of this experimentation is happening at the state level, where lawmakers are advancing a wide range of chatbot definitions and regulatory approaches. But the conversation is increasingly moving to the federal stage as well. Recent activity in the U.S. House to amend the KIDS Act, including the addition of the SAFE Bots Act establishing requirements for AI chatbots interacting with minors, demonstrates that chatbots are now firmly on the national policy agenda, even as the federal administration has expressed opposition to certain state regulatory efforts in this space.

    Still, the regulatory picture remains unsettled. Several proposals gaining traction this year introduce provisions that were largely absent from the first generation of chatbot laws, including restrictions on engagement optimization practices, parental control tools for minors’ chatbot interactions and data, and limits on the use of conversational data for advertising or training purposes. As these bills move through legislatures, the next few months will help clarify which of these emerging approaches are most likely to shape the next phase of chatbot governance.

    1. For more information on these laws and other enacted AI legislation, see FPF’s blog: From Proposal to Passage: Enacted U.S. AI Laws 2023-2025 ↩︎
    2. There have been 13 cases filed to date, most brought by parents of minors on behalf of children who engaged in or attempted self-harm following interactions with AI chatbots. ↩︎

    Design elements of these resources were generated with the assistance of AI and reviewed by FPF.