Future of Privacy Forum Launches the FPF Center for Artificial Intelligence

The FPF Center for Artificial Intelligence will serve as a catalyst for AI policy and compliance leadership globally, advancing responsible data and AI practices for public and private stakeholders

Today, the Future of Privacy Forum (FPF) launched the FPF Center for Artificial Intelligence, established to better serve policymakers, companies, non-profit organizations, civil society, and academics as they navigate the challenges of AI policy and governance. The Center will expand FPF’s long-standing AI work, introduce large-scale novel research projects, and serve as a source for trusted, nuanced, nonpartisan, and practical expertise. 

The Center’s work will be international in scope, as AI continues to be deployed rapidly around the world. Cities, states, countries, and international bodies are already grappling with implementing laws and policies to manage the risks. “Data, privacy, and AI are intrinsically interconnected issues that we have been working on at FPF for more than 15 years, and we remain dedicated to collaborating across the public and private sectors to promote their ethical, responsible, and human-centered use,” said Jules Polonetsky, FPF’s Chief Executive Officer. “But we have reached a tipping point in the development of the technology that will affect future generations for decades to come. At FPF, the word Forum is a core part of our identity. We are a trusted convener positioned to build bridges between stakeholders globally, and we will continue to do so under the new Center for AI, which will sit within FPF.”

The Center will help the organization’s 220+ members navigate AI through the development of best practices, research, legislative tracking, thought leadership, and public-facing resources. It will be a trusted evidence-based source of information for policymakers, and it will collaborate with academia and civil society to amplify relevant research and resources. 

“Although AI is not new, we have reached an unprecedented moment in the development of the technology that marks a true inflection point. The complexity, speed and scale of data processing that we are seeing in AI systems can be used to improve people’s lives and spur a potential leapfrogging of societal development, but with that increased capability comes associated risks to individuals and to institutions,” said Anne J. Flanagan, Vice President for Artificial Intelligence at FPF. “The FPF Center for AI will act as a collaborative force for shared knowledge between stakeholders to support the responsible development of AI, including its fair, safe, and equitable use.”

The Center will officially launch at FPF’s inaugural summit, DC Privacy Forum: AI Forward. The in-person, public-facing summit will feature high-profile representatives from the public and private sectors in the world of privacy, data, and AI.

FPF’s new Center for Artificial Intelligence will be supported by a Leadership Council of experts from around the globe. The Council will consist of members from industry, academia, civil society, and current and former policymakers.

See the full list of founding FPF Center for AI Leadership Council members here.

I am excited about the launch of the Future of Privacy Forum’s new Center for Artificial Intelligence and honored to be part of its leadership council. This announcement builds on many years of partnership and collaboration between Workday and FPF to develop privacy best practices and advance responsible AI, which has already generated meaningful outcomes, including last year’s launch of best practices to foster trust in this technology in the workplace.  I look forward to working alongside fellow members of the Council to support the Center’s mission to build trust in AI and am hopeful that together we can map a path forward to fully harness the power of this technology to unlock human potential.

Barbara Cosgrove, Vice President, Chief Privacy Officer, Workday

I’m honored to be a founding member of the Leadership Council of the Future of Privacy Forum’s new Center for Artificial Intelligence. AI’s impact transcends borders, and I’m excited to collaborate with a diverse group of experts around the world to inform companies, civil society, policymakers, and academics as they navigate the challenges and opportunities of AI governance, policy, and existing data protection regulations.

Dr. Gianclaudio Malgieri, Associate Professor of Law & Technology at eLaw, University of Leiden

“As we enter this era of AI, we must require the right balance between allowing innovation to flourish and keeping enterprises accountable for the technologies they create and put on the market. IBM believes it will be crucial that organizations such as the Future of Privacy Forum help advance responsible data and AI policies, and we are proud to join others in industry and academia as part of the Leadership Council.”

Learn more about the FPF Center for AI here.

About Future of Privacy Forum (FPF)

The Future of Privacy Forum (FPF) is a global non-profit organization that brings together academics, civil society, government officials, and industry to evaluate the societal, policy, and legal implications of data use, identify the risks, and develop appropriate protections. 

FPF believes technology and data can benefit society and improve lives if the right laws, policies, and rules are in place. FPF has offices in Washington D.C., Brussels, Singapore, and Tel Aviv. Learn more at fpf.org.

FPF Develops Checklist & Guide to Help Schools Vet AI Tools for Legal Compliance

FPF’s Youth and Education team has developed a checklist and accompanying policy brief to help schools vet generative AI tools for compliance with student privacy laws. Vetting Generative AI Tools for Use in Schools is a crucial resource as the use of generative AI tools continues to increase in educational settings. It is critical for school leaders to understand how existing federal and state student privacy laws, such as the Family Educational Rights and Privacy Act (FERPA), apply to the complexities of machine learning systems in order to protect student privacy. With these resources, FPF aims to provide much-needed clarity and guidance to educational institutions grappling with these issues.

Click here to access the checklist and policy brief.

“AI technology holds immense promise in enhancing educational experiences for students, but it must be implemented responsibly and ethically,” said David Sallay, the Director for Youth & Education Privacy at the Future of Privacy Forum. “With our new checklist, we aim to empower educators and administrators with the knowledge and tools necessary to make informed decisions when selecting generative AI tools for classroom use while safeguarding student privacy.”

The checklist, designed specifically for K-12 schools, outlines key considerations when incorporating generative AI into a school or district’s edtech vetting process.

These include: 

By prioritizing these steps, educational institutions can promote transparency and protect student privacy while maximizing the benefits of technology-driven learning experiences for students. 

The in-depth policy brief outlines the relevant laws and policies a school should consider, the unique compliance considerations of generative AI tools (including data collection, transparency and explainability, product improvement, and high-risk decision-making), and their most likely use cases (student, teacher, and institution-focused).

The brief also encourages schools and districts to update their existing edtech vetting policies to address the unique considerations of AI technologies (or to create a comprehensive policy if one does not already exist) instead of creating a separate vetting process for AI. It also highlights the role that state legislatures can play in ensuring the efficiency of school edtech vetting and oversight and calls on vendors to be proactively transparent with schools about their use of AI.


Check out the LinkedIn Live with CEO Jules Polonetsky and Youth & Education Director David Sallay about the Checklist and Policy Brief.

To read more of the Future of Privacy Forum’s youth and student privacy resources, visit www.StudentPrivacyCompass.org

FPF Releases “The Playbook: Data Sharing for Research” Report and Infographic

Today, the Future of Privacy Forum (FPF) published “The Playbook: Data Sharing for Research,” a report on best practices for instituting research data-sharing programs between corporations and research institutions. FPF also developed a summary of recommendations from the full report.

Facilitating data sharing for research purposes between corporate data holders and academia can unlock new scientific insights and drive progress in public health, education, social science, and many other fields for the betterment of broader society. Academic researchers use this data to consider consumer, commercial, and scientific questions at a scale they cannot reach using conventional research data-gathering techniques alone. This data has also helped researchers answer questions on topics ranging from bias in targeted advertising and the influence of misinformation on election outcomes to early diagnosis of diseases through data collected by fitness and health apps.

The playbook addresses vital steps for data management, sharing, and program execution between companies and researchers. Creating a data-sharing ecosystem that positively advances scientific research requires a better understanding of the established risks, opportunities to address challenges, and the diverse stakeholders involved in data-sharing decisions. This report aims to encourage safe, responsible data-sharing between industries and researchers.

“Corporate data sharing connects companies with research institutions, by extension increasing the quantity and quality of research for social good,” said Shea Swauger, Senior Researcher for Data Sharing and Ethics. “This Playbook showcases the importance, and advantages, of having appropriate protocols in place to create safe and simple data sharing processes.”

In addition to the Playbook, FPF created a companion infographic summarizing the benefits, challenges, and opportunities of data sharing for research outlined in the larger report.


As a longtime advocate for facilitating the privacy-protective sharing of data by industry to the research community, FPF is proud to have created this set of best practices for researchers, institutions, policymakers, and data-holding companies. In addition to the Playbook, the Future of Privacy Forum has also opened nominations for its annual Award for Research Data Stewardship.

“Our goal with these initiatives is to celebrate the successful research partnerships transforming how corporations and researchers interact with each other,” Swauger said. “Hopefully, we can continue to engage more audiences and encourage others to model their own programs with solid privacy safeguards.”

Shea Swauger, Senior Researcher for Data Sharing and Ethics, Future of Privacy Forum

Established by FPF in 2020 with support from The Alfred P. Sloan Foundation, the Award for Research Data Stewardship recognizes excellence in the privacy-protective stewardship of corporate data shared with academic researchers. The call for nominations is open and closes on Tuesday, January 17, 2023. To submit a nomination, visit the FPF site.

FPF has also launched a newly formed Ethics and Data in Research Working Group. This group receives late-breaking analyses of emerging US legislation affecting research and data, meets to discuss the ethical and technological challenges of conducting research, and collaborates to create best practices to protect privacy, decrease risk, and increase data sharing for research, partnerships, and infrastructure. Learn more and join here.

FPF Testifies Before House Energy and Commerce Subcommittee, Supporting Congress’s Efforts on the “American Data Privacy and Protection Act”

This week, FPF’s Senior Policy Counsel Bertram Lee testified before the U.S. House Energy and Commerce Committee’s Subcommittee on Consumer Protection and Commerce at its hearing, “Protecting America’s Consumers: Bipartisan Legislation to Strengthen Data Privacy and Security,” regarding the bipartisan, bicameral privacy discussion draft bill, the “American Data Privacy and Protection Act” (ADPPA). FPF has a history of supporting the passage of a comprehensive federal consumer privacy law, which would provide businesses and consumers alike with the benefit of clear national standards and protections.

Lee opened his testimony by applauding the Committee for its efforts towards comprehensive federal privacy legislation and emphasized that the “time is now” for its passage. As written, the ADPPA would address gaps in the sectoral approach to consumer privacy, establish strong national civil rights protections, and create new rights and safeguards for the protection of sensitive personal information.

“The ADPPA is more comprehensive in scope, inclusive of civil rights protections, and provides individuals with more varied enforcement mechanisms in comparison to some states’ current privacy regimes,” Lee said in his testimony. “It also includes corporate accountability mechanisms, such as requiring privacy designations, data security offices, and executive certifications showing compliance, which is missing from current states’ laws. Notably, the ADPPA also requires ‘short-form’ privacy notices to aid consumers of how their data will be used by companies and their rights — a provision that is not found in any state law.” 

Lee’s testimony also provided four recommendations to strengthen the bill, which include: 

Many of the recommendations would ensure that the legislation gives individuals meaningful privacy rights and places clear obligations on businesses and other organizations that collect, use and share personal data. The legislation would expand civil rights protections for individuals and communities harmed by algorithmic discrimination as well as require algorithmic assessments and evaluations to better understand how these technologies can impact communities. 

The submitted testimony and a video of the hearing can be found on the House Committee on Energy & Commerce site.

Reading the Signs: the Political Agreement on the New Transatlantic Data Privacy Framework

The President of the United States, Joe Biden, and the President of the European Commission, Ursula von der Leyen, announced last Friday, in Brussels, a political agreement on a new Transatlantic framework to replace the Privacy Shield. 

This marks a significant elevation of the topic within Transatlantic affairs compared to the 2016 announcement of a new deal to replace the Safe Harbor framework. Back then, it was Commission Vice-President Andrus Ansip and Commissioner Vera Jourova who announced, at the beginning of February 2016, that a deal had been reached.

The draft adequacy decision was only published a month after the announcement, and the adequacy decision was adopted six months later, in July 2016. Therefore, it should not be at all surprising if another six months (or more!) pass before the adequacy decision for the new Framework produces legal effects and is actually able to support transfers from the EU to the US, especially since the US side still has to issue at least one Executive Order to provide for the agreed-upon new safeguards.

This means that transfers of personal data from the EU to the US may still be blocked in the coming months, possibly without a lawful alternative to continue them, as a consequence of Data Protection Authorities (DPAs) enforcing Chapter V of the General Data Protection Regulation in light of the Schrems II judgment of the Court of Justice of the EU, whether as part of the 101 noyb complaints submitted in August 2020, which are slowly starting to be resolved, or as part of other individual complaints and court cases.

After the agreement “in principle” was announced at the highest possible political level, EU Justice Commissioner Didier Reynders doubled down on the point that the agreement was reached “on the principles” for a new framework, rather than on its details. Later on, he also gave credit to Commerce Secretary Gina Raimondo and US Attorney General Merrick Garland for their hands-on involvement in working towards this agreement.

In fact, “in principle” became the leitmotif of the announcement: the first EU Data Protection Authority to react was the European Data Protection Supervisor, who wrote that he “welcomes, in principle” the announcement of a new EU-US transfers deal – “The details of the new agreement remain to be seen. However, EDPS stresses that a new framework for transatlantic data flows must be sustainable in light of requirements identified by the Court of Justice of the EU”.

Of note, there is no catchy name for the new transfers agreement, which was referred to as the “Trans-Atlantic Data Privacy Framework”. Nonetheless, FPF’s CEO Jules Polonetsky submits the “TA DA!” Agreement, and he has my vote. For his full statement on the political agreement being reached, see our release here.

Some details of the “principles” agreed on were published hours after the announcement, both by the White House and by the European Commission. Below are a couple of things that caught my attention from the two brief Factsheets.

The US has committed to “implement new safeguards” to ensure that SIGINT activities are “necessary and proportionate” (an EU law standard – see Article 52 of the EU Charter on how the exercise of fundamental rights can be limited) in the pursuit of defined national security objectives. Therefore, the new agreement is expected to address the lack of safeguards for government access to personal data specifically outlined by the CJEU in the Schrems II judgment.

The US also committed to creating a “new mechanism for the EU individuals to seek redress if they believe they are unlawfully targeted by signals intelligence activities”. This new mechanism was characterized by the White House as having “independent and binding authority”. Per the White House, this redress mechanism includes “a new multi-layer redress mechanism that includes an independent Data Protection Review Court that would consist of individuals chosen from outside the US Government who would have full authority to adjudicate claims and direct remedial measures as needed”. The EU Commission mentioned in its own Factsheet that this would be a “two-tier redress system”. 

Importantly, the White House mentioned in the Factsheet that oversight of intelligence activities will also be boosted – “intelligence agencies will adopt procedures to ensure effective oversight of new privacy and civil liberties standards”. Oversight and redress are different issues and are both equally important – for details, see this piece by Christopher Docksey. However, they tend to be thought of as one and the same, so the fact that they are addressed separately in this announcement is significant.

One of the remarkable things about the White House announcement is that it includes several EU law-specific concepts: “necessary and proportionate”, “privacy, data protection” mentioned separately, “legal basis” for data flows. In another nod to the European approach to data protection, the entire issue of ensuring safeguards for data flows is framed as more than a trade or commerce issue – with references to a “shared commitment to privacy, data protection, the rule of law, and our collective security as well as our mutual recognition of the importance of trans-Atlantic data flows to our respective citizens, economies, and societies”.

Last, but not least, Europeans have always framed their concerns related to surveillance and data protection as fundamental rights concerns. The US also gives a nod to this approach by referring a couple of times to “privacy and civil liberties” safeguards (adding thus the “civil liberties” dimension) that will be “strengthened”. All of these are positive signs of a “rapprochement” of the two legal systems and are certainly an improvement over the “commerce”-focused approach of the past on the US side.

Finally, it should also be noted that the new framework will continue to be a self-certification scheme managed by the US Department of Commerce.

What does all of this mean in practice? As the White House details, this means that the Biden Administration will have to adopt (at least) an Executive Order (EO) that includes all these commitments and on the basis of which the European Commission will draft an adequacy decision.

Thus, expectations are high following the White House and European Commission Factsheets, and the entire privacy and data protection community is waiting to see further details.

In the meantime, I’ll leave you with an observation made by my colleague, Amie Stepanovich, VP for US Policy at FPF, who highlighted that Section 702 of the Foreign Intelligence Surveillance Act (FISA) is set to expire on December 31, 2023. This presents Congress with an opportunity to act, building on the extensive work done by the US Government in the context of the Transatlantic Data Transfers debate.

Privacy Best Practices for Rideshare Drivers Using Dashcams

FPF & Uber Publish Guide Highlighting Privacy Best Practices for Drivers who Record Video and Audio on Rideshare Journeys

FPF and Uber have created a guide for US-based rideshare drivers who install “dashcams” – video cameras mounted on a vehicle’s dashboard or windshield. Many drivers install dashcams to improve safety, security, and accountability; the cameras can capture crashes or other safety-related incidents outside and inside cars. Dashcam footage can be helpful to drivers, passengers, insurance companies, and others when adjudicating legal claims. At the same time, dashcams can pose substantial privacy risks if appropriate safeguards are not in place to limit the collection, use, and disclosure of personal data. 

Dashcams typically record video outside a vehicle. Many dashcams also record in-vehicle audio and some record in-vehicle video. Regardless of the particular device used, ride-hail drivers who use dashcams must comply with applicable audio and video recording laws.

The guide explains relevant laws and provides practical tips to help drivers be transparent, limit data use and sharing, retain video and audio only for practical purposes, and use strict security controls. The guide highlights ways that drivers can employ physical signs, in-app notices, and other means to ensure passengers are informed about dashcam use and can make meaningful choices about whether to travel in a dashcam-equipped vehicle. Drivers seeking advice concerning specific legal obligations or incidents should consult legal counsel.

Privacy best practices for dashcams include: 

  1. Give individuals notice that they are being recorded
    • Place recording notices inside and on the vehicle.
    • Mount the dashcam in a visible location.
    • Consider, in some situations, giving an oral notification that recording is taking place.
    • Determine whether the ride sharing service provides recording notifications in the app, and utilize those in-app notices.
  2. Only record audio and video for defined, reasonable purposes
    • Only keep recordings for as long as needed for the original purpose.
    • Inform passengers as to why video and/or audio is being recorded.
  3. Limit sharing and use of recorded footage
    • Only share video and audio with third parties for relevant reasons that align with the original reason for recording.
    • Thoroughly review the rideshare service’s privacy policy and community guidelines if using an app-based rideshare service, and be aware that many rideshare companies maintain policies against widely disseminating recordings.
  4. Safeguard and encrypt recordings and delete unused footage
    • Identify dashcam vendors that provide the highest privacy and security safeguards.
    • Carefully read the terms and conditions when buying dashcams to understand the data flows.

Uber will make these best practices available to drivers in its app and on its website.

Many ride-hail drivers use dashcams in their cars, and the guidance and best practices published today provide practical guidance to help drivers implement privacy protections. But driver guidance is only one aspect of ensuring individuals’ privacy and security when traveling. Dashcam manufacturers must implement privacy-protective practices by default and provide easy-to-use privacy options. At the same time, ride-hail platforms must provide drivers with the appropriate tools to notify riders, and carmakers must safeguard drivers’ and passengers’ data collected by OEM devices.

In addition, dashcams are only one example of increasingly sophisticated sensors appearing in passenger vehicles as part of driver monitoring systems and related technologies. Further work is needed to apply comprehensive privacy safeguards to emerging technologies across the connected vehicle sector, from carmakers and rideshare services to mobility services providers and platforms. Comprehensive federal privacy legislation would be a good start. In the absence of Congressional action, FPF is doing further work to identify key privacy risks and mitigation strategies for the broader class of driver monitoring systems, which raise questions beyond the scope of this dashcam guide.

12th Annual Privacy Papers for Policymakers Awardees Explore the Nature of Privacy Rights & Harms

The winners of the 12th annual Future of Privacy Forum (FPF) Privacy Papers for Policymakers Award ask big questions about what should be the foundational elements of data privacy and protection and who will make key decisions about the application of privacy rights. Their scholarship will inform policy discussions around the world about privacy harms, corporate responsibilities, oversight of algorithms, and biometric data, among other topics.

“Policymakers and regulators in many countries are working to advance data protection laws, often seeking in particular to combat discrimination and unfairness,” said FPF CEO Jules Polonetsky. “FPF is proud to highlight independent researchers tackling big questions about how individuals and society relate to technology and data.”

This year’s papers also explore smartphone platforms as privacy regulators, the concept of data loyalty, and global privacy regulation. The award recognizes leading privacy scholarship that is relevant to policymakers in the U.S. Congress, at U.S. federal agencies, and among international data protection authorities. The winning papers will be presented at a virtual event on February 10, 2022. 

The winners of the 2022 Privacy Papers for Policymakers Award are:

From the record number of papers nominated this year, these six papers were selected by a diverse team of academics, advocates, and industry privacy professionals from FPF’s Advisory Board. The winning papers were chosen for research and proposed solutions that are relevant to policymakers and regulators in the U.S. and abroad.

In addition to the winning papers, FPF has selected two papers for Honorable Mention: Verification Dilemmas and the Promise of Zero-Knowledge Proofs by Kenneth Bamberger, University of California, Berkeley – School of Law; Ran Canetti, Boston University, Department of Computer Science, Boston University, Faculty of Computing and Data Science, Boston University, Center for Reliable Information Systems and Cybersecurity; Shafi Goldwasser, University of California, Berkeley – Simons Institute for the Theory of Computing; Rebecca Wexler, University of California, Berkeley – School of Law; and Evan Zimmerman, University of California, Berkeley – School of Law; and A Taxonomy of Police Technology’s Racial Inequity Problems by Laura Moy, Georgetown University Law Center.

FPF also selected a paper for the Student Paper Award, A Fait Accompli? An Empirical Study into the Absence of Consent to Third Party Tracking in Android Apps by Konrad Kollnig and Reuben Binns, University of Oxford; Pierre Dewitte, KU Leuven; Max van Kleek, Ge Wang, Daniel Omeiza, Helena Webb, and Nigel Shadbolt, University of Oxford. The Student Paper Award Honorable Mention was awarded to Yeji Kim, University of California, Berkeley – School of Law, for her paper, Virtual Reality Data and Its Privacy Regulatory Challenges: A Call to Move Beyond Text-Based Informed Consent.

The winning authors will join FPF staff to present their work at a virtual event with policymakers from around the world, academics, and industry privacy professionals. The event will be held on February 10, 2022, from 1:00 – 3:00 PM EST. The event is free and open to the general public. To register for the event, visit https://bit.ly/3qmJdL2.

Organizations must lead with privacy and ethics when researching and implementing neurotechnology: FPF and IBM Live event and report release

The Future of Privacy Forum (FPF) and the IBM Policy Lab released recommendations for promoting privacy and mitigating risks associated with neurotechnology, specifically brain-computer interfaces (BCIs). The new report provides developers and policymakers with actionable ways this technology can be implemented while protecting the privacy and rights of its users.

“We have a prime opportunity now to implement strong privacy and human rights protections as brain-computer interfaces become more widely used,” said Jeremy Greenberg, Policy Counsel at the Future of Privacy Forum. “Among other uses, these technologies have tremendous potential to treat people with diseases and conditions like epilepsy or paralysis and make it easier for people with disabilities to communicate, but these benefits can only be fully realized if meaningful privacy and ethical safeguards are in place.”

Brain-computer interfaces are computer-based systems that are capable of directly recording, processing, analyzing, or modulating human brain activity. The sensitivity of data that BCIs collect and the capabilities of the technology raise concerns over consent, as well as the transparency, security, and accuracy of the data. The report offers a number of policy and technical solutions to mitigate the risks of BCIs and highlights their positive uses.

“Emerging innovations like neurotechnology hold great promise to transform healthcare, education, transportation, and more, but they need the right guardrails in place to protect individuals’ privacy,” said IBM Chief Privacy Officer Christina Montgomery. “Working together with the Future of Privacy Forum, the IBM Policy Lab is pleased to release a new framework to help policymakers and businesses navigate the future of neurotechnology while safeguarding human rights.”

FPF and IBM have outlined several key policy recommendations to mitigate the privacy risks associated with BCIs, including:

FPF and IBM have also included several technical recommendations for BCI devices, including:

FPF-curated educational resources, policy & regulatory documents, academic papers, thought pieces, and technical analyses regarding brain-computer interfaces are available here.

Read FPF’s four-part series on Brain-Computer Interfaces (BCIs), providing an overview of the technology, use cases, privacy risks, and proposed recommendations for promoting privacy and mitigating risks associated with BCIs.

FPF Launches Asia-Pacific Region Office, Global Data Protection Expert Clarisse Girot Leads Team

The Future of Privacy Forum (FPF) has appointed Clarisse Girot, PhD, LLM, an expert on Asian and European privacy legislation, as Director of its new FPF Asia-Pacific office, based in Singapore. This new office expands FPF’s international reach in Asia and complements FPF’s offices in the U.S., Europe, and Israel, as well as partnerships around the globe.
 
Dr. Clarisse Girot is a privacy professional with over twenty years of experience in the privacy and data protection fields. Since 2017, Clarisse has been leading the Asian Business Law Institute’s (ABLI) Data Privacy Project, focusing on the regulations on cross-border data transfers in 14 Asian jurisdictions. Prior to her time at ABLI, Clarisse served as the Counsellor to the President of the French Data Protection Authority (CNIL) and Chair of the Article 29 Working Party. She previously served as head of CNIL’s Department of European and International Affairs, where she sat on the Article 29 Working Party, the group of EU Data Protection Authorities, and was involved in major international cases in data protection and privacy.
 
“Clarisse is joining FPF at an important time for data protection in the Asia-Pacific region. The two most populous countries in the world, India and China, are introducing general privacy laws, and established data protection jurisdictions, like Singapore, Japan, South Korea, and New Zealand, have recently updated their laws,” said FPF CEO Jules Polonetsky. “Her extensive knowledge of privacy law will provide vital insights for those interested in compliance with regional privacy frameworks and their evolution over time.”
 
FPF Asia-Pacific will focus on several priorities through the end of the year, including hosting an event at this year’s Singapore Data Protection Week. The office will provide expertise in digital data flows and discuss emerging data protection issues in a way that is useful for regulators, policymakers, and legal professionals. Rajah & Tann Singapore LLP is supporting the work of the FPF Asia-Pacific office.
 
“The FPF global team will greatly benefit from the addition of Clarisse. She will advise FPF staff, advisory board members, and the public on the most significant privacy developments in the Asia-Pacific region, including data protection bills and cross-border data flows,” said Gabriela Zanfir-Fortuna, Director for Global Privacy at FPF. “Her past experience in both Asia and Europe gives her a unique ability to confront the most complex issues dealing with cross-border data protection.”
 
As over 140 countries have now enacted a privacy or data protection law, FPF continues to expand its international presence to help data protection experts grapple with the challenges of ensuring responsible uses of data. Following the appointment of Malavika Raghavan as Senior Fellow for India in 2020, the launch of the FPF Asia-Pacific office further expands FPF’s international reach.
 
Dr. Gabriela Zanfir-Fortuna leads FPF’s international efforts and works on global privacy developments and European data protection law and policy. The FPF Europe office is led by Dr. Rob van Eijk, who prior to joining FPF worked at the Dutch Data Protection Authority as Senior Supervision Officer and Technologist for nearly ten years. FPF has created thriving partnerships with leading privacy research organizations in the European Union, such as Dublin City University and the Brussels Privacy Hub of the Vrije Universiteit Brussel (VUB). FPF continues to serve as a leading voice in Europe on issues of international data flows, the ethics of AI, and emerging privacy issues. FPF Europe recently published a report comparing the regulatory strategy for 2021-2022 of 15 Data Protection Authorities to provide insights into the future of enforcement and regulatory action in the EU.
 
Outside of Europe, FPF has launched a variety of projects to advance tech policy leadership and scholarship in regions around the world, including Israel and Latin America. The work of the Israel Tech Policy Institute (ITPI), led by Managing Director Limor Shmerling Magazanik, includes publishing a report on AI Ethics in Government Services and organizing an OECD workshop with the Israeli Ministry of Health on access to health data for research.
 
In Latin America, FPF has partnered with the leading research association Data Privacy Brasil and provided in-depth analysis of Brazil’s LGPD privacy legislation and of various data privacy cases decided by the Brazilian Supreme Court. FPF recently organized a panel during the CPDP LatAm Conference, which explored the state of Latin American data protection laws alongside experts from Uber, the University of Brasilia, and the Interamerican Institute of Human Rights.
 

Read Dr. Girot’s Q&A on the FPF blog. Stay updated: Sign up for FPF Asia-Pacific email alerts.
 

FPF and Leading Health & Equity Organizations Issue Principles for Privacy & Equity in Digital Contact Tracing Technologies

With support from the Robert Wood Johnson Foundation, FPF engaged leaders within the privacy and equity communities to develop actionable guiding principles and a framework to help bolster the responsible implementation of digital contact tracing technologies (DCTT). Today, seven privacy, civil rights, and health equity organizations signed on to these guiding principles for organizations implementing DCTT.

“We learned early in our Privacy and Pandemics initiative that unresolved ethical, legal, social, and equity issues may challenge the responsible implementation of digital contact tracing technologies,” said Jules Polonetsky, CEO of the Future of Privacy Forum. “So we engaged leaders within the civil rights, health equity, and privacy communities to create a set of actionable principles to help guide organizations implementing digital contact tracing that respects individual rights.”

Contact tracing has long been used to monitor the spread of various infectious diseases. In light of COVID-19, governments and companies began deploying digital exposure notification using Bluetooth and geolocation data on mobile devices to boost contact tracing efforts and quickly identify individuals who may have been exposed to the virus. However, as DCTT begins to play an important role in public health, it is important to take necessary steps to ensure equity in access to DCTT and understand the societal risks and tradeoffs that might accompany its implementation today and in the future. Governance efforts that seek to better understand these risks will be better able to bolster public trust in DCTT technologies. 
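As background, most Bluetooth-based exposure notification designs avoid broadcasting a stable device identifier and instead rotate short-lived pseudonymous values derived from a random daily key. The sketch below is a simplified, hypothetical illustration of that general pattern in Python; the key size, rotation interval, and function names are assumptions for illustration only and do not describe the DCTT Principles or any particular deployed system.

    import hmac
    import hashlib
    import os
    import time

    # Illustrative only: rotating pseudonymous identifiers of the kind used by
    # many Bluetooth exposure notification designs. Real systems use different
    # key schedules and primitives; this sketch shows only the privacy idea of
    # broadcasting short-lived values instead of a stable device ID.

    ROTATION_SECONDS = 15 * 60  # assumed rotation interval (~15 minutes)

    def new_daily_key() -> bytes:
        """Generate a fresh random key each day, never tied to a user identity."""
        return os.urandom(16)

    def rolling_identifier(daily_key: bytes, now: float) -> bytes:
        """Derive the short-lived identifier for the current time interval.

        Nearby devices record only these rotating values, so their observations
        cannot easily be linked into a long-term profile of any one person.
        """
        interval = int(now // ROTATION_SECONDS)
        return hmac.new(daily_key, interval.to_bytes(8, "big"), hashlib.sha256).digest()[:16]

    if __name__ == "__main__":
        key = new_daily_key()
        print(rolling_identifier(key, time.time()).hex())  # changes every interval

Designs along these lines are one way the de-identification and data minimization principles listed below can be reflected in practice.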

“LGBT Tech is proud to have participated in the development of the Principles and Framework alongside FPF and other organizations. We are heartened to see that the focus of these principles is on historically underserved and under-resourced communities everywhere, like the LGBTQ+ community. We believe the Principles and Framework will help ensure that the needs and vulnerabilities of these populations are at the forefront during today’s pandemic and future pandemics.”

Carlos Gutierrez, Deputy Director and General Counsel, LGBT Tech

“If we establish practices that protect individual privacy and equity, digital contact tracing technologies could play a pivotal role in tracking infectious diseases,” said Dr. Rachele Hendricks-Sturrup, Research Director at the Duke-Margolis Center for Health Policy. “These principles allow organizations implementing digital contact tracing to take ethical and responsible approaches to how their technology collects, tracks, and shares personal information.”

FPF, together with Dialogue on Diversity, the National Alliance Against Disparities in Patient Health (NADPH), BrightHive, and LGBT Tech, developed the principles, which advise organizations implementing DCTT to commit to the following actions:

  1. Be Transparent About How Data Is Used and Shared.
  2. Apply Strong De-Identification Techniques and Solutions.
  3. Empower Users Through Tiered Opt-in/Opt-out Features and Data Minimization.
  4. Acknowledge and Address Privacy, Security, and Nondiscrimination Protection Gaps.
  5. Create Equitable Access to DCTT.
  6. Acknowledge and Address Implicit Bias Within and Across Public and Private Settings.
  7. Democratize Data for Public Good While Employing Appropriate Privacy Safeguards.
  8. Adopt Privacy-By-Design Standards That Make DCTT Broadly Accessible.

Additional supporters of these principles include the Center for Democracy and Technology and Human Rights First.

To learn more and sign on to the DCTT Principles visit fpf.org/DCTT.

Support for this program was provided by the Robert Wood Johnson Foundation. The views expressed here do not necessarily reflect the views of the Foundation.

Navigating Preemption through the Lens of Existing State Privacy Laws

This post is the second of two posts on federal preemption and enforcement in United States federal privacy legislation. See Preemption in US Privacy Laws (June 14, 2021).

In drafting a federal baseline privacy law in the United States, lawmakers must decide to what extent the law will override state and local privacy laws. In a previous post, we discussed a survey of 12 existing federal privacy laws passed between 1968 and 2003, and the extent to which they preempt similar state laws.

Another way to approach the same question, however, is to examine the hundreds of existing state privacy laws currently on the books in the United States. Conversations around federal preemption inevitably focus on comprehensive laws like the California Consumer Privacy Act, or the Virginia Consumer Data Protection Act — but there are hundreds of other state privacy laws on the books that regulate commercial and government uses of data. 

In reviewing existing state laws, we find that they can be categorized usefully into: laws that complement heavily regulated sectors (such as health and finance); laws of general applicability; common law; laws governing state government activities (such as schools and law enforcement); comprehensive laws; longstanding or narrowly applicable privacy laws; and emerging sectoral laws (such as biometrics or drones regulations). As a resource, we recommend: Robert Ellis Smith, Compilation of State and Federal Privacy Laws (last supplemented in 2018). 

  1. Heavily Regulated Sectoral Silos. Most federal proposals for a comprehensive privacy law would not supersede other existing federal laws that contain privacy requirements for businesses, such as the Health Insurance Portability and Accountability Act (HIPAA) or the Gramm-Leach-Bliley Act (GLBA). As a result, a new privacy law should probably not preempt state sectoral laws that: (1) supplement their federal counterparts and (2) were intentionally not preempted by those federal regimes. In many cases, robust compliance regimes have been built around federal and state parallel requirements, creating entrenched privacy expectations, privacy tools, and compliance practices for organizations (“lock in”).
  2. Laws of General Applicability. All 50 states have laws barring unfair and deceptive commercial and trade practices (UDAP), as well as generally applicable laws against fraud, unconscionable contracts, and other consumer protections. In cases where violations involve the misuse of personal information, such claims could be inadvertently preempted by a national privacy law.
  3. State Common Law. Privacy claims have been evolving in US common law over the last hundred years, and claims vary from state to state. A federal privacy law might preempt (or not preempt) claims brought under theories of negligence, breach of contract, product liability, invasions of privacy, or other “privacy torts.”
  4. State Laws Governing State Government Activities. In general, states retain the right to regulate their own government entities, and a commercial baseline privacy law is unlikely to affect such state privacy laws. These include, for example, state “mini Privacy Acts” applying to state government agencies’ collection of records, state privacy laws applicable to public schools and school districts, and state regulations involving law enforcement — such as government facial recognition bans.
  5. Comprehensive or Non-Sectoral State Laws. Lawmakers considering the extent of federal preemption should take extra care to consider the effect on different aspects of omnibus or comprehensive consumer privacy laws, such as the California Consumer Privacy Act (CCPA), the Colorado Privacy Act, and the Virginia Consumer Data Protection Act. In addition, however, there are a number of other state privacy laws that can be considered “non-sectoral” because they apply broadly to businesses that collect or use personal information. These include, for example, CalOPPA (requiring commercial privacy policies), the California “Shine the Light” law (requiring disclosures from companies that share personal information for direct marketing), data breach notification laws, and data disposal laws.
  6. Longstanding, Narrowly Applicable State Privacy Laws. Many states have relatively long-standing privacy statutes on the books that govern narrow use cases, such as: state laws governing library records, social media password laws, mugshot laws, anti-paparazzi laws, state laws governing audio surveillance between private parties, and laws governing digital assets of decedents. In many cases, such laws could be expressly preserved or incorporated into a federal law.
  7. Emerging Sectoral and Future-Looking Privacy Laws. New state laws have emerged in recent years in response to novel concerns, including for: biometric data; drones; connected and autonomous vehicles; the Internet of Things; data broker registration; and disclosure of intimate images. This trend is likely to continue, particularly in the absence of a federal law.

Congressional intent is the “ultimate touchstone” of preemption. Lawmakers should consider long-term effects on current and future state laws, including how they will be impacted by a preemption provision, as well as how they might be expressly preserved through a Savings Clause. In order to help build consensus, lawmakers should work with stakeholders and experts in the numerous categories of laws discussed above, to consider how they might be impacted by federal preemption.

ICYMI: Read the first blog in this series, Preemption in US Privacy Laws.

Manipulative Design: Defining Areas of Focus for Consumer Privacy

In consumer privacy, the phrase “dark patterns” is everywhere. Emerging from a wide range of technical and academic literature, it now appears in at least two US privacy laws: the California Privacy Rights Act and the Colorado Privacy Act (which, if signed by the Governor, will come into effect in 2025).

Under both laws, companies will be prohibited from using “dark patterns,” or “user interface[s] designed or manipulated with the substantial effect of subverting or impairing user autonomy, decision‐making, or choice,” to obtain user consent in certain situations–for example, for the collection of sensitive data.

When organizations give individuals choices, some forms of manipulation have long been barred by consumer protection laws, with the Federal Trade Commission and state Attorneys General prohibiting companies from deceiving or coercing consumers into taking actions they did not intend or striking bargains they did not want. But consumer protection law does not typically prohibit organizations from persuading consumers to make a particular choice. And it is often unclear where the lines fall between cajoling, persuading, pressuring, nagging, annoying, or bullying consumers. The California and Colorado laws seek to do more than merely bar deceptive practices; they prohibit design that “subverts or impairs user autonomy.”

What does it mean to subvert user autonomy, if a design does not already run afoul of traditional consumer protection law? Just as in the physical world, the design of digital platforms and services always influences behavior — what to pay attention to, what to read and in what order, how much time to spend, what to buy, and so on. To paraphrase Harry Brignull (credited with coining the term), not everything “annoying” can be a dark pattern. Some examples of dark patterns are both clear and harmful, such as a design that tricks users into making recurring payments, or a service that offers a “free trial” and then makes it difficult or impossible to cancel. In other cases, the presence of “nudging” may be clear, but harms may be less clear, such as A/B testing which color shades are most effective at encouraging sales. Still others fall in a legal grey area: for example, is it ever appropriate for a company to repeatedly “nag” users to make a choice that benefits the company, with little or no accompanying benefit to the user?

In Fall 2021, Future of Privacy Forum will host a series of workshops with technical, academic, and legal experts to help define clear areas of focus for consumer privacy, and guidance for policymakers and legislators. These workshops will feature experts on manipulative design in at least three contexts of consumer privacy: (1) Youth & Education; (2) Online Advertising and US Law; and (3) GDPR and European Law. 

As lawmakers address this issue, we identify at least four distinct areas of concern:

This week at the first edition of the annual Dublin Privacy Symposium, FPF will join other experts to discuss principles for transparency and trust. The design of user interfaces for digital products and services pervades modern life and directly impacts the choices people make with respect to sharing their personal information. 

India’s new Intermediary & Digital Media Rules: Expanding the Boundaries of Executive Power in Digital Regulation


Author: Malavika Raghavan

India’s new rules on intermediary liability and regulation of publishers of digital content have generated significant debate since their release in February 2021. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (the Rules) have:

The majority of these provisions were unanticipated, resulting in a raft of petitions filed in High Courts across the country challenging the validity of various aspects of the Rules, including their constitutionality. On 25 May 2021, the three-month compliance period for some new requirements for significant social media intermediaries (so designated by the Rules) expired, with many intermediaries not yet in compliance, opening them up to liability under the Information Technology Act as well as wider civil and criminal laws. This has reignited debates about the impact of the Rules on business continuity and liability, citizens’ access to online services, privacy, and security.

Following on FPF’s previous blog highlighting some aspects of these Rules, this article presents an overview of the Rules before deep-diving into critical issues regarding their interpretation and application in India. It concludes by taking stock of some of the emerging effects of these new regulations, which have major implications for millions of Indian users, as well as digital services providers serving the Indian market. 

1. Brief overview of the Rules: Two new regimes for ‘intermediaries’ and ‘publishers’ 

The new Rules create two regimes for two different categories of entities: ‘intermediaries’ and ‘publishers’.  Intermediaries have been the subject of prior regulations – the Information Technology (Intermediaries guidelines) Rules, 2011 (the 2011 Rules), now superseded by these Rules. However, the category of “publishers” and related regime created by these Rules did not previously exist. 

The Rules begin with commencement provisions and definitions in Part I. Part II of the Rules applies to intermediaries (as defined in the Information Technology Act 2000 (IT Act)) that transmit electronic records on behalf of others, including online intermediary platforms (like YouTube, WhatsApp, and Facebook). The rules in this part primarily flesh out the protections offered by Section 79 of the IT Act, which gives passive intermediaries the benefit of a ‘safe harbour’ from liability for objectionable information shared by third parties using their services, somewhat akin to the protections under Section 230 of the US Communications Decency Act. To claim this protection from liability, intermediaries need to undertake certain ‘due diligence’ measures, including informing users of the types of content that cannot be shared and following content take-down procedures (for which safeguards evolved over time through important case law). The new Rules supersede the 2011 Rules and also significantly expand on them, introducing new provisions and additional due diligence requirements that are detailed further in this blog.

Part III of the Rules applies to a new, previously non-existent category of entities designated as ‘publishers’, further classified into the subcategories of ‘publishers of news and current affairs content’ and ‘publishers of online curated content’. Part III then sets up extensive requirements for publishers to adhere to specific codes of ethics, onerous content take-down requirements, and a three-tier grievance process with appeals lying to an Executive Inter-Departmental Committee of Central Government bureaucrats.

Finally, the Rules contain two provisions relating to content-blocking orders that apply to all entities (i.e., intermediaries and publishers). They lay out a new process by which Central Government officials can direct intermediaries and publishers to delete, modify, or block content, either following a grievance process (Rule 15) or through “emergency” blocking orders which may be passed ex parte. These provisions stem from powers to issue directions to intermediaries to block public access to any information through any computer resource (Section 69A of the IT Act). Interestingly, they have been introduced separately from the existing rules for blocking, the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009.

2. Key issues for intermediaries under the Rules

2.1 A new class of ‘social media intermediaries’

The term ‘intermediary’ is broadly defined in the IT Act, covering a range of entities involved in the transmission of electronic records. The Rules introduce two new sub-categories:

Given that a popular messaging app like WhatsApp has over 400 million users in India, the threshold appears to be fairly conservative. The Government may order any intermediary to comply with the same obligations as significant social media intermediaries (SSMIs) (under Rule 6) if its services are adjudged to pose a risk of harm to national security, the sovereignty and integrity of India, India’s foreign relations, or public order.

SSMIs have to follow substantially more onerous “additional due diligence” requirements to claim the intermediary safe harbour (including mandatory traceability of message originators and proactive automated screening, as discussed below). These new requirements raise privacy and data security concerns: they extend beyond traditional ideas of platform “due diligence”, potentially expose the content of private communications, and in doing so create new privacy risks for users in India.

2.2 Additional requirements for SSMIs: resident employees, mandated message traceability, automated content screening

Extensive new requirements for SSMIs are set out in Rule 4.

Provisions mandating modifications to the technical design of encrypted platforms to enable traceability seem to go beyond merely requiring intermediary due diligence. Instead, they appear to draw on separate Government powers relating to interception and decryption of information (under Section 69 of the IT Act). In addition, separate stand-alone rules laying out procedures and safeguards for such interception and decryption orders already exist in the Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009. Rule 4(2) even acknowledges these provisions, raising the question of whether these Rules (relating to intermediaries and their safe harbours) can be used to expand the scope of Section 69 or the rules thereunder.

Proceedings initiated by Whatsapp LLC in the Delhi High Court, and Free and Open Source Software (FOSS) developer Praveen Arimbrathodiyil in the Kerala High Court have both challenged the legality and validity of Rule 4(2) on grounds including that they are ultra vires and go beyond the scope of their parent statutory provisions (s. 79 and 69A) and the intent of the IT Act itself. Substantively, the provision is also challenged on the basis that it would violate users’ fundamental rights including the right to privacy, and the right to free speech and expression due to the chilling effect that the stripping back of encryption will have.

Though the objective of the automated screening provision (Rule 4(4)) is laudable (i.e., to limit the circulation of violent or previously removed content), the move towards proactive automated monitoring has raised serious concerns regarding censorship on social media platforms. Rule 4(4) appears to acknowledge the deep tensions this requirement creates with privacy and free speech, as it requires these screening measures to be proportionate to the free speech and privacy interests of users, to be subject to human oversight, and to be accompanied by reviews of the automated tools for fairness, accuracy, propensity for bias or discrimination, and impact on privacy and security. However, given the vagueness of this wording compared to the trade-off of losing intermediary immunity, scholars and commentators have noted the obvious potential for ‘over-compliance’ and the excessive screening out of content. Many (including the petitioner in the Praveen Arimbrathodiyil matter) have also noted that automated filters are not sophisticated enough to differentiate between violent unlawful images and legitimate journalistic material. The concern is that such measures could result in the large-scale screening out of ‘valid’ speech and expression, with serious consequences for constitutional rights to free speech and expression, which also protect ‘the rights of individuals to listen, read and receive the said speech‘ (Tata Press Ltd v. Mahanagar Telephone Nigam Ltd, (1995) 5 SCC 139).
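To make the over-blocking concern concrete, the sketch below is a purely illustrative example, not drawn from the Rules or from any platform's actual system; the exact-hash approach and all names are assumptions made for illustration. It shows the kind of matching-plus-human-review pipeline such a mandate contemplates, and why false positives and false negatives are hard to avoid.

```python
# Purely illustrative sketch, not drawn from the Rules or any platform's actual
# system: a minimal "previously removed content" filter with a human-review step,
# of the kind a proactive screening mandate contemplates. Exact SHA-256 matching
# stands in for the perceptual hashing real systems would use; all names are
# hypothetical.
import hashlib
from dataclasses import dataclass


@dataclass
class ScreeningDecision:
    blocked: bool
    needs_human_review: bool
    reason: str


# Hashes of content previously removed pursuant to a court or government order.
PREVIOUSLY_REMOVED_HASHES = {
    hashlib.sha256(b"example-removed-image-bytes").hexdigest(),
}


def screen_upload(content: bytes) -> ScreeningDecision:
    """Flag re-uploads of previously removed content before publication.

    Exact matching misses re-encoded copies (false negatives), while looser
    perceptual matching risks catching lawful journalistic or documentary
    material (false positives) -- hence the human-review queue.
    """
    digest = hashlib.sha256(content).hexdigest()
    if digest in PREVIOUSLY_REMOVED_HASHES:
        # Even a match is routed to a human before final removal, mirroring the
        # human-oversight safeguard described above.
        return ScreeningDecision(
            blocked=True,
            needs_human_review=True,
            reason="matched previously removed content",
        )
    return ScreeningDecision(blocked=False, needs_human_review=False, reason="no match")


if __name__ == "__main__":
    print(screen_upload(b"example-removed-image-bytes"))
    print(screen_upload(b"unrelated lawful content"))
```

Even in this toy version, the trade-off described above is visible: loosening the match criterion to catch re-encoded copies increases the risk of sweeping in lawful journalistic material, which is precisely the over-compliance risk commentators have flagged.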

Requirements such as the grievance redressal mechanism appear to be aimed at creating a more user-friendly network of intermediaries. However, the imposition of a single set of requirements is especially onerous for smaller or volunteer-run intermediary platforms, which may not have the income streams or staff to operate such a mechanism. Indeed, the petition in the Praveen Arimbrathodiyil matter has challenged certain of these requirements as a threat to the future of the volunteer-led Free and Open Source Software (FOSS) movement in India, since they place the same requirements on small FOSS initiatives as on large proprietary Big Tech intermediaries.

Other obligations stipulate turnaround times for intermediaries, including (i) a requirement to remove or disable access to content within 36 hours of receipt of a Government or court order relating to unlawful information on the intermediary’s computer resources (under Rule 3(1)(d)), and (ii) a requirement to provide information within 72 hours of receiving an order from an authorised Government agency undertaking investigative activity (under Rule 3(1)(j)).

Similar to the concerns with automated screening, there are concerns that the new grievance process could lead to private entities becoming the arbiters of appropriate content and free speech, a position that was specifically reversed in the seminal 2015 Supreme Court decision in Shreya Singhal, which clarified that a Government or court order was needed for content takedowns.

3. Key issues for the new ‘publishers’ subject to the Rules, including OTT players

3.1 New Codes of Ethics and three-tier redress and oversight system for digital news media and OTT players 

Digital news media and OTT players have been designated as ‘publishers of news and current affairs content’ and ‘publishers of online curated content’ respectively in Part III of the Rules. Each category has then been made subject to a separate Code of Ethics. For digital news media, the Codes applicable to newspapers and cable television have been applied. For OTT players, the Appendix sets out principles regarding the content that can be created, along with display classifications. To enforce these Codes and to address grievances from the public about their content, publishers are now required to set up a grievance system, which forms the first tier of a three-tier “appellate” system culminating in an oversight mechanism run by the Central Government with extensive powers of sanction.

At least five legal challenges have been filed in various High Courts challenging the competence and authority of the Ministry of Electronics & Information Technology (MeitY) to pass the Rules, as well as their validity, namely: (i) in the Kerala High Court, LiveLaw Media Private Limited vs Union of India WP(C) 6272/2021; in the Delhi High Court, three petitions tagged together, being (ii) Foundation for Independent Journalism vs Union of India WP(C) 3125/2021, (iii) Quint Digital Media Limited vs Union of India WP(C) 11097/2021, and (iv) Sanjay Kumar Singh vs Union of India and others WP(C) 3483/2021; and (v) in the Karnataka High Court, Truth Pro Foundation of India vs Union of India and others, W.P. 6491/2021. This is in addition to a fresh petition filed on 10 June 2021, TM Krishna vs Union of India, which challenges the entirety of the Rules (both Parts II and III) on the basis that they violate the rights to free speech (Article 19 of the Constitution) and privacy (including under Article 21 of the Constitution), and that they fail the test of arbitrariness under Article 14 as manifestly arbitrary and fall foul of principles of delegation of powers.

Some of the key issues emerging from these Rules in Part III and the challenges to them are highlighted below. 

3.2 Lack of legal authority and competence to create these Rules

There has been substantial debate on the lack of clarity regarding the legal authority of the Ministry of Electronics & Information Technology (MeitY) under the IT Act. These concerns arise at various levels. 

First, there is a concern that Levels I and II result in a privatisation of adjudications relating to the free speech and expression of creative content producers, which would otherwise be litigated in Courts and Tribunals as matters of free speech. As noted by many (including the LiveLaw petition at page 33), this could have the effect of overturning the judicial precedent in Shreya Singhal v. Union of India ((2013) 12 S.C.C. 73), which specifically read down Section 79 of the IT Act to avoid a situation where private entities were the arbiters determining the legitimacy of takedown orders. Second, despite referring to “self-regulation”, this system is subject to executive oversight (unlike the existing models for offline newspapers and broadcasting).

The Inter-Departmental Committee is composed entirely of Central Government bureaucrats, and it may review complaints escalated through the three-tier system or referred directly by the Ministry, following which it can deploy a range of sanctions, from warnings to mandated apologies to the deletion, modification or blocking of content. This also raises the question of whether the Committee meets the legal requirements for an administrative body undertaking a ‘quasi-judicial’ function, especially one that may adjudicate on matters of rights relating to free speech and privacy. Finally, while the objective of creating some standards and codes for such content creators may be laudable, it is unclear whether such an extensive oversight mechanism with powers of sanction over online publishers can be validly created under the rubric of intermediary liability provisions.

4. New powers to delete, modify or block information for public access 

As described at the start of this blog, the Rules add new powers for the deletion, modification and blocking of content from intermediaries and publishers. While Section 69A of the IT Act (and the Rules thereunder) does include blocking powers for the Government, these exist only vis-à-vis intermediaries. Rule 15 expands this power to ‘publishers’. It also provides a new avenue for such orders to intermediaries, outside of the existing rules for blocking information under the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009.

Graver concerns arise from Rule 16, which allows emergency orders blocking information to be passed without giving publishers or intermediaries an opportunity to be heard. There is a provision for such an order to be reviewed by the Inter-Departmental Committee within two days of its issue.

Both Rules 15 and 16 apply to all entities contemplated in the Rules. Accordingly, they greatly expand executive power and oversight over digital media services in India, including social media, digital news media and OTT on-demand services.

5. Conclusions and future implications

The new Rules in India have opened up deep questions for online intermediaries and providers of digital media services serving the Indian market. 

For intermediaries, this creates a difficult and even existential choice: the requirements (especially those relating to traceability and automated screening) appear to set an improbably high bar given the reality of their technical systems. However, failure to comply will result not only in the loss of the safe harbour from liability but, as seen in the new Rule 7, will also open them up to punishment under the IT Act and criminal law in India.

For digital news and OTT players, the consequences of non-compliance and the level of enforcement remain to be understood, especially given the open questions regarding the validity of the legal basis for creating these Rules. Given the numerous petitions filed against the Rules, there is also substantial uncertainty regarding their future, although the Rules themselves have the full force of law at present.

Overall, it does appear that attempts to create a ‘digital media’ watchdog would be better dealt with in standalone legislation, potentially sponsored by the Ministry of Information and Broadcasting (MIB), which has the traditional remit over such areas. Indeed, the administration of Part III of the Rules has been delegated by MeitY to MIB, pointing to the genuine split in competence between these Ministries.

Finally, potential overlaps with India’s proposed Personal Data Protection Bill (if passed) create further tensions. It remains to be seen whether the provisions on traceability will survive the test of constitutional validity set out in India’s privacy judgement (Justice K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1). Irrespective of this determination, the Rules appear to be in some dissonance with the data retention and data minimisation requirements seen in the last draft of the Personal Data Protection Bill, not to mention other obligations relating to Privacy by Design and data security safeguards. Interestingly, the definition of ‘social media intermediary’ included in an explanatory clause to section 26(4) of the Bill (released in December 2019) closely tracks the definition in Rule 2(w), but departs from it by carving out certain intermediaries from the definition. This is already resulting in moves such as Google’s plea of 2 June 2021 in the Delhi High Court asking for protection from being declared a social media intermediary.

These new Rules have brought to the surface the inherent tensions within digital regulation between the goals of freedom of speech and expression and the right to privacy, on the one hand, and competing governance objectives of law enforcement (such as limiting the circulation of violent, harmful or criminal content online) and national security, on the other. The ultimate legal effect of these Rules will be determined as much by the outcome of the various petitions challenging their validity as by the enforcement challenges raised by casting such a wide net, one that covers millions of users and thousands of entities, all engaged in creating India’s growing digital public sphere.

Photo credit: Gerd Altmann from Pixabay

Read more Global Privacy thought leadership:

South Korea: The First Case where the Personal Information Protection Act was Applied to an AI System

China: New Draft Car Privacy and Security Regulation is Open for Public Consultation

A New Era for Japanese Data Protection: 2020 Amendments to the APPI

New FPF Report Highlights Privacy Tech Sector Evolving from Compliance Tools to Platforms for Risk Management and Data Utilization

As we enter the third phase of development of the privacy tech market, purchasers are demanding more integrated solutions, product offerings are more comprehensive, and startup valuations are higher than ever, according to a new report from the Future of Privacy Forum and Privacy Tech Alliance. These factors are leading to companies providing a wider range of services, acting as risk management platforms, and focusing on support of business outcomes.

“The privacy tech sector is at an inflection point, as its offerings have expanded beyond assisting with regulatory compliance,” said FPF CEO Jules Polonetsky. “Increasingly, companies want privacy tech to help businesses maximize the utility of data while managing ethics and data protection compliance.”

According to the report, “Privacy Tech’s Third Generation: A Review of the Emerging Privacy Tech Sector,” regulations are often the biggest driver for buyers’ initial privacy tech purchases. Organizations also are deploying tools to mitigate potential harms from the use of data. However, buyers serving global markets increasingly need privacy tech that offers data availability and control and supports its utility, in addition to regulatory compliance. 

The report finds the COVID-19 pandemic has accelerated global marketplace adoption of privacy tech as dependence on digital technologies grows. Privacy is becoming a competitive differentiator in some sectors, and TechCrunch reports that 200+ privacy startups have together raised more than $3.5 billion over hundreds of individual rounds of funding. 

“The customers buying privacy-enhancing tech used to be primarily Chief Privacy Officers,” said report lead author Tim Sparapani. “Now it’s also Chief Marketing Officers, Chief Data Scientists, and Strategy Officers who value the insights they can glean from de-identified customer data.”

The report highlights five trends in the privacy enhancing tech market:

The report also draws seven implications for competition in the market:

The report makes a series of recommendations, including that the industry define as a priority a common vernacular for privacy tech; set standards for technologies in the “privacy stack” such as differential privacy, homomorphic encryption, and federated learning; and explore the needs of companies for privacy tech based upon their size, sector, and structure. It calls on vendors to recognize the need to provide adequate support to customers to increase uptake and speed time from contract signing to successful integration.

The Future of Privacy Forum launched the Privacy Tech Alliance (PTA) as a global initiative with a mission to define, enhance and promote the market for privacy technologies. The PTA brings together innovators in privacy tech with customers and key stakeholders.

Members of the PTA Advisory Board, which includes Anonos, BigID, D-ID, Duality, Ethyca, Immuta, OneTrust, Privacy Analytics, Privitar, SAP, Truata, TrustArc, Wirewheel, and ZL Tech, have formed a working group to address impediments to growth identified in the report. The PTA working group will define a common vernacular and typology for privacy tech as a priority project with chief privacy officers and other industry leaders who are members of FPF. Other work will seek to develop common definitions and standards for privacy-enhancing technologies such as differential privacy, homomorphic encryption, and federated learning and identify emerging trends for venture capitalists and other equity investors in this space. Privacy Tech companies can apply to join the PTA by emailing [email protected].


Perspectives on the Privacy Tech Market

Quotes from Members of the Privacy Tech Alliance Advisory Board on the Release of the “Privacy Tech’s Third Generation” Report


“The ‘Privacy Tech Stack’ outlined by the FPF is a great way for organizations to view their obligations and opportunities to assess and reconcile business and privacy objectives. The Schrems II decision by the Court of Justice of the European Union highlights that skipping the second ‘Process’ layer can result in desired ‘Outcomes’ in the third layer (e.g., cloud processing of, or remote access to, cleartext data) being unlawful – despite their global popularity – without adequate risk management controls for decentralized processing.” — Gary LaFever, CEO & General Counsel, Anonos


“As a founding member of this global initiative, we are excited by the conclusions drawn from this foundational report – we’ve seen parallels in our customer base, from needing an enterprise-wide solution to the rich opportunity for collaboration and integration. The privacy tech sector continues to mature as does the imperative for organizations of all sizes to achieve compliance in light of the increasingly complicated data protection landscape.” — Heather Federman, VP Privacy and Policy at BigID


“There is no doubt of the massive importance of the privacy sector, an area which is experiencing huge growth. We couldn’t be more proud to be part of the Privacy Tech Alliance Advisory Board and absolutely support the work they are doing to create alignment in the industry and help it face the current set of challenges. In fact we are now working on a similar initiative in the synthetic media space to ensure that ethical considerations are at the forefront of that industry too.” — Gil Perry, Co-Founder & CEO, D-ID


“We congratulate the Future of Privacy Forum and the Privacy Tech Alliance on the publication of this highly comprehensive study, which analyzes key trends within the rapidly expanding privacy tech sector. Enterprises today are increasingly reliant on privacy tech, not only as a means of ensuring regulatory compliance but also in order to drive business value by facilitating secure collaborations on their valuable and often sensitive data. We are proud to be part of the PTA Advisory Board, and look forward to contributing further to its efforts to educate the market on the importance of privacy-tech, the various tools available and their best utilization, ultimately removing barriers to successful deployments of privacy-tech by enterprises in all industry sectors” — Rina Shainski, Chairwoman, Co-founder, Duality


“Since the birth of the privacy tech sector, we’ve been helping companies find and understand the data they have, compare it against applicable global laws and regulations, and remediate any gaps in compliance. But as the industry continues to evolve, privacy tech also is helping show business value beyond just compliance. Companies are becoming more transparent, differentiating on ethics and ESG, and building businesses that differentiate on trust. The privacy tech industry is growing quickly because we’re able to show value for compliance as well as actionable business insights and valuable business outcomes.” — Kabir Barday, CEO, OneTrust


“Leading organizations realize that to be truly competitive in a rapidly evolving marketplace, they need to have a solid defensive footing. Turnkey privacy technologies enable them to move onto the offense by safely leveraging their data assets rapidly at scale.” — Luk Arbuckle, Chief Methodologist, Privacy Analytics


“We appreciate FPF’s analysis of the privacy tech marketplace and we’re looking forward to further research, analysis, and educational efforts by the Privacy Tech Alliance. Customers and consumers alike will benefit from a shared understanding and common definitions for the elements of the privacy stack.” — Corinna Schulze, Director, EU Government Relations, Global Corporate Affairs, SAP


“The report shines a light on the evolving sophistication of the privacy tech market and the critical need for businesses to harness emerging technologies that can tackle the multitude of operational challenges presented by the big data economy. Businesses are no longer simply turning to privacy tech vendors to overcome complexities with compliance and regulation; they are now mapping out ROI-focused data strategies that view privacy as a key commercial differentiator. In terms of market maturity, the report highlights a need to overcome ambiguities surrounding new privacy tech terminology, as well as discrepancies in the mapping of technical capabilities to actual business needs. Moving forward, the advantage will sit with those who can offer the right blend of technical and legal expertise to provide the privacy stack assurances and safeguards that buyers are seeking – from a risk, deployment and speed-to-value perspective. It’s worth noting that the growing importance of data privacy to businesses sits in direct correlation with the growing importance of data privacy to consumers. Trūata’s Global Consumer State of Mind Report 2021 found that 62% of global consumers would feel more reassured and would be more likely to spend with companies if they were officially certified to a data privacy standard. Therefore, in order to manage big data in a privacy-conscious world, the opportunity lies with responsive businesses that move with agility and understand the return on privacy investment. The shift from manual, restrictive data processes towards hyper automation and privacy-enhancing computation is where the competitive advantage can be gained and long-term consumer loyalty—and trust— can be retained.” — Aoife Sexton, Chief Privacy Officer and Chief of Product Innovation, Trūata


“As early pioneers in this space, we’ve had a unique lens on the evolving challenges organizations have faced in trying to integrate technology solutions to address dynamic, changing privacy issues in their organizations, and we believe the Privacy Technology Stack introduced in this report will drive better organizational decision-making related to how technology can be used to sustainably address the relationships among the data, processes, and outcomes.” — Chris Babel, CEO, TrustArc


“It’s important for companies that use data to do so ethically and in compliance with the law, but those are not the only reasons why the privacy tech sector is booming. In fact, companies with exceptional privacy operations gain a competitive advantage, strengthen customer relationships, and accelerate sales.” — Justin Antonipillai, Founder & CEO, Wirewheel

The right to be forgotten is not compatible with the Brazilian Constitution. Or is it?

Brazilian Supreme Federal Court

Author: Dr. Luca Belli

Dr. Luca Belli is Professor at FGV Law School, Rio de Janeiro, where he leads the CyberBRICS Project and the Latin American edition of the Computers, Privacy and Data Protection (CPDP) conference. The opinions expressed in his articles are strictly personal. The author can be contacted at [email protected].

The Brazilian Supreme Federal Court, or “STF” in its Brazilian acronym, recently issued a landmark decision concerning the right to be forgotten (RTBF), finding that it is incompatible with the Brazilian Constitution. This attracted international attention to Brazil for a topic quite distant from the sadly frequent environmental, health, and political crises.

Readers should be warned that, while reading this piece, they might experience disappointment, perhaps even frustration, then renewed interest and curiosity, and finally – and hopefully – an increased open-mindedness, understanding a new facet of the RTBF debate and how it is playing out at the constitutional level in Brazil.

This might happen because although the STF relies on the “RTBF” label, the content behind such label is quite different from what one might expect after following the same debate in Europe. From a comparative law perspective, this landmark judgment tellingly shows how similar constitutional rights play out in different legal cultures and may lead to heterogeneous outcomes based on the constitutional frameworks of reference.   

How it started: insolvency seasoned with personal data

As is well known, the first global debate on what it means to be “forgotten” in the digital environment arose in Europe, thanks to Mario Costeja Gonzalez, a Spaniard who, paradoxically, will never be forgotten by anyone due to his key role in the construction of the RTBF.

Costeja famously requested the deindexation from Google Search of information about himself that he considered to be no longer relevant. Indeed, when anyone “googled” his name, the search engine provided as top results links to articles reporting Costeja’s past insolvency as a debtor. Costeja argued that, despite having been convicted for insolvency, he had already paid his debt to justice and society many years before, and it was therefore unfair that his name should continue to be associated ad aeternum with a mistake he made in the past.

The follow-up is well known in data protection circles. The case reached the Court of Justice of the European Union (CJEU), which, in its landmark Google Spain judgment (C-131/12), established that search engines are to be considered data controllers and, therefore, have an obligation to de-index information that is inappropriate, excessive, not relevant, or no longer relevant, when the data subject to whom such data refer requests it. Such an obligation was a consequence of Article 12(b) of Directive 95/46 on the protection of personal data, a pre-GDPR provision that set the basis for the European conception of the RTBF, providing for the “rectification, erasure or blocking of data the processing of which does not comply with the provisions of [the] Directive, in particular because of the incomplete or inaccurate nature of the data.”

The indirect consequence of this historic decision, and the debate it generated, is that we have all come to consider the RTBF in the terms set by the CJEU. However, what is essential to emphasize is that the CJEU approach is only one possible conception and, importantly, it was possible because of the specific characteristics of the EU legal and institutional framework. We have come to think that RTBF means the establishment of a mechanism like the one resulting from the Google Spain case, but this is the result of a particular conception of the RTBF and of how this particular conception should – or could – be implemented.

The fact that the RTBF has been predominantly analyzed and discussed through a European lens does not mean that this is the only possible perspective, nor that this approach is necessarily the best. In fact, the Brazilian conception of the RTBF is remarkably different from a conceptual, constitutional, and institutional standpoint. The main concern of the Brazilian RTBF is not how a data controller might process personal data (this is the part where frustration and disappointment might arise in the reader), but the STF itself leaves the door open to such a possibility (this is the point where renewed interest and curiosity may arise).

The Brazilian conception of the right to be forgotten

Although the RTBF has acquired fundamental relevance in digital policy circles, it is important to emphasize that, until recently, Brazilian jurisprudence had mainly focused on the juridical need for “forgetting” only in the analogue sphere. Indeed, before the CJEU Google Spain decision, the Brazilian Superior Court of Justice, or “STJ” – the other Brazilian apex court, which deals with the interpretation of the law, as distinct from the previously mentioned STF, which deals with the interpretation of constitutional matters – had already considered the RTBF as a right not to be remembered, affirmed by the individual vis-à-vis traditional media outlets.

This interpretation first emerged in the “Candelaria massacre” case, a gloomy page of Brazilian history featuring a multiple homicide perpetrated in 1993 in front of the Candelaria Church, a beautiful colonial Baroque building in downtown Rio de Janeiro. The gravity and the particularly striking setting of the massacre led Globo TV, a leading Brazilian broadcaster, to feature it in a TV show called Linha Direta. Importantly, the show included in its narration details about a man suspected of being one of the perpetrators of the massacre but later acquitted.

Understandably, the man filed a complaint arguing that the inclusion of his personal information in the TV show was causing him severe emotional distress, while also reviving suspicion against him for a crime of which he had already been acquitted many years before. In September 2013, further to Special Appeal No. 1,334,097, the STJ agreed with the plaintiff, establishing the man’s “right not to be remembered against his will, specifically with regard to discrediting facts.” This is how the RTBF was born in Brazil.

Importantly for our present discussion, this interpretation was not born out of digital technology and does not concern the delisting of specific types of information from search engine results. In Brazilian jurisprudence the RTBF has been conceived as a general right to effectively limit the publication of certain information. The man included in the Globo reportage had been acquitted many years before, hence he had a right to be “let alone,” as Warren and Brandeis would argue, and not to be remembered for something he had not even committed. The STJ therefore constructed its vision of the RTBF on the basis of Article 5(X) of the Brazilian Constitution, which enshrines the fundamental right to intimacy and the preservation of image, two fundamental features of privacy.

Hence, although they utilize the same label, the STJ and CJEU conceptualize two remarkably different rights, when they refer to the RTBF. While both conceptions aim at limiting access to specific types of personal information, the Brazilian conception differs from the EU one on at least three different levels.

First, their constitutional foundations. While both conceptions are intimately intertwined with individuals’ informational self-determination, the STJ built the RTBF on the protection of privacy, honour and image, whereas the CJEU built it upon the fundamental right to data protection, which in the EU framework is a standalone fundamental right. Conspicuously, in the Brazilian constitutional framework an explicit right to data protection did not exist at the time of the Candelaria case, and only since 2020 has it been in the process of being recognized.

Secondly, and consequently, the original goal of the Brazilian conception of the RTBF was not to regulate how a controller should process personal data but rather to protect the private sphere of the individual. In this perspective, the goal of the STJ was not – and could not have been – to regulate the deindexation of specific incorrect or outdated information, but rather to regulate the deletion of “discrediting facts” so that the private life, honour and image of an individual would not be illegitimately violated.

Finally, yet extremely importantly, the fact that an institutional framework dedicated to data protection was simply absent in Brazil at the time of the decision did not allow the STJ the same leeway as the CJEU. The EU Justices could afford to delegate the implementation of the RTBF to search engines because such implementation would receive guidance from, and be subject to the review of, a well-consolidated system of European Data Protection Authorities. At the EU level, DPAs are expected to guarantee a harmonious and consistent interpretation and application of data protection law. At the Brazilian level, a DPA was only established in late 2020 and announced its first regulatory agenda only in late January 2021.

This latter point is far from trivial and, in the opinion of this author, an essential preoccupation that might have driven the subsequent RTBF conceptualization of the STJ.

The stress-test

The soundness of the Brazilian definition of the RTBF, however, was to be tested again by the STJ, in the context of another grim and unfortunate page of Brazilian history, the Aida Curi case. This case originated with the sexual assault and subsequent homicide of the young Aida Curi in Copacabana, Rio de Janeiro, on the evening of 14 July 1958. At the time, the case attracted considerable media attention, not only because of its mysterious circumstances and the young age of the victim, but also because the perpetrators tried to conceal the assault by throwing the victim’s body from the rooftop of a very tall building on the Avenida Atlantica, the fancy avenue right in front of Copacabana beach.

Needless to say, Globo TV considered the case a perfect story for yet another Linha Direta episode. Aida Curi’s relatives, far from enjoying the TV show, sued the broadcaster for moral damages and demanded the full enjoyment of their RTBF – in the Brazilian conception, of course. According to the plaintiffs, it was simply not conceivable that, almost 50 years after the murder, Globo TV could publicly broadcast personal information about the victim – and her family – including the victim’s name and address, in addition to unauthorized images, thus bringing back a long-closed and extremely traumatic set of events.

The brothers of Aida Curi claimed reparations from Rede Globo, but the STJ decided that the time that had passed was enough to mitigate the effects of anguish and pain on the dignity of Aida Curi’s relatives, while arguing that it was impossible to report the events without mentioning the victim. This decision was appealed by Ms Curi’s family members, who demanded, by means of Extraordinary Appeal No. 1,010,606, that the STF recognize “their right to forget the tragedy.” It is interesting to note that the way the demand is framed in this Appeal tellingly exemplifies the Brazilian conception of “forgetting” as erasure and a prohibition on divulgation.

At this point, the STF identified in the Appeal an interest in debating the issue “with general repercussion”, a peculiar judicial mechanism that the Court can use when it recognizes that a given case has particular relevance and transcendence for the Brazilian legal and judicial system. Indeed, the decision in a case with general repercussion does not only bind the parties but establishes jurisprudence that must be followed by all lower courts.

In February 2021, the STF finally deliberated on the Aida Curi case, establishing that “the idea of a right to be forgotten is incompatible with the Constitution, thus understood as the power to prevent, due to the passage of time, the disclosure of facts or data that are true and lawfully obtained and published in analogue or digital media” and that “any excesses or abuses in the exercise of freedom of expression and information must be analyzed on a case-by-case basis, based on constitutional parameters – especially those relating to the protection of honor, image, privacy and personality in general – and the explicit and specific legal provisions existing in the criminal and civil spheres.”

In other words, what the STF has deemed incompatible with the Federal Constitution is a specific interpretation of the Brazilian version of the RTBF. What is not compatible with the Constitution is the argument that the RTBF allows one to prohibit the publication of true facts, lawfully obtained. At the same time, however, the STF clearly states that it remains possible for any court of law to evaluate, on a case-by-case basis and according to constitutional parameters and existing legal provisions, whether a specific episode allows the use of the RTBF to prohibit the divulgation of information that undermines the dignity, honour, privacy, or other fundamental interests of the individual.

Hence, while explicitly prohibiting the use of the RTBF as a general right to censorship, the STF leaves room for using the RTBF to delist specific personal data in an EU-like fashion, specifying that this must be done with guidance from the Constitution and the law.

What next?

Given the core differences between the Brazilian and EU conceptions of the RTBF, as highlighted above, it is understandable, in the opinion of this author, that the STF adopted a less proactive and more conservative approach. This must be considered especially in light of the very recent establishment of a data protection institutional system in Brazil.

It is understandable that the STF might have preferred to delegate, de facto, to the courts the interpretation of when and how the RTBF can be rightfully invoked, according to constitutional and legal parameters. First, in the Brazilian interpretation, this right fundamentally insists on the protection of privacy – i.e., the private sphere of an individual – and, while data protection concerns are acknowledged, they are not the main ground on which the Brazilian conception of the RTBF relies.

It is also understandable in a country and a region where recent history has been marked by dictatorships, well-hidden atrocities, and opacity, and where the social need to remember and shed light on what happened outweighs the legitimate individual interest in prohibiting the circulation of truthful and legally obtained information. In the digital sphere, however, the RTBF quintessentially translates into an extension of informational self-determination, which the Brazilian General Data Protection Law, better known as the “LGPD” (Law No. 13.709/2018), enshrines in its Article 2 as one of the “foundations” of data protection in the country, and whose fundamental character was recently recognized by the STF itself.

In this perspective, it is useful to recall the dissenting opinion of Justice Luiz Edson Fachin in the Aida Curi case, stressing that “although it does not expressly name it, the Constitution of the Republic, in its text, contains the pillars of the right to be forgotten, as it celebrates the dignity of the human person (article 1, III), the right to privacy (article 5, X) and the right to informational self-determination – which was recognized, for example, in the disposal of the precautionary measures of the Direct Unconstitutionality Actions No. 6,387, 6,388, 6,389, 6,390 and 6,393, under the rapporteurship of Justice Rosa Weber (article 5, XII).”

It is the opinion of this author that the Brazilian debate on the RTBF in the digital sphere would be clearer if its dimension as a right to the deindexation of search engine results were clearly regulated. It is understandable that the STF did not dare to regulate this, given its interpretation of the RTBF and the very embryonic data protection institutional framework in Brazil. However, given the increasing datafication we are currently witnessing, it would be naïve not to expect that further RTBF claims concerning the digital environment and, specifically, the way search engines process personal data will keep emerging.

The fact that the STF has left the door open to applying the RTBF in the case-by-case analysis of individual claims may reassure the reader regarding the primacy of constitutional and legal arguments in such analysis. It may also lead the reader to wonder – very legitimately – whether such a choice is in fact the most efficient and coherent way to deal with the potentially enormous number of claims, given the margin of appreciation and interpretation that each different court may have.

An informed debate, able to clearly highlight the existing options and the most efficient and just ways to implement them in the Brazilian context, would be beneficial. This will likely be one of the goals of the upcoming Latin American edition of the Computers, Privacy and Data Protection conference (CPDP LatAm), which will take place in July, entirely online, and will explore the most pressing privacy and data protection issues for Latin American countries.

Photo Credit: “Brasilia – The Supreme Court” by Christoph Diewald is licensed under CC BY-NC-ND 2.0

If you have any questions about engaging with The Future of Privacy Forum on Global Privacy and Digital Policymaking contact Dr. Gabriela Zanfir-Fortuna, Senior Counsel, at [email protected].

FPF announces appointment of Malavika Raghavan as Senior Fellow for India

The Future of Privacy Forum announces the appointment of Malavika Raghavan as Senior Fellow for India, expanding our Global Privacy team to one of the key jurisdictions for the future of privacy and data protection law. 

Malavika is a thought leader and a lawyer working on interdisciplinary research, focusing on the impacts of digitisation on the lives of lower-income individuals. Her work since 2016 has focused on the regulation and use of personal data in service delivery by the Indian State and private sector actors. She founded and led the Future of Finance Initiative at Dvara Research (an Indian think tank), in partnership with the Gates Foundation, from 2016 until 2020, anchoring its research agenda and policy advocacy on emerging issues at the intersection of technology, finance and inclusion. Research that she led at Dvara Research was cited by India’s Data Protection Committee in its White Paper as well as in its final report with proposals for India’s draft Personal Data Protection Bill, with specific reliance placed on that research on aspects of regulatory design and enforcement. See Malavika’s full bio here.

“We are delighted to welcome Malavika to our Global Privacy team. For the following year, she will be our adviser to understand the most significant developments in privacy and data protection in India, from following the debate and legislative process of the Data Protection Bill and the processing of non-personal data initiatives, to understanding the consequences of the publication of the new IT Guidelines. India is one of the most interesting jurisdictions to follow in the world, for many reasons: the innovative thinking on data protection regulation, the potentially groundbreaking regulation of non-personal data and the outstanding number of individuals whose privacy and data protection rights will be envisaged by these developments, which will test the power structures of digital regulation and safeguarding fundamental rights in this new era”, said Dr. Gabriela Zanfir-Fortuna, Global Privacy lead at FPF. 

We asked Malavika to share her thoughts for FPF’s blog on the most significant developments in privacy and digital regulation in India, and on India’s role in the global privacy and digital regulation debate.

FPF: What are some of the most significant developments in the past couple of years in India in terms of data protection, privacy, digital regulation?

Malavika Raghavan: “Undoubtedly, the turning point for the privacy debate in India was the 2017 judgement of the Indian Supreme Court in Justice KS Puttaswamy v Union of India. The judgment affirmed the right to privacy as a constitutional guarantee, protected by Part III (Fundamental Rights) of the Indian Constitution. It was also regenerative, bringing our constitutional jurisprudence into the 21st century by re-interpreting timeless principles for the digital age, and casting privacy as a prerequisite for accessing other rights—including the right to life and liberty, to freedom of expression and to equality—given the ubiquitous digitisation of human experience we are witnessing today.

Overnight, Puttaswamy also re-balanced conversations in favour of privacy safeguards, making these equal priorities for builders of digital systems rather than framing these issues as obstacles to innovation and efficiency. In addition, it challenged the narrative that privacy is an elite construct that only wealthy or privileged people deserve—since many litigants in the original case that had created the Puttaswamy reference were from marginalised groups. Since then, a string of interesting developments has arisen as new cases reassess the impact of digital technology on individuals in India, e.g. the boundaries of private sector data sharing (such as between WhatsApp and Facebook), or the State’s use of personal data (as in the case concerning Aadhaar, our national identification system), among others.

Puttaswamy also provided a fillip for a big legislative development: the creation of an omnibus data protection law in India. A bill to create this framework was proposed by a Committee of Experts under the chairmanship of Justice Srikrishna (a former Supreme Court judge), and it has been making its way through ministerial and Parliamentary processes. There’s a large possibility that this law will be passed by the Indian parliament in 2021! Definitely a big development to watch.”

FPF: How do you see India’s role in the global privacy and digital regulation debate?

Malavika Raghavan: “India’s strategy on privacy and digital regulation will undoubtedly have global impact, given that India is home to 1/7th of the world’s population! The mobile internet revolution has created a huge impact on our society with millions getting access to digital services in the last couple of decades. This has created nuanced mental models and social norms around digital technologies that are slowly being documented through research and analysis. 

The challenge for policy makers is to create regulations that match these expectations and the realities of Indian users to achieve reasonable, fair regulations. As we have already seen from sectoral regulations (such as those from our Central Bank around cross border payments data flows) such regulations also have huge consequences for global firms interacting with Indian users and their personal data.  

In this context, I think India can have the late-mover advantage in some ways when it comes to digital regulation. If we play our cards right, we can take the best lessons from the experience of other countries in the last few decades and eschew the missteps. More pragmatically, it seems inevitable that India’s approach to privacy and digital regulation will also be strongly influenced by the Government’s economic, geopolitical and national security agenda (both internationally and domestically). 

One thing is for certain: there is no path-dependence. Our legislators and courts are thinking in unique and unexpected ways that are indeed likely to result in a fourth way (as described by the Srikrishna Data Protection Committee’s final report), compared to the approach in the US, EU and China.”

If you have any questions about engaging with The Future of Privacy Forum on Global Privacy and Digital Policymaking contact Dr. Gabriela Zanfir-Fortuna, Senior Counsel, at [email protected].

India: Massive overhaul of digital regulation, with strict rules for take-down of illegal content and automated scanning of online content


On February 25, the Indian Government notified and published the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. These rules mirror the EU’s Digital Services Act (DSA) proposal to some extent: they propose a tiered approach based on the scale of the platform, and they touch on intermediary liability, content moderation, take-down of illegal content from online platforms, as well as internal accountability and oversight mechanisms. They go beyond such rules, however, by adding a Code of Ethics for digital media, similar to the Code of Ethics classic journalistic outlets must follow, and by proposing an “online content” labelling scheme for content that is safe for children.

The Code of Ethics applies to online news publishers, as well as intermediaries that “enable the transmission of news and current affairs”. This part of the Guidelines (the Code of Ethics) has already been challenged in the Delhi High Court by news publishers this week. 

The Guidelines have raised several types of concerns in India, from their impact on freedom of expression and on the right to privacy (through the automated scanning of content and the mandated traceability of even end-to-end encrypted messages so that the originator can be identified), to the Government’s choice to use executive action for such profound changes. The Government, through the two Ministries involved in the process, is scheduled to testify before the Standing Committee on Information Technology of the Parliament on March 15.

New obligations for intermediaries

“Intermediaries” include “websites, apps and portals of social media networks, media sharing websites, blogs, online discussion forums, and other such functionally similar intermediaries” (as defined in rule 2(1)(m)).

Here are some of the most important rules laid out in Part II of the Guidelines, dedicated to Due Diligence by Intermediaries:

“Significant social media intermediaries” have enhanced obligations

“Significant social media intermediaries” are social media services with a number of users above a threshold that will be defined and notified by the Central Government. This concept is similar to the DSA’s “Very Large Online Platform”; however, the DSA includes clear criteria in the proposed act itself for identifying a VLOP.

As for “Significant Social Media Intermediaries” in India, they will have additional obligations (similar to how the DSA proposal in the EU scales obligations): 

These “Guidelines” seem to have the legal effect of a statute, and they are being adopted through executive action to replace Guidelines adopted in 2011 by the Government, under powers conferred on it by the Information Technology Act 2000. The new Guidelines would enter into force immediately after publication in the Official Gazette (there is no information yet as to when publication is scheduled). The Code of Ethics would enter into force three months after publication in the Official Gazette. As mentioned above, there are already some challenges in Court against part of these rules.

Get smart on these issues and their impact

Check out these resources: 

Another jurisdiction to keep your eyes on: Australia

Also note that, while the European Union is starting its heavy and slow legislative machine by appointing Rapporteurs in the European Parliament and holding first discussions on the DSA proposal in the relevant working group of the Council, another country is set to adopt digital content rules soon: Australia. The Government is currently considering an Online Safety Bill, which was open to public consultation until mid-February and which would also include a “modernised online content scheme”, creating new classes of harmful online content, as well as take-down requirements for image-based abuse, cyber abuse and harmful content online, requiring removal within 24 hours of receiving a notice from the eSafety Commissioner.

If you have any questions about engaging with The Future of Privacy Forum on Global Privacy and Digital Policymaking contact Dr. Gabriela Zanfir-Fortuna, Senior Counsel, at [email protected].

Russia: New Law Requires Express Consent for Making Personal Data Available to the Public and for Any Subsequent Dissemination

Authors: Gabriela Zanfir-Fortuna and Regina Iminova

Source: Pixabay.com, by Opsa

Amendments to the Russian general data protection law (Federal Law No. 152-FZ on Personal Data), adopted at the end of 2020, enter into force today (Monday, March 1st), with some provisions having their effective date postponed until July 1st. The changes are part of a legislative package that also amends the Criminal Code to criminalize the disclosure of personal data about “protected persons” (several categories of government officials). The amendments to the data protection law introduce consent-based restrictions both for any organization or individual that initially publishes personal data and for those that collect and further disseminate personal data that has been distributed, on the basis of consent, in the public sphere, such as on social media, blogs or any other sources.

The amendments:

The potential impact of the amendments is broad. The new law prima facie affects social media services, online publishers, streaming services, bloggers, or any other entity that might be considered to be making personal data available to “an indefinite number of persons.” They now have to collect, and be able to prove they have, separate consent for making personal data publicly available, as well as for further publishing or disseminating personal data allowed by the data subject to be disseminated (PDD, defined below) which has been lawfully published by other parties originally.

Importantly, the new provisions in the Personal Data Law dedicated to PDD do not include any specific exception for processing PDD for journalistic purposes. The only exception recognized is processing PDD “in the state and public interests defined by the legislation of the Russian Federation”. The Explanatory Note accompanying the amendments confirms that consent is the exclusive lawful ground that can justify dissemination and further processing of PDD and that the only exception to this rule is the one mentioned above, for state or public interests as defined by law. It is thus expected that the amendments might create a chilling effect on freedom of expression, especially when also taking into account the corresponding changes to the Criminal Code.

The new rules seem to be part of a broader effort in Russia to regulate information shared online and available to the public. In this context, it is noteworthy that other amendments, to Law 149-FZ on Information, IT and Protection of Information, solely impacting social media services, were also passed into law in December 2020 and already entered into force on February 1st, 2021. Social networks are now required to monitor content and “restrict access immediately” to posts by users containing information about state secrets, justification of or calls to terrorism, pornography, the promotion of violence and cruelty, obscene language, the manufacturing of drugs, methods of committing suicide, or calls for mass riots.

Below we provide a closer look at the amendments to the Personal Data Law that entered into force on March 1st, 2021. 

A new category of personal data is defined

The new law defines a category of “personal data allowed by the data subject to be disseminated” (PDD), the definition being added as paragraph 1.1 to Article 3 of the Law. This new category of personal data is defined as “personal data to which an unlimited number of persons have access, and which is provided by the data subject by giving specific consent for the dissemination of such data, in accordance with the conditions in the Personal Data Law” (unofficial translation).

The old law had a dedicated provision that referred to how this type of personal data could be lawfully processed, but it was vague and offered almost no details. In particular, Article 6(10) of the Personal Data Law (the provision corresponding to Article 6 GDPR on lawful grounds for processing) provided that processing of personal data is lawful when the data subject gives access to their personal data to an unlimited number of persons. The amendments abrogate this paragraph, before introducing an entirely new article containing a detailed list of conditions for processing PDD only on the basis of consent (the new Article 10.1).

Perhaps in order to avoid misunderstanding on how the new rules for processing PDD fit with the general conditions on lawful grounds for processing personal data, a new paragraph 2 is introduced in Article 10 of the law, which details conditions for processing special categories of personal data, to clarify that processing of PDD “shall be carried out in compliance with the prohibitions and conditions provided for in Article 10.1 of this Federal Law”.

Specific, express, unambiguous and separate consent is required

Under the new law, “data operators” that process PDD must obtain specific and express consent from data subjects to process the personal data, which includes any use or dissemination of the data. Notably, under Russian law, “data operators” designate both controllers and processors in the sense of the General Data Protection Regulation (GDPR), or businesses and service providers in the sense of the California Consumer Privacy Act (CCPA).

Specifically, under Article 10.1(1), the data operator must ensure that it obtains a separate consent dedicated to dissemination, distinct from the general consent for processing personal data or any other type of consent. Importantly, “under no circumstances” may an individual’s silence or inaction be taken to indicate consent to the processing of their personal data for dissemination, under Article 10.1(8).

In addition, the data subject must be given the possibility to select the categories of personal data they permit for dissemination. The data subject must also be able to establish “prohibitions on the transfer (except for granting access) of [PDD] by the operator to an unlimited number of persons, as well as prohibitions on processing or conditions of processing (except for access) of these personal data by an unlimited number of persons”, per Article 10.1(9). These prohibitions appear to refer to specific categories of personal data provided by the data subject to the operator (out of a set of personal data, some categories may be authorized for dissemination, while others may be prohibited from dissemination).
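As a purely hypothetical illustration (the record structure, field names and helper below are this sketch's own assumptions, not terminology from the law or from any Roskomnadzor guidance), an operator's internal consent record for PDD might capture the separate consent, the permitted categories and the per-category prohibitions along the following lines.

```python
# Purely hypothetical illustration: a minimal internal consent record an operator
# might keep to evidence a separate, express consent to dissemination under the
# amended Law No. 152-FZ. Field names and the helper are invented for this sketch.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class DisseminationConsent:
    data_subject_id: str
    given_at: datetime
    # Categories the data subject expressly permits to be disseminated
    # (e.g. "name", "city"); anything not listed is not covered.
    permitted_categories: set[str] = field(default_factory=set)
    # Categories for which the data subject prohibits transfer or further
    # processing by an unlimited number of persons (cf. Article 10.1(9)).
    prohibited_categories: set[str] = field(default_factory=set)

    def may_disseminate(self, category: str) -> bool:
        """Silence or inaction never counts as consent (cf. Article 10.1(8)),
        so dissemination requires an explicit, unrevoked permission."""
        return (
            category in self.permitted_categories
            and category not in self.prohibited_categories
        )


consent = DisseminationConsent(
    data_subject_id="subject-123",
    given_at=datetime.now(timezone.utc),
    permitted_categories={"name", "city"},
    prohibited_categories={"phone_number"},
)
print(consent.may_disseminate("name"))          # True
print(consent.may_disseminate("phone_number"))  # False
print(consent.may_disseminate("photo"))         # False: never expressly permitted
```

In this toy model, anything not expressly permitted defaults to non-disseminable, reflecting the rule that silence or inaction can never amount to consent.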

If the data subject discloses personal data to an unlimited number of persons without providing the operator with the specific consent required by the new law, then not only the original operator, but all subsequent persons or operators that processed or further disseminated the PDD, bear the burden of proof to “provide evidence of the legality of subsequent dissemination or other processing” under Article 10.1(2). This seems to imply that they must prove consent was obtained for dissemination (a probatio diabolica in this case). According to the Explanatory Note to the amendments, the intention was indeed to shift the burden of proving the legality of processing PDD from data subjects to data operators, since the Note makes specific reference to the fact that, before the amendments, the burden of proof rested with data subjects.

If the operator does not obtain the separate consent for dissemination of personal data but other conditions for lawfulness of processing are met, the personal data can still be processed by the operator, but without the right to distribute or disseminate them – Article 10.1(4). 

A Consent Management Platform for PDD, managed by the Roskomnadzor

The express consent to process PDD can be given either directly to the operator or through a special “information system” (which appears to be a consent management platform) operated by the Roskomnadzor, according to Article 10.1(6). The provisions related to setting up this consent platform for PDD will enter into force on July 1st, 2021. The Roskomnadzor is expected to publish technical details about the functioning of this consent management platform, and guidelines on how it is to be used, in the coming months. 

Absolute right to opt-out of dissemination of PDD

Notably, the dissemination of PDD can be halted at any time, on request of the individual, regardless of whether the dissemination is lawful, according to Article 10.1(12). This type of request is akin to a withdrawal of consent. The provision includes some requirements for the content of such a request: for instance, it must include the data subject’s contact information and list the personal data whose dissemination should be terminated. Consent to the processing of the personal data concerned is terminated once the operator receives the opt-out request – Article 10.1(13).

A request to opt out of having personal data disseminated to the public when this is done unlawfully (without the data subject’s specific, affirmative consent) can also be made through a court, as an alternative to submitting it directly to the data operator. In that case, the operator must terminate the transmission of, or access to, the personal data within three business days of receiving such a demand, or within the timeframe set in a court decision that has entered into force – Article 10.1(14).

A new criminal offense: The prohibition on disclosure of personal data about protected persons

Sharing personal data or information about intelligence officers and their personal property is now a criminal offense under the new rules, which amended the Criminal Code. The law obliges all operators of personal data, including government departments and mobile operators, to ensure the confidentiality of personal information concerning protected persons, their relatives, and their property. Under the new law, “protected persons” include employees of the Investigative Committee, the FSB, the Federal Protective Service, the National Guard, the Ministry of Internal Affairs, and the Ministry of Defense, as well as judges, prosecutors, investigators, law enforcement officers, and their relatives. Moreover, the list of protected persons can be further detailed by the head of the relevant state body in which the specified persons work.

Previously, the law allowed for the temporary prohibition of the dissemination of personal data of protected persons only in the event of imminent danger in connection with official duties and activities. The new amendments make it possible to take protective measures in the absence of a threat of encroachment on their life, health and property.

What to watch next: New amendments to the general Personal Data Law are on their way in 2021

There are several developments to follow in this fast-changing environment. First, at the end of January, the Russian President gave the government until August 1 to create a set of rules for foreign tech companies operating in Russia, including a requirement to open branch offices in the country.

Second, a bill (No. 992331-7) proposing new amendments to the overall framework of the Personal Data Law (No. 152-FZ) was introduced in July 2020 and was the subject of a Resolution passed by the State Duma on February 16, which allowed amendments to be submitted until March 16. The bill is on the agenda for a potential vote in May. The proposed changes would expand the possibility to obtain valid consent through unique identifiers currently not accepted by the law, such as unique online IDs; modify purpose limitation; introduce a possible certification scheme for effective methods to erase personal data; and grant the Roskomnadzor new competences to establish requirements for deidentification of personal data and specific methods for effective deidentification.

If you have any questions on Global Privacy and Data Protection developments, contact Gabriela Zanfir-Fortuna at [email protected]

Five Big Questions (and Zero Predictions) for the U.S. Privacy and AI Landscape in 2026

Introduction

For better or worse, the U.S. is heading into 2026 under a familiar backdrop: no comprehensive federal privacy law, plenty of federal rumblings, and state legislators showing no signs of slowing down. What has changed is just how intertwined privacy, youth, and AI policy debates have become, whether the issue is sensitive data, data-driven pricing, or the increasingly spirited discussions around youth online safety. And with a new administration reshuffling federal priorities, the balance of power between Washington and the states may shift yet again.

In a landscape this fluid, it’s far too early to make predictions (and unwise to pretend otherwise). Instead, this post highlights five key questions that will influence how legislators and regulators navigate the evolving intersection of privacy and AI policy in the year ahead.

  1. No new comprehensive privacy laws in 2025: A portent of stability, or will 2026 increase legal fragmentation?

One of the major privacy storylines of 2025 is that no new state comprehensive privacy laws were enacted this year. Although that is a significant departure from the pace set in prior years, it is not due to an overall decrease in legislative activity on privacy and related issues. FPF’s U.S. Legislation team tracked hundreds of privacy bills, nine states amended their existing comprehensive privacy laws, and many more enacted notable sectoral laws dealing with artificial intelligence, health, and youth privacy and online safety. Nevertheless, the number of comprehensive privacy laws remains fixed for now at 19 (or 20, for those who count Florida). 

Reading between the lines, there are several things this could mean for 2026. Perhaps the lack of new laws this year was more due to chance than anything else, and next year will return to business as usual. After all, Alabama, Arkansas, Georgia, Massachusetts, Oklahoma, Pennsylvania, Vermont, and West Virginia all had bills reach a floor vote or cross over to the other chamber, and some of those bills have been carried over into the 2026 legislative session. Or perhaps this indicates that a critical mass of state laws has been reached and we should expect stability, at least in terms of which states do and do not have comprehensive privacy laws. 

A third possibility is that next year promises something different. Although the landscape has come to be dominated by the “Connecticut model” for privacy, a growing bloc of other New England states is experimenting with bolder, more restrictive frameworks. Vermont, Maine, and Massachusetts all have live bills going into the 2026 legislative session that would, if enacted, represent some of the strictest state privacy laws on the books, many drawing heavily from Maryland’s substantive data minimization requirements. Vermont’s proposal would also include a private right of action, and Massachusetts’ proposals, S.2619 and H.4746, would ban the sale of sensitive data and targeted advertising to minors. State privacy law is clearly at an inflection point, and what these states do in 2026, including whether they move in lockstep, could prove influential for the broader state privacy landscape. 

— Jordan Francis

  2. Are age signals the future of youth online protections in 2026?

As states have ramped up youth online privacy and safety legislation in recent years, a perennial question emerges each legislative session like clockwork: how can entities apply protections to minors if they don’t know who is a minor? Historically, some legislatures have tried to solve this riddle with different approaches to knowledge standards that define when entities know, or should know, whether a user is a minor, while others have tested age assurance requirements placed at the point of access to covered services. In 2025, however, that experimentation took a notable turn with the emergence of novel “age signals” frameworks. 

Unlike earlier models that focused on service-level age assurance, age signals frameworks seek to shift age determination responsibilities upstream in the technology stack, relying on app stores or operating system providers to generate and transmit age signals to developers. In 2025, lawmakers enacted two distinct versions of this approach: the App Store Accountability Act (ASAA) model in Utah, Texas, and Louisiana; and the California AB 1043 model. 

While both frameworks rely on age signaling concepts, they diverge significantly in scope and regulatory ambition. The ASAA model assigns app stores responsibility for age verification and parental consent, and requires them to send developers age signals that indicate (1) users’ ages and (2), for minors, whether parental consent has been obtained. These obligations introduce new and potentially significant technical challenges for companies, which must integrate age-signaling systems while reconciling these obligations with requirements under COPPA and state privacy laws. Meanwhile, Texas’ ASAA law is facing two First Amendment challenges in federal court, with plaintiffs seeking preliminary injunctions before the law’s January 1 effective date. 

California’s AB 1043 represents a different approach. The law requires operating system (OS) providers to collect age information during device setup and share this information with developers via the app store. This law does not require parental consent or additional substantive protections for minors; its sole purpose is to enable age data sharing to support compliance with laws like the CCPA and COPPA. The AB 1043 model, while still mandating novel age signaling dynamics between operating system providers, app stores, and developers, could be simpler to implement and received notable support from industry stakeholders prior to enactment.
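To make the data flow concrete, the sketch below imagines what an age-signal payload passed from an app store or OS provider to a developer might look like; the schema, field names, and age brackets are illustrative assumptions only, as neither the ASAA laws nor AB 1043 prescribes a technical format.

```typescript
// Hypothetical sketch of an age signal transmitted to a developer under an
// ASAA- or AB 1043-style framework. All fields are illustrative assumptions.

type AgeCategory = "under13" | "13to15" | "16to17" | "adult";

interface AgeSignal {
  userId: string;                     // pseudonymous identifier scoped to the app
  ageCategory: AgeCategory;           // age bracket rather than a precise birthdate
  parentalConsentObtained?: boolean;  // relevant only for minors under the ASAA model
  issuedAt: string;                   // ISO 8601 timestamp of signal issuance
  issuer: "app_store" | "os_provider";
}

// A developer-side check that gates features requiring verified parental consent.
function requiresParentalConsentGate(signal: AgeSignal): boolean {
  return signal.ageCategory !== "adult" && signal.parentalConsentObtained !== true;
}
```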

So what might one ponder, but not dare predict, about the future of age signals in 2026? Two developments bear watching. First, the highly anticipated decision on the plaintiffs’ requests for an injunction against the Texas law may set the direction for how aggressively states replicate this model, though momentum may continue regardless, particularly given federal interest reflected in the House Energy & Commerce Committee’s introduction of H.R. 3149 to nationalize the ASAA framework. Second, the California AB 1043 model, which has not yet been challenged in court, may gain traction in 2026 as a more constitutionally durable option. For states that already have robust protections for minors in existing privacy law, the AB 1043 model may prove an attractive way to facilitate compliance with those obligations.

– Daniel Hales

  3. Is 2026 shaping up to be another “Year of the Chatbots,” or is a legislative plot twist on the horizon?

If 2025 taught us anything, it’s that chatbots have stepped out of the supporting cast and into the starring role in AI policy debates. This year marked the first time multiple states (including Utah, New York, California, and Maine) enacted laws that explicitly address AI chatbots. Much of that momentum followed a wave of high-profile incidents involving “companion chatbots,” systems designed to simulate emotional relationships. Several families alleged that these tools encouraged their children to self-harm, sparking litigation, congressional testimony, and inquiries from the Federal Trade Commission (FTC), and carrying chatbots to the forefront of policymakers’ minds.

States responded quickly. California (SB 243) and New York (S-3008C) enacted disclosure-based laws requiring companion chatbot operators to maintain safety protocols and clearly tell users when they are interacting with AI, with California adding extra protections for minors. Importantly, neither state opted for a ban on chatbot use, focusing instead on transparency and notice rather than prohibition.

And the story isn’t slowing down in 2026. Several states have already pre-filed chatbot bills, most centering once again on youth safety and mental health. Some may build on California’s SB 243 with stronger youth-specific requirements or tighter ties to age assurance frameworks. Other states may broaden the conversation, looking at chatbot use among older adults or in education and employment, as well as diving deeper into questions of sensitive data.

The big question for the year ahead: Will policymakers stick with disclosure-first models, or pivot toward outright use restrictions on chatbots, especially for minors? Congress is now weighing in with three bipartisan proposals (the GUARD Act, the CHAT Act, and the SAFE Act), ranging from disclosure-forward approaches to full restrictions on minors’ access to companion chatbots. With public attention high and lawmakers increasingly interested in action, 2026 may be the year Congress steps in, potentially reshaping, or even preempting, state frameworks adopted in 2025.

– Justine Gluck

4. Will health and location data continue to dominate conversations around sensitive data in 2026?

While 2025 did not produce the hoped-for holiday gift of compliance clarity for sensitive or health data, the year did supply flurries, storms, light dustings, and drifts of legislative and enforcement activity. In 2025, states focused heavily on health inferences, neural data, and location data, often targeting the sale and sharing of this information. 

For health, the proposed New York Health Information Privacy Act captured headlines and left us waiting. That bill (still active at the time of writing) broadly defined “regulated health information” to include data such as location and payment information. It included a “strictly necessary” standard for the use of regulated health information and unique, heightened consent requirements. Health data also remains a topic of interest at the federal level. Senator Cassidy (R-LA) recently introduced the Health Information Privacy Reform Act (HIPRA / S. 3097), which would expand federal health privacy protections to new technologies such as smartwatches and health apps. Enforcers, too, got in on the action: the California DOJ completed a settlement concerning the disclosure of consumers’ viewing history of web pages that can give rise to sensitive health inferences.

Location was another sensitive data category singled out by lawmakers and enforcers in 2025. In Oregon, HB 2008 amended the Oregon Consumer Privacy Act to ban the sale of precise location data (as well as the personal data of individuals under the age of 16). Colorado also amended its comprehensive privacy law to add precise location data (defined as within 1,850 feet) to the definition of sensitive data, subjecting it to opt-in consent requirements. Other states, such as California, Illinois, Massachusetts, and Rhode Island, also introduced bills restricting the collection and use of location data, often by requiring heightened consent for companies to sell or share such data (if not outright banning it). As with health data, enforcers were also looking at location data practices. In Texas, we saw the first lawsuit under a state comprehensive privacy law, and it focused on the collection and use of location data (namely, inadequate notice and failure to obtain consent). The FTC was likewise scrutinizing location data practices throughout the year.

Sensitive data—health, location, or otherwise—is unlikely to get less complex in 2026. New laws are being enacted and enforcement activity is heating up. The regulatory climate is shifting—freezing out old certainties and piling on high-risk categories like health inferences, location data, and neural data. In light of drifting definitions, fractal requirements, technologist-driven investigations, and slippery contours, robust data governance may offer an option to glissade through a changing landscape. Accurately mapping data flows and having ready documentation seems like essential equipment for unfavorable regulatory weather. 

— Jordan Wrigley, Beth Do & Jordan Francis

5. Will a federal moratorium steer the AI policy conversation in 2026?

If there was one recurring plot point in 2025, it was the interest at the White House and among some congressional leaders in hitting the pause button on state AI regulation. The year opened with lawmakers attempting to tuck a 10-year moratorium on state AI laws into the “One Big Beautiful Bill,” a move that would have frozen enforcement of a wide swath of state frameworks. That effort fizzled due to pushback from a range of Republican and Democratic leaders, but the idea didn’t: similar language resurfaced during negotiations over the annual defense spending bill (NDAA). Ultimately, in December, President Trump signed an executive order, “Ensuring a National Policy Framework for Artificial Intelligence,” aimed at curbing state AI regulations deemed excessive through an AI Litigation Task Force and restrictions on funding for states enforcing AI laws that conflict with the principles outlined in the EO. This EO tees up a moment where states, agencies, and industry may soon be navigating not just compliance with new laws, but also federal challenges to how those laws operate (as well as federal challenges to the EO itself).  

A core challenge of the EO is the question of what, exactly, qualifies as an “AI law.” While standalone statutes such as Colorado’s AI Act (SB 205) are explicit targets of the EO’s efforts, many state measures are not written as AI-specific laws at all. Instead, they are embedded within broader privacy, safety, or consumer protection frameworks. Depending on how “AI law” is construed, a wide range of existing state requirements could fall within scope and potentially face challenge, including AI-related updates to existing civil rights or anti-discrimination statutes; privacy law provisions governing automated decision-making, profiling, and the use of personal data for AI training; and criminal statutes addressing deepfakes and non-consensual intimate images. 

Notably, however, the EO also identifies specific areas where future federal action would not preempt state laws, including child safety protections, AI compute and data-center infrastructure, state government procurement and use of AI, and (more open-endedly) “other topics as shall be determined.” That last carveout leaves plenty of room for interpretation and makes clear that the ultimate boundaries of federal preemption are still very much in flux. In practice, what ends up in or out of scope will hinge on how the EO’s text is interpreted and implemented. Technologies like chatbots highlight this ambiguity, as they can simultaneously trigger child safety regimes and AI governance requirements that the administration may seek to constrain.

That breadth raises another big question for 2026: As the federal government steps in to limit state AI activity, will a substantive federal framework emerge in its place? Federal action on AI has been limited so far, which means a pause on state laws could arrive without a national baseline to fill the gaps, a notable departure from traditional preemption, where federal standards typically replace state ones outright. At the same time, Section 8(a) of the EO signals the Administration’s commitment to work with Congress to develop a federal legislative framework, while the growing divergence in state approaches has created a compliance patchwork that organizations operating nationwide must navigate.

With this EO, the role of state versus federal law in technology policy is likely to be the defining issue of 2026, with the potential to reshape not only state AI laws but the broader architecture of U.S. privacy regulation.

— Tatiana Rice & Justine Gluck

Youth Privacy in Australia: Insights from National Policy Dialogues

Throughout the fall of 2024, the Future of Privacy Forum (FPF), in partnership with the Australian Academic and Research Network (AARNet) and the Australian Strategic Policy Institute (ASPI), convened a series of three expert panel discussions across Australia exploring the intersection of privacy, security, and online safety for young people. The event series built on the success of a one-day event FPF hosted in fall 2023 on privacy, safety, and security in relation to the industry standards promulgated by the Office of the eSafety Commissioner (eSafety).

These discussions took place in Sydney, Melbourne, and Canberra, and brought together leading academics, government representatives, industry voices, and civil society organizations. The discussions provide insight into the Australian approach to improving online experiences for young people through law and regulation, policy, and education. By bringing together experts across disciplines, the event series aimed to bridge divides between privacy, security, and safety conversations, and surface key tensions and opportunities for future work. This report summarizes key themes that emerged across these conversations for policymakers to consider as they develop forward-looking policies that support young people’s wellbeing and rights online.

FPF Releases Updated Report on the State Comprehensive Privacy Law Landscape

The state privacy landscape continues to evolve year to year. Although no new comprehensive privacy laws were enacted in 2025, nine states amended their existing laws and regulators increased enforcement activity, providing further clarity (and new questions) about the meaning of these laws. Today, FPF is releasing its second annual report on the state comprehensive privacy law landscape, Anatomy of a State Comprehensive Privacy Law: Charting The Legislative Landscape.

The updated version of this report builds on last year’s work and incorporates developments from the 2025 legislative session. Between 2018 and 2024, nineteen U.S. states enacted comprehensive consumer privacy laws. As the final state legislatures closed for the year, 2025 broke that streak, with no new laws enacted. Nevertheless, nine U.S. states—California, Colorado, Connecticut, Kentucky, Montana, Oregon, Texas, Utah, and Virginia—passed amendments to existing laws this year. This report summarizes the legislative landscape. The core components that comprise the “anatomy” of a comprehensive privacy law include: 

The report concludes with an overview of ongoing legislative trends: 

This report highlights the strong commonalities and the nuanced differences between the various state laws, showing how they can exist within a common, partially interoperable framework while still creating compliance difficulties for companies within their overlapping ambits. Until a federal privacy law materializes, this ever-changing state landscape will continue to evolve as lawmakers iterate on existing frameworks and add novel obligations, rights, and exceptions in response to changing societal, technological, and economic trends.

Future of Privacy Forum Appoints Matthew Reisman as Vice President of U.S. Policy

Washington, D.C. — (December 9, 2025) — The Future of Privacy Forum (FPF), a global non-profit focused on data protection, AI, and data governance, has appointed Matthew Reisman as Vice President, U.S. Policy. 

Reisman brings extensive experience in privacy policy, data protection, and AI governance to FPF. He most recently served as a Director of Privacy and Data Policy at the Centre for Information Policy Leadership (CIPL), where he led research, public engagement, and programming on topics including accountable development and deployment of AI, privacy and data protection policy, cross-border data flows, organizational governance of data, and privacy-enhancing technologies (PETs). Prior to joining CIPL, he was a Director of Global Privacy Policy at Microsoft, where he helped shape the company’s approach to privacy and data policy, including its intersections with security, digital safety, trade, data governance, cross-border data flows, and emerging technologies such as AI and 5G. His work included close collaboration with Microsoft field teams and engagement with policymakers and regulators across Asia-Pacific, Latin America, the Middle East, Africa, and Europe.

“Matthew is joining FPF with a rare combination of policy expertise, practical experience, and a clear commitment to thoughtful privacy leadership,” said Jules Polonetsky, CEO of FPF. “He understands our mission, our community, and the complexities of data governance, which makes him an outstanding fit for this role. We’re delighted to have him on board.”

In his role as Vice President, U.S. Policy, Reisman will oversee FPF’s U.S. policy work, including legislative and regulatory engagement, research, and initiatives addressing emerging data protection, AI, and technology challenges. He will also lead FPF’s experts across youth privacy, data governance, health, and other portfolios to advance key FPF projects and priorities.

“FPF has long been a leader for thoughtful, pragmatic privacy and data policy and analysis,” said Reisman. “I’m honored to join the team and excited to help advance FPF’s mission of shaping smart policy that safeguards individuals and supports innovation.”

FPF welcomes Reisman at a critical time for data governance, as Congress, federal agencies, and states increase their focus on artificial intelligence, children’s privacy, data security, and privacy legislation. FPF’s U.S. policy team recently published an analysis of the package of youth privacy and online safety bills introduced in the U.S. House in November, as well as a landscape analysis of state chatbot legislation.

To learn more about the Future of Privacy Forum, visit fpf.org

###

Brussels Privacy Symposium 2025 Report – A Data Protection (R)evolution?

Co-Author: Margherita Corrado

Editor: Bianca Ioana-Marcu

This year’s Brussels Privacy Symposium, held on 14 October 2025, brought together stakeholders from across Europe and beyond for a conversation about the GDPR’s role within the EU’s evolving digital framework. Co-organized by the Future of Privacy Forum and the Brussels Privacy Hub of the Vrije Universiteit Brussel, the ninth edition convened experts from academia, data protection authorities, EU institutions, industry, and civil society to discuss Europe’s shifting regulatory landscape under the umbrella title of A Data Protection (R)evolution?

The opening keynote, delivered by Ana Gallego (Director General, DG JUST, European Commission), explored how the GDPR continues to anchor the EU’s digital rulebook even as the European Commission pursues targeted simplification measures, and how the GDPR interacts with other legislative instruments such as the DSA, the DGA, and the AI Act, framing them not as overlapping frameworks but as complementary pillars that reinforce the EU’s evolving digital architecture. 

Across the three expert panels, the guest speakers underlined a shift from rewriting the GDPR to refining its implementation through targeted adjustments, stronger regulatory cooperation, and clarified guidance on issues such as legitimate interests for AI training and the CJEU decision on pseudonymization. The final panel placed Data Protection Authorities at the center of Europe’s future in AI governance, reinforcing GDPR safeguards and guiding AI Act harmonization. 

A series of lightning talks looked at the challenges posed by large language models and automated decision-making, emphasizing the need for lifecycle-based risk management and robust oversight. In a guest talk, Professor Norman Sadeh addressed the growing role of AI agents and the need for interoperable standards and protocols to support user autonomy in increasingly automated environments.

European Data Protection Supervisor Wojciech Wiewiórowski and Professor Gianclaudio Malgieri closed the ninth edition of the Symposium with a dialogue reflecting on the need to safeguard fundamental rights amid ongoing calls for simplification.

In the Report of the Brussels Privacy Symposium 2025, readers will find insights from these discussions, along with additional highlights from the panels, workshops, and lightning talks that delved into the broader EU digital architecture. 

FPF releases Issue Brief on Brazil’s Digital ECA: new paradigm of safety & privacy for minors online

This Issue Brief analyzes Brazil’s recently enacted children’s online safety law, summarizing its key provisions and how they interact with existing principles and obligations under the country’s general data protection law (LGPD). It provides insight into an emerging paradigm of protection for minors in online environments through an innovative and strengthened institutional framework, focusing on how it will align with and reinforce data protection and privacy safeguards for minors in Brazil and beyond.

This Issue Brief summarizes the Digital ECA’s most relevant provisions, including:

● Broad extraterritorial scope: the law applies to all information technology products and services aimed at or likely to be accessed by minors, with extraterritorial application.
● “Likelihood of access” of a technology service or product as a novel standard, composed of three elements: attractiveness, ease of use, and potential risks to minors.
● Provisions governed by the principle of the “best interest of the child,” requiring providers to prioritize the rights, interests, and safety of minors from the design stage and throughout their operations.
● Online safety by design and by default, mandating providers to adopt protective measures by design and monitor them throughout the operation of the service or product, including age verification mechanisms and parental supervision tools.
● Age rating as a novelty, requiring providers to maintain age rating policies and continuously assess their content against such ratings.
● Enforcement of the law is assigned to the ANPD, which has been transformed into a regulatory agency with increased and strengthened powers to monitor compliance with the law, in addition to its responsibilities under the data protection law.
● Significant sanctions under the Digital ECA, which can range from warnings and fines up to 10% of a company’s revenue to the permanent suspension of activities in Brazil.

What’s New in COPPA 2.0? A Summary of the Proposed Changes

On November 25th, the U.S. House Energy and Commerce Committee introduced a comprehensive bill package to advance child online privacy and safety, which included its own version of the Children and Teens’ Online Privacy Protection Act (“COPPA 2.0”) to modernize COPPA. First enacted in 1998, the Children’s Online Privacy Protection Act (COPPA) is a federal law that provides important online protections for children’s data. Now that the law is nearly 30 years old, many advocates, stakeholders, and Congressional lawmakers are pushing to amend COPPA to ensure its data protections reflect and befit the online environments youth experience today. 

The new House version of COPPA 2.0, introduced by Reps. Tim Walberg (R-MI) and Laurel Lee (R-FL), would amend the law by adding new definitions, revising the knowledge standard, augmenting core requirements, and adding new substantive provisions. Although the new COPPA 2.0 introduction marks meaningful progress in the House, it is not the first attempt to update COPPA. The Senate has pursued COPPA reforms since as early as 2021, and Senators Markey (D-MA) and Cassidy (R-LA) most recently reintroduced their version of this framework in March 2025, one that is distinguishable from the new House version in several meaningful ways. Note: A redline comparison of the current Senate and House versions of COPPA 2.0 details the exact deviations between the two proposals.

Putting all of this dynamic COPPA 2.0 legislative activity into focus, this blog post summarizes notable changes to COPPA under the House proposal and notes key points of divergence from the long-standing Senate framework. In sum, a few key takeaways include:

Scope and Definitions 

While there are many technical amendments proposed in the House COPPA 2.0 legislation to clarify existing provisions in COPPA, there are four key additions and modifications in the bill that significantly alter its scope and application. First, the bill expands protections to teens. While current COPPA protections cover only children under the age of 13, COPPA 2.0 would expand protections to include teens under the age of 17. 

Second, the bill would revise the definition of “personal information” to match the expanded interpretation established through FTC regulations, which includes subcategories such as geolocation data, biometric identifiers, and persistent identifiers (e.g. IP address and cookies), among others. The proposed definitions for these categories largely follow the COPPA rule definitions, except for a notable difference to the definition of biometric identifiers. 

Specifically, COPPA 2.0 includes a broader definition of biometric identifiers by removing the COPPA Rule requirement that processed characteristics “can be used” for individual identification. Therefore, under the new text, any processing of an individual’s biological or behavioral traits, such as fingerprints, voiceprints, retinal scans, facial templates, DNA, and gait, would qualify as a biometric identifier, even if the information is not capable of or intended for identifying an individual. The broader definition of biometric identifiers embraced by the House may have noteworthy implications for state privacy laws, which typically limit definitions of biometric information to data “that is used” to identify an individual. In contrast to the House approach, the Senate proposal for COPPA 2.0 adopts a definition of biometric identifiers that is limited to characteristics “that are used” to identify an individual.

Third, COPPA 2.0 would formally codify the long-standing school consent exception used in COPPA compliance and FTC guidance for over a decade. As a result, operators acting under an agreement with educational agencies or institutions would be exempted from the law’s parental consent requirements with respect to students, though notably, the proffered definition of “educational agency or institution” would only capture public schools, not private schools and institutions.

Lastly, one of the most significant proposed modifications to COPPA’s scope involves the knowledge standard. Currently, COPPA requires operators to comply with the law’s obligations when they have actual knowledge that they are collecting the personal information of children under 13 or when they operate a website or online service that is directed towards children. The House version of COPPA 2.0 would establish a two-tiered standard that largely maintains the actual knowledge threshold for operators, except for “high-impact social media companies,” which would be subject to an actual knowledge or willful disregard standard. The House’s use of an actual knowledge or willful disregard standard for large social media companies tracks with the emerging trend in some state privacy laws that provide heightened online protections for youth, which have more broadly employed the actual knowledge or willful disregard standard. In contrast, the Senate COPPA 2.0 proposal includes a novel and untested “actual knowledge or knowledge fairly implied on the basis of objective circumstances” standard.

Substantive Obligations and Rights 

The House version of COPPA 2.0 would both augment existing COPPA protections and add new substantive obligations and provisions significant for compliance. Notable amendments proposed in this new legislation to augment COPPA protections include:

In addition to amendments that bolster existing COPPA protections, several amendments also add notable substantive provisions:

Looking Ahead

Enacting COPPA 2.0 would expand online privacy protections for children and teens, and the fact that both chambers have introduced proposals underscores the growing legislative momentum to enshrine stronger youth privacy protections at the federal level. Yet despite the Congressional motivation to advance legislation on youth privacy and safety this session, it is notable that the House version of COPPA 2.0 does not have the same bipartisan support as its Senate counterpart. What the lack of bipartisan support will mean for the future of the House’s COPPA 2.0 proposal remains subject to speculation. However, FPF will continue to monitor the development of COPPA 2.0 legislation alongside the progression of other bills included in the robust House Energy & Commerce youth online privacy and safety legislative package.

FPF Holiday Gift Guide for AI-Enabled, Privacy-Forward AgeTech

On Cyber Monday, giving supportive technology to an older loved one or caregiver is a great option. Finding the perfect holiday gift for an older adult who values their independence can be a challenge, and this year it might be worth exploring the exciting world of AI-enabled AgeTech. It’s not only about gadgets; it’s about giving the gift of autonomy, safety, and a little bit of futuristic fun. Here are three types of AI-enabled AgeTech to consider, along with information to help pick the right privacy fit for the older adult and/or caregiver in your life this holiday season.

Mobility and Movement AgeTech

This category is all about keeping the adventure going, whether it’s a trip across town or safely navigating the living room. These gifts use AI-driven features to support physical activity and reduce the worry of falls or isolation. Think of them as the ultimate support tech for staying active.

For those who need a little help around the house, AI-powered home assistant robots can fetch items on command. For AI-driven transportation, AI is used to find the best route to a destination, match riders with the best-suited drivers, and interpret voice commands. In wearables, AI continuously analyzes data like gait and heart rate to learn baseline health patterns, allowing it to detect a potential fall or an abnormal health trend and generate emergency alerts. In future home assistant robots, AI might be the brain behind the natural language processing that understands commands and the mapping needed to safely navigate a home and pick up tissues, medication bottles, or other small items.
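As a rough illustration of the baseline-and-alert pattern described above, the sketch below flags heart-rate readings that deviate sharply from a rolling baseline; it is a simplified, hypothetical example, not a description of any particular product’s algorithm.

```typescript
// Simplified, hypothetical sketch of baseline learning and anomaly alerting
// for a wearable heart-rate stream. Real products use far richer models
// (gait, accelerometry, context), but the basic pattern is similar.

class HeartRateMonitor {
  private readings: number[] = [];

  constructor(private windowSize = 100, private zThreshold = 3) {}

  // Add a reading and report whether it looks anomalous relative to the baseline.
  addReading(bpm: number): boolean {
    const anomalous = this.isAnomalous(bpm);
    this.readings.push(bpm);
    if (this.readings.length > this.windowSize) this.readings.shift();
    return anomalous;
  }

  private isAnomalous(bpm: number): boolean {
    if (this.readings.length < 10) return false; // not enough baseline yet
    const mean = this.readings.reduce((a, b) => a + b, 0) / this.readings.length;
    const variance =
      this.readings.reduce((a, b) => a + (b - mean) ** 2, 0) / this.readings.length;
    const std = Math.sqrt(variance);
    return std > 0 && Math.abs(bpm - mean) / std > this.zThreshold;
  }
}
```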

Gift Guide: Mobility and Movement AgeTech

These capabilities rely on personal data to understand where a person is on a map or in their house. These devices collect location or spatial data to help individuals know where they are or to let others know. Sometimes these data are collected in real time as the person moves, for features like “trip share” that let others follow along. Many mobility AgeTech gifts may also collect body movement data, such as steps, balance, gait patterns, and alerts for inactivity or potential falls.

For any AgeTech gift recipient, be clear that, in some cases, AI may need to continuously analyze personal habit and location data to learn baseline patterns, to be useful, and even to reduce bias. The recipient must consent to this ongoing collection and processing of data if they or their caregivers want to use certain features. Given the sensitivity of the data, how private information might be shared with others, and how the company might share the data under its policies (especially if the AI integrates with other services), should be transparent and easy to understand.


State Consumer AI and Data Privacy Laws

Location data is protected under several state privacy laws, including when it is collected by AI. In states that have enacted privacy laws, AgeTech devices or apps may need to request special consent to collect or process “sensitive” data, including location data, for certain features or functions. Currently, 20 U.S. states have enacted a consumer data privacy law. These laws generally provide consumers with rights to access, delete, and correct their personal data, and provide special protections for sensitive data, such as biometric identifiers, precise geolocation, and certain financial identifiers.

In 2025, state legislatures passed a number of AI-focused bills that covered issues such as chatbots, deepfakes, and more. These existing and proposed regulations may have impacts on AgeTech design and practices, as they determine safeguards and accountability mechanisms that developers must incorporate to ensure AgeTech tools remain compliant and safe for older adults.

Connection and Companionship AgeTech

These gifts leverage AI to offer sympathetic companionship, reduce the complexities of care coordination, and foster connection between individuals and their communities of care. They are specifically engineered to bridge the distance in modern caregiving, providing an essential safeguard against social isolation and loneliness.

Gift givers will find a mix of helpful tools here, like typical AI-driven calendar apps repurposed as digital care hubs for all family members to coordinate; simplified communication devices (tablets with custom, easy interfaces) paired with a friendly AI helper for calls and reminders; and even AI animatronic pets that respond to touch and voice, offering therapeutic benefits without the chores associated with a real pet. 

Gift Guide: Connection and Companionship AgeTech

These devices may log personal routines, capturing medication times, appointments, and daily habits. They may also collect voice interactions with AI helpers, and may gather data related to mood, emotions, pain, or cognition, as well as certain body-related data (including neural data), through check-ins.

Gift-givers should discuss potential gifts with older adults and consider the privacy of others who might be unintentionally surveilled, such as friends, workers, or bystanders. Since caregivers often view shared calendars and activity logs, ensure access controls are distinct, role-based, and align with the older adult’s preferences. The older adult should control data access (e.g., medical routines vs. social events). Be transparent about whether AI companions record conversations or check-in responses, and how that sensitive personal data is stored and analyzed.


Health and Wellness Data Protections

The Health Insurance Portability and Accountability Act (HIPAA) does not protect all health data. HIPAA generally applies to health care professionals and to plans providing payment and insurance for health care services. So an AI companion health device provided as part of treatment by your doctor will be covered by HIPAA, but one sold directly to a consumer is less likely to be protected by it. However, some states have recently passed laws providing consumers certain rights over general health information that is not protected by HIPAA. 

Daily Life and Task Support AgeTech

This tech category covers life’s essentials through AI-driven automation, focusing on managing finances, medication, and overall health with intelligent, passive monitoring. It’s about creating powerful, often invisible, digital safeguards, ranging from financial safeguard tools to connected health devices integrated into AI-driven homes, that offer profound peace of mind by anticipating and flagging risks.

This is where gift givers can find presents to help protect older adults and caregivers from increasingly common AI-driven scams, with tools that leverage machine learning to identify and flag suspicious activity targeting older populations. Look for AI financial tools that watch bank accounts for potential fraud or unusual activity, connected home devices that passively check for health changes, and AI-driven pill dispensers that light up and accurately sort medication. 

Gift Guide: Daily Life and Task Support AgeTech

The sensitive data collected by these devices can be a major target of scammers and other bad actors. Some tools may collect transaction history and alerts for “unusual” spending to help reduce scam risks. Other AgeTech may log medication “adherence” (timestamps of dispensed doses) or need an older adult’s medical history to work well. In newer systems with advanced identification technologies, it could also include biometric data such as fingerprints or face scans used to ensure safe access to individual accounts.

Gift givers need to consider how the AI determines something is “unusual” to avoid unnecessary worry from false alarms in banking or health. For devices like AI-driven pill dispensers, also ask what happens to the device’s functionality if the subscription is canceled. For passive monitoring devices, ensure meaningful consent: the older adult must have an explicit, ongoing understanding of, and consent to, the continuous collection of highly sensitive data throughout daily life, from bathroom trips to spending habits. 


Financial Data Protections

Those giving gifts of this kind may want to consult with a trusted financial institution or professional before purchasing. If a financial-monitoring tool is provided by a non-bank (such as a consumer-facing fintech app) rather than a regulated financial institution, consumer financial protections may not apply, even if the data is still highly sensitive. State privacy laws and FTC authority may offer protections, though their scope can vary. 

AI-enabled AgeTech Gift Checklist

We suggest evaluating AgeTech products through a practical lens:

Since multiple state and federal laws create a mix of protections, gift givers need to take an extra step to understand those protections and choose the best balance of safety, privacy, and support to go with their AgeTech present. A national privacy law could resolve the inconsistencies and gaps, but does not appear to be on the agenda in Congress.

U.S. population demographics point to an aging population, and AI-enabled AgeTech has shown promise in supporting the independence of older adults. Gift givers have an opportunity to offer tools that support independence and strengthen autonomy, especially as AI continues to be adapted to older adults’ specific needs and preferences. In a recent national poll by the University of Michigan, 96% of older adults who used AI-powered home security devices and systems, and 80% who used AI-powered voice assistants in the past year, said these devices help them live independently and safely in their home.

Whether the device helps someone move confidently, stay socially engaged, or manage essential tasks, each category relies on sensitive personal data that must be handled thoughtfully. By thinking through how these technologies work, what information they collect, and the rights and safeguards that protect that data, you can ensure your presents are empowering and future-thinking.

Happy Holidays from the Future of Privacy Forum!

GPA 2025: AI development and human oversight of decisions involving AI systems were this year’s focus for Global Privacy regulators

The 47th Global Privacy Assembly (GPA), an annual gathering of the world’s privacy and data protection authorities, took place between September 15 and 19, 2025, hosted by South Korea’s Personal Information Protection Commission in Seoul. Over 140 authorities from more than 90 countries are members of the GPA, and its annual conferences serve as an excellent bellwether for the priorities of the global data protection and privacy regulatory community, providing the gathered authorities an opportunity to share policy updates and priorities, collaborate on global standards, and adopt joint resolutions on the most critical issues in data protection.  

This year, the GPA adopted three resolutions after completing its five-day agenda, including two closed-session days for members and observers only: 

The first key takeaway from the results of GPA’s Closed Session is a substantial difference in the scope of the resolutions relative to prior years. In contrast to the five resolutions adopted in 2024 or the seven adopted in 2023, which covered a wide variety of data protection topics from surveillance to the use of health data for scientific research, the 2025 resolutions are much more narrowly tailored and primarily focused on AI, with a pinch of digital literacy. Taken together with the meeting’s content and agenda, these resolutions provide insight into the current priorities of the global privacy regulatory community – and perhaps unsurprisingly, reflect a much-narrowed focus on AI issues compared to previous years. 

Across all three resolutions adopted in 2025, a few core issues become apparent: 

It is also interesting to explore what topics the Assembly did not address. A deeper dive into each resolution is illustrative of some of the shared goals of the global privacy regulatory community, particularly in an age where major tech policymakers in the U.S., the European Union, and around the world are overwhelmingly focused on AI. It should be noted that the three resolutions passed quasi-unanimously, with only one abstention among GPA members noted in the public documents (the U.S. Federal Trade Commission).

  1. Resolution on the collection, use and disclosure of personal data to pre-train, train and fine-tune AI models

The first resolution, covering the collection, use, and disclosure of personal data to pre-train, train, and fine-tune AI models, was sponsored by the Office of the Australian Information Commissioner and co-sponsored by 15 other GPA member authorities. The GPA resolved on four specific steps after articulating a larger number of underlying concerns – specifically, that:

The specific resolved steps indicate a particular focus on generative AI technologies and a recognition that, in order to be effective, regulatory standards will likely need to be consistent across international boundaries. Three of the four steps also emphasize cooperation among international privacy enforcement authorities, although notably the resolution does not include any specific proposals for adopting shared terminology. 

The broader document relies on a rights-based understanding of data protection and notes several times that the untrammeled collection and use of personal data in the development of AI technologies may imperil the fundamental right to privacy, while casting the development of AI technologies in a rights-consistent manner as “ensur[ing] their trustworthiness and facilitat[ing] their adoption.” The resolution repeatedly emphasizes that all stages of the algorithmic lifecycle are important in the context of processing personal data. 

The resolution also provides eight familiar data protection principles that are reminiscent of the OECD’s data protection principles and the Fair Information Practice Principles that preceded them – under this resolution personal data should only be used throughout the AI lifecycle when its use comports with: a lawful and fair basis for processing; purpose specification and use limitation; data minimization; transparency; accuracy; data security; accountability and privacy by design; and the rights of data subjects. 

The resolution does characterize some of these principles in ways specific to the training of AI models – critically noting that: 

This articulation of traditional data protection principles demonstrates how the global data protection community is considering how the existing principles-based data privacy frameworks will specifically apply to AI and other emerging technologies.

  2. Resolution on meaningful human oversight of decisions involving AI systems

The second resolution of 2025 was submitted by the Office of the Privacy Commissioner of Canada, joined by thirteen co-sponsors, and focused on how members could synchronize their approaches to “meaningful human oversight” of AI decision-making. After explanatory text, the Assembly resolved four specific points:

This resolution, topically much more narrowly focused than the first one analyzed above, is based on the contention that AI systems’ decision-making processes may have “significant adverse effects on individuals’ rights and freedoms” if there is no “meaningful human oversight” of system decision-making, and thus no effective recourse for an impacted individual to challenge such a decision. This is a notable premise, as only this resolution (of the three) also acknowledges that “some privacy and data protection laws” establish a right not to be subject to automated decision-making along the lines of Article 22 GDPR.

Ahead of the specifically resolved points, the second resolution identifies the potential for “timely human review” of automated decisions that “may significantly affect individuals’ fundamental rights and freedoms” as the critical threshold for ensuring that automated decision-making and AI technologies do not erode data protection rights. Another critical piece is the distinction the Assembly draws between “human oversight,” which may occur throughout the decision-making process, and “human review,” which may occur exclusively after the fact; the GPA explicitly identifies “human review” as only one activity within the broader concept of “oversight.”

Most critically, the GPA identifies specific considerations in evaluating whether a human oversight system is “meaningful”:

The resolution also considers tools that organizations possess in order to ensure that “meaningful oversight” is actually occurring, including:

Overall, the resolution notes that human oversight mechanisms are the responsibility of developers and deployers, and are critical in mitigating the risk to fundamental rights and freedoms posed by potential bias in algorithmic decision-making, specifically noting the risks of self-reinforcing bias based on training data and the improper weighting of past decisions as threats that meaningful oversight processes can counteract. 

  3. Resolution on Digital Education, Privacy and Personal Data Protection for Responsible Inclusive Digital Citizenship

The third and final resolution of 2025 was submitted by the Institute for Transparency, Access to Public Information and Protection of Personal Data of the State of Mexico and Municipalities (Infoem), a new body that has replaced Mexico’s former GPA representative, the National Institute for Transparency, Access to Information and Personal Data Protection (INAI). The resolution was joined by only seven co-sponsors and reflected the GPA’s commitment to developing privacy in the digital education space and promoting “inclusive digital citizenship.” Here, the GPA resolved five particular points, each accompanied by a number of recommendations for GPA Members: 

The resolution also evidences the 2025 Assembly’s specific concerns relating to generative AI, including a statement “reaffirming that … generative artificial intelligence, pose[s] specific risks to vulnerable groups and must be addressed using an approach based on ethics and privacy by design” and recommending under the resolved points that GPA members “[p]romote the creation and inclusion of educational content that allows for understanding and exercising rights related to personal data — such as access, rectification, erasure, objection, and portability, among others — as well as critical reflection on the responsible use of emerging technologies.”

Among its more generalized resolved points, the Assembly notably recommends that GPA Members:

Finally, the third resolution also includes an optional “Glossary” that offers definitions for some of the terminology it uses. Although the glossary does not seek to define “artificial intelligence,” “personal data,” or, indeed, “children,” it does define both “digital citizenship” (“the ability to participate actively, ethically, and responsibly in digital environments, exercising rights and fulfilling duties, with special attention to the protection of privacy and personal data”) and “age assurance” (“a mechanism or procedure for verifying or estimating the age of users in digital environments, in order to protect children from online risks”). Glossaries such as this one are useful for evaluating where areas of conceptual agreement in terminology (and thus regulatory scope) are emerging among the global regulatory community.

  1. Sandboxes and Simplification: Not Yet in Focus

It is also worth noting a few specific areas that the GPA did not address in this year’s resolutions. As previously noted, the topical range of the resolutions was more targeted than in prior years. Even within the narrowed focus on AI, the Assembly made no mention of regulatory sandboxes for AI governance, nor did it challenge or refer to the ongoing push for regulatory simplification, both topics increasingly common in discussions of AI regulation around the globe. Something to watch at next year’s GPA will be how privacy regulators engage with these trends.

  1. Concluding remarks

The resolutions adopted by the GPA in 2025 indicate an increasing focus and specialization of the world’s privacy regulators on AI issues, at least for the immediate future. In contrast to the multi-subject resolutions of previous years (some of which did address AI), this year’s GPA produced resolutions that were essentially concerned only with AI, although still approaching the new technology in the context of its impact on pre-existing data protection rights. Moving into 2026, it will be worth observing whether the GPA (or other international cooperative bodies) pursues mutually consistent conceptual and enforcement frameworks, particularly concerning the definitions of AI systems and associated oversight mechanisms.

Understanding the New Wave of Chatbot Legislation: California SB 243 and Beyond

As more states consider how to govern AI-powered chatbots, California’s SB 243 joins New York’s S-3008C as one of the first enacted laws governing companion chatbots and stands out as the first to include protections tailored to minors. Signed by Governor Gavin Newsom this month, the law focuses on transparency and youth safety, requiring “companion chatbot” operators to adopt new disclosure and risk-mitigation measures. Notably, because SB 243 creates a private right of action for injured individuals, the law has drawn attention for its potential to generate significant damages claims.

The law’s passage comes amid a broader wave of state activity on chatbot legislation. As detailed in the Future of Privacy Forum’s State of State AI Report, 2025 was the first year multiple states introduced or enacted bills explicitly targeting chatbots, including Utah, New York, California, and Maine.1 This attention reflects both the growing integration of chatbots into daily life (for instance, tools that personalize learning, travel, or writing) and increasing calls for transparency and user protection.2

While SB 243 is distinct in its focus on youth safeguards, it reflects broader state-level efforts to define standards for responsible chatbot deployment. As additional legislatures weigh similar proposals, understanding how these frameworks differ in scope, obligations, and enforcement will be key to interpreting the next phase of chatbot governance in 2026.

Why Chatbots Captured Lawmakers’ Attention

A series of high-profile incidents and lawsuits in recent months has drawn sustained attention to AI chatbots, particularly companion chatbots: systems designed to simulate empathetic, human-like conversations and adapt to users’ emotional needs. Unlike informational or customer-service bots, these chatbots often have names and personalities and sustain ongoing exchanges that can resemble real relationships. Some reports claim these chatbots are especially popular among teens.

Early research underscores the complex role that these systems can play in people’s lives. A Common Sense Media survey asserts that nearly three in four teens (72%) have used an AI companion, with many reporting frequent or emotionally oriented interactions. However, like many technologies, their impact is mixed and still evolving. A Stanford study found that 3% of young adults using a companion chatbot credited it with temporarily halting suicidal thoughts, and other studies have suggested that chatbots can help alleviate the U.S.’s loneliness epidemic. Yet several cases have also emerged in which chatbots allegedly encouraged children and teens to commit suicide or self-harm, leading to litigation and public outcry.

Parents involved in these cases have testified before state legislatures and in U.S. Senate hearings, prompting investigations by the Federal Trade Commission and Congress into AI chatbot policies.

This growing scrutiny has shaped how Congress and the states are legislating in 2025, with most proposals focusing on transparency, safety protocols, and youth protection. At the same time, these frameworks have prompted familiar policy debates around innovation, data privacy, and liability.

SB 243 Explained

According to the bill’s author, Senator Padilla (D), California’s SB 243 was enacted in response to these growing concerns. It requires companion chatbot operators to provide certain disclosures, maintain safety protocols, and apply additional safeguards when the user is known to be a minor.

While California is not the first state to regulate companion chatbots—New York’s S-3008C, enacted earlier this year, includes similar transparency and safety provisions—SB 243 is the first to establish youth-specific protections. Its requirements reflect a more targeted approach, combining user disclosure, crisis-intervention protocols, and minor-focused safeguards within a single framework. As one of the first laws to address youth interaction with companion chatbots, SB 243 may shape how other states craft their own measures, even as policymakers experiment with differing approaches.

A. Scope

California’s SB 243 defines a “companion chatbot” as an AI system that provides “adaptive, human-like responses to user inputs,” is capable of meeting a “user’s social needs,” exhibits “anthropomorphic features,” and is able to “sustain a relationship across multiple interactions.” Unlike New York’s S-3008C, enacted earlier in 2025, SB 243 does not reference a chatbot’s ability to retain user history or initiate unsolicited prompts, resulting in a slightly broader definition focused on foreseeable use in emotionally oriented contexts.

The law excludes several categories of systems from this definition, including chatbots used solely for customer service, internal research, or operational purposes; bots embedded in video games that cannot discuss mental health, self-harm, or sexually explicit content; and stand-alone consumer devices such as voice-activated assistants. It also defines an “operator” as any person making a companion chatbot platform available to users in California.

Even with these carveouts, however, compliance determinations may hinge on subjective interpretations; for example, whether a chatbot’s repeated customer interactions could still be viewed as “sustained.” As a result, entities may face ongoing uncertainty in determining which products fall within scope, particularly for more general-purpose conversational technologies.
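To illustrate the kind of first-pass triage this ambiguity forces, the following is a minimal sketch (in Python) that walks through the statutory criteria and carveouts as a simple checklist. The field names, the structure, and the very idea of an automated “screen” are illustrative assumptions rather than anything drawn from the statute; edge cases such as whether repeated customer interactions are “sustained” would still require human legal review.

```python
# Hypothetical, simplified scope screen for SB 243's "companion chatbot" definition.
# The statutory criteria and carveouts are paraphrased; names and structure here are
# illustrative assumptions, not the statute's text and not legal advice.

from dataclasses import dataclass


@dataclass
class ChatbotProfile:
    adaptive_humanlike_responses: bool   # provides adaptive, human-like responses
    meets_social_needs: bool             # capable of meeting a user's social needs
    anthropomorphic_features: bool       # has a name, persona, or similar features
    sustains_relationship: bool          # sustains a relationship across interactions
    customer_service_only: bool          # used solely for customer service/operations
    in_game_restricted_bot: bool         # game bot barred from self-harm/explicit topics
    standalone_voice_assistant: bool     # stand-alone device such as a voice assistant


def likely_in_scope(bot: ChatbotProfile) -> bool:
    """Rough first-pass triage of whether a product may be a 'companion chatbot'."""
    meets_definition = (
        bot.adaptive_humanlike_responses
        and bot.meets_social_needs
        and bot.anthropomorphic_features
        and bot.sustains_relationship
    )
    excluded = (
        bot.customer_service_only
        or bot.in_game_restricted_bot
        or bot.standalone_voice_assistant
    )
    # Borderline cases (e.g., "sustained" repeated customer interactions) should be
    # escalated for human review; this screen only flags the obvious answers.
    return meets_definition and not excluded
```

A checklist like this only captures the definitional elements; it cannot resolve the interpretive questions noted above, which is precisely the compliance uncertainty the text describes.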

B. Requirements 

SB 243 imposes disclosure requirements, safety protocol obligations, and minor-specific safeguards, and it creates a private right of action that allows injured individuals to seek damages of at least $1,000, along with injunctive relief and attorney’s fees.

However, these requirements raise familiar concerns regarding data privacy, compliance, and youth safety. To identify and respond to risks of suicidal ideation, operators may need to monitor and analyze user interactions, potentially processing and retaining sensitive mental health information, which could create tension with existing privacy obligations. Similarly, what it means for an operator to “know” a user is a minor may depend on what information the operator collects about that user and on how SB 243 interacts with other recent California laws, such as AB 1043, which establishes an age assurance framework.

Additionally, the law directs operators to use “evidence-based methods” for detecting suicidal ideation, though it does not specify what qualifies as “evidence-based” or what constitutes “suicidal ideation.” This language introduces practical ambiguity, as developers must determine which conversational indicators trigger reporting and which methodologies satisfy the “evidence-based” requirement.
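The sketch below illustrates the determination the statute leaves open, assuming a purely hypothetical keyword-based flag and a generic crisis-referral message. None of the indicators, thresholds, or responses shown here come from SB 243 or from any validated screening methodology; a real deployment would presumably rely on clinically validated instruments or classifiers.

```python
# Hypothetical sketch of the decision SB 243 leaves to operators: which conversational
# signals count as indicators of suicidal ideation, and what response follows.
# Indicator list, matching logic, and message text are assumptions for illustration
# only; they are not an "evidence-based" methodology in themselves.

CRISIS_RESOURCE_MESSAGE = (
    "If you are thinking about harming yourself, please reach out to a crisis "
    "service provider, such as a local suicide prevention hotline."
)

# Placeholder indicators; a production system would likely use validated screening
# instruments or trained classifiers rather than a keyword list.
ILLUSTRATIVE_INDICATORS = [
    "want to die",
    "kill myself",
    "end my life",
    "no reason to live",
]


def flag_possible_ideation(message: str) -> bool:
    """Very rough stand-in for an 'evidence-based' detection method."""
    text = message.lower()
    return any(indicator in text for indicator in ILLUSTRATIVE_INDICATORS)


def respond_to_message(message: str, generate_reply) -> str:
    """Route flagged messages to a crisis referral instead of a normal reply."""
    if flag_possible_ideation(message):
        # Operators must also decide what, if anything, to log here; retaining this
        # exchange creates exactly the sensitive-data tension discussed above.
        return CRISIS_RESOURCE_MESSAGE
    return generate_reply(message)
```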

How SB 243 Fits into the Broader Landscape

SB 243 reflects many of the same themes found across state chatbot legislation introduced in 2025. Two central regulatory approaches emerged this year—identity disclosure through user notification and safety protocols to mitigate harm—both of which are incorporated into California’s framework. Across states, lawmakers have emphasized transparency, particularly in emotionally sensitive contexts, to ensure users understand when they are engaging with an AI system rather than a human.

A. Identity Disclosure and User Notification

Six of the seven key chatbot bills in 2025 included a user disclosure requirement, mandating that operators clearly notify users when they are interacting with AI rather than a human. While all require disclosures to be “clear and conspicuous,” states vary in how prescriptive they are about timing and format.

New York’s S-3008C (enacted) and S 5668 (proposed) require disclosure at the start of each chatbot interaction and at least once every three hours during ongoing conversations. California’s SB 243 includes a similar three-hour notification rule, but only when the operator knows the user is a minor. In contrast, Maine’s LD 1727 (enacted) simply requires disclosure “in a clear and conspicuous manner” without specifying frequency, while Utah’s SB 452 (enacted) ties disclosure to user engagement, requiring it before chatbot features are accessed or when a user asks whether AI is being used.

Lawmakers are increasingly treating disclosure as a baseline governance mechanism for AI, as noted in FPF’s State of State AI Report. From a compliance perspective, disclosure standards provide tangible obligations for developers to operationalize. From a consumer protection standpoint, legislators view them as tools to promote transparency, prevent deception, and curb excessive engagement by reminding users, especially minors, that they are interacting with an AI system.
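As a rough illustration of how an operator might operationalize the timing rules described above (disclosure at the start of a session and at least every three hours thereafter), the following is a minimal sketch of a session-level disclosure scheduler. The three-hour interval mirrors the statutory language, but the class, method names, and notice text are assumptions for illustration only and are not drawn from any statute or existing library.

```python
# Minimal sketch of a per-session disclosure scheduler: disclose at session start and
# again whenever at least three hours have elapsed since the last disclosure.
# Names and message text are illustrative assumptions.

import time

THREE_HOURS = 3 * 60 * 60  # seconds


class DisclosureScheduler:
    def __init__(self, interval_seconds: int = THREE_HOURS):
        self.interval = interval_seconds
        self.last_disclosed_at = None  # None means no disclosure yet this session

    def disclosure_due(self, now: float | None = None) -> bool:
        """True at session start and whenever the interval has elapsed."""
        now = time.time() if now is None else now
        if self.last_disclosed_at is None:
            return True
        return (now - self.last_disclosed_at) >= self.interval

    def mark_disclosed(self, now: float | None = None) -> None:
        self.last_disclosed_at = time.time() if now is None else now


def maybe_prepend_disclosure(reply: str, scheduler: DisclosureScheduler) -> str:
    """Attach the AI-identity notice to a reply when one is due."""
    if scheduler.disclosure_due():
        scheduler.mark_disclosed()
        return "Reminder: you are chatting with an AI, not a human.\n\n" + reply
    return reply
```

Allowing the current time to be passed in explicitly makes the interval logic easy to test, which matters if an operator ever needs to demonstrate that disclosures were in fact delivered on schedule.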

B. Safety Protocols and Risk Mitigation

Alongside disclosure requirements, several 2025 chatbot bills, including California’s SB 243, introduce safety protocol obligations aimed at reducing risks of self-harm or related harms. Similar to SB 243, New York’s S-3008C (enacted) makes it unlawful to offer AI companions without taking “reasonable efforts” to detect and address self-harm, while New York’s S 5668 (proposed) would have expanded the scope to include physical or financial harm to others.

These provisions are intended to operate as accountability mechanisms, requiring operators to proactively identify and mitigate risks associated with companion chatbots. However, as discussed above, requiring chatbot operators to make interventions in response to perceived mental health crises or other potential harms increases the likelihood that operators will need to retain chat logs and make potentially sensitive inferences about users. Retention and processing of user data in this way may be inconsistent with users’ expressed privacy preferences and potentially conflict with operators’ obligations under privacy laws.

Notably, safety protocol requirements appeared only in companion chatbot legislation, not in broader chatbot bills such as Maine’s LD 1727 (enacted), reflecting lawmakers’ heightened concern about self-harm and mental health risks linked to ongoing litigation and public scrutiny of companion chatbots.

C. Other Trends and Influences 

California’s SB 243 also reflects other trends within 2025 chatbot legislation. For example, chatbot legislation generally did not include requirements to undertake impact assessments or audits. An earlier draft of SB 243 included a third-party audit requirement for companion chatbot operators, but the provision was removed before enactment, suggesting that state lawmakers continue to favor disclosure and protocols over more prescriptive oversight mechanisms.

Governor Newsom’s signature on SB 243 also coincided with his veto of California’s AB 1064, a more restrictive companion chatbot bill for minors. AB 1064 would have prohibited making companion chatbots available to minors unless they were “not foreseeably capable” of encouraging self-harm or other high-risk behaviors. In his veto message, Newsom cautioned that the measure’s prohibitions were overly broad and could “unintentionally lead to a total ban” on such products, while signaling interest in building on SB 243’s transparency-based framework for future legislation.

As of the close of 2025 legislative sessions, no state had enacted a ban on chatbot availability for minors or adults. SB 243’s emphasis on transparency and safety protocols, rather than outright restrictions, may preview future legislative debates.

Looking Ahead: What to Expect in 2026

The surge of chatbot legislation in 2025 offers a strong signal of where lawmakers may turn next. Companion chatbots are likely to remain central, particularly around youth safety and mental health, with future proposals potentially building on California’s SB 243 by adding youth-specific provisions or linking chatbot oversight to age assurance and data protection frameworks. A key question for 2026 is whether states will continue to favor these disclosure-based frameworks or begin shifting toward use restrictions. While Governor Newsom’s veto of AB 1064 suggested lawmakers may prioritize transparency and safety standards over outright bans, the newly introduced federal “Guidelines for User Age-Verification and Responsible Dialogue (GUARD) Act,” which includes both disclosure requirements and a ban on AI companions for minors, may reopen that debate.

The scope of regulation could also expand as states begin to explore sector-specific chatbots, particularly in mental health, where new legislation in New York and Massachusetts would prohibit AI chatbots for therapeutic use. Other areas such as education and employment, already the focus of broader AI legislation, may also draw attention as lawmakers consider how conversational AI shapes consumer and workplace experiences. Taken together, these developments suggest that 2026 may be the “year of the chatbots,” with states prepared to test new approaches to transparency, safety, and youth protection while continuing to define responsible chatbot governance.

  1. Other bills enacted in 2025 include provisions that would cover chatbots within their broader scope of AI technologies; however, these figures reflect legislation that focused narrowly on chatbots. ↩︎
  2. See FPF’s Issue Brief: Daniel Berrick and Stacey Gray, “Concepts in AI Governance: Personality vs. Personalization,” September 2025, https://fpf.org/wp-content/uploads/2025/09/Concepts-in-AI-Governance_-Personality-vs.-Personalization.pdf ↩︎