Future of Privacy Forum Launches the FPF Center for Artificial Intelligence
The FPF Center for Artificial Intelligence will serve as a catalyst for AI policy and compliance leadership globally, advancing responsible data and AI practices for public and private stakeholders
Today, the Future of Privacy Forum (FPF) launched the FPF Center for Artificial Intelligence, established to better serve policymakers, companies, non-profit organizations, civil society, and academics as they navigate the challenges of AI policy and governance. The Center will expand FPF’s long-standing AI work, introduce large-scale novel research projects, and serve as a source for trusted, nuanced, nonpartisan, and practical expertise.
FPF’s Center work will be international in scope, as AI deployment continues to accelerate around the globe. Cities, states, countries, and international bodies are already grappling with implementing laws and policies to manage the risks. “Data, privacy, and AI are intrinsically interconnected issues that we have been working on at FPF for more than 15 years, and we remain dedicated to collaborating across the public and private sectors to promote their ethical, responsible, and human-centered use,” said Jules Polonetsky, FPF’s Chief Executive Officer. “But we have reached a tipping point in the development of the technology that will affect future generations for decades to come. At FPF, the word Forum is a core part of our identity. We are a trusted convener positioned to build bridges between stakeholders globally, and we will continue to do so under the new Center for AI, which will sit within FPF.”
The Center will help the organization’s 220+ members navigate AI through the development of best practices, research, legislative tracking, thought leadership, and public-facing resources. It will be a trusted evidence-based source of information for policymakers, and it will collaborate with academia and civil society to amplify relevant research and resources.
“Although AI is not new, we have reached an unprecedented moment in the development of the technology that marks a true inflection point. The complexity, speed and scale of data processing that we are seeing in AI systems can be used to improve people’s lives and spur a potential leapfrogging of societal development, but with that increased capability comes associated risks to individuals and to institutions,” said Anne J. Flanagan, Vice President for Artificial Intelligence at FPF. “The FPF Center for AI will act as a collaborative force for shared knowledge between stakeholders to support the responsible development of AI, including its fair, safe, and equitable use.”
The Center will officially launch at FPF’s inaugural summit DC Privacy Forum: AI Forward. The in-person and public-facing summit will feature high-profile representatives from the public and private sectors in the world of privacy, data and AI.
FPF’s new Center for Artificial Intelligence will be supported by a Leadership Council of leading experts from around the globe. The Council will consist of members from industry, academia, civil society, and current and former policymakers.
See the full list of founding FPF Center for AI Leadership Council members here.
“I am excited about the launch of the Future of Privacy Forum’s new Center for Artificial Intelligence and honored to be part of its leadership council. This announcement builds on many years of partnership and collaboration between Workday and FPF to develop privacy best practices and advance responsible AI, which has already generated meaningful outcomes, including last year’s launch of best practices to foster trust in this technology in the workplace. I look forward to working alongside fellow members of the Council to support the Center’s mission to build trust in AI and am hopeful that together we can map a path forward to fully harness the power of this technology to unlock human potential.”
Barbara Cosgrove, Vice President, Chief Privacy Officer, Workday
“I’m honored to be a founding member of the Leadership Council of the Future of Privacy Forum’s new Center for Artificial Intelligence. AI’s impact transcends borders, and I’m excited to collaborate with a diverse group of experts around the world to inform companies, civil society, policymakers, and academics as they navigate the challenges and opportunities of AI governance, policy, and existing data protection regulations.”
Dr. Gianclaudio Malgieri, Associate Professor of Law & Technology at eLaw, University of Leiden
“As we enter this era of AI, we must require the right balance between allowing innovation to flourish and keeping enterprises accountable for the technologies they create and put on the market. IBM believes it will be crucial that organizations such as the Future of Privacy Forum help advance responsible data and AI policies, and we are proud to join others in industry and academia as part of the Leadership Council.”
Christina Montgomery, Chief Privacy & Trust Officer, AI Ethics Board Chair, IBM
The Future of Privacy Forum (FPF) is a global non-profit organization that brings together academics, civil society, government officials, and industry to evaluate the societal, policy, and legal implications of data use, identify the risks, and develop appropriate protections.
FPF believes technology and data can benefit society and improve lives if the right laws, policies, and rules are in place. FPF has offices in Washington D.C., Brussels, Singapore, and Tel Aviv. Learn more at fpf.org.
FPF Develops Checklist & Guide to Help Schools Vet AI Tools for Legal Compliance
FPF’s Youth and Education team has developed a checklist and accompanying policy brief to help schools vet generative AI tools for compliance with student privacy laws. Vetting Generative AI Tools for Use in Schools is a crucial resource as the use of generative AI tools continues to increase in educational settings. It’s critical for school leaders to understand how existing federal and state student privacy laws, such as the Family Educational Rights and Privacy Act (FERPA), apply to the complexities of machine learning systems in order to protect student privacy. With these resources, FPF aims to provide much-needed clarity and guidance to educational institutions grappling with these issues.
“AI technology holds immense promise in enhancing educational experiences for students, but it must be implemented responsibly and ethically,” said David Sallay, the Director for Youth & Education Privacy at the Future of Privacy Forum. “With our new checklist, we aim to empower educators and administrators with the knowledge and tools necessary to make informed decisions when selecting generative AI tools for classroom use while safeguarding student privacy.”
The checklist, designed specifically for K-12 schools, outlines key considerations for incorporating generative AI into a school or district’s existing edtech vetting process.
These include:
assessing the requirements for vetting all edtech;
describing the specific use cases;
preparing to address transparency and explainability; and
determining whether student personally identifiable information (PII) will be used to train the large language model (LLM).
By prioritizing these steps, educational institutions can promote transparency and protect student privacy while maximizing the benefits of technology-driven learning experiences for students.
The in-depth policy brief outlines the relevant laws and policies a school should consider, the unique compliance considerations of generative AI tools (including data collection, transparency and explainability, product improvement, and high-risk decision-making), and their most likely use cases (student, teacher, and institution-focused).
The brief also encourages schools and districts to update their existing edtech vetting policies to address the unique considerations of AI technologies (or to create a comprehensive policy if one does not already exist) instead of creating a separate vetting process for AI. It also highlights the role that state legislatures can play in ensuring the efficiency of school edtech vetting and oversight and calls on vendors to be proactively transparent with schools about their use of AI.
Check out the LinkedIn Live with CEO Jules Polonetsky and Youth & Education Director David Sallay about the Checklist and Policy Brief.
To read more of the Future of Privacy Forum’s youth and student privacy resources, visit www.StudentPrivacyCompass.org.
FPF Releases “The Playbook: Data Sharing for Research” Report and Infographic
Facilitating data sharing for research purposes between corporate data holders and academia can unlock new scientific insights and drive progress in public health, education, social science, and many other fields, to the benefit of broader society. Academic researchers use this data to consider consumer, commercial, and scientific questions at a scale they cannot reach using conventional research data-gathering techniques alone. Such data has also helped researchers answer questions on topics ranging from bias in targeted advertising and the influence of misinformation on election outcomes to early diagnosis of diseases through data collected by fitness and health apps.
The playbook addresses vital steps for data management, sharing, and program execution between companies and researchers. Creating a data-sharing ecosystem that positively advances scientific research requires a better understanding of the established risks, opportunities to address challenges, and the diverse stakeholders involved in data-sharing decisions. This report aims to encourage safe, responsible data-sharing between industries and researchers.
“Corporate data sharing connects companies with research institutions, by extension increasing the quantity and quality of research for social good,” said Shea Swauger, Senior Researcher for Data Sharing and Ethics. “This Playbook showcases the importance, and advantages, of having appropriate protocols in place to create safe and simple data sharing processes.”
In addition to the Playbook, FPF created a companion infographic summarizing the benefits, challenges, and opportunities of data sharing for research outlined in the larger report.
As a longtime advocate for facilitating the privacy-protective sharing of data by industry to the research community, FPF is proud to have created this set of best practices for researchers, institutions, policymakers, and data-holding companies. In addition to the Playbook, the Future of Privacy Forum has also opened nominations for its annual Award for Research Data Stewardship.
“Our goal with these initiatives is to celebrate the successful research partnerships transforming how corporations and researchers interact with each other,” Swauger said. “Hopefully, we can continue to engage more audiences and encourage others to model their own programs with solid privacy safeguards.”
Shea Swauger, Senior Researcher for Data Sharing and Ethics, Future of Privacy Forum
Established by FPF in 2020 with support from The Alfred P. Sloan Foundation, the Award for Research Data Stewardship recognizes excellence in the privacy-protective stewardship of corporate data shared with academic researchers. The call for nominations is open and closes on Tuesday, January 17, 2023. To submit a nomination, visit the FPF site.
FPF has also launched a newly formed Ethics and Data in Research Working Group; this group receives late-breaking analyses of emerging US legislation affecting research and data, meets to discuss the ethical and technological challenges of conducting research, and collaborates to create best practices to protect privacy, decrease risk, and increase data sharing for research, partnerships, and infrastructure. Learn more and join here.
FPF Testifies Before House Energy and Commerce Subcommittee, Supporting Congress’s Efforts on the “American Data Privacy and Protection Act”
This week, FPF’s Senior Policy Counsel Bertram Lee testified at the U.S. House Energy and Commerce Subcommittee on Consumer Protection and Commerce hearing, “Protecting America’s Consumers: Bipartisan Legislation to Strengthen Data Privacy and Security,” regarding the bipartisan, bicameral privacy discussion draft bill, the “American Data Privacy and Protection Act” (ADPPA). FPF has a history of supporting the passage of a comprehensive federal consumer privacy law, which would provide businesses and consumers alike with the benefit of clear national standards and protections.
Lee’s testimony opened by applauding the Committee for its efforts towards comprehensive federal privacy legislation and emphasized that the “time is now” for its passage. As written, the ADPPA would address gaps in the sectoral approach to consumer privacy, establish strong national civil rights protections, and create new rights and safeguards for the protection of sensitive personal information.
“The ADPPA is more comprehensive in scope, inclusive of civil rights protections, and provides individuals with more varied enforcement mechanisms in comparison to some states’ current privacy regimes,” Lee said in his testimony. “It also includes corporate accountability mechanisms, such as requiring privacy designations, data security officers, and executive certifications showing compliance, which are missing from current state laws. Notably, the ADPPA also requires ‘short-form’ privacy notices to inform consumers of how their data will be used by companies and of their rights — a provision that is not found in any state law.”
Lee’s testimony also provided four recommendations to strengthen the bill, which include:
Additional funding and resources for the FTC;
Developing a more iterative process to ensure that the bill can keep up with evolving technologies;
Clarifying the intersection of ADPPA with other federal privacy laws (COPPA, FERPA, HIPAA, etc.); and
Establishing clear definitions and distinctions between different types of covered entities, including service providers.
Many of the recommendations would ensure that the legislation gives individuals meaningful privacy rights and places clear obligations on businesses and other organizations that collect, use and share personal data. The legislation would expand civil rights protections for individuals and communities harmed by algorithmic discrimination as well as require algorithmic assessments and evaluations to better understand how these technologies can impact communities.
Reading the Signs: the Political Agreement on the New Transatlantic Data Privacy Framework
The President of the United States, Joe Biden, and the President of the European Commission, Ursula von der Leyen, announced last Friday, in Brussels, a political agreement on a new Transatlantic framework to replace the Privacy Shield.
This marks a significant elevation of the topic within transatlantic affairs compared to the 2016 announcement of a new deal to replace the Safe Harbor framework. Back then, it was Commission Vice-President Andrus Ansip and Commissioner Vera Jourova who announced, at the beginning of February 2016, that a deal had been reached.
The draft adequacy decision was only published a month after the announcement, and the adequacy decision was adopted 6 months later, in July 2016. Therefore, it should not be at all surprising if another 6 months (or more!) pass before the adequacy decision for the new Framework produces legal effects and can actually support transfers from the EU to the US – especially since the US side still has to issue at least one Executive Order to provide for the agreed-upon new safeguards.
This means that transfers of personal data from the EU to the US may still be blocked in the coming months – possibly without a lawful alternative to continue them – as a consequence of Data Protection Authorities (DPAs) enforcing Chapter V of the General Data Protection Regulation in light of the Schrems II judgment of the Court of Justice of the EU, whether as part of the 101 noyb complaints submitted in August 2020, which are slowly starting to be resolved, or as part of other individual complaints and court cases.
If you are curious about what the legal process will look like both on the US and EU sides after the agreement “in principle”, check out this blog post by Laila Abdelaziz of the “Privacy across borders project” at American University.
After the agreement “in principle” was announced at the highest possible political level, EU Justice Commissioner Didier Reynders doubled down on the point that the agreement reached is “on the principles” of a new framework, rather than on its details. He later also gave credit to Commerce Secretary Gina Raimondo and US Attorney General Merrick Garland for their hands-on involvement in working towards this agreement.
In fact, “in principle” became the leitmotif of the announcement. The first EU Data Protection Authority to react was the European Data Protection Supervisor, who wrote that he “welcomes, in principle” the announcement of a new EU-US transfers deal: “The details of the new agreement remain to be seen. However, EDPS stresses that a new framework for transatlantic data flows must be sustainable in light of requirements identified by the Court of Justice of the EU”.
Of note, there is no catchy name for the new transfers agreement, which was referred to as the “Trans-Atlantic Data Privacy Framework”. Nonetheless, FPF’s CEO Jules Polonetsky submits the “TA DA!” Agreement, and he has my vote. For his full statement on the political agreement being reached, see our release here.
Some details of the “principles” agreed on were published hours after the announcement, both by the White House and by the European Commission. Below are a couple of things that caught my attention from the two brief Factsheets.
The US has committed to “implement new safeguards” to ensure that SIGINT activities are “necessary and proportionate” (a standard drawn from EU law – see Article 52 of the EU Charter of Fundamental Rights on how the exercise of fundamental rights may be limited) in the pursuit of defined national security objectives. The new agreement is therefore expected to address the lack of safeguards for government access to personal data specifically identified by the CJEU in the Schrems II judgment.
The US also committed to creating a “new mechanism for the EU individuals to seek redress if they believe they are unlawfully targeted by signals intelligence activities”. This new mechanism was characterized by the White House as having “independent and binding authority”. Per the White House, this redress mechanism includes “a new multi-layer redress mechanism that includes an independent Data Protection Review Court that would consist of individuals chosen from outside the US Government who would have full authority to adjudicate claims and direct remedial measures as needed”. The EU Commission mentioned in its own Factsheet that this would be a “two-tier redress system”.
Importantly, the White House mentioned in the Factsheet that oversight of intelligence activities will also be boosted – “intelligence agencies will adopt procedures to ensure effective oversight of new privacy and civil liberties standards”. Oversight and redress are different issues and are both equally important – for details, see this piece by Christopher Docksey. However, they tend to be thought of as one and the same, so the fact that they are addressed separately in this announcement is significant.
One of the remarkable things about the White House announcement is that it includes several EU law-specific concepts: “necessary and proportionate”, “privacy, data protection” mentioned separately, “legal basis” for data flows. In another nod to the European approach to data protection, the entire issue of ensuring safeguards for data flows is framed as more than a trade or commerce issue – with references to a “shared commitment to privacy, data protection, the rule of law, and our collective security as well as our mutual recognition of the importance of trans-Atlantic data flows to our respective citizens, economies, and societies”.
Last, but not least, Europeans have always framed their concerns related to surveillance and data protection as fundamental rights concerns. The US gives a nod to this approach as well, referring a couple of times to “privacy and civil liberties” safeguards (adding the “civil liberties” dimension) that will be “strengthened”. All of these are positive signs of a “rapprochement” between the two legal systems and are certainly an improvement over the “commerce”-focused approach of the past on the US side.
It should also be noted that the new framework will continue to be a self-certification scheme managed by the US Department of Commerce.
What does all of this mean in practice? As the White House details, this means that the Biden Administration will have to adopt (at least) an Executive Order (EO) that includes all these commitments and on the basis of which the European Commission will draft an adequacy decision.
Thus, expectations are high following the White House and European Commission Factsheets, and the entire privacy and data protection community is waiting to see further details.
In the meantime, I’ll leave you with an observation made by my colleague Amie Stepanovich, VP for US Policy at FPF, who highlighted that Section 702 of the Foreign Intelligence Surveillance Act (FISA) is set to expire on December 31, 2023. This presents Congress with an opportunity to act, building on the extensive work done by the US Government in the context of the transatlantic data transfers debate.
Privacy Best Practices for Rideshare Drivers Using Dashcams
FPF & Uber Publish Guide Highlighting Privacy Best Practices for Drivers who Record Video and Audio on Rideshare Journeys
FPF and Uber have created a guide for US-based rideshare drivers who install “dashcams” – video cameras mounted on a vehicle’s dashboard or windshield. Many drivers install dashcams to improve safety, security, and accountability; the cameras can capture crashes or other safety-related incidents outside and inside cars. Dashcam footage can be helpful to drivers, passengers, insurance companies, and others when adjudicating legal claims. At the same time, dashcams can pose substantial privacy risks if appropriate safeguards are not in place to limit the collection, use, and disclosure of personal data.
Dashcams typically record video outside a vehicle. Many dashcams also record in-vehicle audio and some record in-vehicle video. Regardless of the particular device used, ride-hail drivers who use dashcams must comply with applicable audio and video recording laws.
The guide explains relevant laws and provides practical tips to help drivers be transparent, limit data use and sharing, retain video and audio only for defined purposes, and use strict security controls. The guide highlights ways that drivers can employ physical signs, in-app notices, and other means to ensure passengers are informed about dashcam use and can make meaningful choices about whether to travel in a dashcam-equipped vehicle. Drivers seeking advice concerning specific legal obligations or incidents should consult legal counsel.
Privacy best practices for dashcams include:
Give individuals notice that they are being recorded
Place recording notices inside and on the vehicle.
Mount the dashcam in a visible location.
Consider, in some situations, giving an oral notification that recording is taking place.
Determine whether the rideshare service provides recording notifications in the app, and use those in-app notices.
Only record audio and video for defined, reasonable purposes
Only keep recordings for as long as needed for the original purpose.
Inform passengers as to why video and/or audio is being recorded.
Limit sharing and use of recorded footage
Only share video and audio with third parties for relevant reasons that align with the original reason for recording.
Thoroughly review the rideshare service’s privacy policy and community guidelines if using an app-based rideshare service, and be aware that many rideshare companies maintain policies against widely disseminating recordings.
Safeguard and encrypt recordings and delete unused footage
Identify dashcam vendors that provide the highest privacy and security safeguards.
Carefully read the terms and conditions when buying dashcams to understand the data flows.
Uber will be making these best practices available to drivers in its app and on its website.
Many ride-hail drivers use dashcams in their cars, and the guidance and best practices published today provide practical guidance to help drivers implement privacy protections. But driver guidance is only one aspect of ensuring individuals’ privacy and security when traveling. Dashcam manufacturers must implement privacy-protective practices by default and provide easy-to-use privacy options. At the same time, ride-hail platforms must provide drivers with the appropriate tools to notify riders, and carmakers must safeguard drivers’ and passengers’ data collected by OEM devices.
In addition, dashcams are only one example of increasingly sophisticated sensors appearing in passenger vehicles as part of driver monitoring systems and related technologies. Further work is needed to apply comprehensive privacy safeguards to emerging technologies across the connected vehicle sector, from carmakers and rideshare services to mobility services providers and platforms. Comprehensive federal privacy legislation would be a good start. And in the absence of Congressional action, FPF is doing further work to identify key privacy risks and mitigation strategies for the broader class of driver monitoring systems that raise questions about technologies beyond the scope of this dashcam guide.
12th Annual Privacy Papers for Policymakers Awardees Explore the Nature of Privacy Rights & Harms
The winners of the 12th annual Future of Privacy Forum (FPF) Privacy Papers for Policymakers Award ask big questions about what should be the foundational elements of data privacy and protection and who will make key decisions about the application of privacy rights. Their scholarship will inform policy discussions around the world about privacy harms, corporate responsibilities, oversight of algorithms, and biometric data, among other topics.
“Policymakers and regulators in many countries are working to advance data protection laws, often seeking in particular to combat discrimination and unfairness,” said FPF CEO Jules Polonetsky. “FPF is proud to highlight independent researchers tackling big questions about how individuals and society relate to technology and data.”
This year’s papers also explore smartphone platforms as privacy regulators, the concept of data loyalty, and global privacy regulation. The award recognizes leading privacy scholarship that is relevant to policymakers in the U.S. Congress, at U.S. federal agencies, and among international data protection authorities. The winning papers will be presented at a virtual event on February 10, 2022.
The winners of the 2022 Privacy Papers for Policymakers Award are:
Privacy Harms, by Danielle Keats Citron, University of Virginia School of Law; and Daniel J. Solove, George Washington University Law School
This paper looks at how courts define harm in cases involving privacy violations and how the requirement of proof of harm impedes the enforcement of privacy law, given the dispersed and minor effects that most privacy violations have on individuals. When these minor effects are suffered at a vast scale, however, individuals, groups, and society can experience significant harm. The paper offers language for courts to draw on when adjudicating privacy cases and provides guidance as to when privacy harm should be considered in a lawsuit.
In this paper, Green analyzes the use of human oversight of government algorithmic decisions. From this analysis, he concludes that humans are unable to perform the desired oversight responsibilities, and that by continuing to use human oversight as a check on these algorithms, the government legitimizes the use of these faulty algorithms without addressing the associated issues. The paper offers a more stringent approach to determining whether an algorithm should be incorporated into a certain government decision, which includes critically considering the need for the algorithm and evaluating whether people are capable of effectively overseeing the algorithm.
The Surprising Virtues of Data Loyalty, by Woodrow Hartzog, Northeastern University School of Law and Khoury College of Computer Sciences, Stanford Law School Center for Internet and Society; and Neil M. Richards, Washington University School of Law, Yale Information Society Project, Stanford Center for Internet and Society
The data loyalty responsibilities for companies that process human information are now being seriously considered in both the U.S. and Europe. This paper analyzes criticisms of data loyalty that argue that such duties are unnecessary, concluding that data loyalty represents a relational approach to data that allows us to deal substantively with the problem of platforms and human information at both systemic and individual levels. The paper argues that the concept of data loyalty has some surprising virtues, including checking power and limiting systemic abuse by data collectors.
Smartphone Platforms as Privacy Regulators, by Joris van Hoboken, Vrije Universiteit Brussel, Institute for Information Law, University of Amsterdam; and Ronan Ó Fathaigh, Institute for Information Law, University of Amsterdam
In this paper, the authors look at the role of online platforms and their impact on data privacy in today’s digital economy. The paper first distinguishes the different roles that platforms can have in protecting privacy in online ecosystems, including governing access to data, design of relevant interfaces, and policing the behavior of the platform’s users. The authors then provide an argument as to what platforms’ role should be in legal frameworks. They advocate for a compromise between direct regulation of platforms and mere self-regulation, arguing that platforms should be required to make official disclosures about their privacy-related policies and practices for their respective ecosystems.
In late 2021, China enacted its first codified personal information protection law, the Personal Information Protection Law (PIPL). In this paper, Wang compares China’s PIPL with data protection laws in nine regions to assist overseas Internet companies and personnel who handle personal information in better understanding the similarities and differences in data protection and compliance between each country and region.
Cameras are everywhere, and with the innovation of video analytics, questions are being raised about how individuals should be notified that they are being recorded. The researchers studied 123 individuals’ sentiments across 2,328 video analytics deployment scenarios. Based on these findings, they advocate for the development of interfaces that simplify the task of managing notices and configuring controls, which would allow individuals to communicate their opt-in/opt-out preferences to video analytics operators.
From the record number of papers nominated this year, these six papers were selected by a diverse team of academics, advocates, and industry privacy professionals from FPF’s Advisory Board. The winning papers were chosen for research and proposed solutions that are relevant to policymakers and regulators in the U.S. and abroad.
In addition to the winning papers, FPF has selected two papers for Honorable Mention: Verification Dilemmas and the Promise of Zero-Knowledge Proofs by Kenneth Bamberger, University of California, Berkeley – School of Law; Ran Canetti, Boston University, Department of Computer Science, Boston University, Faculty of Computing and Data Science, Boston University, Center for Reliable Information Systems and Cybersecurity; Shafi Goldwasser, University of California, Berkeley – Simons Institute for the Theory of Computing; Rebecca Wexler, University of California, Berkeley – School of Law; and Evan Zimmerman, University of California, Berkeley – School of Law; and A Taxonomy of Police Technology’s Racial Inequity Problems by Laura Moy, Georgetown University Law Center.
FPF also selected a paper for the Student Paper Award, A Fait Accompli? An Empirical Study into the Absence of Consent to Third Party Tracking in Android Apps by Konrad Kollnig and Reuben Binns, University of Oxford; Pierre Dewitte, KU Leuven; Max van Kleek, Ge Wang, Daniel Omeiza, Helena Webb, and Nigel Shadbolt, University of Oxford. The Student Paper Award Honorable Mention was awarded to Yeji Kim, University of California, Berkeley – School of Law, for her paper, Virtual Reality Data and Its Privacy Regulatory Challenges: A Call to Move Beyond Text-Based Informed Consent.
The winning authors will join FPF staff to present their work at a virtual event with policymakers from around the world, academics, and industry privacy professionals. The event will be held on February 10, 2022, from 1:00 – 3:00 PM EST. The event is free and open to the general public. To register for the event, visit https://bit.ly/3qmJdL2.
Organizations must lead with privacy and ethics when researching and implementing neurotechnology: FPF and IBM Live event and report release
A New FPF and IBM Report and Live Event Explore Questions About Transparency, Consent, Security, and Accuracy of Data
The Future of Privacy Forum (FPF) and the IBM Policy Lab released recommendations for promoting privacy and mitigating risks associated with neurotechnology, specifically brain-computer interfaces (BCIs). The new report provides developers and policymakers with actionable ways this technology can be implemented while protecting the privacy and rights of its users.
“We have a prime opportunity now to implement strong privacy and human rights protections as brain-computer interfaces become more widely used,” said Jeremy Greenberg, Policy Counsel at the Future of Privacy Forum. “Among other uses, these technologies have tremendous potential to treat people with diseases and conditions like epilepsy or paralysis and make it easier for people with disabilities to communicate, but these benefits can only be fully realized if meaningful privacy and ethical safeguards are in place.”
Brain-computer interfaces are computer-based systems that are capable of directly recording, processing, analyzing, or modulating human brain activity. The sensitivity of data that BCIs collect and the capabilities of the technology raise concerns over consent, as well as the transparency, security, and accuracy of the data. The report offers a number of policy and technical solutions to mitigate the risks of BCIs and highlights their positive uses.
“Emerging innovations like neurotechnology hold great promise to transform healthcare, education, transportation, and more, but they need the right guardrails in place to protect individuals’ privacy,” said IBM Chief Privacy Officer Christina Montgomery. “Working together with the Future of Privacy Forum, the IBM Policy Lab is pleased to release a new framework to help policymakers and businesses navigate the future of neurotechnology while safeguarding human rights.”
FPF and IBM have outlined several key policy recommendations to mitigate the privacy risks associated with BCIs, including:
Rethinking transparency, notice, terms of use, and consent frameworks to empower people around uses of their neurodata;
Ensuring that BCI devices are not used to influence decisions about individuals that have legal effects, livelihood effects, or similar significant impacts—such as assessing the truthfulness of statements in legal proceedings; inferring thoughts, emotions, psychological state, or personality attributes as part of hiring or school admissions decisions; or assessing individuals’ eligibility for legal benefits;
Promoting an open and inclusive research ecosystem by encouraging the adoption of open standards for the collection and analysis of neurodata and the sharing of research data with appropriate safeguards in place.
Policymakers and other BCI stakeholders should carefully evaluate how existing policy frameworks apply to neurotechnologies and identify potential areas where existing laws and regulations may be insufficient for the unique risks of neurotechnologies.
FPF and IBM have also included several technical recommendations for BCI devices, including:
Providing hard on/off controls for users;
Allowing users to manage the collection, use, and sharing of personal neurodata on devices and in companion apps;
Offering heightened transparency and control for BCIs that send signals to the brain, rather than merely receive neurodata;
Utilizing best practices for privacy and security to store and process neurodata and use privacy enhancing technologies where appropriate; and
Encrypting sensitive personal neurodata in transit and at rest.
FPF-curated educational resources, policy & regulatory documents, academic papers, thought pieces, and technical analyses regarding brain-computer interfaces are available here.
Read FPF’s four-part series on Brain-Computer Interfaces (BCIs), providing an overview of the technology, use cases, privacy risks, and proposed recommendations for promoting privacy and mitigating risks associated with BCIs.
FPF Launches Asia-Pacific Region Office, Global Data Protection Expert Clarisse Girot Leads Team
The Future of Privacy Forum (FPF) has appointed Clarisse Girot, PhD, LLM, an expert on Asian and European privacy legislation, as Director of its new FPF Asia-Pacific office, based in Singapore. This new office expands FPF’s international reach in Asia and complements FPF’s offices in the U.S., Europe, and Israel, as well as partnerships around the globe.
Dr. Clarisse Girot is a privacy professional with over twenty years of experience in the privacy and data protection fields. Since 2017, Clarisse has led the Asian Business Law Institute’s (ABLI) Data Privacy Project, focusing on the regulation of cross-border data transfers in 14 Asian jurisdictions. Prior to her time at ABLI, Clarisse served as Counsellor to the President of the French Data Protection Authority (CNIL), who was then Chair of the Article 29 Working Party. She previously served as head of CNIL’s Department of European and International Affairs, where she sat on the Article 29 Working Party, the group of EU Data Protection Authorities, and was involved in major international cases in data protection and privacy.
“Clarisse is joining FPF at an important time for data protection in the Asia-Pacific region. The two most populous countries in the world, India and China, are introducing general privacy laws, and established data protection jurisdictions, like Singapore, Japan, South Korea, and New Zealand, have recently updated their laws,” said FPF CEO Jules Polonetsky. “Her extensive knowledge of privacy law will provide vital insights for those interested in compliance with regional privacy frameworks and their evolution over time.”
FPF Asia-Pacific will focus on several priorities through the end of the year, including hosting an event at this year’s Singapore Data Protection Week. The office will provide expertise in digital data flows and discuss emerging data protection issues in a way that is useful for regulators, policymakers, and legal professionals. Rajah & Tann Singapore LLP is supporting the work of the FPF Asia-Pacific office.
“The FPF global team will greatly benefit from the addition of Clarisse. She will advise FPF staff, advisory board members, and the public on the most significant privacy developments in the Asia-Pacific region, including data protection bills and cross-border data flows,” said Gabriela Zanfir-Fortuna, Director for Global Privacy at FPF. “Her past experience in both Asia and Europe gives her a unique ability to confront the most complex issues dealing with cross-border data protection.”
As over 140 countries have now enacted a privacy or data protection law, FPF continues to expand its international presence to help data protection experts grapple with the challenges of ensuring responsible uses of data. Following the appointment of Malavika Raghavan as Senior Fellow for India in 2020, the launch of the FPF Asia-Pacific office further expands FPF’s international reach.
Dr. Gabriela Zanfir-Fortuna leads FPF’s international efforts and works on global privacy developments and European data protection law and policy. The FPF Europe office is led by Dr. Rob van Eijk, who prior to joining FPF worked at the Dutch Data Protection Authority as Senior Supervision Officer and Technologist for nearly ten years. FPF has created thriving partnerships with leading privacy research organizations in the European Union, such as Dublin City University and the Brussels Privacy Hub of the Vrije Universiteit Brussel (VUB). FPF continues to serve as a leading voice in Europe on issues of international data flows, the ethics of AI, and emerging privacy issues. FPF Europe recently published a report comparing the regulatory strategy for 2021-2022 of 15 Data Protection Authorities to provide insights into the future of enforcement and regulatory action in the EU.
Outside of Europe, FPF has launched a variety of projects to advance tech policy leadership and scholarship in regions around the world, including Israel and Latin America. The work of the Israel Tech Policy Institute (ITPI), led by Managing Director Limor Shmerling Magazanik, includes publishing a report on AI Ethics in Government Services and organizing an OECD workshop with the Israeli Ministry of Health on access to health data for research.
In Latin America, FPF has partnered with the leading research association Data Privacy Brasil and provided in-depth analysis of Brazil’s LGPD privacy legislation and of various data privacy cases decided by the Brazilian Supreme Court. FPF recently organized a panel during the CPDP LatAm Conference which explored the state of Latin American data protection laws alongside experts from Uber, the University of Brasilia, and the Interamerican Institute of Human Rights.
FPF and Leading Health & Equity Organizations Issue Principles for Privacy & Equity in Digital Contact Tracing Technologies
With support from the Robert Wood Johnson Foundation, FPF engaged leaders within the privacy and equity communities to develop actionable guiding principles and a framework to help bolster the responsible implementation of digital contact tracing technologies (DCTT). Today, seven privacy, civil rights, and health equity organizations signed on to these guiding principles for organizations implementing DCTT.
“We learned early in our Privacy and Pandemics initiative that unresolved ethical, legal, social, and equity issues may challenge the responsible implementation of digital contact tracing technologies,” said Jules Polonetsky, CEO of the Future of Privacy Forum. “So we engaged leaders within the civil rights, health equity, and privacy communities to create a set of actionable principles to help guide organizations implementing digital contact tracing that respects individual rights.”
Contact tracing has long been used to monitor the spread of various infectious diseases. In light of COVID-19, governments and companies began deploying digital exposure notification, using Bluetooth and geolocation data on mobile devices to boost contact tracing efforts and quickly identify individuals who may have been exposed to the virus. However, as DCTT begins to play an important role in public health, it is important to take the necessary steps to ensure equitable access to DCTT and to understand the societal risks and tradeoffs that might accompany its implementation today and in the future. Governance efforts that seek to understand these risks will be better positioned to bolster public trust in DCTT.
“LGBT Tech is proud to have participated in the development of the Principles and Framework alongside FPF and other organizations. We are heartened to see that the focus of these principles is on historically underserved and under-resourced communities everywhere, like the LGBTQ+ community. We believe the Principles and Framework will help ensure that the needs and vulnerabilities of these populations are at the forefront during today’s pandemic and future pandemics.”
Carlos Gutierrez, Deputy Director, and General Counsel, LGBT Tech
“If we establish practices that protect individual privacy and equity, digital contact tracing technologies could play a pivotal role in tracking infectious diseases,” said Dr. Rachele Hendricks-Sturrup, Research Director at the Duke-Margolis Center for Health Policy. “These principles allow organizations implementing digital contact tracing to take ethical and responsible approaches to how their technology collects, tracks, and shares personal information.”
FPF, together with Dialogue on Diversity, the National Alliance Against Disparities in Patient Health (NADPH), BrightHive, and LGBT Tech, developed the principles, which advise organizations implementing DCTT to commit to the following actions:
Be Transparent About How Data Is Used and Shared.
Apply Strong De-Identification Techniques and Solutions.
Empower Users Through Tiered Opt-in/Opt-out Features and Data Minimization.
Acknowledge and Address Privacy, Security, and Nondiscrimination Protection Gaps.
Create Equitable Access to DCTT.
Acknowledge and Address Implicit Bias Within and Across Public and Private Settings.
Democratize Data for Public Good While Employing Appropriate Privacy Safeguards.
Adopt Privacy-By-Design Standards That Make DCTT Broadly Accessible.
Additional supporters of these principles include the Center for Democracy and Technology and Human Rights First.
To learn more and sign on to the DCTT Principles visit fpf.org/DCTT.
Support for this program was provided by the Robert Wood Johnson Foundation. The views expressed here do not necessarily reflect the views of the Foundation.
Navigating Preemption through the Lens of Existing State Privacy Laws
This post is the second of two posts on federal preemption and enforcement in United States federal privacy legislation. See Preemption in US Privacy Laws (June 14, 2021).
In drafting a federal baseline privacy law in the United States, lawmakers must decide to what extent the law will override state and local privacy laws. In a previous post, we discussed a survey of 12 existing federal privacy laws passed between 1968 and 2003, and the extent to which they preempt similar state laws.
Another way to approach the same question, however, is to examine the hundreds of existing state privacy laws currently on the books in the United States. Conversations around federal preemption inevitably focus on comprehensive laws like the California Consumer Privacy Act, or the Virginia Consumer Data Protection Act — but there are hundreds of other state privacy laws on the books that regulate commercial and government uses of data.
In reviewing existing state laws, we find that they can be categorized usefully into: laws that complement heavily regulated sectors (such as health and finance); laws of general applicability; common law; laws governing state government activities (such as schools and law enforcement); comprehensive laws; longstanding or narrowly applicable privacy laws; and emerging sectoral laws (such as biometrics or drones regulations). As a resource, we recommend: Robert Ellis Smith, Compilation of State and Federal Privacy Laws (last supplemented in 2018).
Heavily Regulated Sectoral Silos. Most federal proposals for a comprehensive privacy law would not supersede other existing federal laws that contain privacy requirements for businesses, such as the Health Insurance Portability and Accountability Act (HIPAA) or the Gramm-Leach-Bliley Act (GLBA). As a result, a new privacy law should probably not preempt state sectoral laws that: (1) supplement their federal counterparts and (2) were intentionally not preempted by those federal regimes. In many cases, robust compliance regimes have been built around federal and state parallel requirements, creating entrenched privacy expectations, privacy tools, and compliance practices for organizations (“lock in”).
Laws of General Applicability. All 50 states have laws barring unfair and deceptive acts and practices (UDAP), as well as generally applicable laws against fraud, unconscionable contracts, and other consumer protections. In cases where violations involve the misuse of personal information, such claims could be inadvertently preempted by a national privacy law.
State Common Law. Privacy claims have been evolving in US common law over the last hundred years, and claims vary from state to state. A federal privacy law might preempt (or not preempt) claims brought under theories of negligence, breach of contract, product liability, invasions of privacy, or other “privacy torts.”
State Laws Governing State Government Activities. In general, states retain the right to regulate their own government entities, and a commercial baseline privacy law is unlikely to affect such state privacy laws. These include, for example, state “mini Privacy Acts” applying to state government agencies’ collection of records, state privacy laws applicable to public schools and school districts, and state regulations involving law enforcement — such as government facial recognition bans.
Comprehensive or Non-Sectoral State Laws. Lawmakers considering the extent of federal preemption should take extra care to consider the effect on different aspects of omnibus or comprehensive consumer privacy laws, such as the California Consumer Privacy Act (CCPA), the Colorado Privacy Act, and the Virginia Consumer Data Protection Act. In addition, however, there are a number of other state privacy laws that can be considered “non-sectoral” because they apply broadly to businesses that collect or use personal information. These include, for example, CalOPPA (requiring commercial privacy policies), the California “Shine the Light” law (requiring disclosures from companies that share personal information for direct marketing), data breach notification laws, and data disposal laws.
Congressional intent is the “ultimate touchstone” of preemption. Lawmakers should consider long-term effects on current and future state laws, including how they will be impacted by a preemption provision, as well as how they might be expressly preserved through a Savings Clause. In order to help build consensus, lawmakers should work with stakeholders and experts in the numerous categories of laws discussed above, to consider how they might be impacted by federal preemption.
Manipulative Design: Defining Areas of Focus for Consumer Privacy
In consumer privacy, the phrase “dark patterns” is everywhere. Emerging from a wide range of technical and academic literature, it now appears in at least two US privacy laws: the California Privacy Rights Act and the Colorado Privacy Act (which, if signed by the Governor, will come into effect in 2023).
Under both laws, companies will be prohibited from using “dark patterns,” or “user interface[s] designed or manipulated with the substantial effect of subverting or impairing user autonomy, decision‐making, or choice,” to obtain user consent in certain situations–for example, for the collection of sensitive data.
When organizations give individuals choices, some forms of manipulation have long been barred by consumer protection laws, with the Federal Trade Commission and state Attorneys General prohibiting companies from deceiving or coercing consumers into taking actions they did not intend or striking bargains they did not want. But consumer protection law does not typically prohibit organizations from persuading consumers to make a particular choice. And it is often unclear where the lines fall between cajoling, persuading, pressuring, nagging, annoying, or bullying consumers. The California and Colorado laws seek to do more than merely bar deceptive practices; they prohibit design that “subverts or impairs user autonomy.”
What does it mean to subvert user autonomy, if a design does not already run afoul of traditional consumer protection law? Just as in the physical world, the design of digital platforms and services always influences behavior — what to pay attention to, what to read and in what order, how much time to spend, what to buy, and so on. To paraphrase Harry Brignull (credited with coining the term), not everything “annoying” can be a dark pattern. Some examples of dark patterns are both clear and harmful, such as a design that tricks users into making recurring payments, or a service that offers a “free trial” and then makes it difficult or impossible to cancel. In other cases, the presence of “nudging” may be clear, but the harms may be less so, such as beta-testing which color shades are most effective at encouraging sales. Still others fall in a legal grey area: for example, is it ever appropriate for a company to repeatedly “nag” users to make a choice that benefits the company, with little or no accompanying benefit to the user?
In Fall 2021, the Future of Privacy Forum will host a series of workshops with technical, academic, and legal experts to help define clear areas of focus for consumer privacy, as well as guidance for policymakers and legislators. These workshops will feature experts on manipulative design in at least three contexts of consumer privacy: (1) Youth & Education; (2) Online Advertising and US Law; and (3) GDPR and European Law.
As lawmakers address this issue, we identify at least four distinct areas of concern:
Designs that cause concrete physical or financial harms to individuals. In some cases, design choices are implicated in concrete physical or financial harms. This might include, for example, a design that tricks users into making recurring payments, or makes unsubscribing from a free trial or other paid service difficult or impossible, leading to unwanted charges.
Designs that impact individual autonomy or dignity (but do not necessarily cause concrete physical or financial harm). In many cases, we observe concerns over autonomy and dignity, even where the use of data would not necessarily cause harm. For the same reasons that there is wide agreement that so-called subliminal messaging in advertising is wrong (as well as illegal), there is a growing awareness that disrespect for user autonomy in consumer privacy is objectionable on its face. As a result, in cases where the law requires consent, such as in the European Union for placement of information onto a user’s device, the law ought to provide a remedy for individuals who have been subject to a violation of that consent.
Designs that persuade, nag, or strongly push users towards a particular outcome, even where it may be possible for users to decline. In many cases, the design of a digital platform or service clearly pushes users towards a particular outcome, even if it is possible (if burdensome) for users to make a different choice. In such cases, we observe a wide spectrum of tactics that may be evaluated differently depending on the viewer and the context. Repeated requests may be considered “nagging” or “persuasion”; one person’s “clever marketing,” taken too far, becomes another person’s “guilt-shaming” or “confirm-shaming.” Ultimately, our preference for defaults (“opt in” versus “opt out”), and within those defaults, our level of tolerance for “nudging,” may be driven by the social benefits or values attached to the choice itself.
Designs that exploit biases, vulnerabilities, or heuristics in ways that implicate broader societal harms or values. Finally, we observe that the collection and use of personal information does not always solely impact individual decision-making. Often, the design of online platforms can influence groups in ways that impact societal values, such as the values of privacy, avoidance of “tech addiction,” free speech, the availability of data from or about marginalized groups, or the proliferation of unfair price discrimination or other market manipulation. Understanding how design choices may influence society, even if individuals are minimally impacted, may require examining the issues differently.
This week at the first edition of the annual Dublin Privacy Symposium, FPF will join other experts to discuss principles for transparency and trust. The design of user interfaces for digital products and services pervades modern life and directly impacts the choices people make with respect to sharing their personal information.
In February 2021, the Indian Government notified new Rules under the Information Technology Act which, among other changes:
recast the conditions to obtain ‘safe harbour’ from liability for online intermediaries, and
unveiled an extensive regulatory regime for a newly defined category of online ‘publishers’, which includes digital news media and Over-The-Top (OTT) services.
The majority of these provisions were unanticipated, resulting in a raft of petitions filed in High Courts across the country challenging the validity of various aspects of the Rules, including their constitutionality. On 25 May 2021, the three-month compliance period for some new requirements for significant social media intermediaries (so designated by the Rules) expired, with many intermediaries not yet in compliance, opening them up to liability under the Information Technology Act as well as wider civil and criminal laws. This has reignited debates about the impact of the Rules on business continuity and liability, citizens’ access to online services, privacy and security.
Following on FPF’s previous blog highlighting some aspects of these Rules, this article presents an overview of the Rules before deep-diving into critical issues regarding their interpretation and application in India. It concludes by taking stock of some of the emerging effects of these new regulations, which have major implications for millions of Indian users, as well as digital services providers serving the Indian market.
1. Brief overview of the Rules: Two new regimes for ‘intermediaries’ and ‘publishers’
The new Rules create two regimes for two different categories of entities: ‘intermediaries’ and ‘publishers’. Intermediaries have been the subject of prior regulations – the Information Technology (Intermediaries guidelines) Rules, 2011 (the 2011 Rules), now superseded by these Rules. However, the category of “publishers” and related regime created by these Rules did not previously exist.
The Rules begin with commencement provisions and definitions in Part I. Part II of the Rules applies to intermediaries, as defined in the Information Technology Act 2000 (IT Act), who transmit electronic records on behalf of others; this includes online intermediary platforms (like YouTube, WhatsApp and Facebook). The rules in this part primarily flesh out the protections offered in Section 79 of the IT Act, which give passive intermediaries the benefit of a ‘safe harbour’ from liability for objectionable information shared by third parties using their services — somewhat akin to protections under Section 230 of the US Communications Decency Act. To claim this protection from liability, intermediaries need to undertake certain ‘due diligence’ measures, including informing users of the types of content that cannot be shared and following content take-down procedures (for which safeguards evolved over time through important case law). The new Rules supersede the 2011 Rules and also significantly expand on them, introducing new provisions and additional due diligence requirements that are detailed further in this blog.
Part III of the Rules applies to a new, previously non-existent category of entities designated as ‘publishers’. This category is further classified into the subcategories of ‘publishers of news and current affairs content’ and ‘publishers of online curated content’. Part III then sets up extensive requirements for publishers to adhere to specific codes of ethics, onerous content take-down requirements, and a three-tier grievance process with appeals lying to an Executive Inter-Departmental Committee of Central Government bureaucrats.
Finally, the Rules contain two provisions relating to content-blocking orders that apply to all entities (i.e. intermediaries and publishers). They lay out a new process by which Central Government officials can issue directions to intermediaries and publishers to delete, modify or block content, either following a grievance process (Rule 15) or through “emergency” blocking orders which may be passed ex parte (Rule 16). These provisions stem from the power to issue directions to intermediaries to block public access to any information through any computer resource (Section 69A of the IT Act). Interestingly, they have been introduced separately from the existing rules for blocking, namely the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009.
2. Key issues for intermediaries under the Rules
2.1 A new class of ‘social media intermediaries‘
The term ‘intermediary’ is broadly defined in the IT Act, covering a range of entities involved in the transmission of electronic records. The Rules introduce two new sub-categories:
“social media intermediary” defined (in Rule 2(w)) as one who “primarily or solely enables online interaction between two or more users and allows them” to exchange information; and
“significant social media intermediary” (SSMI) comprising social media intermediaries with more than five million registered users in India (following this Government notification of the threshold).
Given that a popular messaging app like WhatsApp has over 400 million users in India, the threshold appears to be fairly conservative. The Government may also order any intermediary to comply with the same obligations as SSMIs (under Rule 6) if its services are adjudged to pose a risk of harm to national security, the sovereignty and integrity of India, India’s foreign relations or public order.
SSMIs have to follow substantially more onerous “additional due diligence” requirements to claim the intermediary safe harbour (including mandatory traceability of message originators and proactive automated screening, as discussed below). These new requirements raise privacy and data security concerns: they extend beyond traditional ideas of platform “due diligence” and potentially expose the content of private communications, creating new privacy risks for users in India.
Extensive new requirements are set out in the new Rule 4 for SSMIs.
In-country employees: SSMIs must appoint in-country employees as (1) Chief Compliance Officer, (2) a nodal contact person for 24×7 coordination with law enforcement agencies and (3) a Resident Grievance Officer specifically responsible for overseeing the internal grievance redress mechanism. Monthly reporting of complaints management is also mandated.
Traceability requirements for SSMIs providing messaging services: Among the most controversial requirements is Rule 4(2), which requires SSMIs providing messaging services to enable the identification of the “first originator” of information on their platforms as required by Government or court orders. This tracing and identification of users is considered incompatible with the end-to-end encryption technology employed by messaging applications like WhatsApp and Signal. In its legal challenge to this Rule, WhatsApp has noted that end-to-end encrypted platforms would need to be re-engineered to identify all users, since there is no way to predict which user will be the subject of an order seeking first originator information.
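To make the technical objection concrete, consider the following minimal sketch (a hypothetical illustration only; the Rules do not prescribe a mechanism, and this is not a description of WhatsApp’s, Signal’s, or any other platform’s actual design). Because an order can name any past message, a service could only answer it if it had recorded an originator identifier for every message at the moment it was sent, for example by keeping a fingerprint of each message mapped to its first sender:

```python
import hashlib

# Hypothetical "first originator" index. Because the service cannot know in
# advance which message a future order will name, it would have to fingerprint
# EVERY message at send time and retain the mapping indefinitely.
first_originator_index = {}

def record_message(sender_id, message_plaintext):
    """Record the first sender seen for a given message fingerprint."""
    fingerprint = hashlib.sha256(message_plaintext.encode("utf-8")).hexdigest()
    # Only the first sender of identical content is retained as the "originator".
    first_originator_index.setdefault(fingerprint, sender_id)

def answer_order(message_plaintext):
    """Look up the recorded first originator for a message named in an order."""
    fingerprint = hashlib.sha256(message_plaintext.encode("utf-8")).hexdigest()
    return first_originator_index.get(fingerprint)

record_message("user-A", "forwarded rumour text")
record_message("user-B", "forwarded rumour text")  # a forwarder, not recorded
print(answer_order("forwarded rumour text"))       # -> user-A
```

Even in this toy form the privacy problem is visible: the index must cover all users and all messages, and because an end-to-end encrypted operator never sees message plaintext, the fingerprinting would have to be pushed onto every client device and reported back to the service, which is the kind of re-engineering WhatsApp describes in its challenge.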
Provisions that mandate modifications to the technical design of encrypted platforms to enable traceability seem to go beyond merely requiring intermediary due diligence. Instead, they appear to draw on separate Government powers relating to interception and decryption of information (under Section 69 of the IT Act). In addition, separate stand-alone rules laying out procedures and safeguards for such interception and decryption orders already exist in the Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009. Rule 4(2) even acknowledges these provisions, raising the question of whether the Rules (which relate to intermediaries and their safe harbours) can be used to expand the scope of Section 69 or the rules made under it.
Proceedings initiated by WhatsApp LLC in the Delhi High Court and by Free and Open Source Software (FOSS) developer Praveen Arimbrathodiyil in the Kerala High Court have both challenged the legality and validity of Rule 4(2), on grounds including that it is ultra vires, going beyond the scope of its parent statutory provisions (Sections 79 and 69A) and the intent of the IT Act itself. Substantively, the provision is also challenged on the basis that it would violate users’ fundamental rights, including the right to privacy and the right to free speech and expression, due to the chilling effect that the stripping back of encryption would have.
Automated content screening: Rule 4(4) mandates that SSMIs must employ technology-based measures, including automated tools, to proactively identify information depicting (i) rape, child sexual abuse or conduct, or (ii) any information previously removed following a Government or court order. The latter category is very expansive and allows content take-downs for a broad range of reasons, ranging from defamatory or pornographic content, to IP infringements, to content threatening national security or public order (as set out in Rule 3(1)(d)).
Though the objective of the provision is laudable (i.e. to limit the circulation of violent or previously removed content), the move towards proactive automated monitoring has raised serious concerns regarding censorship on social media platforms. Rule 4(4) appears to acknowledge the deep tensions that this requirement creates with privacy and free speech, as seen in the provisions requiring these screening measures to be proportionate to the free speech and privacy interests of users, to be subject to human oversight, and to undergo reviews of the automated tools to assess fairness, accuracy, propensity for bias or discrimination, and impact on privacy and security. However, given the vagueness of this wording and the high cost of losing intermediary immunity, scholars and commentators have noted the obvious potential for ‘over-compliance’ and excessive screening out of content. Many (including the petitioner in the Praveen Arimbrathodiyil matter) have also noted that automated filters are not sophisticated enough to differentiate between violent unlawful images and legitimate journalistic material. The concern is that such measures could screen out ‘valid’ speech and expression at scale, with serious consequences for the constitutional rights to free speech and expression, which also protect ‘the rights of individuals to listen, read and receive the said speech‘ (Tata Press Ltd v. Mahanagar Telephone Nigam Ltd, (1995) 5 SCC 139).
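To illustrate why over-blocking is a structural risk rather than a tuning problem, consider a deliberately simplified sketch of a screening pipeline (hypothetical: Rule 4(4) does not prescribe any technique, and production systems typically rely on perceptual hashing or machine-learning classifiers rather than the toy string matching used here). Exact matching against fingerprints of previously removed content is evaded by any trivial edit, while looser similarity matching that catches altered copies will also flag some lawful material, such as journalism that quotes the removed content:

```python
import hashlib
from difflib import SequenceMatcher

# Hypothetical blocklist of content previously removed under a Government or
# court order, stored both as exact hashes and as raw text for fuzzy matching.
removed_content = ["violent clip transcript xyz ..."]
removed_hashes = {hashlib.sha256(c.encode("utf-8")).hexdigest() for c in removed_content}

def exact_match(upload):
    """Exact hash matching: defeated by any trivial edit to the content."""
    return hashlib.sha256(upload.encode("utf-8")).hexdigest() in removed_hashes

def fuzzy_match(upload, threshold=0.6):
    """Looser similarity matching: catches altered copies, but can also flag
    lawful material, such as a news report quoting the removed content."""
    return any(SequenceMatcher(None, upload, c).ratio() >= threshold
               for c in removed_content)

news_report = "Our investigation quotes the violent clip transcript xyz ..."
print(exact_match(news_report))  # False: the report is not an identical copy
print(fuzzy_match(news_report))  # True: flagged even though it is journalism
```

Moving the threshold only shifts the error between under-blocking and over-blocking; it does not remove the need for the human oversight and the fairness, accuracy and bias reviews that the Rule itself contemplates.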
Tighter timelines for grievance redress, content take-down and information sharing with law enforcement: Rule 3 includes enhanced requirements to serve privacy policies and user agreements outlining the terms of use, including annual reminders of these terms, of any modifications, and of the intermediary’s right to terminate the user’s access for using the service in contravention of these terms. The Rule also enhances grievance redress processes for intermediaries, mandating that the complaints system acknowledge complaints within 24 hours and dispose of them within 15 days. For certain categories of complaints (where a person complains of inappropriate images or impersonations of them being circulated), removal of access to the material is mandated within 24 hours based on a prima facie assessment.
Such requirements appear to be aimed at creating more user-friendly networks of intermediaries. However, the imposition of a single set of requirements is especially onerous for smaller or volunteer-run intermediary platforms which may not have income streams or staff to provide for such a mechanism. Indeed, the petition in the Praveen Arimbrathodiyil matter has challenged certain of these requirements as being a threat to the future of the volunteer-led Free and Open Source Software (FOSS) movement in India, by placing similar requirements on small FOSS initiatives as on large proprietary Big Tech intermediaries.
Other obligations that stipulate turn-around times for intermediaries include (i) a requirement to remove or disable access to content within 36 hours of receipt of a Government or court order relating to unlawful information on the intermediary’s computer resources (under Rule 3(1)(d)), and (ii) a requirement to provide information within 72 hours of receiving an order from an authorised Government agency undertaking investigative activity (under Rule 3(1)(j)).
Similar to the concerns with automated screening, there are concerns that the new grievance process could lead to private entities becoming the arbiters of appropriate content and free speech — a position that was specifically reversed in a seminal 2015 Supreme Court decision, which clarified that a Government or court order was needed for content take-downs.
3. Key issues for the new ‘publishers’ subject to the Rules, including OTT players
3.1 New Codes of Ethics and three-tier redress and oversight system for digital news media and OTT players
Digital news media and OTT players have been designated as ‘publishers of news and current affairs content’ and ‘publishers of online curated content’ respectively in Part III of the Rules. Each category has then been subjected to a separate Code of Ethics. In the case of digital news media, the Codes applicable to newspapers and cable television have been applied. For OTT players, the Appendix sets out principles regarding the content that can be created, along with display classifications. To enforce these codes and to address grievances from the public about their content, publishers are now mandated to set up a grievance system which will form the first tier of a three-tier “appellate” system, culminating in an oversight mechanism run by the Central Government with extensive powers of sanction.
Some of the key issues emerging from these Rules in Part III and the challenges to them are highlighted below.
3.2 Lack of legal authority and competence to create these Rules
There has been substantial debate on the lack of clarity regarding the legal authority of the Ministry of Electronics & Information Technology (MeitY) under the IT Act. These concerns arise at various levels.
Authority and competence to regulate ‘publishers’ of original content is unclear: The definition of ‘intermediary’ in the IT Act does not extend to cover the types of entities defined as publishers. The Rules themselves acknowledge that ‘publishers’ are a new category of regulated entity created by the Rules, as opposed to a sub-category of intermediaries. Further, the commencement provisions of the Rules confirm that they are passed under statutory provisions in the IT Act related to intermediary regulation. It is a well-established principle that subordinate rules cannot go beyond the object and scope of their parent statutory provisions (Ajoy Kumar Banerjee v Union of India (1984) 3 SCC 127). Consequently, the authority of MeitY to regulate entities that create original content – like online news sources and OTT platforms – remains unclear at best.
Ability to extend substantive provisions in other statutes through the Rules: The Rules apply two codes of conduct to digital publishers of news and current affairs content, namely (i) the Norms of Journalistic Conduct of the Press Council of India under the Press Council Act, 1978, and (ii) the Programme Code under section 5 of the Cable Television Networks (Regulation) Act, 1995. Many, including the petitioners in the LiveLaw matter, have noted that the power to make Rules under section 87 of the IT Act cannot be used to extend or expand requirements under other statutes and their subordinate rules. To bring digital news media or OTT players into the existing regulatory regimes for the press and television broadcasting, amendments to those regimes, led by the Ministry of Information and Broadcasting, would be required.
Validity of three-tier ‘quasi-judicial’ adjudicatory mechanism, with final appeal to a Committee of solely executive functionaries: Rules 11–14 create a three-tier grievance and oversight system open to any person with a grievance against content published by any publisher. Under this model, a complaint is first made through the publisher’s own redress process. If a grievance is not satisfactorily dealt with by the publisher entity (Level I) within 15 days, it is escalated to the self-regulatory body of which the publisher is a member (Level II), which must also provide a decision to the complainant within 15 days. If the complainant is still unsatisfied, they may appeal to the Oversight Mechanism (Level III). This can be appreciated as an attempt to create feedback loops that can minimise the spread of misleading or incendiary media, disinformation and the like through a more effective grievance mechanism. The structure and design of the three-tier system have, however, raised specific concerns.
First, there is a concern that Levels I and II result in a privatisation of adjudications relating to the free speech and expression of creative content producers – matters which would otherwise be litigated in Courts and Tribunals. As noted by many (including the LiveLaw petition at page 33), this could have the effect of overturning the judicial precedent in Shreya Singhal v. Union of India ((2013) 12 S.C.C. 73), which specifically read down section 79 of the IT Act to avoid a situation where private entities were the arbiters determining the legitimacy of take-down orders. Second, despite referring to “self-regulation”, this system is subject to executive oversight (unlike the existing models for offline newspapers and broadcasting).
The Inter-Departmental Committee is composed entirely of Central Government bureaucrats, and it may review complaints escalated through the three-tier system or referred directly by the Ministry, following which it can deploy a range of sanctions, from warnings, to mandated apologies, to deleting, modifying or blocking content. This also raises the question of whether the Committee meets the legal requirements for an administrative body undertaking a ‘quasi-judicial’ function, especially one that may adjudicate on matters of rights relating to free speech and privacy. Finally, while the objective of creating some standards and codes for such content creators may be laudable, it is unclear whether such an extensive oversight mechanism with powers of sanction over online publishers can be validly created under the rubric of intermediary liability provisions.
4. New powers to delete, modify or block information for public access
As described at the start of this blog, the Rules add new powers for the deletion, modification and blocking of content from intermediaries and publishers. While section 69A of the IT Act (and the Rules thereunder) does include blocking powers for the Government, these powers exist only vis-à-vis intermediaries. Rule 15 expands this power to ‘publishers’. It also provides a new avenue for such orders to intermediaries, outside of the existing rules for blocking information under the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009.
Graver concerns arise from Rule 16, which allows for the passing of emergency orders blocking information, including without giving publishers or intermediaries an opportunity to be heard. There is a provision for such an order to be reviewed by the Inter-Departmental Committee within 2 days of its issue.
Both Rules 15 and 16 apply to all entities contemplated in the Rules. Accordingly, they greatly expand executive power and oversight over digital media services in India, including social media, digital news media and OTT on-demand services.
5. Conclusions and future implications
The new Rules in India have opened up deep questions for online intermediaries and providers of digital media services serving the Indian market.
For intermediaries, this creates a difficult and even existential choice: the requirements (especially those relating to traceability and automated screening) appear to set an improbably high bar given the reality of their technical systems. However, failure to comply results not only in the loss of the safe harbour from liability but, as seen in the new Rule 7, also opens them up to punishment under the IT Act and criminal law in India.
For digital news and OTT players, the consequences of non-compliance and the level of enforcement remain to be understood, especially given the open questions regarding the validity of the legal basis for these rules. Given the numerous petitions filed against the Rules, there is also substantial uncertainty regarding their future, although the Rules themselves have the full force of law at present.
Overall, it does appear that attempts to create a ‘digital media’ watchdog would be better dealt with in standalone legislation, potentially sponsored by the Ministry of Information and Broadcasting (MIB), which has the traditional remit over such areas. Indeed, the administration of Part III of the Rules has been delegated by MeitY to MIB, pointing to the genuine split in competence between these Ministries.
Finally, potential overlaps with India’s proposed Personal Data Protection Bill (if passed) may create further tensions. It remains to be seen whether the provisions on traceability will survive the test of constitutional validity set out in India’s privacy judgment (Justice K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1). Irrespective of that determination, the Rules appear to have some dissonance with the data retention and data minimisation requirements seen in the last draft of the Personal Data Protection Bill, not to mention other obligations relating to Privacy by Design and data security safeguards. Interestingly, the definition of ‘social media intermediary’ included in an explanatory clause to section 26(4) of the Bill (released in December 2019) closely tracks the definition in Rule 2(w), but departs from it by carving out certain intermediaries from the definition. This is already resulting in moves such as Google’s plea of 2 June 2021 in the Delhi High Court asking for protection from being declared a social media intermediary.
These new Rules have laid bare the inherent tensions within digital regulation between the goals of freedom of speech and expression and the right to privacy, and the competing governance objectives of law enforcement (such as limiting the circulation of violent, harmful or criminal content online) and national security. The ultimate legal effect of these Rules will be determined as much by the outcome of the various petitions challenging their validity as by the enforcement challenges raised by casting such a wide net over the millions of users and thousands of entities who are all engaged in creating India’s growing digital public sphere.
New FPF Report Highlights Privacy Tech Sector Evolving from Compliance Tools to Platforms for Risk Management and Data Utilization
As we enter the third phase of development of the privacy tech market, purchasers are demanding more integrated solutions, product offerings are more comprehensive, and startup valuations are higher than ever, according to a new report from the Future of Privacy Forum and Privacy Tech Alliance. These factors are leading to companies providing a wider range of services, acting as risk management platforms, and focusing on support of business outcomes.
“The privacy tech sector is at an inflection point, as its offerings have expanded beyond assisting with regulatory compliance,” said FPF CEO Jules Polonetsky. “Increasingly, companies want privacy tech to help businesses maximize the utility of data while managing ethics and data protection compliance.”
According to the report, “Privacy Tech’s Third Generation: A Review of the Emerging Privacy Tech Sector,” regulations are often the biggest driver for buyers’ initial privacy tech purchases. Organizations also are deploying tools to mitigate potential harms from the use of data. However, buyers serving global markets increasingly need privacy tech that offers data availability and control and supports its utility, in addition to regulatory compliance.
The report finds the COVID-19 pandemic has accelerated global marketplace adoption of privacy tech as dependence on digital technologies grows. Privacy is becoming a competitive differentiator in some sectors, and TechCrunch reports that 200+ privacy startups have together raised more than $3.5 billion over hundreds of individual rounds of funding.
“The customers buying privacy-enhancing tech used to be primarily Chief Privacy Officers,” said report lead author Tim Sparapani. “Now it’s also Chief Marketing Officers, Chief Data Scientists, and Strategy Officers who value the insights they can glean from de-identified customer data.”
The report highlights five trends in the privacy enhancing tech market:
Buyers desire “enterprise-wide solutions.”
Buyers favor integrated technologies.
Some vendors are moving to either collaborate and integrate or provide fully integrated solutions themselves.
Data is the enterprise asset.
Jurisdictional differences compound the lack of a shared vernacular.
The report also draws seven implications for competition in the market:
Buyers favor integrated solutions over one-off solutions.
Collaborations, partnerships, cross-selling, and joint ventures between privacy tech vendors are increasing, to provide buyers with integrated suites of services and to attract additional market share.
Private equity and private equity-backed companies will continue their “roll-up” strategies of buying niche providers to build a package of companies to provide the integrated solutions buyers favor.
Venture capital will continue funding the privacy tech sector, though not every seller has the same level of success fundraising.
Big companies may acquire strategically valuable, niche players.
Small startups may struggle to gain market traction absent a truly novel or superb solution.
Buyers will face challenges in future-proofing their privacy strategies.
The report makes a series of recommendations, including that the industry define as a priority a common vernacular for privacy tech; set standards for technologies in the “privacy stack” such as differential privacy, homomorphic encryption, and federated learning; and explore the needs of companies for privacy tech based upon their size, sector, and structure. It calls on vendors to recognize the need to provide adequate support to customers to increase uptake and speed time from contract signing to successful integration.
The Future of Privacy Forum launched the Privacy Tech Alliance (PTA) as a global initiative with a mission to define, enhance and promote the market for privacy technologies. The PTA brings together innovators in privacy tech with customers and key stakeholders.
Members of the PTA Advisory Board, which includes Anonos, BigID, D-ID, Duality, Ethyca, Immuta, OneTrust, Privacy Analytics, Privitar, SAP, Truata, TrustArc, Wirewheel, and ZL Tech, have formed a working group to address impediments to growth identified in the report. The PTA working group will define a common vernacular and typology for privacy tech as a priority project with chief privacy officers and other industry leaders who are members of FPF. Other work will seek to develop common definitions and standards for privacy-enhancing technologies such as differential privacy, homomorphic encryption, and federated learning and identify emerging trends for venture capitalists and other equity investors in this space. Privacy Tech companies can apply to join the PTA by emailing [email protected].
Perspectives on the Privacy Tech Market
Quotes from Members of the Privacy Tech Alliance Advisory Board on the Release of the “Privacy Tech’s Third Generation” Report
“The ‘Privacy Tech Stack’ outlined by the FPF is a great way for organizations to view their obligations and opportunities to assess and reconcile business and privacy objectives. The Schrems II decision by the Court of Justice of the European Union highlights that skipping the second ‘Process’ layer can result in desired ‘Outcomes’ in the third layer (e.g., cloud processing of, or remote access to, cleartext data) being unlawful – despite their global popularity – without adequate risk management controls for decentralized processing.” — Gary LaFever, CEO & General Counsel, Anonos
“As a founding member of this global initiative, we are excited by the conclusions drawn from this foundational report – we’ve seen parallels in our customer base, from needing an enterprise-wide solution to the rich opportunity for collaboration and integration. The privacy tech sector continues to mature as does the imperative for organizations of all sizes to achieve compliance in light of the increasingly complicated data protection landscape.’’—Heather Federman, VP Privacy and Policy at BigID
“There is no doubt of the massive importance of the privacy sector, an area which is experiencing huge growth. We couldn’t be more proud to be part of the Privacy Tech Alliance Advisory Board and absolutely support the work they are doing to create alignment in the industry and help it face the current set of challenges. In fact we are now working on a similar initiative in the synthetic media space to ensure that ethical considerations are at the forefront of that industry too.” — Gil Perry, Co-Founder & CEO, D-ID
“We congratulate the Future of Privacy Forum and the Privacy Tech Alliance on the publication of this highly comprehensive study, which analyzes key trends within the rapidly expanding privacy tech sector. Enterprises today are increasingly reliant on privacy tech, not only as a means of ensuring regulatory compliance but also in order to drive business value by facilitating secure collaborations on their valuable and often sensitive data. We are proud to be part of the PTA Advisory Board, and look forward to contributing further to its efforts to educate the market on the importance of privacy-tech, the various tools available and their best utilization, ultimately removing barriers to successful deployments of privacy-tech by enterprises in all industry sectors” — Rina Shainski, Chairwoman, Co-founder, Duality
“Since the birth of the privacy tech sector, we’ve been helping companies find and understand the data they have, compare it against applicable global laws and regulations, and remediate any gaps in compliance. But as the industry continues to evolve, privacy tech also is helping show business value beyond just compliance. Companies are becoming more transparent, differentiating on ethics and ESG, and building businesses that differentiate on trust. The privacy tech industry is growing quickly because we’re able to show value for compliance as well as actionable business insights and valuable business outcomes.” — Kabir Barday, CEO, OneTrust
“Leading organizations realize that to be truly competitive in a rapidly evolving marketplace, they need to have a solid defensive footing. Turnkey privacy technologies enable them to move onto the offense by safely leveraging their data assets rapidly at scale.” — Luk Arbuckle, Chief Methodologist, Privacy Analytics
“We appreciate FPF’s analysis of the privacy tech marketplace and we’re looking forward to further research, analysis, and educational efforts by the Privacy Tech Alliance. Customers and consumers alike will benefit from a shared understanding and common definitions for the elements of the privacy stack.” — Corinna Schulze, Director, EU Government Relations, Global Corporate Affairs, SAP
“The report shines a light on the evolving sophistication of the privacy tech market and the critical need for businesses to harness emerging technologies that can tackle the multitude of operational challenges presented by the big data economy. Businesses are no longer simply turning to privacy tech vendors to overcome complexities with compliance and regulation; they are now mapping out ROI-focused data strategies that view privacy as a key commercial differentiator. In terms of market maturity, the report highlights a need to overcome ambiguities surrounding new privacy tech terminology, as well as discrepancies in the mapping of technical capabilities to actual business needs. Moving forward, the advantage will sit with those who can offer the right blend of technical and legal expertise to provide the privacy stack assurances and safeguards that buyers are seeking – from a risk, deployment and speed-to-value perspective. It’s worth noting that the growing importance of data privacy to businesses sits in direct correlation with the growing importance of data privacy to consumers. Trūata’s Global Consumer State of Mind Report 2021 found that 62% of global consumers would feel more reassured and would be more likely to spend with companies if they were officially certified to a data privacy standard. Therefore, in order to manage big data in a privacy-conscious world, the opportunity lies with responsive businesses that move with agility and understand the return on privacy investment. The shift from manual, restrictive data processes towards hyper automation and privacy-enhancing computation is where the competitive advantage can be gained and long-term consumer loyalty—and trust— can be retained.” — Aoife Sexton, Chief Privacy Officer and Chief of Product Innovation, Trūata
“As early pioneers in this space, we’ve had a unique lens on the evolving challenges organizations have faced in trying to integrate technology solutions to address dynamic, changing privacy issues in their organizations, and we believe the Privacy Technology Stack introduced in this report will drive better organizational decision-making related to how technology can be used to sustainably address the relationships among the data, processes, and outcomes.” — Chris Babel, CEO, TrustArc
“It’s important for companies that use data to do so ethically and in compliance with the law, but those are not the only reasons why the privacy tech sector is booming. In fact, companies with exceptional privacy operations gain a competitive advantage, strengthen customer relationships, and accelerate sales.” — Justin Antonipillai, Founder & CEO, Wirewheel
The right to be forgotten is not compatible with the Brazilian Constitution. Or is it?
The Brazilian Supreme Federal Court, or “STF” in its Brazilian acronym, recently took a landmark decision concerning the right to be forgotten (RTBF), finding that it is incompatible with the Brazilian Constitution. This attracted international attention to Brazil for a topic quite distant from the sadly frequent environmental, health, and political crises.
Readers should be warned that while reading this piece they might experience disappointment, perhaps even frustration, then renewed interest and curiosity and finally – and hopefully – an increased open-mindedness, understanding a new facet of the RTBF debate, and how this is playing out at constitutional level in Brazil.
This might happen because although the STF relies on the “RTBF” label, the content behind such label is quite different from what one might expect after following the same debate in Europe. From a comparative law perspective, this landmark judgment tellingly shows how similar constitutional rights play out in different legal cultures and may lead to heterogeneous outcomes based on the constitutional frameworks of reference.
How it started: insolvency seasoned with personal data
As is well known, the first global debate on what it means to be “forgotten” in the digital environment arose in Europe, thanks to Mario Costeja Gonzalez, a Spaniard who, paradoxically, will never be forgotten by anyone due to his key role in the construction of the RTBF.
Costeja famously requested to deindex from Google Search information about himself that he considered to be no longer relevant. Indeed, when anyone “googled” his name, the search engine provided, as top results, links to articles reporting Costeja’s past insolvency as a debtor. Costeja argued that, despite having been found insolvent, he had already paid his debt to justice and society many years before, and it was therefore unfair that his name would continue to be associated ad aeternum with a mistake he made in the past.
The follow-up is well known in data protection circles. The case reached the Court of Justice of the European Union (CJEU), which, in its landmark Google Spain Judgment (C-131/12), established that search engines shall be considered data controllers and therefore have an obligation to de-index information that is inappropriate, excessive, not relevant, or no longer relevant, when the data subject to whom such data refer requests it. Such an obligation was a consequence of Article 12.b of Directive 95/46 on the protection of personal data, a pre-GDPR provision that set the basis for the European conception of the RTBF, providing for the “rectification, erasure or blocking of data the processing of which does not comply with the provisions of [the] Directive, in particular because of the incomplete or inaccurate nature of the data.”
The indirect consequence of this historic decision, and the debate it generated, is that we have all come to consider the RTBF in the terms set by the CJEU. However, what is essential to emphasize is that the CJEU approach is only one possible conception and, importantly, it was possible because of the specific characteristics of the EU legal and institutional framework. We have come to think that RTBF means the establishment of a mechanism like the one resulting from the Google Spain case, but this is the result of a particular conception of the RTBF and of how this particular conception should – or could – be implemented.
The fact that the RTBF has been predominantly analyzed and discussed through European lenses does not mean that this is the only possible perspective, nor that this approach is necessarily the best. In fact, the Brazilian conception of the RTBF is remarkably different from a conceptual, constitutional, and institutional standpoint. The main concern of the Brazilian RTBF is not how a data controller might process personal data (this is the part where frustration and disappointment might arise in the reader), although the STF itself leaves the door open to such a possibility (this is the point where renewed interest and curiosity may arise).
The Brazilian conception of the right to be forgotten
Although the RTBF has acquired a fundamental relevance in digital policy circles, it is important to emphasize that, until recently, Brazilian jurisprudence had mainly focused on the juridical need for “forgetting” only in the analogue sphere. Indeed, before the CJEU Google Spain decision, the Brazilian Supreme Court of Justice or “STJ” – the other Brazilian Supreme Court that deals with the interpretation of the Law, differently from the previously mentioned STF, which deals with the interpretation of constitutional matters – had already considered the RTBF as a right not to be remembered, affirmed by the individual vis-à-vis traditional media outlets.
This interpretation first emerged in the “Candelaria massacre” case, a gloomy page of Brazilian history, featuring a multiple homicide perpetrated in 1993 in front of the Candelaria Church, a beautiful colonial Baroque building in downtown Rio de Janeiro. The gravity of the massacre and its particularly picturesque setting led Globo TV, a leading Brazilian broadcaster, to feature it in a TV show called Linha Direta. Importantly, the show included in its narration some details about a man suspected of being one of the perpetrators of the massacre but later discharged.
Understandably, the man filed a complaint arguing that the inclusion of his personal information in the TV show was causing him severe emotional distress, while also reviving suspicions against him for a crime he had already been discharged of many years before. In September 2013, in Special Appeal No. 1,334,097, the STJ agreed with the plaintiff, establishing the man’s “right not to be remembered against his will, specifically with regard to discrediting facts.” This is how the RTBF was born in Brazil.
Importantly for the present discussion, this interpretation was not born out of digital technology and does not impinge upon the delisting of specific types of information from search engine results. In Brazilian jurisprudence the RTBF has been conceived as a general right to effectively limit the publication of certain information. The man included in the Globo reportage had been discharged many years before; hence he had a right to be “let alone,” as Warren and Brandeis would argue, and not to be remembered for something he had not even committed. The STJ therefore constructed its vision of the RTBF based on article 5.X of the Brazilian Constitution, which enshrines the fundamental right to intimacy and preservation of image, two fundamental features of privacy.
Hence, although they utilize the same label, the STJ and the CJEU conceptualize two remarkably different rights when they refer to the RTBF. While both conceptions aim at limiting access to specific types of personal information, the Brazilian conception differs from the EU one on at least three levels.
First, their constitutional foundations. While both conceptions are intimately intertwined with individuals’ informational self-determination, the STJ built the RTBF based on the protection of privacy, honour and image, whereas the CJEU built it upon the fundamental right to data protection, which in the EU framework is a standalone fundamental right. Conspicuously, an explicit right to data protection did not exist in the Brazilian constitutional framework at the time of the Candelaria case, and only since 2020 has it been in the process of being recognized.
Secondly, and consequently, the original goal of the Brazilian conception of the RTBF was not to regulate how a controller should process personal data but rather to protect the private sphere of the individual. In this perspective, the goal of the STJ was not – and could not have been – to regulate the deindexation of specific incorrect or outdated information, but rather to regulate the deletion of “discrediting facts” so that the private life, honour and image of an individual would not be illegitimately violated.
Finally, yet extremely importantly, the absence at the time of the decision of an institutional framework dedicated to data protection in Brazil did not allow the STJ the same leeway as the CJEU. The EU Justices enjoyed the privilege of delegating the implementation of the RTBF to search engines because such implementation would receive guidance from, and be subject to the review of, a well-consolidated system of European Data Protection Authorities. At the EU level, DPAs are expected to guarantee a harmonious and consistent interpretation and application of data protection law. In Brazil, a DPA was only established in late 2020 and announced its first regulatory agenda in late January 2021.
This latter point is far from trivial and, in the opinion of this author, an essential preoccupation that might have driven the subsequent RTBF conceptualization of the STJ.
The stress-test
The soundness of the Brazilian definition of the RTBF, however, was going to be tested again by the STJ, in the context of another grim and unfortunate page of Brazilian history, the Aida Curi case. The case originated with the sexual assault and subsequent homicide of the young Aida Curi in Copacabana, Rio de Janeiro, on the evening of 14 July 1958. At the time, the case attracted considerable media attention, not only because of its mysterious circumstances and the young age of the victim, but also because the perpetrators of the sexual assault tried to conceal it by throwing the body of the victim from the rooftop of a very tall building on the Avenida Atlantica, the fancy avenue right in front of Copacabana beach.
Needless to say, Globo TV considered the case as a perfect story for yet another Linha Direta episode. Aida Curi’s relatives, far from enjoying the TV show, sued the broadcaster for moral damages and demanded the full enjoyment of their RTBF – in the Brazilian conception, of course. According to the plaintiffs, it was indeed not conceivable that, almost 50 years after the murder, Globo TV could publicly broadcast personal information about the victim – and her family – including the victim’s name and address, in addition to unauthorized images, thus bringing back a long-closed and extremely traumatic set of events.
The brothers of Aida Curi claimed reparation from Rede Globo, but the STJ decided that the time passed was enough to mitigate the effects of anguish and pain on the dignity of Aida Curi’s relatives, while arguing that it was impossible to report the events without mentioning the victim. This decision was appealed by Ms Curi’s family members, who demanded, by means of Extraordinary Appeal No. 1,010,606, that the STF recognize “their right to forget the tragedy.” It is interesting to note that the way the demand is constructed in this Appeal tellingly exemplifies the Brazilian conception of “forgetting” as erasure and prohibition of divulgation.
At this point, the STF identified in the Appeal an issue of “general repercussion”, a peculiar judicial mechanism that the Court can use when it recognizes that a given case has particular relevance and transcendence for the Brazilian legal and judicial system. Indeed, the decision of a case with general repercussion binds not only the parties but also establishes jurisprudence that must be followed by all lower courts.
In February 2021, the STF finally deliberated on the Aida Curi case, establishing that “the idea of a right to be forgotten is incompatible with the Constitution, thus understood as the power to prevent, due to the passage of time, the disclosure of facts or data that are true and lawfully obtained and published in analogue or digital media” and that “any excesses or abuses in the exercise of freedom of expression and information must be analyzed on a case-by-case basis, based on constitutional parameters – especially those relating to the protection of honor, image, privacy and personality in general – and the explicit and specific legal provisions existing in the criminal and civil spheres.”
In other words, what the STF has deemed as incompatible with the Federal Constitution is a specific interpretation of the Brazilian version of the RTBF. What is not compatible with the Constitution is to argue that the RTBF allows to prohibit publishing true facts, lawfully obtained. At the same time, however, the STF clearly states that it remains possible for any Court of law to evaluate, on a case-by-case basis and according to constitutional parameters and existing legal provisions, if a specific episode can allow the use of the RTBF to prohibit the divulgation of information that undermine the dignity, honour, privacy, or other fundamental interests of the individual.
Hence, while explicitly prohibiting the use of the RTBF as a general right to censorship, the STF leaves room for the use of the RTBF for delisting specific personal data in an EU-like fashion, while specifying that this must be done by finding guidance in the Constitution and the Law.
What next?
Given the core differences between the Brazilian and EU conception of the RTBF, as highlighted above, it is understandable in the opinion of this author that the STF adopted a less proactive and more conservative approach. This must be especially considered in light of the very recent establishment of a data protection institutional system in Brazil.
It is understandable that the STF might have preferred to de facto delegate to the courts the interpretation of when and how the RTBF can be rightfully invoked, according to constitutional and legal parameters. First, in the Brazilian interpretation, the RTBF fundamentally rests on the protection of privacy – i.e. the private sphere of an individual – and, while data protection concerns are acknowledged, they are not the main ground on which the Brazilian RTBF conception relies.
It is also understandable in a country and a region where the social need to remember and shed light on a recent history marked by dictatorships, well-hidden atrocities, and opacity outweighs the legitimate individual interest in prohibiting the circulation of truthful and legally obtained information. In the digital sphere, however, the RTBF quintessentially translates into an extension of informational self-determination, which the Brazilian General Data Protection Law, better known as the “LGPD” (Law No. 13.709/2018), enshrines in its article 2 as one of the “foundations” of data protection in the country, and whose fundamental character was recently recognized by the STF itself.
In this perspective, it is useful to recall the dissenting opinion of Justice Luiz Edson Fachin in the Aida Curi case, stressing that “although it does not expressly name it, the Constitution of the Republic, in its text, contains the pillars of the right to be forgotten, as it celebrates the dignity of the human person (article 1, III), the right to privacy (article 5, X) and the right to informational self-determination – which was recognized, for example, in the disposal of the precautionary measures of the Direct Unconstitutionality Actions No. 6,387, 6,388, 6,389, 6,390 and 6,393, under the rapporteurship of Justice Rosa Weber (article 5, XII).”
It is the opinion of this author that the Brazilian debate on the RTBF in the digital sphere would be clearer if its dimension as a right to deindexation of search engine results were clearly regulated. It is understandable that the STF did not dare to regulate this, given its interpretation of the RTBF and the very embryonic data protection institutional framework in Brazil. However, given the increasing datafication we are currently witnessing, it would be naïve not to expect that further RTBF claims concerning the digital environment, and specifically the way search engines process personal data, will keep emerging.
The fact that the STF has left the door open to applying the RTBF in the case-by-case analysis of individual claims may reassure the reader regarding the primacy of constitutional and legal arguments in that analysis. It may also lead the reader to wonder, very legitimately, whether such a choice is in fact the most efficient and coherent way to deal with the potentially enormous number of claims, given the margin of appreciation and interpretation that each Court may have.
An informed debate that clearly highlights the existing options and the most efficient and just ways to implement them, considering the Brazilian context, would be beneficial. This will likely be one of the goals of the upcoming Latin American edition of the Computers, Privacy and Data Protection conference (CPDP LatAm), which will take place in July, entirely online, and will explore the most pressing privacy and data protection issues for Latin American countries.
If you have any questions about engaging with The Future of Privacy Forum on Global Privacy and Digital Policymaking contact Dr. Gabriela Zanfir-Fortuna, Senior Counsel, at [email protected].
FPF announces appointment of Malavika Raghavan as Senior Fellow for India
The Future of Privacy Forum announces the appointment of Malavika Raghavan as Senior Fellow for India, expanding our Global Privacy team to one of the key jurisdictions for the future of privacy and data protection law.
Malavika is a thought leader and a lawyer working on interdisciplinary research focused on the impacts of digitisation on the lives of lower-income individuals. Her work since 2016 has focused on the regulation and use of personal data in service delivery by the Indian State and private sector actors. She founded and led the Future of Finance Initiative at Dvara Research (an Indian think tank), in partnership with the Gates Foundation, from 2016 until 2020, anchoring its research agenda and policy advocacy on emerging issues at the intersection of technology, finance and inclusion. Research that she led at Dvara Research was cited by India’s Data Protection Committee in its White Paper as well as in its final report proposing India’s draft Personal Data Protection Bill, with specific reliance placed on that research on aspects of regulatory design and enforcement. See Malavika’s full bio here.
“We are delighted to welcome Malavika to our Global Privacy team. For the following year, she will be our adviser to understand the most significant developments in privacy and data protection in India, from following the debate and legislative process of the Data Protection Bill and the processing of non-personal data initiatives, to understanding the consequences of the publication of the new IT Guidelines. India is one of the most interesting jurisdictions to follow in the world, for many reasons: the innovative thinking on data protection regulation, the potentially groundbreaking regulation of non-personal data and the outstanding number of individuals whose privacy and data protection rights will be envisaged by these developments, which will test the power structures of digital regulation and safeguarding fundamental rights in this new era”, said Dr. Gabriela Zanfir-Fortuna, Global Privacy lead at FPF.
We asked Malavika to share her thoughts for FPF’s blog on the most significant developments in privacy and digital regulation in India and on India’s role in the global privacy and digital regulation debate.
FPF: What are some of the most significant developments in the past couple of years in India in terms of data protection, privacy, digital regulation?
Malavika Raghavan: “Undoubtedly, the turning point for the privacy debate in India was the 2017 judgement of the Indian Supreme Court in Justice KS Puttaswamy v Union of India. The judgment affirmed the right to privacy as a constitutional guarantee, protected by Part III (Fundamental Rights) of the Indian Constitution. It was also regenerative, bringing our constitutional jurisprudence into the 21st century by re-interpreting timeless principles for the digital age, and casting privacy as a prerequisite for accessing other rights—including the right to life and liberty, to freedom of expression and to equality—given the ubiquitous digitisation of human experience we are witnessing today.
Overnight, Puttaswamy also re-balanced conversations in favour of privacy safeguards to make these equal priorities for builders of digital systems, rather than framing these issues as obstacles to innovation and efficiency. In addition, it challenged the narrative that privacy is an elite construct that only wealthy or privileged people deserve, since many litigants in the original case that had created the Puttaswamy reference were from marginalised groups. Since then, a string of interesting developments has arisen as new cases reassess the impact of digital technology on individuals in India, for example cases on the boundaries of private sector data sharing (such as between WhatsApp and Facebook), or on the State’s use of personal data (as in the case concerning Aadhaar, our national identification system), among others.
Puttaswamy also provided a fillip for a big legislative development, which is the creation of an omnibus data protection law in India. A bill to create this framework was proposed by a Committee of Experts under the chairmanship of Justice Srikrishna (an ex-Supreme Court judge), and has been making its way through ministerial and Parliamentary processes. There’s a large possibility that this law will be passed by the Indian parliament in 2021! Definitely a big development to watch.
FPF: How do you see India’s role in the global privacy and digital regulation debate?
Malavika Raghavan: “India’s strategy on privacy and digital regulation will undoubtedly have global impact, given that India is home to 1/7th of the world’s population! The mobile internet revolution has created a huge impact on our society with millions getting access to digital services in the last couple of decades. This has created nuanced mental models and social norms around digital technologies that are slowly being documented through research and analysis.
The challenge for policy makers is to create regulations that match these expectations and the realities of Indian users to achieve reasonable, fair regulations. As we have already seen from sectoral regulations (such as those from our Central Bank around cross border payments data flows) such regulations also have huge consequences for global firms interacting with Indian users and their personal data.
In this context, I think India can have a late-mover advantage in some ways when it comes to digital regulation. If we play our cards right, we can take the best lessons from the experience of other countries in the last few decades and eschew the missteps. More pragmatically, it seems inevitable that India’s approach to privacy and digital regulation will also be strongly influenced by the Government’s economic, geopolitical and national security agenda (both internationally and domestically).
One thing is for certain: there is no path-dependence. Our legislators and courts are thinking in unique and unexpected ways that are indeed likely to result in a fourth way (as described by the Srikrishna Data Protection Committee’s final report), compared to the approach in the US, EU and China.”
If you have any questions about engaging with The Future of Privacy Forum on Global Privacy and Digital Policymaking contact Dr. Gabriela Zanfir-Fortuna, Senior Counsel, at [email protected].
India: Massive overhaul of digital regulation, with strict rules for take-down of illegal content and automated scanning of online content
On February 25, the Indian Government notified and published the Information Technology (Guidelines for Intermediaries and Digital Media Ethics Code) Rules 2021. These rules mirror the EU’s Digital Services Act (DSA) proposal to some extent: they propose a tiered approach based on the scale of the platform and address intermediary liability, content moderation, take-down of illegal content from online platforms, and internal accountability and oversight mechanisms. They also go beyond such rules by adding a Code of Ethics for digital media, similar to the codes of ethics classic journalistic outlets must follow, and by proposing an “online content” labelling scheme for content that is safe for children.
The Code of Ethics applies to online news publishers, as well as intermediaries that “enable the transmission of news and current affairs”. This part of the Guidelines (the Code of Ethics) has already been challenged in the Delhi High Court by news publishers this week.
The Guidelines have raised several types of concerns in India, from their impact on freedom of expression and on the right to privacy (through the automated scanning of content and the imposed traceability of even end-to-end encrypted messages so that the originator can be identified), to the Government’s choice to use executive action for such profound changes. The Government, through the two Ministries involved in the process, is scheduled to testify before the Standing Committee on Information Technology of the Parliament on March 15.
New obligations for intermediaries
“Intermediaries” include “websites, apps and portals of social media networks, media sharing websites, blogs, online discussion forums, and other such functionally similar intermediaries” (as defined in rule 2(1)(m)).
Here are some of the most important rules laid out in Part II of the Guidelines, dedicated to Due Diligence by Intermediaries:
All intermediaries, regardless of size or nature, will be under an obligation to “remove or disable access” to content subject to a Court order or an order of a Government agency as early as possible, and no later than 36 hours after receiving the order (see rule 4(1)(d)).
All intermediaries will be under an obligation to inform users at least once per year about their content policies, which must at a minimum include rules such as not uploading, storing or sharing information that “belongs to another person and to which the user does not have any right”, “deceives or misleads the addressee about the origin of the message”, “is patently false and untrue” or “is harmful to minors” (see rules 4(1)(b) and (f)).
All intermediaries will have to provide information to authorities for the purpose of identity verification and for investigating and prosecuting offenses, within 72 hours of receiving an order from an authorised government agency (see rule 4(1)(j)).
All intermediaries will have to take all measures to remove or limit access, within 24 hours of receiving a complaint from a user, to any content that reveals nudity, amounts to sexual harassment, or represents a deep fake, where the content is transmitted with the intent to harass, intimidate, threaten or abuse an individual (see rule 4(1)(p)).
“Significant social media intermediaries” have enhanced obligations
“Significant social media intermediaries” are social media services with a number of users above a threshold that will be defined and notified by the Central Government. This concept is similar to the DSA’s “Very Large Online Platform,” although the DSA includes clear criteria in the proposed act itself on how to identify a VLOP.
“Significant social media intermediaries” in India will have additional obligations (similar to how the DSA proposal in the EU scales obligations):
“Significant social media intermediaries” that provide messaging services will be under an obligation to identify the “first originator” of a message following a Court order or an order from a Competent Authority (see rule 5(2)). This provision raises significant concerns over end-to-end encryption and encryption backdoors.
They will have to appoint a Chief Compliance Officer, who will be liable for failing to ensure that the intermediary observes its due diligence obligations; the CCO must hold an Indian passport and be based in India;
They will have to appoint a Chief Grievance Officer, who must also be based in India;
They will have to publish compliance reports every six months;
They will have to deploy automated scanning to proactively identify all information identical to content removed following an order (under the 36-hour rule), as well as child sexual abuse and related content (see rule 5(4)) – a generic sketch of one possible matching approach follows this list;
They will have to set up an internal mechanism for receiving complaints.
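To make the automated scanning obligation concrete, here is a generic, hedged Python sketch of one common way platforms match new uploads against content already removed under an order, using content hashes. The Guidelines do not prescribe any particular technique, and the names below are illustrative; exact-hash matching also misses modified copies, which is why perceptual hashing or fingerprinting is often used in practice.

import hashlib

# Hashes of content previously removed under the 36-hour rule; a real system
# would persist these and likely use perceptual hashes rather than exact ones.
removed_content_hashes: set[str] = set()

def register_removed(content: bytes) -> None:
    # Record a fingerprint of content taken down following an order.
    removed_content_hashes.add(hashlib.sha256(content).hexdigest())

def is_identical_to_removed(upload: bytes) -> bool:
    # Flags only byte-for-byte identical copies of removed content.
    return hashlib.sha256(upload).hexdigest() in removed_content_hashes

register_removed(b"content taken down under a court order")
print(is_identical_to_removed(b"content taken down under a court order"))  # True
print(is_identical_to_removed(b"an unrelated post"))                       # False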
These “Guidelines” seem to have the legal effect of a statute, and they are being adopted through executive action to replace guidelines adopted by the Government in 2011, under powers conferred on it by the Information Technology Act 2000. The new Guidelines would enter into force immediately after publication in the Official Gazette (there is no information yet as to when publication is scheduled). The Code of Ethics would enter into force three months after publication in the Official Gazette. As mentioned above, there are already some challenges in court against part of these rules.
This analysis by Rahul Matthan, “Traceability is Antithetical to Liberty,” raises questions about the rule on “identifying the first originator,” arguing that the Indian Supreme Court would likely declare such a measure unconstitutional.
Another jurisdiction to keep your eyes on: Australia
Also note that while the European Union is only starting its heavy and slow legislative machine, appointing Rapporteurs in the European Parliament and holding first discussions on the DSA proposal in the relevant working group of the Council, another country is set to adopt digital content rules soon: Australia. The Government is currently considering an Online Safety Bill, which was open to public consultation until mid-February and which would also include a “modernised online content scheme,” creating new classes of harmful online content, as well as take-down requirements for image-based abuse, cyber abuse and harmful content online, requiring removal within 24 hours of receiving a notice from the eSafety Commissioner.
If you have any questions about engaging with The Future of Privacy Forum on Global Privacy and Digital Policymaking contact Dr. Gabriela Zanfir-Fortuna, Senior Counsel, at [email protected].
Russia: New Law Requires Express Consent for Making Personal Data Available to the Public and for Any Subsequent Dissemination
Authors: Gabriela Zanfir-Fortuna and Regina Iminova
Amendments to the Russian general data protection law (Federal Law No. 152-FZ on Personal Data) adopted at the end of 2020 enter into force today (Monday, March 1st), with some provisions having their effective date postponed until July 1st. The changes are part of a legislative package that also amends the Criminal Code to criminalize disclosure of personal data about “protected persons” (several categories of government officials). The amendments to the data protection law introduce consent-based restrictions for any organization or individual that initially publishes personal data, as well as for those that collect and further disseminate personal data that has been made publicly available on the basis of consent, such as on social media, blogs or any other sources.
The amendments:
introduce a new category of personal data, defined as “personal data allowed by the data subject to be disseminated” (hereinafter PDD – personal data allowed for dissemination);
include strict rules for initially making personal data available to an unlimited number of persons, as well as for further processing of PDD by other organizations or individuals, including its further dissemination – all of which must be done on the basis of specific, affirmative and separately collected consent from the data subject, the existence of which must be provable at any point of use and further use;
allow the Russian regulator enforcing this law (“Roskomnadzor”) to record, in a centralized information system, the consent obtained for dissemination of personal data to an unlimited number of persons;
introduce an absolute right to opt out of the dissemination of personal data, “at any time”.
The potential impact of the amendments is broad. The new law prima facie affects social media services, online publishers, streaming services, bloggers, or any other entity who might be considered as making personal data available to “an indefinite number of persons.” They now have to collect and prove they have separate consent for making personal data publicly available, as well as for further publishing or disseminating PDD which has been lawfully published by other parties originally.
Importantly, the new provisions in the Personal Data Law dedicated to PDD do not include any specific exception for processing PDD for journalistic purposes. The only exception recognized is processing PDD “in the state and public interests defined by the legislation of the Russian Federation”. The Explanatory Note accompanying the amendments confirms that consent is the exclusive lawful ground that can justify dissemination and further processing of PDD and that the only exception to this rule is the one mentioned above, for state or public interests as defined by law. It is thus expected that the amendments might create a chilling effect on freedom of expression, especially when also taking into account the corresponding changes to the Criminal Code.
The new rules seem to be part of a broader effort in Russia to regulate information shared online and available to the public. In this context, it is noteworthy that other amendments to Law 149-FZ on Information, IT and Protection of Information solely impacting social media services were also passed into law in December 2020, and already entered into force on February 1st, 2021. Social networks are now required to monitor content and “restrict access immediately” of users that post information about state secrets, justification of terrorism or calls to terrorism, pornography, promoting violence and cruelty, or obscene language, manufacturing of drugs, information on methods to commit suicide, as well as calls for mass riots.
Below we provide a closer look at the amendments to the Personal Data Law that entered into force on March 1st, 2021.
A new category of personal data is defined
The new law defines a category of “personal data allowed by the data subject to be disseminated” (PDD), the definition being added as paragraph 1.1 to Article 3 of the Law. This new category of personal data is defined as “personal data to which an unlimited number of persons have access, and which is provided by the data subject by giving specific consent for the dissemination of such data, in accordance with the conditions in the Personal Data Law” (unofficial translation).
The old law had a dedicated provision that referred to how this type of personal data could be lawfully processed, but it was vague and offered almost no details. In particular, Article 6(10) of the Personal Data Law (the provision corresponding to Article 6 GDPR on lawful grounds for processing) provided that processing of personal data is lawful when the data subject gives access to their personal data to an unlimited number of persons. The amendments abrogate this paragraph, before introducing an entirely new article containing a detailed list of conditions for processing PDD only on the basis of consent (the new Article 10.1).
Perhaps in order to avoid misunderstanding on how the new rules for processing PDD fit with the general conditions on lawful grounds for processing personal data, a new paragraph 2 is introduced in Article 10 of the law, which details conditions for processing special categories of personal data, to clarify that processing of PDD “shall be carried out in compliance with the prohibitions and conditions provided for in Article 10.1 of this Federal Law”.
Specific, express, unambiguous and separate consent is required
Under the new law, “data operators” that process PDD must obtain specific and express consent from data subjects to process personal data, which covers any use or dissemination of the data. Notably, under Russian law, “data operators” designates both controllers and processors in the sense of the General Data Protection Regulation (GDPR), or businesses and service providers in the sense of the California Consumer Privacy Act (CCPA).
Specifically, under Article 10.1(1), the data operator must ensure that it obtains a separate consent dedicated to dissemination, other than the general consent for processing personal data or other type of consent. Importantly, “under no circumstances” may individuals’ silence or inaction be taken to indicate their consent to the processing of their personal data for dissemination, under Article 10.1(8).
In addition, the data subject must be provided with the possibility to select the categories of personal data which they permit for dissemination. Moreover, the data subject also must be provided with the possibility to establish “prohibitions on the transfer (except for granting access) of [PDD] by the operator to an unlimited number of persons, as well as prohibitions on processing or conditions of processing (except for access) of these personal data by an unlimited number of persons”, per Article 10.1(9). It seems that these prohibitions refer to specific categories of personal data provided by the data subject to the operator (out of a set of personal data, some categories may be authorized for dissemination, while others may be prohibited from dissemination).
If the data subject discloses personal data to an unlimited number of persons without providing the operator the specific consent required by the new law, then not only the original operator but all subsequent persons or operators that processed or further disseminated the PDD bear the burden of proof to “provide evidence of the legality of subsequent dissemination or other processing” under Article 10.1(2). This seems to imply that they must prove consent was obtained for dissemination (a probatio diabolica in this case). According to the Explanatory Note to the amendments, the intention was indeed to shift the burden of proving the legality of processing PDD from data subjects to data operators, since the Note makes a specific reference to the fact that, before the amendments, the burden of proof rested with data subjects.
If the separate consent for dissemination of personal data is not obtained by the operator, but other conditions for lawfulness of processing are met, the personal data can be processed by the operator, but without the right to distribute or disseminate it – Article 10.1(4).
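Pulling the consent conditions above together, the following schematic Python sketch shows the kind of information such a dissemination consent record would need to capture. The field names and structure are hypothetical illustrations, not a format prescribed by the law.

from dataclasses import dataclass, field

@dataclass
class DisseminationConsent:
    # Hypothetical record of a data subject's consent to dissemination of PDD.
    data_subject_id: str
    # Must be collected separately from any general processing consent
    # (Article 10.1(1)); silence or inaction never counts (Article 10.1(8)).
    separate_from_general_consent: bool
    # The data subject selects which categories may be disseminated and may
    # set prohibitions on transfer or on processing by an unlimited number of
    # persons (Article 10.1(9)).
    categories_allowed: list = field(default_factory=list)
    categories_prohibited: list = field(default_factory=list)
    transfer_to_unlimited_persons_prohibited: bool = False
    # The operator bears the burden of proving this consent exists at any
    # point of use or further dissemination (Article 10.1(2)).
    proof_reference: str = ""

consent = DisseminationConsent(
    data_subject_id="subject-001",
    separate_from_general_consent=True,
    categories_allowed=["name", "profile_photo"],
    categories_prohibited=["phone_number"],
    proof_reference="consent-register/2021-03-01/0001",
)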
A Consent Management Platform for PDD, managed by the Roskomnadzor
The express consent to process PDD can be given directly to the operator or through a special “information system” (which seems to be a consent management platform) of the Roskomnadzor, according to Article 10.1(6). The provisions related to setting up this consent platform for PDD will enter into force on July 1st, 2021. The Roskomnadzor is expected to provide technical details about the functioning of this consent management platform and guidelines on how it is supposed to be used in the following months.
Absolute right to opt-out of dissemination of PDD
Notably, the dissemination of PDD can be halted at any time, on request of the individual, regardless of whether the dissemination is lawful or not, according to Article 10.1(12). This type of request is akin to a withdrawal of consent. The provision includes some requirements for the content of such a request: for instance, it must include the individual’s contact information and list the personal data whose dissemination should be terminated. Consent to the processing of the personal data concerned is terminated once the operator receives the opt-out request – Article 10.1(13).
A request to opt-out of having personal data disseminated to the public when this is done unlawfully (without the data subject’s specific, affirmative consent) can also be made through a Court, as an alternative to submitting it directly to the data operator. In this case, the operator must terminate the transmission of or access to personal data within three business days from when such demand was received or within the timeframe set in the decision of the court which has come into effect – Article 10.1(14).
A new criminal offense: The prohibition on disclosure of personal data about protected persons
Sharing personal data or information about intelligence officers and their personal property is now a criminal offense under the new rules, which amended the Criminal Code. The law obliges any operators of personal data, including government departments and mobile operators, to ensure the confidentiality of personal information concerning protected persons, their relatives, and their property. Under the new law, “protected persons” include employees of the Investigative Committee, FSB, Federal Protective Service, National Guard, Ministry of Internal Affairs, and Ministry of Defense, as well as judges, prosecutors, investigators, law enforcement officers and their relatives. Moreover, the list of protected persons can be further detailed by the head of the relevant state body in which the specified persons work.
Previously, the law allowed for the temporary prohibition of the dissemination of personal data of protected persons only in the event of imminent danger in connection with official duties and activities. The new amendments make it possible to take protective measures in the absence of a threat of encroachment on their life, health and property.
What to watch next: New amendments to the general Personal Data Law are on their way in 2021
There are several developments to follow in this fast changing environment. First, at the end of January, the Russian President gave the government until August 1 to create a set of rules for foreign tech companies operating in Russia, including a requirement to open branch offices in the country.
Second, a bill (No. 992331-7) proposing new amendments to the overall framework of the Personal Data Law (No. 152-FZ) was introduced in July 2020 and was the subject of a Resolution passed in the State Duma on February 16, which allowed amendments to be submitted until March 16. The bill is on the agenda for a potential vote in May. The changes would expand the possibility of obtaining valid consent through unique identifiers that are currently not accepted by the law, such as unique online IDs; alter purpose limitation; introduce a possible certification scheme for effective methods to erase personal data; and give the Roskomnadzor new competences to establish requirements for deidentification of personal data and specific methods for effective deidentification.
If you have any questions on Global Privacy and Data Protection developments, contact Gabriela Zanfir-Fortuna at [email protected]
From Chatbot to Checkout: Who Pays When Transactional Agents Play?
Disclaimer: Please note that nothing below should be construed as legal advice.
If 2025 was the year of agentic systems, 2026 may be the year these technologies reshape e-commerce. Agentic AI systems are defined by the ability to complete more complex, multi-step tasks and to exhibit greater autonomy over how to achieve user goals. As these systems have advanced, technology providers have been exploring the nexus between AI technologies and online commerce, with many launching purchase features and partnering with established retailers to offer shopping experiences within generative AI platforms. In doing so, these companies have also relied on developments in foundational protocols (e.g., Google’s Agent Payments Protocol) that seek to enable agentic systems to make purchases on a person’s behalf (“transactional agents”). But LLM-based systems like transactional agents can make mistakes, which raises questions about what laws apply to transactional agents and who is responsible when these systems make errors.
This blog post examines the emerging ecosystem of transactional agents, including examples of companies that have introduced these technologies and the protocols underpinning them. Existing US laws governing online transactions, such as the Uniform Electronic Transactions Act (UETA), apply to agentic commerce, including in situations where these systems make errors. Transactional agent providers are complying with these laws and otherwise managing risks through various means, including contractual terms, error prevention features, and action logs.
How is the Transactional Agent Ecosystem Evolving?
Several AI and technology companies have unveiled transactional agents over the past year that enable consumers to purchase goods within their interfaces rather than having to visit individual merchants’ websites. For example, OpenAI added native checkout features into its LLM-based chatbot that hundreds of millions of consumers already use, and Perplexity introduced similar features for paid users that can find products and store payment information to enable purchases. Amazon has also released a “Buy For Me” feature, which involves an agentic system that sends payment and shipping address information to third party merchants so that Amazon’s users can buy these merchants’ goods on Amazon’s website.
Application of Existing Laws (such as the Uniform Electronic Transactions Act)
As consumer-facing tools for agentic commerce develop, questions will arise about who is responsible when transactional agents inevitably make mistakes. Are users responsible for erroneous purchases that a transactional agent may make on their behalf? In these cases, long-standing statutes governing electronic transactions apply. The Uniform Electronic Transactions Act (UETA), a model law adopted by 49 out of 50 U.S. states, sets forth rules governing the validity of contracts undertaken by electronic means, and suggests that consumer transactions conducted by an agentic system can be considered valid transactions.
First, the UETA has provisions that apply to “electronic agents,” which are defined as “a computer program or an electronic or other automated means used independently to initiate an action or respond to electronic records or performances in whole or in part, without review or action by an individual.” This is a broad, technology-neutral definition that is not reserved for AI. It encompasses a range of machine-to-machine and human-to-machine technologies, such as automated supply chain procurement and signing up for subscriptions online. The latest transactional agents can take an increasing set of actions on a user’s behalf without oversight, such as finding and executing purchases, so these technologies could potentially qualify as electronic agents under the UETA.
This means that transactional agents can probably enter into binding transactions on a person’s behalf. Section 14 of the UETA indicates that this can occur even without human review when two entities use agentic systems to transact on their behalf (e.g., an individual using a system that buys goods on their behalf and an e-commerce platform whose system can negotiate order quantity and price). At a time when agentic systems representing distinct parties interacting with each other are edging closer to reality, these systems could bind the user to contracts undertaken on their behalf despite the lack of human oversight. However, a significant caveat is that the UETA also says that individuals may avoid transactions entered into by transactional agents if they were not given “an opportunity for the prevention or correction of [an] error . . . .” This is true even if the user made the error.
Finally, even if an agentic transaction is deemed valid and a mistake is not made, other legal protections may apply in the event of consumer harm. For example, a transactional agent provider that requires third parties to pay for their goods to be listed by the agent, or gives preference to its own goods, may violate antitrust and consumer protection law. There is also a growing debate over the application of other longstanding common law protections, such as fiduciary duties and “agency law.”
What Risk Management Steps are Transactional Agent Providers Taking to Manage Responsibility?
Managing responsibility for transactional agents can take varied forms, including contractual disclaimers and limitations, protocols that signal to third parties an agentic system’s authorization to act on a user’s behalf, as well as design decisions that reduce the likelihood of transactions being voided when errors occur (e.g., confirmation prompts that require users to authorize purchases):
Protocols that signal the scope of a user’s authorization to third parties: Transactional agent providers should also evaluate how a third party may perceive their actions, as these may provide the basis for a third party arguing that the agent was not acting on the user’s behalf. This may take the form of using various protocols that can communicate the limits of the agentic system’s authority to conduct a purchase, including those that allow parties to separate benign from undesirable agentic systems and ensure that a system is not impersonating an individual without their authorization.
Error prevention and correction features: Organizations should address the UETA-related risk of contracts being avoided by users in the absence of pre-purchase error prevention and correction measures through the thoughtful design of UI flows and the implementation of human review steps. Transactional agent providers and others do this through various means, such as confirmation prompts, alerts, and purchase size limits. These measures are important, as organizations cannot use contractual terms (e.g., stating that the consumer is solely liable for errors made by the system) to circumvent this UETA requirement. For these reasons, many agentic platforms are still not operating totally independently.
Action logs that capture the what, when, and why of an agentic system’s decisions: Companies can create action logs that give users visibility into the system’s decision flow for a purchase, promoting trust in transactional agents. Such logs could also help organizations demonstrate that a user authorized an agent to act on their behalf. A simplified sketch of how a confirmation gate and action log might fit together follows this list.
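To illustrate how these mitigations might fit together in practice, here is a minimal Python sketch of a pre-purchase confirmation gate combined with a structured action log. It assumes a hypothetical spending limit, invented function and field names, and a placeholder confirmation callback; it illustrates the general pattern, not any provider's actual implementation.

from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

MAX_AUTO_APPROVED_TOTAL = 50.00  # assumed ceiling above which the user must confirm

@dataclass
class ActionLogEntry:
    timestamp: str
    action: str      # what the agent did
    rationale: str   # why it did it
    details: dict    # supporting facts (item, price, approval, etc.)

@dataclass
class ActionLog:
    entries: list = field(default_factory=list)

    def record(self, action: str, rationale: str, **details) -> None:
        self.entries.append(ActionLogEntry(
            timestamp=datetime.now(timezone.utc).isoformat(),
            action=action,
            rationale=rationale,
            details=details,
        ))

    def export(self) -> str:
        # A user-visible trail of the what, when, and why of each decision.
        return json.dumps([asdict(e) for e in self.entries], indent=2)

def attempt_purchase(item: str, quantity: int, unit_price: float,
                     user_confirms, log: ActionLog) -> bool:
    # `user_confirms` stands in for a UI prompt returning True or False; keeping
    # this human review step supports the error prevention and correction
    # opportunity discussed above.
    total = quantity * unit_price
    log.record("proposed_order", "matched the user's request to a listing",
               item=item, quantity=quantity, total=total)
    if total > MAX_AUTO_APPROVED_TOTAL:
        approved = user_confirms(f"Buy {quantity} x {item} for ${total:.2f}?")
        log.record("confirmation_prompt", "order exceeded the auto-approval limit",
                   approved=approved)
        if not approved:
            log.record("order_cancelled", "user declined at the confirmation step")
            return False
    log.record("order_placed", "user authorization recorded", item=item, total=total)
    return True

log = ActionLog()
attempt_purchase("wool socks", quantity=10, unit_price=8.99,
                 user_confirms=lambda prompt: False, log=log)  # declined at the prompt
print(log.export())

In a real product, the confirmation callback would be a UI prompt, and an exported log of this kind could later support a showing that the user did (or did not) authorize an order.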
Conclusion
Organizations are increasingly rolling out features that enable agentic systems to buy goods and services. These current and near-future technologies introduce uncertainty about who is responsible for agentic system transactions, including when mistakes are made, which is leading providers to integrate error prevention features, contractual disclaimers, and other legal and technical measures to manage and allocate risks.
FPF Retrospective: U.S. Privacy Enforcement in 2025
The U.S. privacy law landscape continues to mature as new laws go into effect, cure periods expire, and regulators interpret the law through enforcement actions and guidance. State attorneys general and the Federal Trade Commission act as the country’s de facto privacy regulators, regularly bringing enforcement actions under legal authorities both old and new. For privacy compliance programs, this steady stream of regulatory activity both clarifies existing responsibilities and raises new questions and obligations. FPF’s U.S. Policy team has compiled a retrospective looking back at enforcement activity in 2025 and outlining key trends and insights.
Looking at both substantive areas of focus in enforcement actions and the level of activity by different enforcers, the retrospective identified four notable trends in 2025:
California and Texas Lead Growing Public Enforcement of Comprehensive Privacy Laws: Comprehensive privacy laws may finally be moving from a period of legislative activity into a new era where enforcement is shaping the laws’ meaning, as 2025 saw a significant increase in the number of public enforcement actions.
States Demonstrate Increasing Concern for Kids’ and Teens’ Online Privacy and Safety: As legislators continue to consider broad youth privacy and online safety legal frameworks, enforcers too are looking at how to protect young people online. Bringing claims under existing state laws, including privacy and UDAP statutes, regulators are paying close attention to opt-in consent requirements, protections for teenagers in addition to children under 13, and the online safety practices of social media and gaming services.
U.S. Regulators Go Full Speed Ahead on Location and Driving Data Enforcement: Building on recent enforcement actions concerning data brokerage and location privacy, federal and state enforcers have expanded their consumer protection enforcement strategy to focus also on first-party data collectors and the collection of “driving data.”
FTC Prioritizes Enforcement on Harms to Kids and Teens, and Deceptive AI Marketing, Under New Administration: The FTC transitioned leadership in 2025, moving into a new era under Chair Andrew Ferguson that included a shift toward targeted enforcement activity focused on ensuring children’s and teens’ privacy and safety, and “promoting innovation” by addressing deceptive claims about the capabilities of AI-enabled products and services.
There are several practical takeaways that compliance teams can draw from these trends: obtaining required consent prior to processing sensitive data, including through oversight of vendors’ consent practices, identification of known children, and awareness of laws with broader consent requirements; ensuring that consumer controls and rights mechanisms are operational; avoiding design choices that could mislead consumers; considering if and when to deploy age assurance technologies and how to do so in an effective and privacy-protective manner; and avoiding making deceptive claims about AI products.
2026: A Year at the Crossroads for Global Data Protection and Privacy
There are three forces twirling and swirling to create a perfect storm for global data protection and privacy this year: the surprise reopening of the General Data Protection Regulation (GDPR), which will largely play out in Brussels over the coming months; the complexity and velocity of AI developments; and the push and pull exerted on the field by increasingly substantial adjacent digital and tech regulations.
All of this will play out with geopolitics taking center stage. At the confluence of some of these developments, two major areas of focus will be the protection of children online and cross-border data transfers, together with the other side of that coin, data localization, in the broader context of digital sovereignty.
1. The GDPR reform, with an eye on global ripple effects
The gradual reopening of the GDPR last year came as a surprise, without much public debate or consultation, if any. The Regulation passed its periodic evaluation in the summer of 2024 with a recommendation for more guidance, better implementation suited to SMEs, and harmonization across the EU, as opposed to re-opening or amending it. Moreover, exactly one year ago, in January 2025, at the CPDP Data Protection Day Conference in Brussels, not one but two representatives of the European Commission, in two different panels (one of which I moderated), were very clear that the Commission had no intention of re-opening the GDPR.
Despite this, a minor intervention was first proposed in May to tweak the size threshold for entities under the obligation to keep a register of processing activities, through one of the Commission’s simplification Omnibus packages. But this proved to crack the door open for more significant amendments to the GDPR proposed later on, under the broad umbrella of competitiveness and regulatory simplification that the Commission started to pursue emphatically. Towards the end of the year, in November 2025, major interventions were introduced within another simplification Omnibus dedicated to digital regulation.
There are two significant policy shifts in the GDPR Omnibus proposal that should be expected to reverberate in data protection laws around the world in the next few years. First, it entertains the end of technology-neutral data protection law. AI, the technology, is imprinted all over the proposed amendments, from the inconspicuous ones, like the new definition proposed for “scientific research,” to the express mention of “AI systems” in new rules created to facilitate their “training and operations” – including in relation to allowing the use of sensitive data and to recognizing a specific legitimate interest for processing personal data for this purpose.
The second policy shift, perhaps the most consequential for the rest of the data protection world, is the narrowing of what constitutes “personal data” by adding several sentences to the existing definition to transpose what resembles the relative approach to de-identification confirmed by the Court of Justice of the EU (CJEU) in the SRB case this September. To a certain degree, the proposed changes bring the definition back to pre-GDPR days, when some data protection authorities were indeed applying a relative approach in their regulatory activity.
The new definition technically adds that the holder of key-coded data or other information about an identifiable person, which does not have means reasonably likely to be used to identify that person, does not process personal data even if “potential subsequent recipients” can identify the person to whom the data relates. Processing of this data, including publishing it or sharing it with such recipients, would thus be outside of the scope of the GDPR and any accountability obligations that follow from it.
If the proposed language ends up in the GDPR, it would likely mark a narrowing of the scope of application of the law, leaving little room for supervisory authorities to apply the relative approach on a case-by-case basis following the test that the CJEU proposed in SRB. This is particularly notable considering that the GDPR has successfully exported the current philosophy and much of the wording of the broad definition of personal data (particularly its “identifiability” component) to most data protection laws adopted or updated around the world since 2016, from California, to Brazil, to China, to India.
The ripple effects around the world of such significant modifications of the GDPR would not be felt immediately, but in the years to come. Hence, the legislative process unfolding this year in Brussels on the GDPR Omnibus should be followed closely.
2. The Complexity and Velocity of AI developments: Shifting from regulating data to regulating models?
There is a lot to unpack here, almost too much. And this is at the core of why AI developments have an outsized impact on data protection. There is a lot of complexity involved in understanding the data flows and processes underpinning the lifecycle of the various AI technologies, making it very difficult to untangle the ways in which data protection applies to them. On top of that, the speed with which AI evolves is staggering. That said, there are a couple of particularly interesting issues at the intersection of AI and data protection to follow this year, with an eye towards the following years too.
One of them is the intriguing question of whether AI models are the new “data” in data protection. Some of you certainly remember the big debate of 2024: do Large Language Models (LLMs) process personal data within the model? While it was largely accepted that personal data is processed during the training of LLMs and may be processed as the output of queries, it was not at all clear whether any of the informational elements related to AI models post-training, like tokens, vectors, embeddings or weights, can amount, by themselves or in some combination, to personal data. The question was supposed to be settled by an Opinion of the European Data Protection Board (EDPB) solicited by the Irish Data Protection Commission, which was published in December 2024.
Instead, the Opinion painted a convoluted regulatory answer by offering that “AI models trained on personal data cannot, in all cases, be considered anonymous.” The EDPB then dedicated most of the Opinion to laying out criteria that can help assess whether AI models are anonymous or not. While most, if not all, of the commentary around the Opinion focuses on the merits of these criteria, one should perhaps stop and first reflect on the framework of the analysis – namely, assessing the nature of the model itself rather than the nature of the bits and pieces of information within the model.
The EDPB did not offer any exploration of what non-anonymous (so, then, personal?) AI models might mean for the broader application of data protection law, such as data subject rights. But with it, the EDPB may have, intentionally or not, started a paradigm shift for data protection in the context of AI, signaling a possible move from the regulation of personal data items to the regulation of “personal” AI models. However, the Opinion was seemingly shelved throughout last year, as it did not appear in any regulatory action. I would have forgotten about it myself if not for a judgment of a Court in Munich in November 2025, in an IP case related to LLMs.
The German Court found that song lyrics in a training dataset for an LLM were “reproducibly contained and fixed in the model weights,” with the judgment specifically referring to how the models themselves are “copies” of those lyrics within the meaning of the relevant copyright law. This is because of the “memorization” of the lyrics in the training data by the model, where weights and vectors are “physical fixations” of the lyrics. The judgment is not final, with an appeal pending. But it will be interesting to see whether this perspective of focusing on the models themselves, as opposed to bits of data within them, will find more ground this year and in those immediately following, pushing for legal reform, or will fizzle out due to the complexity of making it fit within current legal frameworks.
Key AI developments which might push the limits of existing data protection and privacy frameworks to a breaking point, as they descend from research to market, will be:
hyper-personalization – think of the decades-old debate around targeting and individual profiling, but on steroids;
AI agents by themselves or acting together – for one thing, “control” of a person over their information is at the core of data protection, while the fundamental proposition of AI agents is to take over control in certain contexts;
World models and AI wearables – perhaps a good comparison would be a hyperbolized Internet of Things, with all of its implications for bystander privacy, consent, and informational self-determination. That comparison may well prove naive, particularly if the previous two developments are layered on top of this one, which would also integrate LLMs.
3. A concert of laws adjacent to data protection and privacy steadily becoming the digital regulation establishment
A third force pressing on data protection for the foreseeable future is the set of novel data- and digital-adjacent regulatory efforts solidifying into a new establishment of digital regulation, with their own bureaucracies, vocabulary and compliance infrastructure: online safety laws – including their branch of children’s online safety laws – digital markets laws, data laws focusing on data sharing or on data strategies covering personal and non-personal data, and the proliferation of AI laws, from baseline acts to sectoral or issue-specific laws (focusing on single issues, like transparency).
It may have started in the EU five years ago, but this is now a global phenomenon. Look, for instance, at Japan’s Mobile Software Competition Act, a law regulating competition in digital markets focusing on mobile environments which became effective in December 2025 and draws strong comparisons with the EU Digital Markets Act. Or at Vietnam’s Data Law which became effective in July 2025 and is a comprehensive framework for the governance of digital data, both personal and non-personal, applying in parallel to its new Data Protection Law.
Children’s online safety is taking increasingly more space in the world of digital regulatory frameworks, and its overlap and interaction with data protection law could not be clearer than in Brazil. A comprehensive law for children’s online safety, the Digital ECA, was passed at the end of last year and it is slated to be enforced by the Brazilian Data Protection Authority starting this spring.
It brings interesting innovations, like a novel trigger standard for such laws – the “likelihood of access” of a technology service or product by minors – or “age rating” for digital services, requiring providers to maintain age rating policies and continuously assess their content against them. It also provides for “online safety by design and by default” as an obligation for digital service providers. From state-level legislation in the US on “age appropriate design” to an executive decree in the UAE on “child digital safety,” the pace of adopting online safety laws for children is ramping up. What makes these laws more impactful is also the fact that the age limits of minors falling under these rules are growing to capture teenagers up to 16 and even 18 years old in some places, bringing vastly more service providers in scope than first-generation children’s online safety regulations.
The overlap, intersection and even tensions of all these laws with data protection are becoming increasingly visible. See, for instance, the recent Russmedia judgment of the CJEU, which established that an online marketplace is a joint controller under the GDPR and has obligations in relation to sensitive personal data published by a user, with consequences for intermediary liability that are expected to reverberate at the intersection of the GDPR and the Digital Services Act in practice.
The compliance infrastructure of this new generation of digital laws, and its need for resources (people, budget), is breaking its way into an already stretched field of “privacy programs,” “privacy professionals,” and regulators, with the visible risk of diverting attention from, and diluting, the meaningful measures and controls stemming from privacy and data protection laws.
4. Breaking the fourth wall: Geopolitics
While all these developments play out, it is particularly important to be aware that they unfold on a geopolitical stage that is unpredictable and constantly shifting, resulting in various notions of “digital sovereignty” taking root from Europe, to Africa, to elsewhere around the world. From a data protection perspective, and in the absence of a comprehensive understanding of what “digital sovereignty” might mean, this could translate into a realignment of international data transfer rules through more data localization measures, more data transfer arrangements following trade agreements, or more regional free data flow arrangements among aligned countries.
Ten years after the GDPR was adopted as a modern upgrade of 1980s-style data protection laws for the online era, successfully promoting fair information practice principles, data subject rights and the “privacy profession” around the world, data protection and privacy are at an inflection point: either hold the line and evolve to meet these challenges, or melt away in a sea of new digital laws and technological developments.
6 Privacy Tips for the Generative AI Era
Data Privacy Day, or Data Protection Day in Europe, is recognized annually on January 28 to mark the anniversary of Convention 108, the first binding international treaty to protect personal data. The Council of Europe initiated the day in 2006, with the first official celebration held on January 28, 2007, making this year the 19th anniversary of that first celebration. Companies and organizations around the world often devote time to internal privacy training during this week, working to improve awareness of key data protection issues among their staff.
It’s also a good time for all of us to think about our own sharing of personal data. Nowadays, one of the most important decisions we need to make about our data is when and how we use AI-powered services. To raise awareness, we’ve partnered with Snap to create a Data Privacy Day Snapchat Lens. Check it out by scanning the Snapchat code and learn more below about privacy tips for generative AI!
Know When You’re Using Generative AI
As a first step, it’s important to know what generative AI is and when you’re using it. Generative AI is a type of artificial intelligence that creates original text, images, audio, and code in response to input. In addition to visiting dedicated generative AI platforms (such as ChatGPT), you may find that many companies’ existing products now also include generative AI capabilities. For example, a search in Google now provides answers powered by Google’s generative AI, Gemini. Other examples include Snap’s AI Lenses and AI Snaps in creative tools, while Adobe’s Acrobat and Express are now powered by Firefly, Adobe’s generative AI, and X’s Grok assists users and answers questions.
One of the best ways to identify when you’re using generative AI is to look for a symbol or disclaimer. Many organizations provide clues like symbols: companies such as Snap, GitHub, and many others often use a sparkle or star icon to denote generative AI features. You might also notice labels like “AI-generated” or “Experimental” alongside results from some companies, including Meta.
Think Carefully Before You Share Sensitive or Private Information
While this is a general rule of thumb for interacting with any product, it’s especially important when using generative AI because most generative AI systems use data that users provide (such as conversation text or images) to allow their models to continuously learn and improve. While your prompts, generated images, and other pieces of data can improve the technology for all users, it also means that if you share sensitive or private information, it could potentially be shared or surfaced in connection with training and developing the algorithm.
Be especially careful when uploading files, images, or screenshots to generative AI tools. Documents, photos, or screenshots can include more information than you realize, such as metadata, background details, or information about third parties. Before uploading, consider redacting, cropping, or otherwise limiting files to include only the information necessary for your task.
Some companies promise not to use your data for training, often if you are using the paid version of their service. Others provide an option to opt out of the use of your data for training, or offer versions with special protections. For example, ChatGPT’s new health service supports the upload of health records with additional privacy and security commitments, but you need to make sure you are using the specific Health tab that is being rolled out to users.
Manage Your AI’s Memory
Many generative AI tools now feature a memory function that allows them to remember details about you over time, providing more tailored responses. While this can be helpful for maintaining context in long-term projects, such as remembering your writing style, professional background, or specific project goals, it also creates a digital record of your preferences and behaviors. A recent FPF report explores these different kinds of personalization.
Fortunately, you typically have the power to control what Generative AI platforms remember. Most have settings to view, edit, or delete specific memories or to turn the feature off entirely. For instance, in ChatGPT, you can manage these details under Settings > Personalization, and Gemini allows you to toggle off “Your past chats” within its activity settings to prevent long-term tracking. Meta also provides options for deleting all chats and images from the Meta AI app. Another option is to use “Temporary” or “Incognito” modes, so you can enjoy a personalized experience without generative AI compiling data attributed to your profile.
In addition to managing memory features, it’s also helpful to understand how long Generative AI services keep your data. Some platforms store conversations, images, or files for only a short time, while others may keep them longer unless you choose to delete them. Taking a moment to review retention timelines can give you a clearer picture of how long your information sticks around, and help you decide what you’re comfortable sharing.
Define Boundaries for Agentic AI
Agentic AI, a form of generative AI that can complete tasks for users with greater autonomy, is becoming increasingly popular. For example, companies like Perplexity, OpenAI, and Amazon have unveiled agentic systems that can make purchases for consumers. While these systems can take on more tasks, they still require users to review purchases before they are final. As a best practice, you should look over the purchase to check that it aligns with your expectations (e.g., ordering 1 pair of socks and not 10). It is also important to keep in mind that since agentic systems can pull information from third party sources, there is a risk that the system will rely on inaccurate information about a product during purchases (e.g., that an item is in stock).
As agentic systems become more embedded in our lives, you should also be mindful about how much information you share with them. Consumers are already disclosing sensitive details about themselves to more basic chatbots, which businesses, the government, and other third parties may want to access. When interacting with agentic systems, keep this in mind and pay attention to what you disclose about yourself and others. You may similarly want to consider what type of access to provide to the agentic AI product, and rely on the principle of least privilege: only provide the minimum access needed for your use. For example, if an agentic system is going to manage your calendar, think through options for narrowing the access so that your entire calendar is not shared and so that other apps connected to your calendar, like your email, are not shared unless necessary.
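As a purely illustrative sketch of that least-privilege idea (the scope names and structure below are invented for the example and do not reflect any particular product's settings), a narrow grant for a calendar-managing agent might look something like this:

# Hypothetical, illustrative permission grant for a calendar-managing agent.
agent_grant = {
    "agent": "calendar-assistant",
    "allowed_scopes": [
        "calendar.read:work",   # only the work calendar, not every calendar
        "calendar.write:work",
    ],
    "denied_scopes": [
        "calendar.read:personal",
        "email.read",           # connected email stays off-limits
        "contacts.read",
    ],
    "expires": "2026-06-30",    # time-box the access and revisit it
}

def is_allowed(grant: dict, requested_scope: str) -> bool:
    # Deny by default: only scopes explicitly granted are allowed.
    return requested_scope in grant["allowed_scopes"]

print(is_allowed(agent_grant, "calendar.read:work"))  # True
print(is_allowed(agent_grant, "email.read"))          # False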
Review How Generative AI Products Handle Privacy and Safety
It’s important to regularly review the privacy and security practices of any company with which you share information, and this applies similarly to companies offering generative AI products. This can include checking what data is collected and how, as well as how that information is used and stored.
Snap has a Snapchat Privacy Center where you can review your settings. You can find those choices here.
ChatGPT’s privacy controls are available in the ChatGPT display, and OpenAI has a Data Controls FAQ that outlines where to find the settings and what options are available.
Gemini has the Gemini Privacy Hub, as well as an area to read about and configure your settings for Gemini Apps, which includes options for turning your Gemini history off.
Claude has a Privacy Settings & Controls page that outlines how long they store your data, how you can delete it, and more.
Copilot provides an array of options for reviewing and updating your privacy settings, including how to delete specific memories and how your data is used. These settings are available on Microsoft’s website, here. Microsoft also provides a detailed Privacy FAQ page.
Keep in mind that Generative AI products change quickly, and new features may introduce new data uses, defaults, or controls. Periodically revisiting privacy and safety settings can help ensure your preferences continue to reflect how the product works today, rather than how it worked when you first configured it.
Explore and Have Fun!
LLMs can often provide useful data protection advice, so ask them questions about AI and privacy. Just be sure to double-check sources and accuracy, especially for important topics!
Data Privacy Day is a reminder that privacy is a shared responsibility. By bringing together FPF’s expertise in privacy research and policy with Snap’s commitment to building products with privacy and safety in mind, this collaboration aims to help people better understand how AI works and how to use it thoughtfully.
FPF Releases Updated Infographic on Age Assurance Technologies, Emerging Standards, and Risk Management
The Future of Privacy Forum is releasing an updated version of its Age Assurance: Technologies and Tradeoffs infographic, reflecting how rapidly the technical and policy landscape has evolved over the past year. As lawmakers, platforms, and regulators increasingly converge on age assurance as a governance tool, the updated infographic sharpens the focus on proportionality, privacy risk, and real-world deployment challenges.
What’s New
The updated infographic introduces several key changes that reflect the current state of age assurance technology and policy:
A Fourth Category: Inference. The original infographic outlined three approaches to age assurance: declaration, estimation, and verification. This update adds a fourth category—inference—which draws reasonable conclusions about a user’s age range based on behavioral signals, account characteristics, or financial transactions. For example, an email address linked to workplace applications, a mortgage lender, or a 401(k) provider, combined with login patterns during business hours, may support the inference that a user is an adult (the sketch after the “Practical Example” below shows one way such signals might be combined).
Relatedly, the updated version intentionally downplays age declaration as a standalone solution. While declaration remains useful for low-risk contexts and as an entry point in layered systems, experience and enforcement history continue to show that it is easily bypassed and insufficient where legal or safety obligations attach to age thresholds. The infographic now situates declaration primarily as an initial step within a waterfall or layered approach, rather than as a meaningful assurance mechanism on its own.
The update also highlights several new and emerging risks associated with modern age assurance systems. If not addressed properly, these could include loss of anonymity through linkage, increased breach impact from improperly secured retained assurance data, secondary use of assurance data, and circumvention risks such as presentation attacks or shared-device misuse.
In parallel, the infographic expands its coverage of risk management tools that can mitigate these concerns when age assurance is warranted. These include tokenization and zero-knowledge proofs to limit data disclosure, on-device processing and immediate deletion of source data, separation of processing across third parties, user-binding through passkeys or liveness detection, and emerging standards such as ISO/IEC 27566 and IEEE 2089.1. The emphasis is not on eliminating risk—which is rarely possible—but on aligning technical controls with the specific harms a service is attempting to address.
As with prior versions, the updated infographic reinforces a core message: there is no one-size-fits-all age assurance solution. Effective approaches are risk-based, use-case-specific, and privacy-preserving by design, balancing assurance goals against the rights and expectations of users. By clarifying the role of inference, contextualizing declaration, and surfacing both new risks and mitigation strategies, this update aims to support more informed decision-making across policy, product, and engineering teams.
Emerging Age Assurance Concepts. The field has advanced considerably, and the updated infographic now includes a dedicated section on emerging concepts and technologies, including Age Signals and Age Tokens, User-Binding, Zero-Knowledge Proofs (ZKPs), Double-Blind Models, and One-Time vs. Reusable Credentials.
Updated Risks and Risk Management Approaches. The infographic now presents a more comprehensive view of the risks and challenges associated with age assurance—including excessive data collection and retention, secondary data use, lack of interoperability, false positives and negatives, data breaches, and user acceptance challenges. Correspondingly, the risk management section highlights both established and emerging mitigations: on-device processing, tokenization and zero knowledge proofs, anti-circumvention measures (such as Presentation Attack Detection), standards (ISO/IEC 27566-1, IEEE 2089.1), and certification and auditing.
Practical Example. The updated infographic includes a detailed use case following “Miles,” a 16-year-old accessing an online gaming service. The scenario illustrates how multiple age assurance methods can work together in a layered “waterfall” approach—starting with low-assurance age declaration for basic access, escalating to facial age estimation for age-restricted features, and offering authoritative inference or parental consent as inclusive fallbacks when estimation results are inconclusive and formal ID is not available. The example also demonstrates token binding with passkeys, ensuring that even if Miles shares his phone with a younger friend, the age credential cannot be accessed without the correct PIN, pattern, or biometric.
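For readers who think in pseudocode, here is a minimal sketch of the layered “waterfall” logic in the Miles scenario. The age thresholds, confidence cutoff, and function names are assumptions made for illustration rather than the infographic’s specification; the structure simply shows declaration handling low-risk access, estimation handling restricted features, and inference or parental consent serving as inclusive fallbacks.

```python
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"
    DENY = "deny"
    ESCALATE = "escalate"

def waterfall_age_check(declared_age, feature_risk, estimator=None,
                        inference=None, parental_consent=False):
    """Return a Decision for a feature marked 'low' or 'restricted' risk."""
    # Layer 1: self-declaration is sufficient only for low-risk access.
    if feature_risk == "low":
        return Decision.ALLOW if declared_age >= 13 else Decision.DENY

    # Layer 2: facial age estimation for age-restricted features.
    if estimator is not None:
        estimated_age, confidence = estimator()
        if confidence >= 0.9:
            return Decision.ALLOW if estimated_age >= 16 else Decision.DENY

    # Layer 3: inclusive fallbacks when estimation is inconclusive and no formal ID exists.
    if inference == "likely_16_plus" or parental_consent:
        return Decision.ALLOW
    return Decision.ESCALATE  # e.g., offer another assurance method or a consent flow

# Miles, 16, declares his age and passes facial estimation for a restricted feature.
print(waterfall_age_check(16, "restricted", estimator=lambda: (16.4, 0.93)))  # Decision.ALLOW
```

In a full implementation, the resulting credential would also be bound to the device holder, for example by requiring a passkey unlock before the token can be presented, which is the token-binding step the Miles example highlights.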
Future of Privacy Forum to Honor Top Scholarship at Annual Privacy Papers for Policymakers Event
Washington D.C. — (January 26th, 2026) — Today, the Future of Privacy Forum (FPF) — a global non-profit that advances principled and pragmatic data protection, AI, and digital governance practices — announced the winners of its 16th annual Privacy Papers for Policymakers (PPPM) Awards.
The PPPM Awards recognize leading research and analytical scholarship in privacy relevant to policymakers in the U.S. and internationally. The award highlights important work that analyzes current and emerging privacy and AI issues and proposes achievable short-term solutions or means of analysis that have the potential to lead to real-world policy solutions. Seven winning papers, two honorable mentions, and one student submission were selected by a group of FPF staff members and advisors based on originality, applicability to policymaking, and overall quality of writing.
Winning authors will have the opportunity to present their work at virtual webinars scheduled for March 4, 2026, and March 11, 2026.
“As artificial intelligence and data protection increasingly shape global policy discussions, high-quality academic research is more important than ever,” says FPF CEO Jules Polonetsky. “This year’s award recipients offer the kind of careful analysis and independent thinking policymakers rely on to address complex issues in the digital environment. We are pleased to recognize scholars whose work helps ensure that technological innovation develops in ways that remain grounded in privacy and responsible data governance.”
FPF’s 2026 Privacy Papers for Policymakers Award winners are:
The statement “AI is normal technology” is three things: a description of current AI, a prediction about the foreseeable future of AI, and a prescription about how we should treat it. We view AI as a tool that we can and should remain in control of, and we argue that this goal does not require drastic policy interventions or technical breakthroughs. This paper proposes viewing artificial intelligence as “normal technology,” framing it as a human-controlled tool rather than an autonomous, superintelligent entity. By drawing parallels to historical innovations like electricity, the authors argue that AI’s societal impact will be gradual, institutional, and manageable through resilient policy rather than drastic intervention.
Artificial intelligence runs on data. But the two legal regimes that govern data—information privacy law and copyright law—are under pressure. This Article identifies this phenomenon, which the author calls “inter-regime doctrinal collapse,” and exposes the individual and institutional consequences. Through analysis of pending litigation, discovery disputes, and licensing agreements, this Article highlights two dominant exploitation tactics enabled by collapse: Companies “buy” data through business-to-business deals that sidestep individual privacy interests, or “ask” users for broad consent through privacy policies and terms of service that leverage notice-and-choice frameworks. Left unchecked, the data acquisition status quo favors established corporate players and impedes the law’s ability to constrain the arbitrary exercise of private power.
The shift from chatbots to autonomous agents is underway in both business and consumer applications. These systems are built on a new technical standard called the Model Context Protocol (MCP), which enables AI systems to connect to external tools such as calendars, email, and file storage. By standardizing these connections, MCP enables memory and context to move seamlessly from one app to another. This brief examines how agents and MCP work, where risks emerge, and why current guardrails fall short. It proposes targeted interventions to ensure that the systems remain understandable, accountable, and aligned with the people they serve. Finally, it examines how existing privacy frameworks align with this new architecture and identifies areas where new interpretations or protections may be needed.
As global AI regulations shift from risk assessment to addressing realized harms, “algorithmic disgorgement”—the mandated destruction of models—has emerged as a primary remedy. While this tool has expanded from punishing illegal data collection to addressing harmful model usage, a critical examination reveals that it is often a poor fit for the complexities of the modern algorithmic supply chain. Because AI systems involve interconnected data flows and multiple stakeholders, disgorgement frequently penalizes innocent parties without effectively burdening the blameworthy. To ensure more equitable outcomes, the author argues that appropriate algorithmic remedies must be responsive to the specific harm and account for their full impact across the entire supply chain. This analysis highlights the pressing need for a more comprehensive and nuanced toolkit of legal remedies, one that extends beyond simple model destruction.
As dark patterns have become a primary target for global regulators, skeptics argue that government intervention is unnecessary because motivated consumers can protect themselves. This interdisciplinary study challenges that assumption, providing experimental evidence that manipulative interfaces, including obstruction, preselection, and “nagging”, remain strikingly effective even when users are actively trying to maximize their privacy. The Article argues that although a super-majority of consumers will exercise opt-out rights when clearly presented with a “Do Not Sell” option, the overall persistence of these patterns suggests that consumer self-help alone is insufficient. Consequently, the paper concludes that robust legislation and regulation, such as the California Consumer Privacy Act (CCPA), are crucial in countering digital manipulation.
How the Legal Basis for AI Training Is Framed in Data Protection Guidelines by Wenlong Li, Yueming Zhang, Qingqing Zheng, and Aolan Li (link forthcoming)
This paper investigates how the legal basis for AI training is framed within data protection guidelines and regulatory interventions, drawing on a comparative analysis of approaches taken by authorities across multiple jurisdictions. Focusing on the EU’s General Data Protection Regulation (GDPR) and analogous data protection frameworks globally, the study systematically maps guidance, statements, and actions to identify areas of convergence and divergence in the conceptualisation and operationalisation of lawful grounds for personal data processing—particularly legitimate interest and consent—in the context of AI model development. The analysis reveals a trend toward converging on the recognition of legitimate interest as the predominant legal basis for AI training. However, this convergence is largely superficial, as guidelines rarely resolve deeper procedural and substantive ambiguities, and enforcement interventions often default to minimal safeguards. This disconnect between regulatory rhetoric and practical compliance leaves significant gaps in protection and operational clarity for data controllers, raising questions about the reliability and legitimacy of the existing framework for lawful AI training. It warns that, without clearer operational standards and more coherent cross-border enforcement, there is a risk that legal bases such as legitimate interest will serve as little more than formalities.
As the demand for government-held data increases, institutions require effective processes and techniques for removing personal information. An important tool in this regard is deidentification. These guidelines introduce institutions to the basic concepts and techniques of deidentification. They outline the key issues to consider when de-identifying personal information in the form of structured data and provide a step-by-step process that institutions can follow to remove personal information from datasets. This update of the IPC’s globally recognized guidelines, originally released in 2016, provides practical steps to help organizations maximize the benefits of data while protecting privacy.
In addition to the winning papers, FPF awarded two papers as Honorable Mentions: Brokering Safety by Chinmayi Sharma, Fordham University School of Law; Thomas Kadri, University of Georgia School of Law; and Sam Adler, Fordham University School of Law; and Focusing Privacy Law by Paul Ohm, Georgetown University Law Center.
Winning papers were selected based on the strength of their research and their proposed policy solutions for policymakers and regulators in the U.S. and abroad.
The 2026 Privacy Papers for Policymakers Awards will take place over two virtual events on March 4 and 11. Attendance is free, and registration is open to the public. Find more information to register for the March 4 webinar here, and the March 11 webinar here.
FPF Releases an Updated Issue Brief on Vietnam’s Law on Protection of Personal Data and the Law on Data
This Issue Brief has been updated to reflect the latest changes introduced by Decree 356/2025, enacted on 31 December 2025 as the implementing decree to Vietnam’s Personal Data Protection Law.
Vietnam is undergoing a sweeping transformation of its data protection and governance framework. Over the past two years, the country has accelerated its efforts to modernize its regulatory architecture for data, culminating in the passage of two landmark pieces of legislation in 2025: the Law on Personal Data Protection (Law No. 91/2025/QH15) (PDP Law), which elevates the Vietnamese data protection framework from an executive act to a legislative act, while preserving many of the existing provisions, and the Law on Data (Law No. 60/2025/QH15) (Data Law). Notably, the PDP Law is expected to come into effect on January 1st, 2026.
The Data Law is Vietnam’s first comprehensive framework for the governance of digital data (both personal and non-personal). It applies to all Vietnamese agencies, organizations, and individuals, as well as foreign agencies, organizations, and individuals that are located in Vietnam or that directly participate in, or are related to, digital data activities in Vietnam. The Data Law became effective in July 2025. Together, these two laws mark a significant legislative shift in how Vietnam approaches data regulation, addressing overlapping domains of data protection, data governance, and emerging technologies.
This Issue Brief analyzes the two laws, which together define a new, comprehensive regime for data protection and data governance in Vietnam. The key takeaways from this joint analysis show that:
The new PDP Law elevates and enhances data protection in Vietnam by preserving much of the existing regime while introducing important refinements, such as a distinctive approach to defining “basic” and “sensitive” personal data and more nuance in the cross-border data transfer regime, including new exceptions, even though that regime still revolves around Transfer Impact Assessments (TIAs).
However, the PDP Law continues to adopt a consent-focused regime, even as it provides clearer conditions for what constitutes valid consent.
The PDP Law outlines enhanced sector-specific obligations for high-risk processing activities, such as employment and recruitment, healthcare, banking, finance, advertising and social networking platforms.
The intersection of the PDP Law and the Data Law creates compliance implications for organizations navigating cross-border data transfers, as the present regulatory regime doubles down on the state-supervised model for such transfers.
Finally, risk and impact assessments are emerging as a central, albeit uncertain, aspect of the new regime.
This Issue Brief has three objectives. First, it summarizes key changes between the PDP Law and Vietnam’s existing data protection regime and draws a comparison between the PDP Law and the EU’s General Data Protection Regulation (GDPR) (Section 1). Second, it analyzes the interplay between the Data Law and the PDP Law (Section 2). Third, it provides key takeaways for organizations as they navigate the implementation of these laws (Section 3).
You can also view the previous version of the Issue Brief here.
Innovation and Data Privacy Are Not Natural Enemies: Insights from Korea’s Experience
The following is a guest post to the FPF blog authored by Dr. Haksoo Ko, Professor at Seoul National University School of Law, FPF Senior Fellow and former Chairperson of South Korea’s Personal Information Protection Commission. The guest post reflects the opinion of the author only and does not necessarily reflect the position or views of FPF and our stakeholder communities. FPF provides this platform to foster diverse perspectives and informed discussion.
1. Introduction: From “trade-off” rhetoric to mechanism design
I served as Chairman of South Korea’s Personal Information Protection Commission (PIPC) between 2022 and 2025. Nearly every day I felt that I was at the intersection of privacy enforcement, artificial intelligence policy, and innovation strategy. I was asked, repeatedly, whether I was a genuine data protectionist or whether I was fully supportive of unhindered data use for innovation. The question reflects a familiar assumption: that there is a dichotomy between robust privacy protection on one hand and rapid AI/data innovation on the other, and that a country must choose between the two.
This analysis draws on the policy-and-practice vantage point that I gained to argue that innovation and privacy are compatible when institutions establish suitable mechanisms that reduce legal uncertainty, while maintaining constructive engagement and dialogue.
Korea’s recent experience suggests that the “innovation vs. privacy” framing is analytically under-specified. The binding constraint is often not privacy protection as such, but uncertainty as to whether lawful pathways exist for novel data uses. In AI systems, this uncertainty is heightened by the intricate nature of their pipelines. Factors such as large-scale data processing, extensive use of unstructured data, composite modeling approaches, and subsequent fine-tuning or other modifications all contribute to this complexity. The main practical issue is less about choosing among lofty values; it is more about operationalizing workable mechanisms and managing risks under circumstances of rapid technological transformation.
Since 2023, Korea’s trajectory can be read as a pragmatic move toward mechanisms of compatibility—institutional levers that lower the transaction costs of innovative undertakings while preserving proper privacy guardrails. These levers include structured pre-deployment engagement, controlled experimentation environments, risk assessment frameworks that can be translated into repeatable workflows, and a maturing approach to privacy-enhancing technologies (PETs) governance.
Conceptually, the approach aligns with the idea of cooperative regulation: regulators offer clearer pathways and procedural predictability for innovative undertakings, while also deepening their understanding of the technological underpinnings of these new undertakings.
This article distills the mechanisms Korea has attempted in an effort to operationalize compatibility of privacy protection with the AI-and-data economy. The emphasis is pragmatic: to identify which institutional levers reduce legal and regulatory uncertainty without eroding accountability, and how those levers map to the AI lifecycle.
2. Korea’s baseline architecture of privacy protection
2.1 General statutory backbone and regulatory capacity
Korea maintains an extensive legal framework for data privacy, primarily governed by the Personal Information Protection Act (PIPA), and further reinforced through specific guidance and strong institutional capacity of the PIPC. The PIPA supplies durable principles and enforceable obligations, while guidance and engagement tools translate those principles and statutory obligations into implementable controls in emerging contexts such as generative AI.
The PIPA embeds familiar principles into statutory obligations: purpose limitation, data minimization, transparency, and various data subject rights. In AI settings, the central challenge has been their application: how to interpret these obligations in the context of, e.g., model training and fine-tuning, RAG (retrieval augmented generation), automated decision-making, and AI’s extension into physical AI and various other domains.
2.2 Principle-based approach combined with risk-based operationalization
Korea’s move is not “light-touch privacy,” but a principle-based approach combined with risk-based operationalization. The PIPC concluded that, given the uncertain and hard-to-predict nature of technological developments surrounding AI, adopting a principle-based approach was inevitable: an alternative like a rule-based approach would result in undue rigidity and stifle innovative energy in this fledgling field. At the same time, the PIPC recognized that a major drawback of a principle-based approach could be the lack of specificity and that it was imperative to issue sufficient guidance to show how principles are interpreted and applied in practice. Accordingly, the PIPC embarked on a journey of publishing a series of guidelines on AI.
In formulating and issuing these guidelines, an emphasis was consistently placed on the significance of implementing and operationalizing risk-based approaches. Emphasizing risk-based operationalization has several noteworthy implications. First, risk is a constant feature of new technologies, and pursuing zero risk is not realistic. As such, the focus was directed towards minimizing relevant risks, instead of seeking their complete elimination. Second, as technologies evolve, the resulting risk profile would also change continuously. Thus, putting in place procedures for periodic risk assessment would be crucial so that a proper mechanism for risk management could be at play. Third, a ‘one-size-fits-all’ approach would rarely be suitable, and multiple tailored solutions often need to be applied simultaneously. Furthermore, it is advisable to consider the overall risk profile of an AI system rather than concentrating on a few salient individual risks. This is akin to the Swiss cheese approach in cybersecurity: deploying multiple independent security measures at multiple layers on the assumption that every layer may have unknown vulnerabilities.
3. Mechanisms of compatibility: What Korea has deployed
The PIPC devised and deployed multiple mechanisms to convert the “innovation vs. privacy” framework into a tractable governance program. They function as a portfolio: some instruments reduce uncertainty through ex ante engagement, others enable innovative experimentation under structured constraints, and still others turn principles into repeatable compliance workflows. The PIPC aimed to offer organizations a set of options, acknowledging that, depending on the type of data and the purposes for which the data would be used, different data processing needs would arise. The PIPC recognized that tailored mechanisms would be necessary to address these diverse requirements effectively.
3.1 Case-by-case assessments to reduce uncertainty
AI services could reach the market before regulators can fully resolve novel interpretive questions. In some cases, regulators may commence investigations after new AI services have been launched. As such, businesses may have to accept that they may face regulatory scrutiny ex post. The uncertainty resulting from this unpredictability could make innovators hesitant to launch new services. Accordingly, the PIPC has implemented targeted engagement mechanisms designed to deliver timely and effective responses on an individual basis. For organizations, this would provide predictability in an expedited manner. The PIPC, on the other hand, would gain through these mechanisms in-depth information about the intricate details and inner workings of new AI systems. By adopting this approach, the PIPC could develop the necessary expertise to make well-informed decisions that are consistent with current technological realities. The following provides an overview of several mechanisms that have been implemented.
(1) “Prior adequacy review”
A “prior adequacy review” refers to a structured pre-deployment engagement pathway. The participating business would, on a voluntary basis, propose a data processing design and safeguard package in consideration of the risks involved; the PIPC would then evaluate the adequacy of the proposal against the identified risks; and, if deemed adequate, the PIPC would provide ex ante comfort that the proposed package aligns with the PIPC’s interpretation of the law.
The discipline lies in the trade-off: reduced uncertainty in exchange for concrete safeguards and future audits. Safeguard packages could include structured data sourcing and documentation, minimization and de-identification of data where feasible, strict access control, privacy testing and red-teaming for model outputs, input and output filtering for data privacy, and/or structured handling of data subjects’ requests.
More than a dozen businesses have used this mechanism as they prepared to launch new services. One example is Meta’s launch of a service in Korea for screening and identifying fraudulent advertisements that use celebrities’ images without their authorization. While there was a concern about the legality of processing a person’s images without his or her consent, the issue was resolved, in part, by considering the technological aspect that can be called the “temporary embedding” of images.
(2) “No action letters” and conditional regulatory signaling
A “no action letter” is another form of regulatory signaling: under specified facts and conditions, the PIPC clarifies that it will not initiate an enforcement action. The overall process for a “no action letter” is much simpler than for a prior adequacy review. Its development was informed by the “no action letter” framework, which is widely used in the financial sector.
Where used, its value lies in significantly reducing uncertainty in exchange for an articulated set of commitments. Although preparatory work had taken place earlier, the mechanism was officially implemented in November 2025. The first no action letter was issued in December 2025 for an international research project that used pseudonymized health data of deceased patients.
(3) “Preliminary fact-finding review”
A “preliminary fact-finding review” serves as an expedited evaluative process particularly suited to rapidly evolving sectors. Its primary objective is to develop a comprehensive understanding of the operational dynamics within an emerging service category and to identify pertinent privacy concerns. Although this review may result in the issuance of a corrective recommendation, which is a form of administrative sanction, issuing such a recommendation is typically not a principal motivation for conducting a preliminary fact-finding review.
For organizations, the value of this review process lies in gaining directional clarity without having to worry about the possibility of immediate escalation into a formal investigative proceeding. For the PIPC, the value is an enlightened understanding of market practices, which in turn serves to inform guidance and targeted supervision.
In early 2024, the PIPC conducted a comprehensive review of several prominent large language models, including those developed or deployed by OpenAI, Microsoft, Google, Meta, and Naver. The assessment focused on data processing practices across pre-training, training, and post-deployment phases. The PIPC issued several minor corrective recommendations. As a result of this review, the businesses obtained legal and regulatory clarity regarding their data processing practices associated with their large language models.
3.2 Controlled experimentation environments: Providing “playgrounds” for R&D
A second group of mechanisms centers on establishing controlled experimental environments. For instance, in situations requiring direct access to raw data for research and development, policy priorities shift towards enabling experimentation while simultaneously reinforcing safeguards that address the corresponding heightened risks. The following is an overview of several specific mechanisms that were implemented in this regard.
(1) “Personal Data Innovation Zones”
“Personal Data Innovation Zones” provide secure environments where vetted researchers and firms can work with high-quality data in a relatively flexible manner. The underlying idea is an appropriate risk-utility calculus. That is, once a secure data environment—an environment that is more secure than usual with strict technical and procedural controls—is established, research within such a secure environment can be conducted with more room for flexibility than usual.
Within a Personal Data Innovation Zone, for instance, data can be used for a long period of time (up to five years with a renewal possibility), data can be retrieved and reused rather than being disposed of after one-time use, and adequacy review of pseudonymization can be conducted using sampled data, instead of reviewing the entire dataset. So far, seven organizations, such as Statistics Korea and Korea National Cancer Center, have been designated as having satisfied the conditions for establishing secure data environments.
(2) Regulatory sandboxes for personal data
Regulatory sandboxes for personal data permit time-limited experiments under specific conditions designed by regulators. Through this mechanism, approval may be granted to organizations that have implemented suitable safeguard measures. One example that has supported new technological developments involves the use of unobfuscated original video data to develop algorithms for autonomous systems such as self-driving cars and delivery robots. Developing such algorithms would almost inevitably require the use of unobfuscated data, since it would otherwise be exceedingly cumbersome to obfuscate or otherwise de-identify all of the personal data appearing in the video footage to be used. In the review process, certain conditions would be imposed in order to safeguard the data properly, often emphasizing strict access control and the management of data provenance.
(3) Pseudonymized data and synthetic data: From encouragement to proceduralization
The PIPC has also moved from generic endorsement of privacy-enhancing technologies (PETs) to procedural guidance. Pseudonymized data and synthetic data are the clearest examples. A phased process was developed—preparation, generation, safety/utility testing, expert or committee assessment, and controlled utilization—with an emphasis on risk evaluation.
Some organizations, in particular certain research hospitals, established data review boards (DRBs), although doing so was not a statutory requirement. A DRB’s role would include, among others, evaluating the suitability of using pseudonymized data, assessing the identifiability of personal data from a dataset that is derived from multiple pseudonymized datasets, and assessing identifiability risks from synthetic data.
4. Institutional design features that make the mechanisms credible
4.1 Building credibility and maintaining active channels of engagement
Compatibility is not achieved by guidance alone. Pro-innovation tools require institutional credibility. From the perspective of businesses, communicating with regulators can readily trigger anxiety. Businesses may worry that information they share could invite unwanted scrutiny. Given this anxiety, regulators need to be proactive and send out a consistent and coherent signal that information gathered through these mechanisms will not be used against the participating businesses. Maintaining sustained and reliable communication channels is critical.
4.2 Expertise and professionalism as regulatory infrastructure
Case-by-case reviews, sandboxes, and risk models are only credible if the regulator has expertise in data engineering, AI system design, security, and privacy risk measurement—alongside legal and administrative capacity. To be effective, principle-based regulation requires sophisticated interpretive capability.
5. Implications: Why compatibility is plausible
Korea’s experience shows that the “innovation vs. privacy” framing is analytically under-specified. At an operational level, greater challenges tend to occur at the intersection of uncertainty, engagement, and institutional capacity. When legal and regulatory interpretations are vague and enforcement is unpredictable, innovators may perceive privacy as a barrier. When safeguards are demanded but not operationalized, privacy advocates may perceive innovation policy as de facto deregulation.
Korea’s mechanisms have attempted to resolve new challenges by translating principles into implementable controls, creating structured engagement and experimentation pathways. Privacy law does not inherently block innovation; poorly engineered compliance pathways do.
6. Conclusion
Korea’s experience supports a disciplined proposition: innovation and data privacy are compatible when compatibility is properly designed and executed. Compatibility does not come from declaring a balance; it comes from mechanisms that reduce uncertainty for innovators while increasing the credibility of the adopted safeguards for data subjects.
Korea’s toolkit—a principle-based approach combined with risk-based operationalization, structured risk management frameworks, active engagement channels, and credibility supported by professionalism and expertise—offers privacy professionals and policymakers a practical reference point for governance in the AI era.
The RAISE Act vs. SB 53: A Tale of Two Frontier AI Laws
What the enactment of New York’s RAISE Act reveals compared to California’s SB 53, the nation’s first frontier AI law
On December 19, New York Governor Hochul (D) signed the Responsible AI Safety and Education (RAISE) Act, ending months of uncertainty after the bill passed the legislature in June and making New York the second state to enact a statute specifically focused on frontier artificial intelligence (AI) safety and transparency.1 Sponsored by Assemblymember Bores (D) and Senator Gounardes (D), the law closely follows California’s enactment of SB 53 in late September, requiring advanced AI developers to publish governance frameworks and transparency reports, and establishing mechanisms for reporting critical safety incidents. As they moved through their respective legislatures, the RAISE Act and SB 53 shared a focus on transparency and catastrophic risk mitigation but diverged in scope, structure, and enforcement, raising concerns about a compliance patchwork for nationally operating developers.
The New York Governor’s chapter amendments ultimately narrowed those differences, revising the final version of the RAISE Act to more closely align with California’s SB 53, with conforming changes expected to be formally adopted by the Legislature in January. Even so, the two laws are not identical, and the remaining distinctions may be notable for frontier developers navigating compliance in both the Golden and the Empire State.
Understanding the RAISE Act, and how it aligns with and diverges from California’s SB 53, offers a useful lens into how states are approaching frontier AI safety and transparency and where policymaking may be headed in 2026.
At a high level, the two statutes now share largely identical scope and core requirements. Still, several distinctions remain, including:
Scope: Though the scope of regulated technologies and entities is largely identical, the RAISE Act includes explicit carveouts for universities engaged in research, and, importantly, contains a territorial limitation (applying only to models developed or operated in whole or in part in New York) that is not present in SB 53. As a result, should either law be subject to constitutional scrutiny, the RAISE Act may be more likely to survive a Dormant Commerce Clause challenge.
Requirements: SB 53 includes employee whistleblower protections, which are absent from the RAISE Act. By contrast, the RAISE Act establishes a frontier developer disclosure program requiring additional information, such as ownership structure, that SB 53 does not mandate.
Safety Incident Reporting: SB 53 offers a longer timeline (15 days), while RAISE sets a 72-hour window and uses stricter qualifiers (establishing reasonable belief that an incident occurred).
Rulemaking Authority: SB 53 empowers the California Department of Technology to recommend definitional updates to the statute and to align with national and international standards. The RAISE Act offers direct rulemaking authority to the Department of Financial Services (DFS), such as considering additional reporting or publication requirements.
Liability and Enforcement: The RAISE Act authorizes slightly higher penalties (up to $1 million for a first violation and $3 million for subsequent ones, compared to SB 53’s cap of $1 million per violation).
RAISE Act: Scope and Requirements
Despite these distinctions, the RAISE Act largely mirrors California’s SB 53 in how it defines covered models, developers, and risks, resulting in a substantially similar compliance scope across the two states. The sections below summarize the RAISE Act’s scope and key requirements.
Scope:
The law regulates frontier developers, defined as entities that “trained or initiated the training” of high-compute frontier models, or foundation models trained with more than 10^26 computational operations. It separately defines large frontier developers, or those with annual gross revenues above $500 million, targeting compliance towards the largest AI companies.
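For illustration only, the sketch below encodes the two scoping thresholds described above as a simple classification check; the constant and field names are assumptions, and nothing here should be read as legal advice or as the statute’s operative text.

```python
FRONTIER_COMPUTE_THRESHOLD_OPS = 1e26        # training compute threshold (operations)
LARGE_DEVELOPER_REVENUE_USD = 500_000_000    # annual gross revenue threshold

def classify_developer(training_ops: float, annual_revenue_usd: float) -> str:
    """Classify an entity under the scoping thresholds summarized above."""
    if training_ops <= FRONTIER_COMPUTE_THRESHOLD_OPS:
        return "not a frontier developer"
    if annual_revenue_usd > LARGE_DEVELOPER_REVENUE_USD:
        return "large frontier developer (additional duties apply)"
    return "frontier developer"

print(classify_developer(3e26, 7.5e8))  # -> large frontier developer (additional duties apply)
```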
Like California SB 53, the RAISE Act is focused on preventing catastrophic risk, defined as a foreseeable and material risk that a frontier model could:
Contribute to the death or serious injury of 50 or more people or cause at least $1 billion in damages;
Provide expert-level assistance in creating or releasing a chemical, biological, radiological, or nuclear weapon;
Engage in criminal conduct or a cyberattack without meaningful human intervention; or
Evade the control of its developer or user.
Requirements:
The RAISE Act establishes multiple compliance requirements, with certain requirements applying to all frontier developers and additional duties reserved for large frontier developers.
Frontier AI Framework: Large frontier developers must annually publish a Frontier AI Framework describing their governance structures, risk assessment thresholds, mitigation strategies, cybersecurity practices, and alignment with national or international standards. The framework must also address catastrophic risk arising from internal model use. Limited redactions are permitted to protect trade secrets, cybersecurity, or national security.
Transparency Report: Before deploying a frontier model, all frontier developers (not only “large” developers) must publish a transparency report detailing the model (intended uses, modalities, restrictions), as well as summaries of catastrophic risk assessments, their results, and the role of any third-party evaluators.
Safety Incident Reporting: Frontier developers must report critical safety incidents to the DFS within 72 hours of forming a reasonable belief that an incident occurred, shortened to 24 hours where there is an imminent danger of death or serious injury. The law also requires a mechanism for public reporting of incidents.
Frontier Developer Disclosure: Large frontier developers may not operate a frontier model in New York without filing a disclosure statement with DFS. Disclosures must be updated at least every two years, upon ownership transfer or following material changes, and must identify ownership structure, business addresses, and designated points of contact. Large developers are assessed pro rata fees to support administration of the program and DFS may impose penalties of up to $1,000 per day for noncompliance.
Rulemaking: The Department of Financial Services is granted direct rulemaking authority, including the ability to consider additional reporting or publication requirements to advance the statute’s safety and transparency objectives.
Enforcement: The RAISE Act authorizes the Attorney General to bring civil actions for violations, with penalties up to $1 million for a first violation and up to $3 million for subsequent violations, scaled to the severity of the offense. The statute expressly does not create a private right of action. It also clarifies, unlike California’s SB 53, that a large frontier developer may assert that alleged harm or damage was caused by another person, entity, or contributing factor.
Before the Amendments: How the RAISE Act Changed
Before Governor Hochul’s chapter amendments, the RAISE Act would have diverged much more sharply from California’s SB 53. The earlier iteration of the bill that passed out of the Legislature took a more expansive approach, including higher penalties and stricter liability thresholds, raising the prospect of meaningfully different compliance regimes on opposite coasts.
Most notably, the original RAISE Act applied only to “large developers,” defined by annual compute spending above $100 million, rather than distinguishing between frontier developers and large frontier developers as SB 53 does. That threshold would have captured a different (and potentially broader) set of companies than the enacted framework, which now relies on a $500 million revenue benchmark aligned with California’s approach. The bill also originally framed its focus around “critical harm,” rather than the “catastrophic risk” standard now shared with California’s SB 53, and paired that definition with heightened liability requirements, including that harm be a probable consequence, that the developer’s conduct be a substantial factor, and that the harm could not have been reasonably prevented. Those qualifiers were ultimately removed in favor of the “catastrophic risk” standard used in SB 53, including the same 50-person harm threshold.
The RAISE Act’s requirements evolved as well. Earlier versions lacked both the transparency report obligation (now shared with SB 53) and the frontier developer disclosure program (a new New York-specific addition). While the original RAISE Act did include an obligation to maintain a “safety and security protocol,” that requirement was less prescriptive about governance and mitigation practices than the now enacted “Frontier AI Framework.”
Perhaps the most significant change was the removal of a deployment prohibition. As passed by the Legislature, the RAISE Act would have barred deployment of models posing an unreasonable risk of critical harm, a restriction not found in SB 53. Chapter amendments left the final law focused on transparency and reporting, rather than direct deployment restrictions. Penalties were similarly scaled back, falling from a maximum of $10 million for a first violation and $30 million for subsequent violations to $1 million and $3 million, respectively.
Looking Ahead: What Comes Next in 2026?
With chapter amendments expected to be formally adopted in the coming weeks, the RAISE Act will take effect after California’s SB 53, which became operative on January 1, 2026. As a result, SB 53 will be the first real test of how a frontier AI statute operates in practice, with New York following shortly thereafter.
That rollout comes amid renewed uncertainty over the balance between state and federal AI policymaking. A recent White House executive order, Ensuring a National Policy Framework for Artificial Intelligence, seeks to apply federal pressure against state AI laws deemed excessive, including through an AI Litigation Task Force and funding restrictions tied to state enforcement of certain AI laws. While the practical impact of the EO remains unclear, it adds complexity for states and developers preparing for compliance.
Both SB 53 and the RAISE Act include severability clauses, which preserve the remainder of each statute if individual provisions are invalidated. While standard in complex legislation, those clauses may become more consequential if either law is drawn into these broader federal-state tensions. At the same time, the EO directs the Administration to engage Congress on a federal AI framework, raising the possibility that SB 53 and the RAISE Act could serve as reference points for future federal legislation. With other states, including Michigan, already introducing similar bills, it should become clearer in 2026 whether SB 53 and the RAISE Act function as models for broader adoption or face legal challenge.
Passed by the Legislature as A 6453A and to be enacted through chapter amendments reflected in A 9449. ↩︎
FPF Year in Review 2025
Co-authored by FPF Communications Intern Celeste Valentino with contributions from FPF Global Communications Manager Joana Bala
This year, FPF continued to broaden its footprint across priority areas of data governance, further expanding activities across a range of cross-sector topics, including AI, Youth, Conflict of Laws, AgeTech (seniors), and Cyber-Security. We have engaged extensively at the local and national levels in the United States and are increasingly active in every major global region.
Highlights from FPF work in 2025
2025 saw the release of a range of FPF reports and issue briefs highlighting top data protection and AI developments. A few highlights follow, showing the breadth of FPF’s coverage.
FPF tracked and analyzed 210 bills in 42 states, highlighting five key takeaways: (1) states shifted from broad frameworks to narrower, transparency-driven approaches; (2) three main approaches to private sector AI regulation emerged: use- or context-based, tech-specific, and liability/accountability; (3) the most commonly enacted frameworks focus on healthcare, chatbots, and innovation safeguards; (4) policymakers signaled an interest in balancing consumer protection with AI growth; and (5) definitional uncertainty, agentic AI, and algorithmic pricing are likely to be key topics in 2026. Learn more in a LinkedIn Live event with the report’s authors here.
Several states have enacted “substantive” data minimization rules that aim to place default restrictions on the purposes for which personal data can be collected, used, or shared. What questions do these rules raise, and how might policymakers construct them in a forward-looking manner? FPF covers lawmakers’ turn towards substantive data minimization and addresses the relevant challenges and questions they pose. Watch a LinkedIn Live here on the topic.
The Concepts in AI Governance: Personality vs. Personalization issue brief explores the specific use cases of personalization and personality in AI, identifying their concrete risks to individuals and interactions with U.S. law, and proposes steps that organizations can take to manage these risks. Read Part 1 (exploring concepts), Part 2 (concrete uses and risks), Part 3 (intersection with U.S. law), and Part 4 (Responsible Design and Risk Management).
From India’s DPDPA to Vietnam’s new Decree and Indonesia’s PDPL, the Asia-Pacific region is undergoing a shift in its data protection law landscape. This issue brief provides an updated view of evolving consent requirements and alternative legal bases for data processing across key APAC jurisdictions. The brief also explores how the rise of AI is impacting shifts in lawmaking and policymaking across the region regarding lawful grounds for processing personal data. Watch the LinkedIn Live panel discussion on key legislative developments in APAC since 2022.
This Issue Brief analyzes Brazil’s recently enacted children’s online safety law, summarizing its key provisions and how they interact with existing principles and obligations under the country’s general data protection law (LGPD). It provides insight into an emerging paradigm of protection for minors in online environments through an innovative and strengthened institutional framework, focusing on how it will align with and reinforce data protection and privacy safeguards for minors in Brazil and beyond.
As digital trade accelerates, countries across Africa are adopting varied approaches to data transfers—some incorporating data localization measures, others prioritizing open data flows.
FPF examines the current regulatory landscape and offers a structured analysis of regional efforts, legal frameworks, and opportunities for interoperability, including a comparative annex covering Kenya, Nigeria, South Africa, Rwanda, and the Ivory Coast.
FPF Filings and Comments
Throughout the year, FPF provided expertise through filings and comments to government agencies on proposed rules, regulations, and policy changes in the U.S. and abroad.
FPF provided recommendations and filed comments with:
California Privacy Protection Agency (CPPA) concerning draft regulations governing cybersecurity audits, risk assessments, automated decision-making technology (ADMT) access, and opt-out rights under the California Consumer Privacy Act.
Colorado Attorney General regarding draft regulations for implementing the heightened minor protections within the Colorado Privacy Act (“CPA”).
The Consumer Financial Protection Bureau’s Advance Notice of Proposed Rulemaking (ANPR) for its Personal Financial Data Rights Reconsideration, exploring certain significant components of the final rule with a view to improving the regulation for consumers and industry.
New York Office of the Governor to highlight certain ambiguities in the proposed New York Health Information Privacy Act.
New Jersey Division of Consumer Affairs on implementing the New Jersey Data Privacy Act (‘NJDPA’).
India’s Ministry of Electronics and Information Technology (MeitY) on the draft Digital Personal Data Protection Rules.
Kenya’s Office of the Data Protection Commissioner (ODPC) on the Draft Data Sharing Code.
The White House Office of Science and Technology Policy (OSTP), providing feedback on how the U.S. federal government’s proposed Development of an Artificial Intelligence Action Plan could include provisions to protect consumer privacy.
The FPF Center for Artificial Intelligence
This year, the FPF Center for Artificial Intelligence expanded its resources, releasing insightful blogs, comprehensive issue briefs, detailed infographics, and a flagship report on issues related to AI agents, assessment, and risk, as well as key concepts in AI governance.
In addition, the Center for AI hosted two events, convening top scholars specializing in complex technical questions that impact law and policy:
July’s Technologist Roundtable covered AI Unlearning and Technical Guardrails. Welcoming a range of academic and technical experts, as well as data protection regulators from around the world, the team explored the extent to which information can be “removed” or “forgotten” from an LLM or similar generative AI model, or from an overall generative AI system.
This fall, the Center hosted the “Responsible AI Management & CRAIG Webinar,” exploring responsible AI and the role of industry-academic research in promoting responsible AI management. Guests joined from the recently established Center on Responsible AI and Governance (CRAIG), the first industry-university cooperative research center to focus exclusively on responsible AI, supported by the National Science Foundation in collaboration with Ohio State University, Baylor University, Northeastern University, and Rutgers University.
Check out some other highlights of FPF’s AI work this year:
Defined the key distinction between two emerging trends in hyperpersonalized conversational AI technologies through a four-part blog series and accompanying issue brief discussing “personalization” and “personality.”
Produced a comprehensive brief breaking down the key technologies, business practices, and policy implications of Data-Driven Pricing.
Released a report examining the considerations, emerging practices, and challenges that organizations face in attempting to harness AI’s potential while mitigating potential harms.
Discussed the landscape of U.S. states working to enact regulations concerning “neural data” or “neurotechnology data”, information about people’s thoughts and mental activity.
Explored the concept of “regulatory sandboxes” in a blog covering key characteristics, justifications, and policy considerations for the development of frameworks that offer participating organizations the opportunity to experiment with emerging technologies within a controlled environment.
Released an updated guide on Conformity Assessments under the EU AI Act, providing a step-by-step roadmap for organizations that are seeking to understand whether they must conduct a Conformity Assessment.
Highlighted the wide range of current use cases for AI in education, as well as future possibilities and constraints, through a new infographic.
Summarized the 47th Global Privacy Assembly (GPA), specifically the three resolutions adopted across its five-day agenda, as regulators focused on how AI shapes and interacts with personal data and individual rights.
Global
In 2025, FPF’s global work focused on how jurisdictions worldwide are adapting privacy and data protection frameworks to keep pace with AI and shifting geopolitical and regulatory landscapes. From children’s privacy and online safety to cross-border data flows and emerging AI governance frameworks, FPF’s teams engaged across regions to provide thought leadership, practical guidance, and stakeholder engagement, helping governments, organizations, and practitioners navigate complex developments while balancing innovation with fundamental rights.
In APAC, FPF analyzed South Korea’s AI Framework Act and Japan’s AI Promotion Act, highlighting differing approaches to innovation, risk management, and oversight. A comparative overview of the EU, South Korean, and Japanese frameworks provided practical insights into global AI policy trends. The evolution of consent was also a key focus. Our experts examined Vietnam’s rapidly evolving data framework, analyzing the newly adopted Personal Data Protection Law and Law on Data and their implications for a comprehensive approach to data protection and governance. From Japan to New Zealand, the team engaged on timely issues and contributed to major regional forums, demonstrating leadership in advancing privacy and AI governance across the region.
In India, FPF engaged with key stakeholders and conducted peer-to-peer sessions on the Digital Personal Data Protection (DPDP) rules. Notably, FPF’s analysis of the DPDPA and generative AI systems helped inform India’s newly released AI Governance Guidelines, demonstrating the local impact of FPF’s resources.
In Latin America, FPF tracked developments such as Chile’s new data protection law and Brazil’s children’s privacy legislation. FPF also participated in regional events on age verification for minors, discussing technologies like facial recognition and emerging legal trends in the region. We also examined how data protection authorities are responding to AI, reviewing developments across Latin America and Europe.
In Africa, FPF examined cross-border data flows and regulatory interoperability, emphasizing regional coordination for responsible data transfers. This year, we launched the Africa Council Membership, a dedicated platform for companies operating in the continent. FPF also hosted its first in-person side event in Africa at the 2025 NADPA Convening in Abuja, Nigeria, centered on “Securing Safe and Trustworthy Cross-Border Data Flows in Africa.” The positive feedback from the session underscored the value of convening stakeholders around Africa’s evolving data protection landscape.
FPF’s flagship European event, the Brussels Privacy Symposium, co-organized with the Brussels Privacy Hub, brought together stakeholders to examine the GDPR’s role in the EU’s evolving digital framework. In partnership with OneTrust, FPF also published an updated Conformity Assessment under the EU AI Act: A Step-by-Step Guide and infographic, providing a roadmap for organizations to assess high-risk AI systems and meet accountability requirements. FPF closely followed the European Commission’s Digital Omnibus proposals, offering exclusive member analysis and public insights, including a rapid first-reaction LinkedIn Live discussion.
State and Federal U.S. Legislation
In 2025, FPF continued to track and analyze critical legislation in the privacy landscape from AI chatbots to neural data across various states in the U.S.
We unpacked the new wave of state chatbot legislation, focusing specifically on California’s SB 243, which made California the first state to pass legislation governing companion chatbots with protections explicitly tailored to minors, and Utah’s SB 332, SB 226, and HB 452, which made the state an early mover in state AI legislation as it enacted three generative AI bills, amending Utah’s 2024 Artificial Intelligence Policy Act (AIPA) and establishing new regulations for mental health chatbots.
FPF compared California’s SB 53, the Transparency in Frontier Artificial Intelligence Act (TFAIA), to the New York RAISE Act, anticipating where U.S. policy on frontier model safety may be headed after SB 53 was signed into law, making California the first state to enact a statute specifically targeting frontier AI safety and transparency.
We also looked at how amendments to previous state privacy laws, such as the Montana Consumer Data Privacy Act (MCDPA), were modified to create new protections for minors and examined how SB 1295 will amend the Connecticut Data Privacy Act (CTDPA), including how it expanded its scope, added a new consumer right, heightened the already strong protections for minors, and more.
Data-driven pricing also became a critical topic, as states across the U.S. introduced new legislation to regulate how companies use algorithms and personal data to set consumer prices; these modern pricing models can personalize pricing over time and at scale, and they are under increasing scrutiny. FPF looked at how the legislation varies from state to state, its potential consequences, and the future of enforcement against these practices.
We explored “neural data”, or information about people’s central and/or peripheral nervous system activity. As of July 2025, four states have passed laws that seek to regulate “neural data.” FPF detailed in a blog why, given the nature of “neural data,” it is challenging to get the definition just right for the sake of regulation.
Building on last year’s “Anatomy of State Comprehensive Privacy Law,” our recent report breaks down the critical commonalities and differences in the laws’ components that collectively constitute the “anatomy” of a state comprehensive privacy law.
Also this year, FPF hosted its 15th Annual Privacy Papers for Policymakers Award, recognizing cutting-edge privacy scholarship and bringing together brilliant minds at a critical time for data privacy amid the rise of AI. We heard insightful discussions between our awardees and an exceptional lineup of privacy academics and industry leaders, and connected the awardees with privacy professionals, policymakers, and others at a networking session.
U.S. Policy
AgeTech
FPF was awarded a grant from the Alfred P. Sloan Foundation to lead the two-year research project, “Aging at Home: Caregiving, Privacy, and Technology,” in partnership with the University of Arizona’s Eller College of Management. FPF launched the project in April, setting out to explore the complex intersection of privacy, economics, and the use of emerging technologies designed to support aging populations (“AgeTech”). In July, we released our first blog as part of the project, posing five essential privacy questions for older adults and caregivers to consider when utilizing tech to support aging populations.
During the holiday season, FPF also outlined three types of AI-enabled AgeTech and the privacy and data protection considerations to navigate when giving gifts to older adults and caregivers.
Youth Privacy
The start of 2025 was marked by significant policy activity at both the federal and state levels, focusing on legislative proposals aimed at strengthening online safeguards for minors.
FPF kicked off the year by releasing a redline comparison of the Federal Trade Commission’s notice of proposed changes to the Children’s Online Privacy Protection Act (COPPA) Rule. Later in the spring, the COPPA 2.0 bill was reintroduced in the Senate, and FPF completed a second redline comparing it to the original COPPA Rule.
Towards the end of the year, the U.S. House Energy & Commerce Committee introduced a comprehensive bill package to advance child online privacy and safety, including its own version of COPPA 2.0, marking the latest step toward modernizing the nearly 30-year-old Children’s Online Privacy Protection Act.
FPF analyzed how the new House proposal compares to long-standing Senate efforts, what’s changing, and what it means for families, platforms, and policymakers navigating today’s digital landscape.
States across the U.S. also took action, introducing legislation to enhance the privacy and safety of kids’ and teens’ online experiences. Using the federal COPPA framework as a guide, FPF analyzed Arkansas’s proposed “Arkansas Children and Teens’ Online Privacy Protection Act,” describing how the bill establishes new privacy protections for teens aged 13 to 16. Other states, such as Vermont and Nebraska, took a different approach, opting to pass Age-Appropriate Design Code Acts (AADCs). FPF discussed how these new bills take two very different approaches to a common goal: crafting a design code that can withstand First Amendment scrutiny.
We used infographics to illustrate complex issues related to technology and children’s online experiences. In celebration of Safer Internet Day 2025, we released an infographic explaining how encryption technology plays a crucial role in ensuring data privacy and online safety for a new generation of teens and children. We also illustrated the Spectrum of Artificial Intelligence, exploring the wide range of current use cases for Artificial Intelligence (AI) in education, along with future possibilities and constraints. Finally, we released an infographic and readiness checklist detailing the various types of deepfakes and the risks and considerations each poses in a school setting, ranging from fabricated phone calls and voice messages impersonating teachers to the sharing of forged, non-consensual intimate imagery (NCII).
As agencies face increasing pressure to leverage sensitive student and institutional data for analysis and research, Privacy Enhancing Technologies (PETs) offer a promising solution: they are advanced technologies designed to protect data privacy while maintaining the utility of analytical results. FPF released a landscape report on the adoption of PETs by State Education Agencies (SEAs).
Data Sharing for Research Tracker
In March, we celebrated Open Data Day by launching the Data Sharing for Research Tracker, a growing list of organizations that make data available for researchers. The tracker helps researchers locate data for secondary analysis and helps organizations raise awareness of their data-sharing programs by benchmarking them against what other organizations offer.
Foundation Support
FPF’s funding spans every industry sector and includes competitively awarded projects from the U.S. National Science Foundation and leading private foundations. We work to support ethical researcher access to data and responsible uses of technology in K-12 education, and we seek to advance the use of Privacy Enhancing Technologies in the private and public sectors.
FPF Membership
FPF Membership provides the leading community for privacy professionals to meet, network, and engage in discussions on top issues in the privacy landscape.
The Privacy Executives Network (PEN) Summit
We held our 2nd annual PEN Summit in Berkeley, California, which showcased the power of quality peer-to-peer conversations focused on the most pressing global privacy and AI issues. The event opened with the latest from CPPA Executive Director Tom Kemp, continued with dynamic peer-to-peer roundtables, and closed with a lively half-day privacy simulation in which participants were challenged to pool their knowledge and identify potential solutions to a scenario privacy executives may face in their careers.
New Trainings for FPF Members
FPF Membership expanded its benefits with complimentary trainings for all members. FPF members can attend live virtual trainings and access training recordings and presentation slides via the FPF Member Portal. Our first member course, on De-Identification, took place in late September, followed by a training on running a Responsible AI program. Stay tuned for more courses next year, and be sure to join the FPF Training community in the Member Portal to receive updates on future trainings and view existing training materials.
FPF convenes top privacy and data protection minds and can give your company access to our outstanding network through FPF membership. Learn more about how to become an FPF member.
Top-level FPF Convenings and Engagements from 2025
DC Privacy Forum: Governance for Digital Leadership and Innovation
This year, FPF hosted two major events gathering leading experts and policymakers for critical discussions on privacy, AI, and digital regulation. In D.C., FPF hosted our second annual DC Privacy Forum, convening a broad audience of key government, civil society, academic, and corporate privacy leaders to discuss AI policy, critical topics in privacy, and other priority issues for the new administration and policymakers.
Brussels Privacy Symposium: A Data Protection (R)evolution?
Our ninth edition of the Brussels Privacy Symposium focused on the impact of the European Commission’s competitiveness and simplification agenda on digital regulation, including data protection. This year’s event featured bold discussions on refining the GDPR, strengthening regulatory cooperation, and shaping the future of AI governance. Read the report here.
FPF experts also took the stage across the globe:
FPF’s APAC team participated in several events during the Personal Data Protection Commission Singapore’s (PDPC) Personal Data Protection Week, both moderating and speaking on panels focused on the latest developments in data protection and use of data tech.
In South Korea, FPF leadership and experts attended the 47th Global Privacy Assembly, the leading international forum that brings together data protection and privacy authorities from around the world.
In Ghana, we joined leaders at the Africa AI Stakeholder Meeting to discuss AI governance, data protection, and infrastructure for AI in Africa.
FPF leadership spoke at an official side event of the AI Action Summit in Paris, discussing the role of Privacy-Enhancing Technologies (PETs) in data sharing for AI development.
FPF joined Turkey’s first privacy summit, the Istanbul Privacy Summit, contributing to a panel on “Regulating Intelligence” that explored approaches to responsible AI from the EU to the Middle East.
In Washington, we kicked off the IAPP’s Global Privacy Summit (GPS) with our annual Spring Social, a night full of great company, engaging discussions, and new connections. At GPS, our team hosted and spoke on several panels on topics ranging from understanding U.S. state and global privacy governance to the future of technological innovation, policy, and professions.
New initiatives and expanding FPF’s network:
FPF restarted its Privacy Book Club series with a special conversation with Professor Simon Chesterman, author of Artifice – a novel of AI. They discussed how speculative fiction can illuminate real-world challenges in AI governance, privacy, and trust, and what policymakers, technologists, and the public can learn from imagining possible futures. Watch the LinkedIn Live discussion, check out previous book club chats, and sign up for our newsletter to receive updates. Also check out FPF CEO Jules Polonetsky’s weekly LinkedIn Live series.
FPF was pleased to announce the election of Anne Bradley, Peter Lefkowitz, Nuala O’Connor, and Harriet Pearson to its Board of Directors. Julie Brill, Jocelyn Aqua, Haksoo Ko, Yeong Zee Kin, and Ann Waldo also joined FPF as senior fellows.
We honored Julie Brill at our 2nd Annual PEN Summit with a Lifetime Achievement Award, which recognizes Brill’s decades of leadership and profound impact on the fields of consumer protection, data protection, and digital trust in her public and private sector roles.
This material is based upon work supported by the Alfred P. Sloan Foundation under Grant No. G-2025-2519, Aging at Home: Caregiving, Privacy, and Technology.