Future of Privacy Forum Launches the FPF Center for Artificial Intelligence

The FPF Center for Artificial Intelligence will serve as a catalyst for AI policy and compliance leadership globally, advancing responsible data and AI practices for public and private stakeholders

Today, the Future of Privacy Forum (FPF) launched the FPF Center for Artificial Intelligence, established to better serve policymakers, companies, non-profit organizations, civil society, and academics as they navigate the challenges of AI policy and governance. The Center will expand FPF’s long-standing AI work, introduce large-scale novel research projects, and serve as a source for trusted, nuanced, nonpartisan, and practical expertise. 

FPF’s Center work will be international in scope, as AI is deployed globally at a rapid pace. Cities, states, countries, and international bodies are already grappling with implementing laws and policies to manage the risks.

“Data, privacy, and AI are intrinsically interconnected issues that we have been working on at FPF for more than 15 years, and we remain dedicated to collaborating across the public and private sectors to promote their ethical, responsible, and human-centered use,” said Jules Polonetsky, FPF’s Chief Executive Officer. “But we have reached a tipping point in the development of the technology that will affect future generations for decades to come. At FPF, the word Forum is a core part of our identity. We are a trusted convener positioned to build bridges between stakeholders globally, and we will continue to do so under the new Center for AI, which will sit within FPF.”

The Center will help the organization’s 220+ members navigate AI through the development of best practices, research, legislative tracking, thought leadership, and public-facing resources. It will be a trusted evidence-based source of information for policymakers, and it will collaborate with academia and civil society to amplify relevant research and resources. 

“Although AI is not new, we have reached an unprecedented moment in the development of the technology that marks a true inflection point. The complexity, speed and scale of data processing that we are seeing in AI systems can be used to improve people’s lives and spur a potential leapfrogging of societal development, but with that increased capability comes associated risks to individuals and to institutions,” said Anne J. Flanagan, Vice President for Artificial Intelligence at FPF. “The FPF Center for AI will act as a collaborative force for shared knowledge between stakeholders to support the responsible development of AI, including its fair, safe, and equitable use.”

The Center will officially launch at FPF’s inaugural summit, DC Privacy Forum: AI Forward. The in-person, public-facing summit will feature high-profile representatives from the public and private sectors in the world of privacy, data, and AI.

FPF’s new Center for Artificial Intelligence will be supported by a Leadership Council of leading experts from around the globe. The Council will consist of members from industry, academia, civil society, and current and former policymakers. 

See the full list of founding FPF Center for AI Leadership Council members here.

“I am excited about the launch of the Future of Privacy Forum’s new Center for Artificial Intelligence and honored to be part of its leadership council. This announcement builds on many years of partnership and collaboration between Workday and FPF to develop privacy best practices and advance responsible AI, which has already generated meaningful outcomes, including last year’s launch of best practices to foster trust in this technology in the workplace. I look forward to working alongside fellow members of the Council to support the Center’s mission to build trust in AI and am hopeful that together we can map a path forward to fully harness the power of this technology to unlock human potential.”

Barbara Cosgrove, Vice President, Chief Privacy Officer, Workday

“I’m honored to be a founding member of the Leadership Council of the Future of Privacy Forum’s new Center for Artificial Intelligence. AI’s impact transcends borders, and I’m excited to collaborate with a diverse group of experts around the world to inform companies, civil society, policymakers, and academics as they navigate the challenges and opportunities of AI governance, policy, and existing data protection regulations.”

Dr. Gianclaudio Malgieri, Associate Professor of Law & Technology at eLaw, University of Leiden

“As we enter this era of AI, we must require the right balance between allowing innovation to flourish and keeping enterprises accountable for the technologies they create and put on the market. IBM believes it will be crucial that organizations such as the Future of Privacy Forum help advance responsible data and AI policies, and we are proud to join others in industry and academia as part of the Leadership Council.”

Learn more about the FPF Center for AI here.

About Future of Privacy Forum (FPF)

The Future of Privacy Forum (FPF) is a global non-profit organization that brings together academics, civil society, government officials, and industry to evaluate the societal, policy, and legal implications of data use, identify the risks, and develop appropriate protections. 

FPF believes technology and data can benefit society and improve lives if the right laws, policies, and rules are in place. FPF has offices in Washington D.C., Brussels, Singapore, and Tel Aviv. Learn more at fpf.org.

FPF Develops Checklist & Guide to Help Schools Vet AI Tools for Legal Compliance

FPF’s Youth and Education team has developed a checklist and accompanying policy brief to help schools vet generative AI tools for compliance with student privacy laws. Vetting Generative AI Tools for Use in Schools is a crucial resource as the use of generative AI tools continues to increase in educational settings. It is critical for school leaders to understand how existing federal and state student privacy laws, such as the Family Educational Rights and Privacy Act (FERPA), apply to the complexities of machine learning systems in order to protect student privacy. With these resources, FPF aims to provide much-needed clarity and guidance to educational institutions grappling with these issues.

Click here to access the checklist and policy brief.

“AI technology holds immense promise in enhancing educational experiences for students, but it must be implemented responsibly and ethically,” said David Sallay, the Director for Youth & Education Privacy at the Future of Privacy Forum. “With our new checklist, we aim to empower educators and administrators with the knowledge and tools necessary to make informed decisions when selecting generative AI tools for classroom use while safeguarding student privacy.”

The checklist, designed specifically for K-12 schools, outlines key considerations for incorporating generative AI into a school or district’s edtech vetting process.

These include: 

By prioritizing these steps, educational institutions can promote transparency and protect student privacy while maximizing the benefits of technology-driven learning experiences for students. 

The in-depth policy brief outlines the relevant laws and policies a school should consider, the unique compliance considerations of generative AI tools (including data collection, transparency and explainability, product improvement, and high-risk decision-making), and their most likely use cases (student, teacher, and institution-focused).

The brief also encourages schools and districts to update their existing edtech vetting policies to address the unique considerations of AI technologies (or to create a comprehensive policy if one does not already exist) instead of creating a separate vetting process for AI. It also highlights the role that state legislatures can play in ensuring the efficiency of school edtech vetting and oversight and calls on vendors to be proactively transparent with schools about their use of AI.


Check out the LinkedIn Live with CEO Jules Polonetsky and Youth & Education Director David Sallay about the Checklist and Policy Brief.

To read more of the Future of Privacy Forum’s youth and student privacy resources, visit www.StudentPrivacyCompass.org.

FPF Releases “The Playbook: Data Sharing for Research” Report and Infographic

Today, the Future of Privacy Forum (FPF) published “The Playbook: Data Sharing for Research,” a report on best practices for instituting research data-sharing programs between corporations and research institutions. FPF also developed a summary of recommendations from the full report.

Facilitating data sharing for research purposes between corporate data holders and academia can unlock new scientific insights and drive progress in public health, education, social science, and myriad other fields for the betterment of society. Academic researchers use this data to consider consumer, commercial, and scientific questions at a scale they cannot reach using conventional research data-gathering techniques alone. Such data has also helped researchers answer questions on topics ranging from bias in targeted advertising and the influence of misinformation on election outcomes to the early diagnosis of diseases through data collected by fitness and health apps.

The playbook addresses vital steps for data management, sharing, and program execution between companies and researchers. Creating a data-sharing ecosystem that positively advances scientific research requires a better understanding of the established risks, opportunities to address challenges, and the diverse stakeholders involved in data-sharing decisions. This report aims to encourage safe, responsible data-sharing between industries and researchers.

“Corporate data sharing connects companies with research institutions, by extension increasing the quantity and quality of research for social good,” said Shea Swauger, Senior Researcher for Data Sharing and Ethics. “This Playbook showcases the importance, and advantages, of having appropriate protocols in place to create safe and simple data sharing processes.”

In addition to the Playbook, FPF created a companion infographic summarizing the benefits, challenges, and opportunities of data sharing for research outlined in the larger report.


As a longtime advocate for facilitating the privacy-protective sharing of data by industry to the research community, FPF is proud to have created this set of best practices for researchers, institutions, policymakers, and data-holding companies. In addition to the Playbook, the Future of Privacy Forum has also opened nominations for its annual Award for Research Data Stewardship.

“Our goal with these initiatives is to celebrate the successful research partnerships transforming how corporations and researchers interact with each other,” Swauger said. “Hopefully, we can continue to engage more audiences and encourage others to model their own programs with solid privacy safeguards.”

Shea Swauger, Senior Researcher for Data Sharing and Ethics, Future of Privacy Forum

Established by FPF in 2020 with support from The Alfred P. Sloan Foundation, the Award for Research Data Stewardship recognizes excellence in the privacy-protective stewardship of corporate data shared with academic researchers. The call for nominations is open and closes on Tuesday, January 17, 2023. To submit a nomination, visit the FPF site.

FPF has also launched a newly formed Ethics and Data in Research Working Group; this group receives late-breaking analyses of emerging US legislation affecting research and data, meets to discuss the ethical and technological challenges of conducting research, and collaborates to create best practices to protect privacy, decrease risk, and increase data sharing for research, partnerships, and infrastructure. Learn more and join here.

FPF Testifies Before House Energy and Commerce Subcommittee, Supporting Congress’s Efforts on the “American Data Privacy and Protection Act”

This week, FPF’s Senior Policy Counsel Bertram Lee testified before the U.S. House Energy and Commerce Subcommittee on Consumer Protection and Commerce hearing, “Protecting America’s Consumers: Bipartisan Legislation to Strengthen Data Privacy and Security” regarding the bipartisan, bicameral privacy discussion draft bill, “American Data Privacy and Protection Act” (ADPPA). FPF has a history of supporting the passage of a comprehensive federal consumer privacy law, which would provide businesses and consumers alike with the benefit of clear national standards and protections.

Lee’s testimony opened by applauding the Committee for its efforts toward comprehensive federal privacy legislation and emphasized that the “time is now” for its passage. As written, the ADPPA would address gaps in the sectoral approach to consumer privacy, establish strong national civil rights protections, and create new rights and safeguards for the protection of sensitive personal information.

“The ADPPA is more comprehensive in scope, inclusive of civil rights protections, and provides individuals with more varied enforcement mechanisms in comparison to some states’ current privacy regimes,” Lee said in his testimony. “It also includes corporate accountability mechanisms, such as requiring privacy designations, data security officers, and executive certifications showing compliance, which are missing from current states’ laws. Notably, the ADPPA also requires ‘short-form’ privacy notices to inform consumers of how their data will be used by companies and of their rights — a provision that is not found in any state law.”

Lee’s testimony also provided four recommendations to strengthen the bill, which include: 

Many of the recommendations would ensure that the legislation gives individuals meaningful privacy rights and places clear obligations on businesses and other organizations that collect, use and share personal data. The legislation would expand civil rights protections for individuals and communities harmed by algorithmic discrimination as well as require algorithmic assessments and evaluations to better understand how these technologies can impact communities. 

The submitted testimony and a video of the hearing can be found on the House Committee on Energy & Commerce site.

Reading the Signs: The Political Agreement on the New Transatlantic Data Privacy Framework

The President of the United States, Joe Biden, and the President of the European Commission, Ursula von der Leyen, announced last Friday, in Brussels, a political agreement on a new Transatlantic framework to replace the Privacy Shield. 

This is a significant escalation of the topic within Transatlantic affairs, compared to the 2016 announcement of a new deal to replace the Safe Harbor framework. Back then, it was Commission Vice-President Andrus Ansip and Commissioner Vera Jourova who announced at the beginning of February 2016 that a deal had been reached. 

The draft adequacy decision was only published a month after the announcement, and the adequacy decision was adopted six months later, in July 2016. Therefore, it should not be at all surprising if another six months (or more!) pass before the adequacy decision for the new Framework produces legal effects and can actually support transfers from the EU to the US – especially since the US side still has to pass at least one Executive Order to provide for the agreed-upon new safeguards.

This means that transfers of personal data from the EU to the US may still be blocked in the coming months – possibly without a lawful alternative to continue them – as a consequence of Data Protection Authorities (DPAs) enforcing Chapter V of the General Data Protection Regulation in light of the Schrems II judgment of the Court of Justice of the EU, whether as part of the 101 noyb complaints submitted in August 2020, which are slowly starting to be resolved, or as part of other individual complaints and court cases.

After the agreement “in principle” was announced at the highest possible political level, EU Justice Commissioner Didier Reynders doubled down on the point that this agreement is reached “on the principles” for a new framework, rather than on its details. Later on, he also gave credit to Commerce Secretary Gina Raimondo and US Attorney General Merrick Garland for their hands-on involvement in working towards this agreement.

In fact, “in principle” became the leitmotif of the announcement, as the first EU Data Protection Authority to react to the announcement was the European Data Protection Supervisor, who wrote that he “Welcomes, in principle”, the announcement of a new EU-US transfers deal – “The details of the new agreement remain to be seen. However, EDPS stresses that a new framework for transatlantic data flows must be sustainable in light of requirements identified by the Court of Justice of the EU”.

Of note, there is no catchy name for the new transfers agreement, which was referred to as the “Trans-Atlantic Data Privacy Framework”. Nonetheless, FPF’s CEO Jules Polonetsky submits the “TA DA!” Agreement, and he has my vote. For his full statement on the political agreement being reached, see our release here.

Some details of the “principles” agreed on were published hours after the announcement, both by the White House and by the European Commission. Below are a couple of things that caught my attention from the two brief Factsheets.

The US has committed to “implement new safeguards” to ensure that SIGINT activities are “necessary and proportionate” (an EU law legal measure – see Article 52 of the EU Charter on how the exercise of fundamental rights can be limited) in the pursuit of defined national security objectives. Therefore, the new agreement is expected to address the lack of safeguards for government access to personal data as specifically outlined by the CJEU in the Schrems II judgment.

The US also committed to creating a “new mechanism for the EU individuals to seek redress if they believe they are unlawfully targeted by signals intelligence activities”. This new mechanism was characterized by the White House as having “independent and binding authority”. Per the White House, this redress mechanism includes “a new multi-layer redress mechanism that includes an independent Data Protection Review Court that would consist of individuals chosen from outside the US Government who would have full authority to adjudicate claims and direct remedial measures as needed”. The EU Commission mentioned in its own Factsheet that this would be a “two-tier redress system”. 

Importantly, the White House mentioned in the Factsheet that oversight of intelligence activities will also be boosted – “intelligence agencies will adopt procedures to ensure effective oversight of new privacy and civil liberties standards”. Oversight and redress are different issues and are both equally important – for details, see this piece by Christopher Docksey. However, they tend to be treated as one and the same, so the fact that they are addressed separately in this announcement is significant.

One of the remarkable things about the White House announcement is that it includes several EU law-specific concepts: “necessary and proportionate”, “privacy, data protection” mentioned separately, “legal basis” for data flows. In another nod to the European approach to data protection, the entire issue of ensuring safeguards for data flows is framed as more than a trade or commerce issue – with references to a “shared commitment to privacy, data protection, the rule of law, and our collective security as well as our mutual recognition of the importance of trans-Atlantic data flows to our respective citizens, economies, and societies”.

Last, but not least, Europeans have always framed their concerns related to surveillance and data protection as fundamental rights concerns. The US also gives a nod to this approach by referring a couple of times to “privacy and civil liberties” safeguards (adding the “civil liberties” dimension) that will be “strengthened”. All of these are positive signs for a “rapprochement” of the two legal systems and are certainly an improvement over the “commerce”-focused approach of the past on the US side.

Finally, it should also be noted that the new framework will continue to be a self-certification scheme managed by the US Department of Commerce.

What does all of this mean in practice? As the White House details, this means that the Biden Administration will have to adopt (at least) an Executive Order (EO) that includes all these commitments and on the basis of which the European Commission will draft an adequacy decision.

Thus, there are great expectations in sight following the White House and European Commission Factsheets, and the entire privacy and data protection community is waiting to see further details.

In the meantime, I’ll leave you with an observation made by my colleague Amie Stepanovich, VP for US Policy at FPF, who highlighted that Section 702 of the Foreign Intelligence Surveillance Act (FISA) is set to expire on December 31, 2023. This presents Congress with an opportunity to act, building on the extensive work done by the US Government in the context of the Transatlantic Data Transfers debate.

Privacy Best Practices for Rideshare Drivers Using Dashcams

FPF & Uber Publish Guide Highlighting Privacy Best Practices for Drivers who Record Video and Audio on Rideshare Journeys

FPF and Uber have created a guide for US-based rideshare drivers who install “dashcams” – video cameras mounted on a vehicle’s dashboard or windshield. Many drivers install dashcams to improve safety, security, and accountability; the cameras can capture crashes or other safety-related incidents outside and inside cars. Dashcam footage can be helpful to drivers, passengers, insurance companies, and others when adjudicating legal claims. At the same time, dashcams can pose substantial privacy risks if appropriate safeguards are not in place to limit the collection, use, and disclosure of personal data. 

Dashcams typically record video outside a vehicle. Many dashcams also record in-vehicle audio and some record in-vehicle video. Regardless of the particular device used, ride-hail drivers who use dashcams must comply with applicable audio and video recording laws.

The guide explains relevant laws and provides practical tips to help drivers be transparent, limit data use and sharing, retain video and audio only for defined purposes, and use strict security controls. The guide highlights ways that drivers can employ physical signs, in-app notices, and other means to ensure passengers are informed about dashcam use and can make meaningful choices about whether to travel in a dashcam-equipped vehicle. Drivers seeking advice concerning specific legal obligations or incidents should consult legal counsel.

Privacy best practices for dashcams include: 

  1. Give individuals notice that they are being recorded
    • Place recording notices inside and on the vehicle.
    • Mount the dashcam in a visible location.
    • Consider, in some situations, giving an oral notification that recording is taking place.
    • Determine whether the ride sharing service provides recording notifications in the app, and utilize those in-app notices.
  2. Only record audio and video for defined, reasonable purposes
    • Only keep recordings for as long as needed for the original purpose.
    • Inform passengers as to why video and/or audio is being recorded.
  3. Limit sharing and use of recorded footage
    • Only share video and audio with third parties for relevant reasons that align with the original reason for recording.
    • Thoroughly review the rideshare service’s privacy policy and community guidelines if using an app-based rideshare service, and be aware that many rideshare companies maintain policies against widely disseminating recordings.
  4. Safeguard and encrypt recordings and delete unused footage
    • Identify dashcam vendors that provide the highest privacy and security safeguards.
    • Carefully read the terms and conditions when buying dashcams to understand the data flows.

Uber will make these best practices available to drivers in its app and on its website.

Many ride-hail drivers use dashcams in their cars, and the guidance and best practices published today provide practical guidance to help drivers implement privacy protections. But driver guidance is only one aspect of ensuring individuals’ privacy and security when traveling. Dashcam manufacturers must implement privacy-protective practices by default and provide easy-to-use privacy options. At the same time, ride-hail platforms must provide drivers with the appropriate tools to notify riders, and carmakers must safeguard drivers’ and passengers’ data collected by OEM devices.

In addition, dashcams are only one example of increasingly sophisticated sensors appearing in passenger vehicles as part of driver monitoring systems and related technologies. Further work is needed to apply comprehensive privacy safeguards to emerging technologies across the connected vehicle sector, from carmakers and rideshare services to mobility services providers and platforms. Comprehensive federal privacy legislation would be a good start. And in the absence of Congressional action, FPF is doing further work to identify key privacy risks and mitigation strategies for the broader class of driver monitoring systems that raise questions about technologies beyond the scope of this dashcam guide.

12th Annual Privacy Papers for Policymakers Awardees Explore the Nature of Privacy Rights & Harms

The winners of the 12th annual Future of Privacy Forum (FPF) Privacy Papers for Policymakers Award ask big questions about what the foundational elements of data privacy and protection should be and who will make key decisions about the application of privacy rights. Their scholarship will inform policy discussions around the world about privacy harms, corporate responsibilities, oversight of algorithms, and biometric data, among other topics.

“Policymakers and regulators in many countries are working to advance data protection laws, often seeking in particular to combat discrimination and unfairness,” said FPF CEO Jules Polonetsky. “FPF is proud to highlight independent researchers tackling big questions about how individuals and society relate to technology and data.”

This year’s papers also explore smartphone platforms as privacy regulators, the concept of data loyalty, and global privacy regulation. The award recognizes leading privacy scholarship that is relevant to policymakers in the U.S. Congress, at U.S. federal agencies, and among international data protection authorities. The winning papers will be presented at a virtual event on February 10, 2022. 

The winners of the 2022 Privacy Papers for Policymakers Award are:

From the record number of papers nominated this year, these six were selected by a diverse team of academics, advocates, and industry privacy professionals from FPF’s Advisory Board. The winning papers were chosen for research and proposed solutions relevant to policymakers and regulators in the U.S. and abroad.

In addition to the winning papers, FPF has selected two papers for Honorable Mention: Verification Dilemmas and the Promise of Zero-Knowledge Proofs by Kenneth Bamberger, University of California, Berkeley – School of Law; Ran Canetti, Boston University, Department of Computer Science, Boston University, Faculty of Computing and Data Science, Boston University, Center for Reliable Information Systems and Cybersecurity; Shafi Goldwasser, University of California, Berkeley – Simons Institute for the Theory of Computing; Rebecca Wexler, University of California, Berkeley – School of Law; and Evan Zimmerman, University of California, Berkeley – School of Law; and A Taxonomy of Police Technology’s Racial Inequity Problems by Laura Moy, Georgetown University Law Center.

FPF also selected a paper for the Student Paper Award, A Fait Accompli? An Empirical Study into the Absence of Consent to Third Party Tracking in Android Apps by Konrad Kollnig and Reuben Binns, University of Oxford; Pierre Dewitte, KU Leuven; Max van Kleek, Ge Wang, Daniel Omeiza, Helena Webb, and Nigel Shadbolt, University of Oxford. The Student Paper Award Honorable Mention was awarded to Yeji Kim, University of California, Berkeley – School of Law, for her paper, Virtual Reality Data and Its Privacy Regulatory Challenges: A Call to Move Beyond Text-Based Informed Consent.

The winning authors will join FPF staff to present their work at a virtual event with policymakers from around the world, academics, and industry privacy professionals. The event will be held on February 10, 2022, from 1:00 – 3:00 PM EST. The event is free and open to the general public. To register for the event, visit https://bit.ly/3qmJdL2.

Organizations must lead with privacy and ethics when researching and implementing neurotechnology: FPF and IBM Live event and report release

The Future of Privacy Forum (FPF) and the IBM Policy Lab released recommendations for promoting privacy and mitigating risks associated with neurotechnology, specifically brain-computer interfaces (BCIs). The new report provides developers and policymakers with actionable ways this technology can be implemented while protecting the privacy and rights of its users.

“We have a prime opportunity now to implement strong privacy and human rights protections as brain-computer interfaces become more widely used,” said Jeremy Greenberg, Policy Counsel at the Future of Privacy Forum. “Among other uses, these technologies have tremendous potential to treat people with diseases and conditions like epilepsy or paralysis and make it easier for people with disabilities to communicate, but these benefits can only be fully realized if meaningful privacy and ethical safeguards are in place.”

Brain-computer interfaces are computer-based systems that are capable of directly recording, processing, analyzing, or modulating human brain activity. The sensitivity of data that BCIs collect and the capabilities of the technology raise concerns over consent, as well as the transparency, security, and accuracy of the data. The report offers a number of policy and technical solutions to mitigate the risks of BCIs and highlights their positive uses.

“Emerging innovations like neurotechnology hold great promise to transform healthcare, education, transportation, and more, but they need the right guardrails in place to protect individuals’ privacy,” said IBM Chief Privacy Officer Christina Montgomery. “Working together with the Future of Privacy Forum, the IBM Policy Lab is pleased to release a new framework to help policymakers and businesses navigate the future of neurotechnology while safeguarding human rights.”

FPF and IBM have outlined several key policy recommendations to mitigate the privacy risks associated with BCIs, including:

FPF and IBM have also included several technical recommendations for BCI devices, including:

FPF-curated educational resources, policy & regulatory documents, academic papers, thought pieces, and technical analyses regarding brain-computer interfaces are available here.

Read FPF’s four-part series on Brain-Computer Interfaces (BCIs), providing an overview of the technology, use cases, privacy risks, and proposed recommendations for promoting privacy and mitigating risks associated with BCIs.

FPF Launches Asia-Pacific Region Office, Global Data Protection Expert Clarisse Girot Leads Team

The Future of Privacy Forum (FPF) has appointed Clarisse Girot, PhD, LLM, an expert on Asian and European privacy legislation, as Director of its new FPF Asia-Pacific office, based in Singapore. The new office expands FPF’s international reach in Asia and complements FPF’s offices in the U.S., Europe, and Israel, as well as partnerships around the globe.
 
Dr. Clarisse Girot is a privacy professional with over twenty years of experience in the privacy and data protection fields. Since 2017, Clarisse has led the Asian Business Law Institute’s (ABLI) Data Privacy Project, focusing on the regulations on cross-border data transfers in 14 Asian jurisdictions. Prior to her time at ABLI, Clarisse served as Counsellor to the President of the French Data Protection Authority (CNIL), who also chaired the Article 29 Working Party. She previously served as head of CNIL’s Department of European and International Affairs, where she sat on the Article 29 Working Party, the group of EU Data Protection Authorities, and was involved in major international cases in data protection and privacy.
 
“Clarisse is joining FPF at an important time for data protection in the Asia-Pacific region. The two most populous countries in the world, India and China, are introducing general privacy laws, and established data protection jurisdictions, like Singapore, Japan, South Korea, and New Zealand, have recently updated their laws,” said FPF CEO Jules Polonetsky. “Her extensive knowledge of privacy law will provide vital insights for those interested in compliance with regional privacy frameworks and their evolution over time.”
 
FPF Asia-Pacific will focus on several priorities through the end of the year, including hosting an event at this year’s Singapore Data Protection Week. The office will provide expertise in digital data flows and discuss emerging data protection issues in a way that is useful for regulators, policymakers, and legal professionals. Rajah & Tann Singapore LLP is supporting the work of the FPF Asia-Pacific office.
 
“The FPF global team will greatly benefit from the addition of Clarisse. She will advise FPF staff, advisory board members, and the public on the most significant privacy developments in the Asia-Pacific region, including data protection bills and cross-border data flows,” said Gabriela Zanfir-Fortuna, Director for Global Privacy at FPF. “Her past experience in both Asia and Europe gives her a unique ability to confront the most complex issues dealing with cross-border data protection.”
 
As over 140 countries have now enacted a privacy or data protection law, FPF continues to expand its international presence to help data protection experts grapple with the challenges of ensuring responsible uses of data. Following the appointment of Malavika Raghavan as Senior Fellow for India in 2020, the launch of the FPF Asia-Pacific office further expands FPF’s international reach.
 
Dr. Gabriela Zanfir-Fortuna leads FPF’s international efforts and works on global privacy developments and European data protection law and policy. The FPF Europe office is led by Dr. Rob van Eijk, who prior to joining FPF worked at the Dutch Data Protection Authority as Senior Supervision Officer and Technologist for nearly ten years. FPF has created thriving partnerships with leading privacy research organizations in the European Union, such as Dublin City University and the Brussels Privacy Hub of the Vrije Universiteit Brussel (VUB). FPF continues to serve as a leading voice in Europe on issues of international data flows, the ethics of AI, and emerging privacy issues. FPF Europe recently published a report comparing the regulatory strategy for 2021-2022 of 15 Data Protection Authorities to provide insights into the future of enforcement and regulatory action in the EU.
 
Outside of Europe, FPF has launched a variety of projects to advance tech policy leadership and scholarship in regions around the world, including Israel and Latin America. The work of the Israel Tech Policy Institute (ITPI), led by Managing Director Limor Shmerling Magazanik, includes publishing a report on AI Ethics in Government Services and organizing an OECD workshop with the Israeli Ministry of Health on access to health data for research.
 
In Latin America, FPF has partnered with the leading research association Data Privacy Brasil and provided in-depth analysis of Brazil’s LGPD privacy legislation and various data privacy cases decided by the Brazilian Supreme Court. FPF recently organized a panel during the CPDP LatAm Conference which explored the state of Latin American data protection laws alongside experts from Uber, the University of Brasilia, and the Interamerican Institute of Human Rights.
 

Read Dr. Girot’s Q&A on the FPF blog. Stay updated: Sign up for FPF Asia-Pacific email alerts.
 

FPF and Leading Health & Equity Organizations Issue Principles for Privacy & Equity in Digital Contact Tracing Technologies

With support from the Robert Wood Johnson Foundation, FPF engaged leaders within the privacy and equity communities to develop actionable guiding principles and a framework to help bolster the responsible implementation of digital contact tracing technologies (DCTT). Today, seven privacy, civil rights, and health equity organizations signed on to these guiding principles for organizations implementing DCTT.

“We learned early in our Privacy and Pandemics initiative that unresolved ethical, legal, social, and equity issues may challenge the responsible implementation of digital contact tracing technologies,” said Jules Polonetsky, CEO of the Future of Privacy Forum. “So we engaged leaders within the civil rights, health equity, and privacy communities to create a set of actionable principles to help guide organizations implementing digital contact tracing that respects individual rights.”

Contact tracing has long been used to monitor the spread of infectious diseases. In light of COVID-19, governments and companies began deploying digital exposure notification using Bluetooth and geolocation data on mobile devices to boost contact tracing efforts and quickly identify individuals who may have been exposed to the virus. However, as DCTT begins to play an important role in public health, it is important to ensure equity in access to DCTT and to understand the societal risks and tradeoffs that might accompany its implementation, today and in the future. Governance efforts that seek to better understand these risks will be better positioned to bolster public trust in DCTT.

“LGBT Tech is proud to have participated in the development of the Principles and Framework alongside FPF and other organizations. We are heartened to see that the focus of these principles is on historically underserved and under-resourced communities everywhere, like the LGBTQ+ community. We believe the Principles and Framework will help ensure that the needs and vulnerabilities of these populations are at the forefront during today’s pandemic and future pandemics.”

Carlos Gutierrez, Deputy Director, and General Counsel, LGBT Tech

“If we establish practices that protect individual privacy and equity, digital contact tracing technologies could play a pivotal role in tracking infectious diseases,” said Dr. Rachele Hendricks-Sturrup, Research Director at the Duke-Margolis Center for Health Policy. “These principles allow organizations implementing digital contact tracing to take ethical and responsible approaches to how their technology collects, tracks, and shares personal information.”

FPF, together with Dialogue on Diversity, the National Alliance Against Disparities in Patient Health (NADPH), BrightHive, and LGBT Tech, developed the principles, which advise organizations implementing DCTT to commit to the following actions:

  1. Be Transparent About How Data Is Used and Shared. 
  2. Apply Strong De-Identification Techniques and Solutions. 
  3. Empower Users Through Tiered Opt-in/Opt-out Features and Data Minimization. 
  4. Acknowledge and Address Privacy, Security, and Nondiscrimination Protection Gaps. 
  5. Create Equitable Access to DCTT. 
  6. Acknowledge and Address Implicit Bias Within and Across Public and Private Settings.
  7. Democratize Data for Public Good While Employing Appropriate Privacy Safeguards. 
  8. Adopt Privacy-By-Design Standards That Make DCTT Broadly Accessible. 

Additional supporters of these principles include the Center for Democracy and Technology and Human Rights First.

To learn more and sign on to the DCTT Principles visit fpf.org/DCTT.

Support for this program was provided by the Robert Wood Johnson Foundation. The views expressed here do not necessarily reflect the views of the Foundation.

Navigating Preemption through the Lens of Existing State Privacy Laws

This post is the second of two posts on federal preemption and enforcement in United States federal privacy legislation. See Preemption in US Privacy Laws (June 14, 2021).

In drafting a federal baseline privacy law in the United States, lawmakers must decide to what extent the law will override state and local privacy laws. In a previous post, we discussed a survey of 12 existing federal privacy laws passed between 1968 and 2003, and the extent to which they preempt similar state laws. 

Another way to approach the same question, however, is to examine the hundreds of existing state privacy laws currently on the books in the United States. Conversations around federal preemption inevitably focus on comprehensive laws like the California Consumer Privacy Act or the Virginia Consumer Data Protection Act — but there are hundreds of other state privacy laws on the books that regulate commercial and government uses of data. 

In reviewing existing state laws, we find that they can be categorized usefully into: laws that complement heavily regulated sectors (such as health and finance); laws of general applicability; common law; laws governing state government activities (such as schools and law enforcement); comprehensive laws; longstanding or narrowly applicable privacy laws; and emerging sectoral laws (such as biometrics or drones regulations). As a resource, we recommend: Robert Ellis Smith, Compilation of State and Federal Privacy Laws (last supplemented in 2018). 

  1. Heavily Regulated Sectoral Silos. Most federal proposals for a comprehensive privacy law would not supersede other existing federal laws that contain privacy requirements for businesses, such as the Health Insurance Portability and Accountability Act (HIPAA) or the Gramm-Leach-Bliley Act (GLBA). As a result, a new privacy law should probably not preempt state sectoral laws that: (1) supplement their federal counterparts and (2) were intentionally not preempted by those federal regimes. In many cases, robust compliance regimes have been built around federal and state parallel requirements, creating entrenched privacy expectations, privacy tools, and compliance practices for organizations (“lock in”).
  2. Laws of General Applicability. All 50 states have laws barring unfair and deceptive commercial and trade practices (UDAP), as well as generally applicable laws against fraud, unconscionable contracts, and other consumer protections. In cases where violations involve the misuse of personal information, such claims could be inadvertently preempted by a national privacy law.
  3. State Common Law. Privacy claims have been evolving in US common law over the last hundred years, and claims vary from state to state. A federal privacy law might preempt (or not preempt) claims brought under theories of negligence, breach of contract, product liability, invasions of privacy, or other “privacy torts.”
  4. State Laws Governing State Government Activities. In general, states retain the right to regulate their own government entities, and a commercial baseline privacy law is unlikely to affect such state privacy laws. These include, for example, state “mini Privacy Acts” applying to state government agencies’ collection of records, state privacy laws applicable to public schools and school districts, and state regulations involving law enforcement — such as government facial recognition bans.
  5. Comprehensive or Non-Sectoral State Laws. Lawmakers considering the extent of federal preemption should take extra care to consider the effect on different aspects of omnibus or comprehensive consumer privacy laws, such as the California Consumer Privacy Act (CCPA), the Colorado Privacy Act, and the Virginia Consumer Data Protection Act. In addition, however, there are a number of other state privacy laws that can be considered “non-sectoral” because they apply broadly to businesses that collect or use personal information. These include, for example, CalOPPA (requiring commercial privacy policies), the California “Shine the Light” law (requiring disclosures from companies that share personal information for direct marketing), data breach notification laws, and data disposal laws.
  6. Longstanding, Narrowly Applicable State Privacy Laws. Many states have relatively long-standing privacy statutes on the books that govern narrow use cases, such as: state laws governing library records, social media password laws, mugshot laws, anti-paparazzi laws, state laws governing audio surveillance between private parties, and laws governing digital assets of decedents. In many cases, such laws could be expressly preserved or incorporated into a federal law. 
  7. Emerging Sectoral and Future-Looking Privacy Laws. New state laws have emerged in recent years in response to novel concerns, including for: biometric data; drones; connected and autonomous vehicles; the Internet of Things; data broker registration; and disclosure of intimate images. This trend is likely to continue, particularly in the absence of a federal law.

Congressional intent is the “ultimate touchstone” of preemption. Lawmakers should consider long-term effects on current and future state laws, including how they will be impacted by a preemption provision, as well as how they might be expressly preserved through a Savings Clause. In order to help build consensus, lawmakers should work with stakeholders and experts in the numerous categories of laws discussed above, to consider how they might be impacted by federal preemption.

ICYMI: Read the first blog in this series, Preemption in US Privacy Laws.

Manipulative Design: Defining Areas of Focus for Consumer Privacy

In consumer privacy, the phrase “dark patterns” is everywhere. Emerging from a wide range of technical and academic literature, it now appears in at least two US privacy laws: the California Privacy Rights Act and the Colorado Privacy Act (which, if signed by the Governor, will come into effect in 2023).

Under both laws, companies will be prohibited from using “dark patterns,” or “user interface[s] designed or manipulated with the substantial effect of subverting or impairing user autonomy, decision‐making, or choice,” to obtain user consent in certain situations–for example, for the collection of sensitive data.

When organizations give individuals choices, some forms of manipulation have long been barred by consumer protection laws, with the Federal Trade Commission and state Attorneys General prohibiting companies from deceiving or coercing consumers into taking actions they did not intend or striking bargains they did not want. But consumer protection law does not typically prohibit organizations from persuading consumers to make a particular choice. And it is often unclear where the lines fall between cajoling, persuading, pressuring, nagging, annoying, or bullying consumers. The California and Colorado laws seek to do more than merely bar deceptive practices; they prohibit design that “subverts or impairs user autonomy.”

What does it mean to subvert user autonomy, if a design does not already run afoul of traditional consumer protection law? Just as in the physical world, the design of digital platforms and services always influences behavior — what to pay attention to, what to read and in what order, how much time to spend, what to buy, and so on. To paraphrase Harry Brignull (credited with coining the term), not everything “annoying” can be a dark pattern. Some examples of dark patterns are both clear and harmful, such as a design that tricks users into making recurring payments, or a service that offers a “free trial” and then makes it difficult or impossible to cancel. In other cases, the presence of “nudging” may be clear, but harms may be less so, such as in A/B testing which color shades are most effective at encouraging sales. Still others fall in a legal grey area: for example, is it ever appropriate for a company to repeatedly “nag” users to make a choice that benefits the company, with little or no accompanying benefit to the user?

In Fall 2021, Future of Privacy Forum will host a series of workshops with technical, academic, and legal experts to help define clear areas of focus for consumer privacy, and guidance for policymakers and legislators. These workshops will feature experts on manipulative design in at least three contexts of consumer privacy: (1) Youth & Education; (2) Online Advertising and US Law; and (3) GDPR and European Law. 

As lawmakers address this issue, we identify at least four distinct areas of concern:

This week at the first edition of the annual Dublin Privacy Symposium, FPF will join other experts to discuss principles for transparency and trust. The design of user interfaces for digital products and services pervades modern life and directly impacts the choices people make with respect to sharing their personal information. 

India’s new Intermediary & Digital Media Rules: Expanding the Boundaries of Executive Power in Digital Regulation


Author: Malavika Raghavan

India’s new rules on intermediary liability and regulation of publishers of digital content have generated significant debate since their release in February 2021. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (the Rules) have:

The majority of these provisions were unanticipated, resulting in a raft of petitions filed in High Courts across the country challenging the validity of various aspects of the Rules, including with regard to their constitutionality. On 25 May 2021, the three-month compliance period for some new requirements for significant social media intermediaries (so designated by the Rules) expired, with many intermediaries not in compliance, opening them up to liability under the Information Technology Act as well as wider civil and criminal laws. This has reignited debates about the impact of the Rules on business continuity and liability, citizens’ access to online services, and privacy and security. 

Following on FPF’s previous blog highlighting some aspects of these Rules, this article presents an overview of the Rules before deep-diving into critical issues regarding their interpretation and application in India. It concludes by taking stock of some of the emerging effects of these new regulations, which have major implications for millions of Indian users, as well as digital services providers serving the Indian market. 

1. Brief overview of the Rules: Two new regimes for ‘intermediaries’ and ‘publishers’ 

The new Rules create two regimes for two different categories of entities: ‘intermediaries’ and ‘publishers’. Intermediaries have been the subject of prior regulations – the Information Technology (Intermediaries guidelines) Rules, 2011 (the 2011 Rules), now superseded by these Rules. However, the category of “publishers” and the related regime created by these Rules did not previously exist. 

The Rules begin with commencement provisions and definitions in Part I. Part II of the Rules applies to intermediaries (as defined in the Information Technology Act 2000 (IT Act)) who transmit electronic records on behalf of others, and includes online intermediary platforms (like YouTube, WhatsApp, and Facebook). The rules in this part primarily flesh out the protections offered in Section 79 of the IT Act, which gives passive intermediaries the benefit of a ‘safe harbour’ from liability for objectionable information shared by third parties using their services — somewhat akin to protections under Section 230 of the US Communications Decency Act. To claim this protection from liability, intermediaries need to undertake certain ‘due diligence’ measures, including informing users of the types of content that cannot be shared, and content take-down procedures (for which safeguards evolved over time through important case law). The new Rules supersede the 2011 Rules and also significantly expand on them, introducing new provisions and additional due diligence requirements that are detailed further in this blog. 

Part III of the Rules applies to a new, previously non-existent category of entities designated as ‘publishers‘. This category is further classified into subcategories of ‘publishers of news and current affairs content’ and ‘publishers of online curated content’. Part III then sets up extensive requirements for publishers: adherence to specific codes of ethics, onerous content take-down requirements, and a three-tier grievance process with appeals lying to an Executive Inter-Departmental Committee of Central Government bureaucrats. 

Finally, the Rules contain two provisions relating to content-blocking orders that apply to all entities (i.e. intermediaries and publishers). They lay out a new process by which Central Government officials can issue directions to intermediaries and publishers to delete, modify or block content, either following a grievance process (Rule 15) or through “emergency” blocking orders, which may be passed ex parte (Rule 16). These provisions stem from powers to issue directions to intermediaries to block public access to any information through any computer resource (Section 69A of the IT Act). Interestingly, they have been introduced separately from the existing rules for blocking, the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009.

2. Key issues for intermediaries under the Rules

2.1 A new class of ‘social media intermediaries’

The term ‘intermediary’ is a broadly defined term in the IT Act covering a range of entities involved in the transmission of electronic records. The Rules introduce two new sub-categories, being:

Given that a popular messaging app like WhatsApp has over 400 million users in India, the threshold appears to be fairly conservative. The Government may order any intermediary to comply with the same obligations as significant social media intermediaries, or SSMIs (under Rule 6), if its services are adjudged to pose a risk of harm to national security, the sovereignty and integrity of India, India’s foreign relations, or public order. 

SSMIs have to follow substantially more onerous “additional due diligence” requirements to claim the intermediary safe harbour (including mandatory traceability of message originators and proactive automated screening, as discussed below). These new requirements raise privacy and data security concerns: they extend beyond traditional ideas of platform “due diligence”, potentially exposing the content of private communications and thereby creating new privacy risks for users in India. 

2.2 Additional requirements for SSMIs: resident employees, mandated message traceability, automated content screening 

Extensive new requirements are set out in the new Rule 4 for SSMIs. 

Provisions that mandate modifications to the technical design of encrypted platforms to enable traceability seem to go beyond merely requiring intermediary due diligence. Instead, they appear to draw on separate Government powers relating to interception and decryption of information (under Section 69 of the IT Act). In addition, separate stand-alone rules laying out procedures and safeguards for such interception and decryption orders already exist in the Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009. Rule 4(2) even acknowledges these provisions, raising the question of whether these Rules (relating to intermediaries and their safe harbours) can be used to expand the scope of Section 69 or the rules thereunder. 

Proceedings initiated by WhatsApp LLC in the Delhi High Court and by Free and Open Source Software (FOSS) developer Praveen Arimbrathodiyil in the Kerala High Court have both challenged the legality and validity of Rule 4(2), on grounds including that it is ultra vires, going beyond the scope of its parent statutory provisions (Sections 79 and 69A) and the intent of the IT Act itself. Substantively, the provision is also challenged on the basis that it would violate users’ fundamental rights, including the right to privacy and the right to free speech and expression, due to the chilling effect that the stripping back of encryption would have.

Though the objective of the provision is laudable (i.e. to limit the circulation of violent or previously removed content), the move towards proactive automated monitoring has raised serious concerns regarding censorship on social media platforms. Rule 4(4) appears to acknowledge the deep tensions this requirement creates with privacy and free speech, as seen in the provisions requiring that these screening measures be proportionate to the free speech and privacy interests of users, that they be subject to human oversight, and that automated tools be reviewed for fairness, accuracy, propensity for bias or discrimination, and impact on privacy and security. However, given the vagueness of this wording compared to the trade-off of losing intermediary immunity, scholars and commentators have noted the obvious potential for ‘over-compliance’ and excessive screening out of content. Many (including the petitioner in the Praveen Arimbrathodiyil matter) have also noted that automated filters are not sophisticated enough to differentiate between violent unlawful images and legitimate journalistic material. The concern is that such measures could screen out ‘valid’ speech and expression at large scale, with serious consequences for constitutional rights to free speech and expression, which also protect ‘the rights of individuals to listen, read and receive the said speech‘ (Tata Press Ltd v. Mahanagar Telephone Nigam Ltd, (1995) 5 SCC 139). 

Such requirements appear to be aimed at creating more user-friendly networks of intermediaries. However, the imposition of a single set of requirements is especially onerous for smaller or volunteer-run intermediary platforms, which may not have the income streams or staff to provide such a mechanism. Indeed, the petition in the Praveen Arimbrathodiyil matter has challenged some of these requirements as a threat to the future of the volunteer-led FOSS movement in India, since they place similar requirements on small FOSS initiatives as on large proprietary Big Tech intermediaries. 

Other obligations that stipulate turn-around times for intermediaries include (i) a requirement to remove or disable access to content within 36 hours of receipt of a Government or court order relating to unlawful information on the intermediary’s computer resources (under Rule 3(1)(d)) and (ii) a requirement to provide information within 72 hours of receiving an order from an authorised Government agency undertaking investigative activity (under Rule 3(1)(j)). 

Similar to the concerns with automated screening, there are concerns that the new grievance process could lead to private entities becoming the arbiters of appropriate content and free speech — a position that was specifically reversed in a seminal 2015 Supreme Court decision, which clarified that a Government or Court order was needed for content take-downs. 

3. Key issues for the new ‘publishers’ subject to the Rules, including OTT players

3.1 New Codes of Ethics and three-tier redress and oversight system for digital news media and OTT players 

Digital news media and OTT players have been designated as ‘publishers of news and current affairs content’ and ‘publishers of online curated content’ respectively in Part III of the Rules. Each category has then been subjected to a separate Code of Ethics. In the case of digital news media, the Codes applicable to newspapers and cable television have been applied. For OTT players, the Appendix sets out principles regarding content that can be created and display classifications. To enforce these codes and to address grievances from the public about their content, publishers are now mandated to set up a grievance system, which will be the first tier of a three-tier “appellate” system culminating in an oversight mechanism run by the Central Government with extensive powers of sanction. 

At least five legal challenges have been filed in various High Courts challenging the competence and authority of the Ministry of Electronics & Information Technology (MeitY) to pass the Rules, as well as their validity, namely: (i) in the Kerala High Court, LiveLaw Media Private Limited vs Union of India WP(C) 6272/2021; in the Delhi High Court, three petitions tagged together, being (ii) Foundation for Independent Journalism vs Union of India WP(C) 3125/2021, (iii) Quint Digital Media Limited vs Union of India WP(C) 11097/2021, and (iv) Sanjay Kumar Singh vs Union of India and others WP(C) 3483/2021; and (v) in the Karnataka High Court, Truth Pro Foundation of India vs Union of India and others, W.P. 6491/2021. This is in addition to a fresh petition filed on 10 June 2021, TM Krishna vs Union of India, challenging the entirety of the Rules (both Parts II and III) on the basis that they violate the rights of free speech (Article 19 of the Constitution) and privacy (including under Article 21 of the Constitution), and that they fail the test of arbitrariness (under Article 14), being manifestly arbitrary and falling foul of principles of delegation of powers. 

Some of the key issues emerging from these Rules in Part III and the challenges to them are highlighted below. 

3.2 Lack of legal authority and competence to create these Rules

There has been substantial debate on the lack of clarity regarding MeitY’s legal authority under the IT Act. These concerns arise at various levels. 

First, there is a concern that Levels I and II result in a privatisation of adjudications relating to the free speech and expression of creative content producers, matters which would otherwise be litigated in Courts and Tribunals. As noted by many (including the LiveLaw petition at page 33), this could have the effect of overturning judicial precedent in Shreya Singhal v. Union of India ((2013) 12 S.C.C. 73), which specifically read down Section 79 of the IT Act to avoid a situation where private entities were the arbiters determining the legitimacy of take-down orders. Second, despite referring to “self-regulation”, this system is subject to executive oversight (unlike the existing models for offline newspapers and broadcasting).

The Inter-Departmental Committee is entirely composed of Central Government bureaucrats. It may review complaints escalated through the three-tier system or referred directly by the Ministry, following which it can deploy a range of sanctions, from warnings, to mandated apologies, to deleting, modifying or blocking content. This raises the question of whether the Committee meets the legal requirements for an administrative body undertaking a ‘quasi-judicial’ function, especially one that may adjudicate on matters of rights relating to free speech and privacy. Finally, while the objective of creating standards and codes for such content creators may be laudable, it is unclear whether such an extensive oversight mechanism, with powers of sanction over online publishers, can validly be created under the rubric of intermediary liability provisions. 

4. New powers to delete, modify or block information for public access 

As described at the start of this blog, the Rules add new powers for the deletion, modification and blocking of content from intermediaries and publishers. While Section 69A of the IT Act (and the rules thereunder) does include blocking powers for the Government, those powers exist only vis-à-vis intermediaries. Rule 15 expands this power to ‘publishers’. It also provides a new avenue for issuing such orders to intermediaries, outside of the existing rules for blocking information under the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009.

Graver concerns arise from Rule 16, which allows for the passing of emergency orders blocking information, including without giving publishers or intermediaries an opportunity to be heard. There is a provision for such an order to be reviewed by the Inter-Departmental Committee within two days of its issue.

Both Rules 15 and 16 apply to all entities contemplated in the Rules. Accordingly, they greatly expand executive power and oversight over digital media services in India, including social media, digital news media and OTT on-demand services.

5. Conclusions and future implications

The new Rules in India have opened up deep questions for online intermediaries and providers of digital media services serving the Indian market. 

For intermediaries, this creates a difficult and even existential choice: the requirements (especially those relating to traceability and automated screening) appear to set an improbably high bar given the reality of their technical systems. However, failure to comply results not only in the loss of safe harbour from liability but, as seen in new Rule 7, also exposes them to punishment under the IT Act and criminal law in India.

For digital news and OTT players, the consequences of non-compliance and the level of enforcement remain to be understood, especially given open questions regarding the validity of the legal basis for creating these rules. Given the numerous petitions filed against the Rules, there is also substantial uncertainty regarding the future, although the Rules themselves currently have the full force of law.

Overall, it does appear that attempts to create a ‘digital media’ watchdog would be better dealt with in standalone legislation, potentially sponsored by the Ministry of Information and Broadcasting (MIB), which has the traditional remit over such areas. Indeed, the administration of Part III of the Rules has been delegated by MeitY to MIB, pointing to the genuine split in competence between these Ministries.

Finally, potential overlaps with India’s proposed Personal Data Protection Bill (if passed) also create tensions for the future. It remains to be seen whether the provisions on traceability will survive the test of constitutional validity set out in India’s privacy judgment (Justice K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1). Irrespective of that determination, the Rules appear to be in some dissonance with the data retention and data minimisation requirements seen in the last draft of the Personal Data Protection Bill, not to mention other obligations relating to Privacy by Design and data security safeguards. Interestingly, the definition of ‘social media intermediary’ that the Bill (released in December 2019) included in an explanatory clause to its section 26(4) closely tracks the definition in Rule 2(w), but also departs from it by carving out certain intermediaries. This is already resulting in moves such as Google’s plea of 2 June 2021 in the Delhi High Court asking for protection from being declared a social media intermediary.

These new Rules have exposed the inherent tensions within digital regulation between the goals of freedom of speech and expression and the right to privacy, on the one hand, and the competing governance objectives of law enforcement (such as limiting the circulation of violent, harmful or criminal content online) and national security, on the other. The ultimate legal effect of these Rules will be determined as much by the outcome of the various petitions challenging their validity as by the enforcement challenges raised by casting such a wide net, one that covers millions of users and thousands of entities, all engaged in creating India’s growing digital public sphere.

Photo credit: Gerd Altmann from Pixabay

Read more Global Privacy thought leadership:

South Korea: The First Case where the Personal Information Protection Act was Applied to an AI System

China: New Draft Car Privacy and Security Regulation is Open for Public Consultation

A New Era for Japanese Data Protection: 2020 Amendments to the APPI

New FPF Report Highlights Privacy Tech Sector Evolving from Compliance Tools to Platforms for Risk Management and Data Utilization

As we enter the third phase of development of the privacy tech market, purchasers are demanding more integrated solutions, product offerings are more comprehensive, and startup valuations are higher than ever, according to a new report from the Future of Privacy Forum and Privacy Tech Alliance. These factors are leading to companies providing a wider range of services, acting as risk management platforms, and focusing on support of business outcomes.

“The privacy tech sector is at an inflection point, as its offerings have expanded beyond assisting with regulatory compliance,” said FPF CEO Jules Polonetsky. “Increasingly, companies want privacy tech to help businesses maximize the utility of data while managing ethics and data protection compliance.”

According to the report, “Privacy Tech’s Third Generation: A Review of the Emerging Privacy Tech Sector,” regulations are often the biggest driver for buyers’ initial privacy tech purchases. Organizations also are deploying tools to mitigate potential harms from the use of data. However, buyers serving global markets increasingly need privacy tech that offers data availability and control and supports data utility, in addition to regulatory compliance.

The report finds the COVID-19 pandemic has accelerated global marketplace adoption of privacy tech as dependence on digital technologies grows. Privacy is becoming a competitive differentiator in some sectors, and TechCrunch reports that 200+ privacy startups have together raised more than $3.5 billion over hundreds of individual rounds of funding. 

“The customers buying privacy-enhancing tech used to be primarily Chief Privacy Officers,” said report lead author Tim Sparapani. “Now it’s also Chief Marketing Officers, Chief Data Scientists, and Strategy Officers who value the insights they can glean from de-identified customer data.”

The report highlights five trends in the privacy enhancing tech market:

The report also draws seven implications for competition in the market:

The report makes a series of recommendations, including that the industry define as a priority a common vernacular for privacy tech; set standards for technologies in the “privacy stack” such as differential privacy, homomorphic encryption, and federated learning; and explore the needs of companies for privacy tech based upon their size, sector, and structure. It calls on vendors to recognize the need to provide adequate support to customers to increase uptake and speed time from contract signing to successful integration.

The Future of Privacy Forum launched the Privacy Tech Alliance (PTA) as a global initiative with a mission to define, enhance and promote the market for privacy technologies. The PTA brings together innovators in privacy tech with customers and key stakeholders.

Members of the PTA Advisory Board, which includes Anonos, BigID, D-ID, Duality, Ethyca, Immuta, OneTrust, Privacy Analytics, Privitar, SAP, Truata, TrustArc, Wirewheel, and ZL Tech, have formed a working group to address impediments to growth identified in the report. The PTA working group will define a common vernacular and typology for privacy tech as a priority project with chief privacy officers and other industry leaders who are members of FPF. Other work will seek to develop common definitions and standards for privacy-enhancing technologies such as differential privacy, homomorphic encryption, and federated learning and identify emerging trends for venture capitalists and other equity investors in this space. Privacy Tech companies can apply to join the PTA by emailing [email protected].


Perspectives on the Privacy Tech Market

Quotes from Members of the Privacy Tech Alliance Advisory Board on the Release of the “Privacy Tech’s Third Generation” Report


“The ‘Privacy Tech Stack’ outlined by the FPF is a great way for organizations to view their obligations and opportunities to assess and reconcile business and privacy objectives. The Schrems II decision by the Court of Justice of the European Union highlights that skipping the second ‘Process’ layer can result in desired ‘Outcomes’ in the third layer (e.g., cloud processing of, or remote access to, cleartext data) being unlawful – despite their global popularity – without adequate risk management controls for decentralized processing.” — Gary LaFever, CEO & General Counsel, Anonos


“As a founding member of this global initiative, we are excited by the conclusions drawn from this foundational report – we’ve seen parallels in our customer base, from needing an enterprise-wide solution to the rich opportunity for collaboration and integration. The privacy tech sector continues to mature, as does the imperative for organizations of all sizes to achieve compliance in light of the increasingly complicated data protection landscape.” — Heather Federman, VP Privacy and Policy at BigID


“There is no doubt of the massive importance of the privacy sector, an area which is experiencing huge growth. We couldn’t be more proud to be part of the Privacy Tech Alliance Advisory Board and absolutely support the work they are doing to create alignment in the industry and help it face the current set of challenges. In fact we are now working on a similar initiative in the synthetic media space to ensure that ethical considerations are at the forefront of that industry too.” — Gil Perry, Co-Founder & CEO, D-ID


“We congratulate the Future of Privacy Forum and the Privacy Tech Alliance on the publication of this highly comprehensive study, which analyzes key trends within the rapidly expanding privacy tech sector. Enterprises today are increasingly reliant on privacy tech, not only as a means of ensuring regulatory compliance but also in order to drive business value by facilitating secure collaborations on their valuable and often sensitive data. We are proud to be part of the PTA Advisory Board, and look forward to contributing further to its efforts to educate the market on the importance of privacy-tech, the various tools available and their best utilization, ultimately removing barriers to successful deployments of privacy-tech by enterprises in all industry sectors” — Rina Shainski, Chairwoman, Co-founder, Duality


“Since the birth of the privacy tech sector, we’ve been helping companies find and understand the data they have, compare it against applicable global laws and regulations, and remediate any gaps in compliance. But as the industry continues to evolve, privacy tech also is helping show business value beyond just compliance. Companies are becoming more transparent, differentiating on ethics and ESG, and building businesses that differentiate on trust. The privacy tech industry is growing quickly because we’re able to show value for compliance as well as actionable business insights and valuable business outcomes.” — Kabir Barday, CEO, OneTrust


“Leading organizations realize that to be truly competitive in a rapidly evolving marketplace, they need to have a solid defensive footing. Turnkey privacy technologies enable them to move onto the offense by safely leveraging their data assets rapidly at scale.” — Luk Arbuckle, Chief Methodologist, Privacy Analytics


“We appreciate FPF’s analysis of the privacy tech marketplace and we’re looking forward to further research, analysis, and educational efforts by the Privacy Tech Alliance. Customers and consumers alike will benefit from a shared understanding and common definitions for the elements of the privacy stack.” — Corinna Schulze, Director, EU Government Relations, Global Corporate Affairs, SAP


“The report shines a light on the evolving sophistication of the privacy tech market and the critical need for businesses to harness emerging technologies that can tackle the multitude of operational challenges presented by the big data economy. Businesses are no longer simply turning to privacy tech vendors to overcome complexities with compliance and regulation; they are now mapping out ROI-focused data strategies that view privacy as a key commercial differentiator. In terms of market maturity, the report highlights a need to overcome ambiguities surrounding new privacy tech terminology, as well as discrepancies in the mapping of technical capabilities to actual business needs. Moving forward, the advantage will sit with those who can offer the right blend of technical and legal expertise to provide the privacy stack assurances and safeguards that buyers are seeking – from a risk, deployment and speed-to-value perspective. It’s worth noting that the growing importance of data privacy to businesses sits in direct correlation with the growing importance of data privacy to consumers. Trūata’s Global Consumer State of Mind Report 2021 found that 62% of global consumers would feel more reassured and would be more likely to spend with companies if they were officially certified to a data privacy standard. Therefore, in order to manage big data in a privacy-conscious world, the opportunity lies with responsive businesses that move with agility and understand the return on privacy investment. The shift from manual, restrictive data processes towards hyper automation and privacy-enhancing computation is where the competitive advantage can be gained and long-term consumer loyalty—and trust— can be retained.” — Aoife Sexton, Chief Privacy Officer and Chief of Product Innovation, Trūata


“As early pioneers in this space, we’ve had a unique lens on the evolving challenges organizations have faced in trying to integrate technology solutions to address dynamic, changing privacy issues in their organizations, and we believe the Privacy Technology Stack introduced in this report will drive better organizational decision-making related to how technology can be used to sustainably address the relationships among the data, processes, and outcomes.” — Chris Babel, CEO, TrustArc


“It’s important for companies that use data to do so ethically and in compliance with the law, but those are not the only reasons why the privacy tech sector is booming. In fact, companies with exceptional privacy operations gain a competitive advantage, strengthen customer relationships, and accelerate sales.” — Justin Antonipillai, Founder & CEO, Wirewheel

The right to be forgotten is not compatible with the Brazilian Constitution. Or is it?

Brazilian Supreme Federal Court

Author: Dr. Luca Belli

Dr. Luca Belli is Professor at FGV Law School, Rio de Janeiro, where he leads the CyberBRICS Project and the Latin American edition of the Computers, Privacy and Data Protection (CPDP) conference. The opinions expressed in his articles are strictly personal. The author can be contacted at [email protected].

The Brazilian Supreme Federal Court, or “STF” in its Brazilian acronym, recently issued a landmark decision concerning the right to be forgotten (RTBF), finding that it is incompatible with the Brazilian Constitution. This attracted international attention to Brazil for a topic quite distant from the sadly frequent environmental, health, and political crises.

Readers should be warned that, while reading this piece, they might experience disappointment, perhaps even frustration, then renewed interest and curiosity, and finally – and hopefully – an increased open-mindedness, coming to understand a new facet of the RTBF debate and how it is playing out at the constitutional level in Brazil.

This might happen because, although the STF relies on the “RTBF” label, the content behind that label is quite different from what one might expect after following the same debate in Europe. From a comparative law perspective, this landmark judgment tellingly shows how similar constitutional rights play out in different legal cultures and may lead to heterogeneous outcomes based on the constitutional frameworks of reference.

How it started: insolvency seasoned with personal data

As is well known, the first global debate on what it means to be “forgotten” in the digital environment arose in Europe, thanks to Mario Costeja Gonzalez, a Spaniard who, paradoxically, will never be forgotten by anyone due to his key role in the construction of the RTBF.

Costeja famously requested the deindexation from Google Search of information about himself that he considered to be no longer relevant. Indeed, when anyone “googled” his name, the search engine returned as top results links to articles reporting Costeja’s past insolvency as a debtor. Costeja argued that, despite his insolvency, he had already paid his debt to justice and society many years before, and it was therefore unfair that his name would continue to be associated ad aeternum with a mistake he made in the past.

The follow-up is well known in data protection circles. The case reached the Court of Justice of the European Union (CJEU), which, in its landmark Google Spain judgment (C-131/12), established that search engines are to be considered data controllers and therefore have an obligation to de-index information that is inappropriate, excessive, not relevant, or no longer relevant, when the data subject to whom the data refer requests it. This obligation was a consequence of Article 12(b) of Directive 95/46 on the protection of personal data, a pre-GDPR provision that set the basis for the European conception of the RTBF by providing for the “rectification, erasure or blocking of data the processing of which does not comply with the provisions of [the] Directive, in particular because of the incomplete or inaccurate nature of the data.”

The indirect consequence of this historic decision, and the debate it generated, is that we have all come to consider the RTBF in the terms set by the CJEU. However, what is essential to emphasize is that the CJEU approach is only one possible conception and, importantly, it was possible because of the specific characteristics of the EU legal and institutional framework. We have come to think that RTBF means the establishment of a mechanism like the one resulting from the Google Spain case, but this is the result of a particular conception of the RTBF and of how this particular conception should – or could – be implemented.

The fact that the RTBF has been predominantly analyzed and discussed through a European lens does not mean that this is the only possible perspective, nor that this approach is necessarily the best. In fact, the Brazilian conception of the RTBF is remarkably different from a conceptual, constitutional, and institutional standpoint. The main concern of the Brazilian RTBF is not how a data controller might process personal data (this is the part where frustration and disappointment might arise in the reader), but the STF itself leaves the door open to such a possibility (this is the point where renewed interest and curiosity may arise).

The Brazilian conception of the right to be forgotten

Although the RTBF has acquired fundamental relevance in digital policy circles, it is important to emphasize that, until recently, Brazilian jurisprudence had mainly focused on the juridical need for “forgetting” only in the analogue sphere. Indeed, before the CJEU Google Spain decision, the Brazilian Superior Court of Justice, or “STJ” – the other Brazilian high court, which deals with the interpretation of the law, as opposed to the previously mentioned STF, which deals with constitutional matters – had already considered the RTBF as a right not to be remembered, affirmed by the individual vis-à-vis traditional media outlets.

This interpretation first emerged in the “Candelaria massacre” case, a gloomy page of Brazilian history featuring a multiple homicide perpetrated in 1993 in front of the Candelaria Church, a beautiful colonial Baroque building in downtown Rio de Janeiro. The gravity and the particularly picturesque stage of the massacre led Globo TV, a leading Brazilian broadcaster, to feature the massacre in a TV show called Linha Direta. Importantly, the show included in its narration details about a man suspected of being one of the perpetrators of the massacre but later discharged.

Understandably, the man filed a complaint arguing that the inclusion of his personal information in the TV show was causing him severe emotional distress, while also reviving suspicions against him for a crime of which he had already been discharged many years before. In September 2013, further to Special Appeal No. 1,334,097, the STJ agreed with the plaintiff, establishing the man’s “right not to be remembered against his will, specifically with regard to discrediting facts.” This is how the RTBF was born in Brazil.

Importantly for the present discussion, this interpretation was not born out of digital technology and does not impinge upon the delisting of specific types of information from search engine results. In Brazilian jurisprudence, the RTBF has been conceived as a general right to effectively limit the publication of certain information. The man included in the Globo reportage had been discharged many years before; hence he had a right to be “let alone,” as Warren and Brandeis would argue, and not to be remembered for something he had not even committed. The STJ therefore constructed its vision of the RTBF based on article 5.X of the Brazilian Constitution, which enshrines the fundamental right to intimacy and preservation of image, two fundamental features of privacy.

Hence, although they utilize the same label, the STJ and CJEU conceptualize two remarkably different rights, when they refer to the RTBF. While both conceptions aim at limiting access to specific types of personal information, the Brazilian conception differs from the EU one on at least three different levels.

First, their constitutional foundations differ. While both conceptions are intimately intertwined with individuals’ informational self-determination, the STJ built the RTBF on the protection of privacy, honour and image, whereas the CJEU built it upon the fundamental right to data protection, which in the EU framework is a standalone fundamental right. Conspicuously, an explicit right to data protection did not exist in the Brazilian constitutional framework at the time of the Candelaria case, and only since 2020 has it been in the process of being recognized.

Secondly, and consequently, the original goal of the Brazilian conception of the RTBF was not to regulate how a controller should process personal data but rather to protect the private sphere of the individual. In this perspective, the goal of the STJ was not – and could not have been – to regulate the deindexation of specific incorrect or outdated information, but rather to regulate the deletion of “discrediting facts” so that the private life, honour and image of an individual would not be illegitimately violated.

Finally, yet extremely importantly, the absence at the time of the decision of an institutional framework dedicated to data protection in Brazil did not allow the STJ the same leeway as the CJEU. The EU Justices enjoyed the privilege of delegating the implementation of the RTBF to search engines because such implementation would receive guidance from, and be subject to the review of, a well-consolidated system of European Data Protection Authorities. At the EU level, DPAs are expected to guarantee a harmonious and consistent interpretation and application of data protection law. In Brazil, a DPA was only established in late 2020 and announced its first regulatory agenda only in late January 2021.

This latter point is far from trivial and, in the opinion of this author, an essential preoccupation that might have driven the subsequent RTBF conceptualization of the STF.

The stress-test

The soundness of the Brazilian definition of the RTBF, however, was going to be tested again by the STJ, in the context of another grim and unfortunate page of Brazilian history: the Aida Curi case. The case originated with the sexual assault and subsequent homicide of the young Aida Curi in Copacabana, Rio de Janeiro, on the evening of 14 July 1958. At the time, the case attracted considerable media attention, not only because of its mysterious circumstances and the young age of the victim, but also because the perpetrators tried to dissimulate the assault by throwing the victim’s body from the rooftop of a very tall building on Avenida Atlantica, the fancy avenue right in front of Copacabana beach.

Needless to say, Globo TV considered the case a perfect story for yet another Linha Direta episode. Aida Curi’s relatives, far from enjoying the TV show, sued the broadcaster for moral damages and demanded the full enjoyment of their RTBF – in the Brazilian conception, of course. According to the plaintiffs, it was not conceivable that, almost 50 years after the murder, Globo TV could publicly broadcast personal information about the victim – and her family – including the victim’s name and address, in addition to unauthorized images, thus bringing back a long-closed and extremely traumatic set of events.

The brothers of Aida Curi claimed reparation from Rede Globo, but the STJ decided that the time that had passed was enough to mitigate the effects of anguish and pain on the dignity of Aida Curi’s relatives, while arguing that it was impossible to report the events without mentioning the victim. This decision was appealed by Ms Curi’s family members, who demanded, by means of Extraordinary Appeal No. 1,010,606, that the STF recognize “their right to forget the tragedy.” It is interesting to note that the way the demand is constructed in this Appeal tellingly exemplifies the Brazilian conception of “forgetting” as erasure and prohibition of divulgation.

At this point, the STF identified in the Appeal an issue worth debating “with general repercussion”, a peculiar judicial process that the Court can utilize when it recognizes that a given case has particular relevance and transcendence for the Brazilian legal and judicial system. Indeed, the decision in a case with general repercussion binds not only the parties but also establishes a jurisprudence that must be replicated by all lower courts.

In February 2021, the STF finally deliberated on the Aida Curi case, establishing that “the idea of a right to be forgotten is incompatible with the Constitution, thus understood as the power to prevent, due to the passage of time, the disclosure of facts or data that are true and lawfully obtained and published in analogue or digital media” and that “any excesses or abuses in the exercise of freedom of expression and information must be analyzed on a case-by-case basis, based on constitutional parameters – especially those relating to the protection of honor, image, privacy and personality in general – and the explicit and specific legal provisions existing in the criminal and civil spheres.”

In other words, what the STF has deemed incompatible with the Federal Constitution is a specific interpretation of the Brazilian version of the RTBF. What is not compatible with the Constitution is to argue that the RTBF allows one to prohibit the publication of true facts, lawfully obtained. At the same time, however, the STF clearly states that it remains possible for any court of law to evaluate, on a case-by-case basis and according to constitutional parameters and existing legal provisions, whether a specific episode allows the use of the RTBF to prohibit the divulgation of information that undermines the dignity, honour, privacy, or other fundamental interests of the individual.

Hence, while explicitly prohibiting the use of the RTBF as a general right to censorship, the STF leaves room for the use of the RTBF to delist specific personal data in an EU-like fashion, while specifying that this must be done with guidance from the Constitution and the law.

What next?

Given the core differences between the Brazilian and EU conceptions of the RTBF, as highlighted above, it is understandable in the opinion of this author that the STF adopted a less proactive and more conservative approach. This must be considered especially in light of the very recent establishment of a data protection institutional system in Brazil.

It is understandable that the STF might have preferred to de facto delegate to the courts the interpretation of when and how the RTBF can be rightfully invoked, according to constitutional and legal parameters. First, in the Brazilian interpretation, the RTBF fundamentally rests on the protection of privacy – i.e. the private sphere of an individual – and, while data protection concerns exist, they are not the main ground on which the Brazilian RTBF conception relies.

It is also understandable that, in a country and a region whose recent history has been marked by dictatorships, well-hidden atrocities, and opacity, the social need to remember and shed light on what happened outweighs the legitimate individual interest in prohibiting the circulation of truthful and legally obtained information. In the digital sphere, however, the RTBF quintessentially translates into an extension of informational self-determination, which the Brazilian General Data Protection Law, better known as the “LGPD” (Law No. 13.709/2018), enshrines in its article 2 as one of the “foundations” of data protection in the country, and whose fundamental character was recently recognized by the STF itself.

In this perspective, it is useful to recall the dissenting opinion of Justice Luiz Edson Fachin in the Aida Curi case, stressing that “although it does not expressly name it, the Constitution of the Republic, in its text, contains the pillars of the right to be forgotten, as it celebrates the dignity of the human person (article 1, III), the right to privacy (article 5, X) and the right to informational self-determination – which was recognized, for example, in the disposal of the precautionary measures of the Direct Unconstitutionality Actions No. 6,387, 6,388, 6,389, 6,390 and 6,393, under the rapporteurship of Justice Rosa Weber (article 5, XII).”

It is the opinion of this author that the Brazilian debate on the RTBF in the digital sphere would be clearer if its dimension as a right to deindexation of search engine results were to be clearly regulated. It is understandable that the STF did not dare regulate this, given its interpretation of the RTBF and the very embryonic data protection institutional framework in Brazil. However, given the increasing datafication we are currently witnessing, it would be naïve not to expect that further RTBF claims concerning the digital environment – and, specifically, the way search engines process personal data – will keep emerging.

The fact that the STF has left the door open to applying the RTBF in the case-by-case analysis of individual claims may reassure the reader regarding the primacy of constitutional and legal arguments in such analysis. It may also lead the reader to – very legitimately – wonder whether such a choice is de facto the most efficient and coherent way to deal with the potentially enormous number of claims, given the margin of appreciation and interpretation that each court may have.

An informed debate that clearly highlights the existing options, and the most efficient and just ways to implement them in the Brazilian context, would be beneficial. This will likely be one of the goals of the upcoming Latin American edition of the Computers, Privacy and Data Protection conference (CPDP LatAm), which will take place in July, entirely online, and will explore the most pressing privacy and data protection issues for Latin American countries.

Photo Credit: “Brasilia – The Supreme Court” by Christoph Diewald is licensed under CC BY-NC-ND 2.0

If you have any questions about engaging with The Future of Privacy Forum on Global Privacy and Digital Policymaking contact Dr. Gabriela Zanfir-Fortuna, Senior Counsel, at [email protected].

FPF announces appointment of Malavika Raghavan as Senior Fellow for India

The Future of Privacy Forum announces the appointment of Malavika Raghavan as Senior Fellow for India, expanding our Global Privacy team to one of the key jurisdictions for the future of privacy and data protection law. 

Malavika is a thought leader and a lawyer working on interdisciplinary research, focusing on the impacts of digitisation on the lives of lower-income individuals. Her work since 2016 has focused on the regulation and use of personal data in service delivery by the Indian State and private sector actors. From 2016 until 2020, she founded and led the Future of Finance Initiative at Dvara Research (an Indian think tank) in partnership with the Gates Foundation, anchoring its research agenda and policy advocacy on emerging issues at the intersection of technology, finance and inclusion. Research that she led at Dvara Research was cited by India’s Data Protection Committee in its White Paper, as well as in its final report containing proposals for India’s draft Personal Data Protection Bill, with specific reliance placed on it for aspects of regulatory design and enforcement. See Malavika’s full bio here.

“We are delighted to welcome Malavika to our Global Privacy team. For the following year, she will be our adviser to understand the most significant developments in privacy and data protection in India, from following the debate and legislative process of the Data Protection Bill and the processing of non-personal data initiatives, to understanding the consequences of the publication of the new IT Guidelines. India is one of the most interesting jurisdictions to follow in the world, for many reasons: the innovative thinking on data protection regulation, the potentially groundbreaking regulation of non-personal data and the outstanding number of individuals whose privacy and data protection rights will be envisaged by these developments, which will test the power structures of digital regulation and safeguarding fundamental rights in this new era”, said Dr. Gabriela Zanfir-Fortuna, Global Privacy lead at FPF. 

We have asked Malavika to share her thoughts for FPF’s blog on the most significant developments in privacy and digital regulation in India and on India’s role in the global privacy and digital regulation debate.

FPF: What are some of the most significant developments in the past couple of years in India in terms of data protection, privacy, digital regulation?

Malavika Raghavan: “Undoubtedly, the turning point for the privacy debate in India was the 2017 judgement of the Indian Supreme Court in Justice KS Puttaswamy v Union of India. The judgment affirmed the right to privacy as a constitutional guarantee, protected by Part III (Fundamental Rights) of the Indian Constitution. It was also regenerative, bringing our constitutional jurisprudence into the 21st century by re-interpreting timeless principles for the digital age, and casting privacy as a prerequisite for accessing other rights—including the right to life and liberty, to freedom of expression and to equality—given the ubiquitous digitisation of human experience we are witnessing today. 

Overnight, Puttaswamy also re-balanced conversations in favour of privacy safeguards to make these equal priorities for builders of digital systems, rather than framing these issues as obstacles to innovation and efficiency. In addition, it challenged the narrative that privacy is an elite construct that only wealthy or privileged people deserve, since many litigants in the original case that had created the Puttaswamy reference were from marginalised groups. Since then, a string of interesting developments has arisen as new cases reassess the impact of digital technology on individuals in India: e.g., cases on the boundaries of private sector data sharing (such as between WhatsApp and Facebook), or on the State’s use of personal data (as in the case concerning Aadhaar, our national identification system), among others. 

Puttaswamy also provided a fillip for a big legislative development, which is the creation of an omnibus data protection law in India. A bill to create this framework was proposed by a Committee of Experts under the chairmanship of Justice Srikrishna (an ex-Supreme Court judge), which has been making its way through ministerial and Parliamentary processes. There’s a large possibility that this law will be passed by the Indian parliament in 2021! Definitely a big development to watch.

FPF: How do you see India’s role in the global privacy and digital regulation debate?

Malavika Raghavan: “India’s strategy on privacy and digital regulation will undoubtedly have global impact, given that India is home to 1/7th of the world’s population! The mobile internet revolution has created a huge impact on our society with millions getting access to digital services in the last couple of decades. This has created nuanced mental models and social norms around digital technologies that are slowly being documented through research and analysis. 

The challenge for policy makers is to create regulations that match these expectations and the realities of Indian users to achieve reasonable, fair regulations. As we have already seen from sectoral regulations (such as those from our Central Bank around cross border payments data flows) such regulations also have huge consequences for global firms interacting with Indian users and their personal data.  

In this context, I think India can have the late-mover advantage in some ways when it comes to digital regulation. If we play our cards right, we can take the best lessons from the experience of other countries in the last few decades and eschew the missteps. More pragmatically, it seems inevitable that India’s approach to privacy and digital regulation will also be strongly influenced by the Government’s economic, geopolitical and national security agenda (both internationally and domestically). 

One thing is for certain: there is no path-dependence. Our legislators and courts are thinking in unique and unexpected ways that are indeed likely to result in a fourth way (as described by the Srikrishna Data Protection Committee’s final report), compared to the approach in the US, EU and China.”


India: Massive Overhaul of Digital Regulation, with Strict Rules for Take-Down of Illegal Content and Automated Scanning of Online Content


On February 25, the Indian Government notified and published the Information Technology (Guidelines for Intermediaries and Digital Media Ethics Code) Rules 2021. These rules mirror the Digital Services Act (DSA) proposal of the EU to some extent: they propose a tiered approach based on the scale of the platform, and they touch on intermediary liability, content moderation, take-down of illegal content from online platforms, as well as internal accountability and oversight mechanisms. However, they go beyond such rules by adding a Code of Ethics for digital media, similar to the Code of Ethics classic journalistic outlets must follow, and by proposing an “online content” labelling scheme for content that is safe for children.

The Code of Ethics applies to online news publishers, as well as intermediaries that “enable the transmission of news and current affairs”. This part of the Guidelines (the Code of Ethics) has already been challenged in the Delhi High Court by news publishers this week. 

The Guidelines have raised several types of concerns in India, from their impact on freedom of expression, impact on the right to privacy through the automated scanning of content and the imposed traceability of even end-to-end encrypted messages so that the originator can be identified, to the choice of the Government to use executive action for such profound changes. The Government, through the two Ministries involved in the process, is scheduled to testify in the Standing Committee of Information Technology of the Parliament on March 15.

New obligations for intermediaries

“Intermediaries” include “websites, apps and portals of social media networks, media sharing websites, blogs, online discussion forums, and other such functionally similar intermediaries” (as defined in rule 2(1)(m)).

Here are some of the most important rules laid out in Part II of the Guidelines, dedicated to Due Diligence by Intermediaries:

“Significant social media intermediaries” have enhanced obligations

“Significant social media intermediaries” are social media services with a number of users above a threshold to be defined and notified by the Central Government. This concept is similar to the DSA’s “Very Large Online Platform” (VLOP), though the DSA includes clear criteria in the proposed act itself on how to identify a VLOP.

As for “significant social media intermediaries” in India, they will have additional obligations (similar to how the DSA proposal in the EU scales obligations): 

These “Guidelines” seem to have the legal effect of a statute, and they are being adopted through executive action to replace Guidelines adopted in 2011 by the Government, under powers conferred to it in the Information Technology Act 2000. The new Guidelines would enter into force immediately after publication in the Official Gazette (no information as to when publication is scheduled). The Code of Ethics would enter into force three months after the publication in the Official Gazette. As mentioned above, there are already some challenges in Court against part of these rules.

Get smart on these issues and their impact

Check out these resources: 

Another jurisdiction to keep your eyes on: Australia

Also note that, while the European Union is starting its heavy and slow legislative machine by appointing Rapporteurs in the European Parliament and holding first discussions on the DSA proposal in the relevant working group of the Council, another country is set to soon adopt digital content rules: Australia. The Government is currently considering an Online Safety Bill, which was open to public consultation until mid-February and which would also include a “modernised online content scheme”, creating new classes of harmful online content, as well as take-down requirements for image-based abuse, cyber abuse and harmful content online, requiring removal within 24 hours of receiving a notice from the eSafety Commissioner.


Russia: New Law Requires Express Consent for Making Personal Data Available to the Public and for Any Subsequent Dissemination

Authors: Gabriela Zanfir-Fortuna and Regina Iminova

Source: Pixabay.com, by Opsa

Amendments to the Russian general data protection law (Federal Law No. 152-FZ on Personal Data) adopted at the end of 2020 enter into force today (Monday, March 1st), with some of them having their effective date postponed until July 1st. The changes are part of a legislative package that also amends the Criminal Code to criminalize disclosure of personal data about “protected persons” (several categories of government officials). The amendments to the data protection law introduce consent-based restrictions for any organization or individual that initially publishes personal data, as well as for those that collect and further disseminate personal data that has been distributed in the public sphere on the basis of consent, such as on social media, blogs or any other sources. 

The amendments:

The potential impact of the amendments is broad. The new law prima facie affects social media services, online publishers, streaming services, bloggers, or any other entity who might be considered as making personal data available to “an indefinite number of persons.” They now have to collect and prove they have separate consent for making personal data publicly available, as well as for further publishing or disseminating PDD which has been lawfully published by other parties originally.

Importantly, the new provisions in the Personal Data Law dedicated to PDD do not include any specific exception for processing PDD for journalistic purposes. The only exception recognized is processing PDD “in the state and public interests defined by the legislation of the Russian Federation”. The Explanatory Note accompanying the amendments confirms that consent is the exclusive lawful ground that can justify dissemination and further processing of PDD and that the only exception to this rule is the one mentioned above, for state or public interests as defined by law. It is thus expected that the amendments might create a chilling effect on freedom of expression, especially when also taking into account the corresponding changes to the Criminal Code.

The new rules seem to be part of a broader effort in Russia to regulate information shared online and available to the public. In this context, it is noteworthy that other amendments to Law 149-FZ on Information, IT and Protection of Information solely impacting social media services were also passed into law in December 2020, and already entered into force on February 1st, 2021. Social networks are now required to monitor content and “restrict access immediately” of users that post information about state secrets, justification of terrorism or calls to terrorism, pornography, promoting violence and cruelty, or obscene language, manufacturing of drugs, information on methods to commit suicide, as well as calls for mass riots. 

Below we provide a closer look at the amendments to the Personal Data Law that entered into force on March 1st, 2021. 

A new category of personal data is defined

The new law defines a category of “personal data allowed by the data subject to be disseminated” (PDD), the definition being added as paragraph 1.1 to Article 3 of the Law. This new category of personal data is defined as “personal data to which an unlimited number of persons have access, and which is provided by the data subject by giving specific consent for the dissemination of such data, in accordance with the conditions in the Personal Data Law” (unofficial translation). 

The old law had a dedicated provision that referred to how this type of personal data could be lawfully processed, but it was vague and offered almost no details. In particular, Article 6(10) of the Personal Data Law (the provision corresponding to Article 6 GDPR on lawful grounds for processing) provided that processing of personal data is lawful when the data subject gives access to their personal data to an unlimited number of persons. The amendments abrogate this paragraph, before introducing an entirely new article containing a detailed list of conditions for processing PDD only on the basis of consent (the new Article 10.1).

Perhaps in order to avoid misunderstanding on how the new rules for processing PDD fit with the general conditions on lawful grounds for processing personal data, a new paragraph 2 is introduced in Article 10 of the law, which details conditions for processing special categories of personal data, to clarify that processing of PDD “shall be carried out in compliance with the prohibitions and conditions provided for in Article 10.1 of this Federal Law”.

Specific, express, unambiguous and separate consent is required

Under the new law, “data operators” that process PDD must obtain specific and express consent from data subjects to process personal data, which covers any use or dissemination of the data. Notably, under Russian law, “data operators” designate both controllers and processors in the sense of the General Data Protection Regulation (GDPR), or businesses and service providers in the sense of the California Consumer Privacy Act (CCPA).

Specifically, under Article 10.1(1), the data operator must ensure that it obtains a separate consent dedicated to dissemination, other than the general consent for processing personal data or other type of consent. Importantly, “under no circumstances” may individuals’ silence or inaction be taken to indicate their consent to the processing of their personal data for dissemination, under Article 10.1(8).

In addition, the data subject must be provided with the possibility to select the categories of personal data which they permit for dissemination. Moreover, the data subject also must be provided with the possibility to establish “prohibitions on the transfer (except for granting access) of [PDD] by the operator to an unlimited number of persons, as well as prohibitions on processing or conditions of processing (except for access) of these personal data by an unlimited number of persons”, per Article 10.1(9). It seems that these prohibitions refer to specific categories of personal data provided by the data subject to the operator (out of a set of personal data, some categories may be authorized for dissemination, while others may be prohibited from dissemination).

If the data subject discloses personal data to an unlimited number of persons without providing to the operator the specific consent required by the new law, not only the original operator, but all subsequent persons or operators that processed or further disseminated the PDD have the burden of proof to “provide evidence of the legality of subsequent dissemination or other processing”, under Article 10.1(2), which seems to imply that they must prove consent was obtained for dissemination (probatio diabolica in this case). According to the Explanatory Note to the amendments, it seems that the intention was indeed to turn the burden of proof of legality of processing PDD from data subjects to the data operators, since the Note makes a specific reference to the fact that before the amendments the burden of proof rested with data subjects.

If the separate consent for dissemination of personal data is not obtained by the operator, but other conditions for lawfulness of processing are met, the personal data can be processed by the operator, but without the right to distribute or disseminate them – Article 10.1.(4). 

A Consent Management Platform for PDD, managed by the Roskomnadzor

The express consent to process PDD can be given directly to the operator or through a special “information system” (which seems to be a consent management platform) of the Roskomnadzor, according to Article 10.1(6). The provisions related to setting up this consent platform for PDD will enter into force on July 1st, 2021. The Roskomnadzor is expected to provide technical details about the functioning of this consent management platform and guidelines on how it is supposed to be used in the following months. 

Absolute right to opt-out of dissemination of PDD

Notably, the dissemination of PDD can be halted at any time, on request of the individual, regardless of whether the dissemination is lawful or not, according to Article 10.1(12). This type of request is akin to a withdrawal of consent. The provision includes some requirements for the content of such a request: for instance, it must state the data subject’s contact information and list the personal data whose dissemination should be terminated. Consent to the processing of the personal data in question is terminated once the operator receives the opt-out request – Article 10.1(13).

A request to opt-out of having personal data disseminated to the public when this is done unlawfully (without the data subject’s specific, affirmative consent) can also be made through a Court, as an alternative to submitting it directly to the data operator. In this case, the operator must terminate the transmission of or access to personal data within three business days from when such demand was received or within the timeframe set in the decision of the court which has come into effect – Article 10.1(14).

A new criminal offense: The prohibition on disclosure of personal data about protected persons

Sharing personal data or information about intelligence officers and their personal property is now a criminal offense under the new rules, which amended the Criminal Code. The law obliges any operators of personal data, including government departments and mobile operators, to ensure the confidentiality of personal information concerning protected persons, their relatives, and their property. Under the new law, “protected persons” include employees of the Investigative Committee, FSB, Federal Protective Service, National Guard, Ministry of Internal Affairs and Ministry of Defense, as well as judges, prosecutors, investigators, law enforcement officers and their relatives. Moreover, the list of protected persons can be further detailed by the head of the relevant state body in which the specified persons work.

Previously, the law allowed for the temporary prohibition of the dissemination of personal data of protected persons only in the event of imminent danger in connection with official duties and activities. The new amendments make it possible to take protective measures in the absence of a threat of encroachment on their life, health and property.

What to watch next: New amendments to the general Personal Data Law are on their way in 2021

There are several developments to follow in this fast changing environment. First, at the end of January, the Russian President gave the government until August 1 to create a set of rules for foreign tech companies operating in Russia, including a requirement to open branch offices in the country.

Second, a bill (No. 992331-7) proposing new amendments to the overall framework of the Personal Data Law (No. 152-FZ) was introduced in July 2020 and was the subject of a Resolution that passed in the State Duma on February 16, allowing for a period for amendments to be submitted, until March 16. The bill is on the agenda for a potential vote in May. The changes would entail: expanding the possibility to obtain valid consent through other unique identifiers which are currently not accepted by the law, such as unique online IDs; changes to purpose limitation; a possible certification scheme for effective methods to erase personal data; and new competences for the Roskomnadzor to establish requirements for deidentification of personal data and specific methods for effective deidentification.

If you have any questions on Global Privacy and Data Protection developments, contact Gabriela Zanfir-Fortuna at [email protected]

Red Lines under the EU AI Act: Understanding Manipulative Techniques and the Exploitation of Vulnerabilities

Blog 2/ Red Lines under the EU AI Act Series  

This blog is the second of a series that explores prohibited AI practices under the EU AI Act and their interplay with existing EU law. You can read the first episode here and find the whole series here.

Harmful manipulation and deception through AI systems and exploiting certain human vulnerabilities are the first on the list of prohibited practices under Article 5 of the EU AI Act. It is apparent that the underlying goal of these provisions is to ensure that individuals maintain their ability to make autonomous decisions. This is especially important when considering that one of the goals of the AI Act is “to promote the uptake of human-centric and trustworthy AI”, while ensuring respect for safety, health and fundamental rights (see Recital 1, AI Act).

These first two prohibited practices listed in Article 5(1) specifically concern AI systems that could undermine individual autonomy and well-being through:

It is notable, though, that manipulative and deceptive practices based on processing of personal data, and those that specifically occur through online platforms, are already strictly regulated by the EU’s General Data Protection Regulation (GDPR) and Digital Services Act (DSA). Specifically, the GDPR intervenes through obligations like ensuring fairness (Article 5(1)(a)) and data protection by design (Article 25) for all processing of personal data, regardless of whether that processing occurs through AI or not, while the DSA prohibits providers of online platforms from designing, organising or operating their online interfaces in a way that deceives or manipulates their users (Article 25). While the relationship between the DSA obligations and those in the GDPR related to manipulative design is clear, with the DSA only being applicable where the GDPR does not apply, their relationship with the AI Act prohibitions on manipulative techniques and exploiting vulnerabilities requires further guidelines and clarification. 

The Guidelines published by the European Commission to support compliance with Article 5 AI Act highlight that the two prohibitions aim to protect individuals from being reduced to “mere tools for achieving certain ends”, and to protect those who are most vulnerable or susceptible to manipulation and exploitation. Significantly, the Guidelines analyze these two prohibitions together, making it obvious that there is a nexus between them. In this sense, according to the Guidelines, they are both designed to support and protect the right to human dignity, as enshrined in the EU Charter of Fundamental Rights.

This second blog in the “Red Lines” series provides an analysis of the scope and content of the Article 5(1)(a) prohibition in Section 2, focusing on the definitions of subliminal, manipulative, and deceptive techniques. Section 3 goes on to explore the notion of vulnerability contained in the Article 5(1)(b) prohibition and in the Guidelines, while Section 4 notes the possible interplay between the two prohibitions. Section 5 takes a broader view by highlighting the interplay between the prohibitions and other EU laws, including the GDPR and the DSA, before the conclusions in Section 6 note the following key takeaways:  

2. Understanding harmful manipulation and deception as a prohibited practice under the AI Act

Article 5(1)(a) AI Act targets those cases in which AI practices subtly manipulate human action without the individual noticing. The final text of the AI Act for this provision underwent several changes from the European Commission’s initial proposal, broadening its scope and clarifying some elements.

Following amendments submitted by the European Parliament, the final text sought to add manipulative and deceptive techniques to the initial “subliminal techniques”, and broaden the scope of the ban to cover not only harmful effects on individuals but also on groups, in order to prevent discriminatory effects. Another modification of the initial proposal added that the prohibition should not be limited to cases where the systems are intended to modify behaviour, but also to cases where the modification of the behaviour that led to a significant harm is a mere “effect”, even when it was not the intended objective of the AI practice in question.

2.1. Defining subliminal, purposefully manipulative or deceptive techniques

The Guidelines list four cumulative conditions to be fulfilled in order for this prohibition to be applicable, even though, in their analysis, they also include a fifth one. 

  1. The practice must constitute the ‘placing on the market’, the ‘putting into service’, or the ‘use’ of an AI system. 
  2. The AI system must deploy subliminal (beyond a person’s consciousness), purposefully manipulative, or deceptive techniques. 
  3. The techniques deployed by the AI system should have the objective or the effect of materially distorting the behavior of a person or a group of persons. The distortion must appreciably impair their ability to make an informed decision, resulting in a decision that the person or the group of persons would not have otherwise made. 
  4. The distorted behavior must cause or be reasonably likely to cause significant harm to that person, another person, or a group of persons. 

The four conditions must be met cumulatively for the prohibition to be applicable. Additionally, according to the Guidelines, there must be a plausible causal link between the techniques used, the significant change in the person’s behavior, and the significant harm that resulted or is likely to result from that behavior. While the causal link is not listed among the four conditions, it is analyzed further down in the Guidelines as a self-standing, additional condition to be met, and it should be considered as the fifth point on this list.

The prohibition applies to both providers and deployers of AI systems who, each within their own responsibilities, have an obligation not to place on the market, put into service, or use AI systems that impair an individual’s ability to make an informed decision on the basis of subliminal, manipulative or deceptive techniques. 

The Guidelines note that while the AI Act does not directly define “subliminal techniques”, the text of Article 5(1)(a) and Recital 29 imply that such techniques are inherently covert in that they operate beyond the threshold of conscious awareness, capable of influencing decisions by bypassing a person’s rational defences. However, the Recital also explains that the prohibition covers even those cases where the person is aware that the techniques used are subliminal, but cannot resist their effect. The Guidelines clarify that the prohibition on the use of subliminal techniques is not limited to those practices that influence decision-making only; rather, it also covers those techniques that influence a person’s value- and opinion-formation, a criterion that seems highly subjective and might prove difficult to apply in practice. A relevant example could be an AI system facilitating deepfakes on matters of public interest when spread on platforms without appropriate labeling and in violation of the transparency obligations in place (Article 50 AI Act). Their use could be considered prohibited. 

Subliminal techniques can use audio, visual, or tactile stimuli that are too brief or subtle to be noticed. The following techniques are among several suggested in the Guidelines (p. 20) as potentially triggering a ban, if the other conditions are also met: 

The Guidelines, referring to Recital 29 AI Act, specify that the development of new AI technologies, like neurotechnology, brain-computer interfaces, virtual reality, or even “dream-hacking” increases the potential for sophisticated subliminal manipulation and its ability to influence human behavior subconsciously. 

While “purposefully manipulative techniques” are similarly not defined by the AI Act, the Guidelines fill this gap by noting that such techniques exploit cognitive biases, psychological vulnerabilities, or situational factors that make individuals more susceptible to influence. This provision covers cases where individuals are aware of the presence of a manipulative technique but cannot resist its effect and, as a result, are pushed into decisions or behaviours that they would not have otherwise made (Recital 29). 

Recital 29 of the AI Act also refers to techniques that deceive or nudge individuals “in a way that subverts and impairs their autonomy, decision-making and free choices.” A direct comparison can be made with the DSA which, inter alia, prohibits providers of online platforms from deceiving or nudging recipients of their service and from distorting or impairing their autonomy, decision-making and free choice (Article 25 and Recital 67 DSA). 

The manipulative capability of the technique is a key factor in determining its effect. Indeed, the Guidelines clarify that an AI system could manipulate individuals without the provider or deployer intending to cause harm. The provision would still apply, unless the result is incidental and appropriate preventive and mitigating measures were taken. This is consistent with the overall logic and scope of the AI Act’s prohibitions, as explored in Blog 1 of this series, under which deployers have a responsibility to reasonably foresee harms that may arise from the misuse of an AI system. 

Deceptive techniques are techniques that subvert or impair a person’s autonomy, decision-making, or free choice in ways of which the person is not consciously aware or, where they are aware, can still be deceived or cannot control or resist them. In the case of deepfakes, for example, Article 50 of the AI Act requires that the deployer disclose their nature. If this transparency is absent and the deepfake is used to deceive individuals, it could fall under prohibited uses. Notably, according to the Guidelines, this provision applies even if the deception occurs without the intent of the provider or deployer. However, the Guidelines also clarify that a generative AI system that produces misleading information due to hallucinations—provided the provider has communicated this possibility—does not constitute a prohibited practice.

2.2 To fall under the AI Act’s prohibited practices, manipulative techniques must have the “objective or effect of materially distorting the behavior of a person or a group of persons” 

The subliminal, manipulative and deceptive techniques must have the objective or the effect of materially distorting the behavior of a person or a group of persons. Material distortion involves a degree of coercion, manipulation, or deception that goes beyond lawful persuasion. The Guidelines note that material distortion implies a substantial impact on a person’s behavior, such that their decision-making and free choice are undermined, rather than a minor influence. 

When interpreting “material distortion of behaviour” under Directive 2005/29/EC (the Unfair Commercial Practices Directive, or ‘UCPD’), it is sufficient to demonstrate that a commercial practice is likely to influence (i.e., capable of influencing) an average consumer’s transactional decision; there is no need to prove that a consumer’s economic behavior has actually been distorted. However, this requires a case-by-case assessment, considering the specific facts and circumstances. Additionally, the average-consumer perspective may not be helpful where an AI system delivers highly personalized messages designed to manipulate individual behavior.

The AI Act adopts a similar understanding of “material distortion” as the UCPD: the prohibition applies even if the material distortion of a person’s behavior occurs without the intent of the provider or deployer. The text specifies that the prohibition covers not only cases in which behavior modification is the objective of the system (as in the original text of the European Commission’s proposal) but also those in which it is the mere “effect”. This change, introduced into the final text, amplifies protection against the possible distorting effects of manipulative AI systems. 

2.3 The subliminal, manipulative and deceptive techniques must be “reasonably likely to cause significant harm” 

The Guidelines define harm under three broad categories:

However, the harm must be significant for the prohibition to apply. The determination of ‘significant harm’ is fact-specific, requiring a case-by-case assessment of each case’s particular circumstances; the individual effects should always be material and significant. According to the Guidelines, the assessment of the significance of the harm takes several factors into consideration:

When assessing harm, the Guidelines suggest taking a comprehensive approach that considers the possible immediate and direct harms associated with AI systems that deploy subliminal, deceptive, or manipulative techniques. 

The last requirement for identifying a prohibited practice is determining the likelihood of a causal link between the manipulative technique and the distorted behavior. In that regard, to avoid falling into the category of prohibited practices, providers and deployers are advised to take appropriate measures such as:

It is worth recalling that although the concept of significant harm is very similar to that of “significant effect” under Article 22 GDPR on automated decision-making (ADM), the two do not overlap perfectly, with the latter allowing a broader interpretation than the former (see FPF’s Report on ADM case law here). For example, profiling through ADM for political targeting could have a significant effect on citizens but not result in significant harm.

Not all forms of manipulation fall within the AI Act’s scope. Many persuasive techniques commonly used in advertising are legitimate because they operate transparently and respect individual autonomy. The Guidelines suggest that if an AI system appeals to emotions but remains transparent and provides accurate information, it falls outside the law’s scope.

Additionally, compliance with regulations like the GDPR helps providers and deployers demonstrate that transparency, fairness, and respect for individual rights and autonomy are upheld.

Furthermore, manipulation may be acceptable in some cases if it does not result in significant harm. For instance, in an example the Guidelines provide, an online music platform might use an emotion recognition system to detect users’ moods and recommend songs that align with their emotions while avoiding excessive exposure to depressive content.

3. The exploitation of vulnerabilities, particularly those due to age, disability or socio-economic status, as prohibited AI practice

Cases in which an AI system exploits the vulnerabilities of a single person or a specific group with the objective of distorting their behavior are designated as prohibited AI practices under Article 5(1)(b) AI Act.

There are four cumulative conditions to be fulfilled for the application of Article 5(1)(b):

  1. The distorted behavior must cause or be reasonably likely to cause significant harm to that person, another person, or a group of persons. 
  2. The practice must constitute the ‘placing on the market’, the ‘putting into service’, or the ‘use’ of an AI system. 
  3. The AI system must exploit vulnerabilities due to age, disability, or socio-economic situations. 
  4. The exploitation enabled by the AI system must have the objective or the effect of materially distorting the behavior of that person or group of persons. 
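
Because the four conditions are cumulative, the prohibition applies only when all of them are met at once. The decision structure can be sketched in code (a purely illustrative sketch; the class and field names are our own shorthand, not terms from the AI Act, and real assessments require legal analysis, not boolean flags):

```python
from dataclasses import dataclass

@dataclass
class PracticeAssessment:
    # Hypothetical flags mirroring the four cumulative Article 5(1)(b) conditions
    placed_on_market_put_into_service_or_used: bool
    exploits_age_disability_or_socioeconomic_vulnerability: bool
    materially_distorts_behavior: bool
    reasonably_likely_to_cause_significant_harm: bool

def is_prohibited_5_1_b(a: PracticeAssessment) -> bool:
    """All four conditions must hold; failing any single one takes the
    practice outside the Article 5(1)(b) prohibition."""
    return (
        a.placed_on_market_put_into_service_or_used
        and a.exploits_age_disability_or_socioeconomic_vulnerability
        and a.materially_distorts_behavior
        and a.reasonably_likely_to_cause_significant_harm
    )
```

The point of the sketch is the conjunction: for example, an AI system that exploits an age-related vulnerability but is not reasonably likely to cause significant harm would fall outside Article 5(1)(b), though it may still be regulated elsewhere.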

3.1. Exploitation of vulnerabilities due to age, disability, or a specific socio-economic situation

While vulnerability is not directly defined by the AI Act, according to the Guidelines, the concept covers a wide range of categories, including cognitive, emotional, physical, and other forms of susceptibility that may impact an individual’s or group’s ability to make informed decisions or influence their behavior. 

However, under the AI Act’s prohibited practices, the exploitation of vulnerabilities is only relevant if it involves individuals who are vulnerable due to their age, disability, or socio-economic circumstances. It is worth noting that a reference to an individual’s socio-economic situation was included in the final text of the AI Act after the amendments submitted by the European Parliament, which led to a wider scope of the Article 5(1)(b) prohibition in the final text, as compared to the initial European Commission proposal. 

Exploiting other categories of vulnerabilities than those expressly mentioned falls outside the scope of the Article 5(1)(b) prohibition. The Guidelines note that age, disability, or socio-economic vulnerabilities may, in principle, lead to a limited capacity to recognize or resist manipulative AI practices. The prohibition aims to prevent the exploitation of cognitive limitations stemming from age or health conditions. However, socio-economic status can also reduce an individual’s ability to recognize deceptive practices and may intersect with other discriminatory factors, such as belonging to an ethnic, racial, or religious minority group.

The Guidelines share a number of examples of the exploitation of vulnerable people based on their age that fall under the prohibited practices, including: 

In the case of the exploitation of vulnerable people based on disabilities, the Guidelines mention the example of a therapeutic chatbot aimed at providing mental health support and coping strategies to persons with cognitive disabilities, which could exploit their limited intellectual capacities to influence them to buy expensive medical products. 

When the exploitation concerns vulnerable people based on their socio-economic situation, an example mentioned is a predictive AI algorithm used to target people who live in low-income postcodes with advertisements for predatory financial products. 

3.2. For the Article 5(1)(b) prohibition to apply, AI practices have to materially distort behavior and be reasonably likely to cause significant harm 

As previously noted, a substantial impact is required for the practice to fall within scope, even though intention is not a necessary element, as the provision also covers the mere effect (see Sections 2.2 and 2.3). Similarly to the conditions of Article 5(1)(a), as explored above, the AI practice has to be reasonably likely to cause significant harm. It is worth mentioning that the harms in this case may be particularly severe and multifaceted due to the increased susceptibility of the vulnerable group in question. Risks of harm that might be deemed acceptable for adults are often considered unacceptable for children and other vulnerable groups.

4. Areas of interplay between the two prohibitions, and between the prohibitions and other EU laws, including the UCPD, GDPR, and DSA

4.1. Tiered approach to the interplay between Articles 5(1)(a) and (b) 

Whereas the Article 5(1)(a) prohibition covers mainly the use of subliminal and manipulative techniques, Article 5(1)(b) focuses on the targets of AI exploitation, particularly individuals considered vulnerable due to age, disability, or socio-economic circumstances. 

However, there may be instances where both Articles seem applicable. In such cases, examining the predominant aspect of the exploitation is essential. If the exploitation does not explicitly relate to one of the vulnerable groups previously discussed, Article 5(1)(a) applies, taking into consideration that it also covers the exploitation of vulnerabilities in groups outside those listed in Article 5(1)(b). When the exploitation specifically targets the groups identified in Article 5(1)(b), then the practice falls under this latter prohibition.

4.2. Interplay with the GDPR obligations to ensure fairness and data protection by design

The protection of individuals from manipulative processes is also covered in various other European laws, including the GDPR. Under the GDPR, the principle of fairness—enshrined in Article 5(1)(a)—acts as an overarching safeguard ensuring that personal data is not processed in a manner that is unjustifiably detrimental, unlawfully discriminatory, unexpected, or misleading to the data subject. Information and choices about data processing must be presented in an objective and neutral way, strictly avoiding any deceptive, manipulative language or design choices. In fact, the European Data Protection Board (EDPB) explicitly identifies the use of “dark patterns” and “nudging” as violations of this fairness mandate, as these techniques subconsciously manipulate data subjects into making decisions that negatively impact the protection of their personal data. 

In its Guidelines 4/2019 on Data Protection by Design and by Default, the EDPB emphasizes that controllers must incorporate fairness into their system architectures from the outset, proactively recognizing power imbalances and granting users the highest degree of autonomy over their data. This means choices to consent to or abstain from data sharing must be equally visible, and platforms cannot use invasive default options or deceptive interfaces to lock users into unfair processing. 

The profound risks of such subliminal and deceptive techniques are illustrated in the EDPB’s Binding Decision 2/2023 and the Irish Data Protection Commission’s corresponding final decision regarding TikTok. In these rulings, the authorities found that TikTok infringed the principle of fairness by utilizing deceptive design patterns to nudge child users toward public-by-default settings. TikTok has challenged these findings in a case now pending at the CJEU.

Beyond social media interfaces, the EDPB has also stressed the dangers of subliminal manipulation in democratic processes. In its Statement 2/2019 on the use of personal data in political campaigns (the Cambridge Analytica case), the EDPB warns that predictive tools used to profile people’s personality traits, moods, and points of leverage pose severe societal risks. When these sophisticated profiling techniques are used to target voters with highly personalized messaging, they not only infringe upon the fundamental right to privacy but also threaten the integrity of elections, freedom of expression, and the fundamental right to think freely without being subjected to unseen psychological manipulation. 

Synthesizing EDPB decisions and guidelines: to counteract these deceptive techniques across all sectors, the fairness principle mandates that controllers respect data subject autonomy, avoid exploiting user vulnerabilities, and ensure that individuals are never coerced into abandoning their privacy through unfair technological architectures. 

Importantly, these GDPR rules apply without such high thresholds, making them particularly relevant even where the conditions of the AI Act prohibitions are not met. This is why clarity about the interplay of the two regulations is essential for practical implementation.

4.3. Interplay with other EU laws: UCPD, DSA

The AI Act serves to complement or expand the provisions of existing EU law. For instance, unlike EU consumer protection laws, Articles 5(1)(a) and 5(1)(b) of the AI Act extend protection beyond consumers to encompass any individual. As a result, it must be considered alongside other legal frameworks such as the UCPD, the GDPR, the DSA, the political advertising regulation, and EU product safety legislation. 

For example, the UCPD aims to protect individuals from misleading information that could lead them to purchase goods they would not otherwise have bought. It also offers greater protection to vulnerable individuals, such as the elderly and children. The UCPD overlaps partly with the Article 5(1)(a) and (b) prohibitions, though not entirely. Firstly, the UCPD is a Directive, not a Regulation, under EU law; secondly, it only protects consumers (those “acting outside their trade, business, craft or profession”). The prohibitions in Article 5 AI Act, by contrast, protect everyone, irrespective of their status as “consumer”, “patient”, “student”, or “taxpayer”, to give some examples.

Furthermore, the scope of the UCPD is limited to transactional decisions, not all decisions. For example, a surgeon persuaded by an AI system’s manipulative or deceptive techniques to operate on a patient in one way rather than another would not be covered by the UCPD. By contrast, both sets of rules will apply in all cases where AI systems are used to subliminally manipulate a consumer’s decision-making autonomy.

By analogy, the scope of the DSA is limited to what happens on online platforms, and when it comes to deceptive design, the rules in Article 25 DSA are relevant only where the GDPR is not applicable, so the cases in which both the AI Act and the DSA apply are limited. 

But there are other provisions of the DSA that could be relevant at the intersection with prohibited AI practices. For example, the DSA pays special attention to the prohibition of profiling using special categories of personal data (as defined by Article 9 GDPR) on online platforms, given the possible manipulative effect of disinformation campaigns that can lead to a negative impact on public health, public security, civil discourse, political participation, and equality (Recitals 69 and 95 DSA). Therefore, if bots and deepfakes spread information online to convince vulnerable individuals (such as the elderly, children, and economically disadvantaged individuals) to purchase high-profit financial products, both the DSA and the AI Act would apply.

Compliance with these laws can help mitigate harm and reduce manipulative effects. For example, suppose that a very large online platform has conducted a risk assessment to assess systemic risk (as required by Article 34 DSA) and a data protection impact assessment (as required by Article 35 GDPR in certain circumstances). In this case, it will be easier for such a platform to identify whether any of its AI systems may fall under the prohibited uses listed in Article 5 AI Act, and adopt mitigating measures accordingly.

5. Concluding Reflections and Key Takeaways

There is a high threshold for falling under the Articles 5(1)(a) and (b) prohibitions.  

To fall under the prohibitions in Article 5(1)(a) or (b), an AI practice would have to fulfil several cumulative conditions. Interpreting the Guidelines, this high threshold is designed to ensure that only very specific AI use cases and applications fall within the scope of the prohibitions. That said, the final text of the AI Act ended up being broader in scope than the European Commission’s initial proposal.

It is important to note that even where this threshold is not met, EU law would still limit some manipulative and deceptive practices: through the GDPR’s provisions on fairness and data protection by design when personal data is processed, or through some of the DSA’s rules when very large online platforms are involved. 

The prohibition applies even when there is no intention to manipulate. Even absent a deliberate intention to influence a person’s decision, Article 5(1) could still apply, since the provision also covers the harmful effect of manipulating and exploiting individuals or groups. To mitigate potential risks, providers may adopt transparency measures and implement appropriate safeguards to prevent harmful outcomes or consequences. In doing so, it is important to keep in mind that even where the use of a specific AI system does not meet the cumulative conditions of the Article 5(1) prohibitions, it is nevertheless highly likely to be considered a high-risk AI system under Article 6 AI Act.

Compliance with other laws can help demonstrate compliance with the AI Act.

The Guidelines highlight that if the AI provider shows compliance with relevant EU legislation on transparency, fairness, risk assessment, and data protection, it may contribute to demonstrating compliance with the AI Act’s requirements.  

Q&A With FPF Vice President for U.S. Policy, Matthew Reisman

In a new Q&A, our new Vice President for U.S. Policy, Matthew Reisman, takes a deeper look at the privacy landscape: his interests in the space, what to look forward to in the U.S. and in the AI sector, and what stakeholders should pay close attention to.

What brought you into the privacy and data policy space? What drew you into working in this field/subject matter in particular? 

I was drawn to working in public policy generally because I hoped to have opportunities to improve people’s lives and the communities and societies we live in–and it’s hard to think of a space where that’s more true than data and technology. In the early years of my career, I was struck by the breathtaking pace of change in technology and the ways it was transforming our lives–and yet so many of the principles to guide its development and use remained nascent. I think that remains true today. All of us who care about building responsible public policy and governance for technology have the opportunity to create the path forward together, and I find that terrifically exciting.

You have an extensive background in the data privacy landscape across a range of issues that continue to evolve. What particular sector is one to watch in the U.S.?

As a community, we have been wrestling with how to approach privacy in the context of AI systems: the challenge is to ensure that these tools benefit as broad a spectrum of people, organizations, and society as possible while protecting the rights, freedom, and dignity of individuals. Even as we continue to work through foundational concepts for privacy in the age of AI, it is important that we anticipate the new challenges we will face as the technology continues to evolve. 

To that end, it feels like we are on the cusp of major steps forward for spatial artificial intelligence – where AI systems are enabling richer interactions with the physical world. There are so many potentially beneficial applications for spatial intelligence, from autonomous vehicles, to logistics, to healthcare, just to name a few. 

What else are you thinking about in the AI sector? What is the most timely issue that lawmakers, practitioners, or policymakers should consider the most in relation to AI? 

AI agents have been on many folks’ minds over the past year, and I think rightly so. 2026 feels like a breakout moment for agents for both enterprise and consumer applications. I was recently experimenting with coding agents for some personal projects and experienced “wow” moments similar to those I felt when first trying text-generation LLM tools several years ago. Agents offer exciting potential benefits for individuals, organizations, and society–and to realize them, we will need to work together on principles and standards for responsible development and deployment.

You have worked within the business, government, and nonprofit sectors. Given the breadth of diverse experience that you are now bringing to FPF, what continues to surprise you about the U.S. data privacy landscape across the board? 

It has been fascinating to me to see how privacy and adjacent policy issues have become prominent in everyday discourse in nearly every sector of the economy and society, and nearly every facet of our lives, from the workplace to the family dinner table. I think the factor driving this is the central role of data in virtually every system we interact with–at home, at school, and in our interactions with businesses and government agencies. It’s hard to imagine a time soon when these issues will lessen in importance, so I anticipate we’ll be talking about them with co-workers, teachers, and family and friends alike for the foreseeable future.

What do you find unique about FPF and its approach to bringing together academics, business, and thought leaders in facilitating discussion in privacy matters in the U.S. and abroad? 

FPF fulfills a unique and critical role by bringing together the full range of stakeholders who are striving to ensure that technology and data are used in ways that are responsible and beneficial for individuals, organizations, and society. It is a place that embodies both timeless values and intellectual rigor: when you meet FPF’ers, you quickly realize that they carry an infectious passion for the subject matter, a commitment to excellence in analysis and research, a gift for facilitation of meaningful and productive conversations, and a deeply held belief in the potential for their work to make a difference. I admired and was inspired by FPF’s work as an external stakeholder, and now that I’m here, I only feel those sentiments more strongly. It’s a special place. 

From Proposal to Passage: Enacted U.S. AI Laws, 2023–2025

Over the past three years, lawmakers across the United States have increasingly enacted AI-related laws that shape the development and deployment of AI systems. Between 2023 and 2025, the Future of Privacy Forum tracked 27 pieces of enacted AI-related legislation across 14 states, along with one federal law (the TAKE IT DOWN Act), that carry direct or indirect implications for private-sector AI developers and deployers. Notably, most enacted AI laws are already effective as of 2026, requiring entities to begin navigating compliance obligations. To support stakeholders, FPF has compiled a resource documenting key AI laws enacted from 2023 to 2025, which can be found below.

These enacted laws span a wide range of policy areas, reflecting experimentation in regulatory scope among lawmakers. In 2025 alone, states enacted laws addressing frontier model risk (such as California’s SB 53 and New York’s RAISE Act), generative AI transparency, AI use in health care settings, liability standards, data privacy, innovation, and synthetic content. Additionally, one of the clearest trends among enacted laws in 2025 included the growing focus on AI chatbots. Five states (California, Maine, New Hampshire, New York, and Utah) enacted chatbot-specific laws emphasizing transparency and safety protocols, particularly for sensitive use cases involving mental health and emotional companionship.

While the majority of these AI laws have already taken effect, a small number have delayed or phased-in effective dates that stakeholders should continue to track:

The broad diversity within 2025 AI bill categories contrasts with 2024, when laws such as the Colorado AI Act signaled a more uniform legislative emphasis on high-risk AI systems and automated decision-making technologies (ADMT) used in consequential decision-making. As analyzed in FPF’s State of State AI reports from 2024 and 2025, AI legislative efforts have shifted away from broad, framework-style laws and toward narrower measures tailored to specific use cases and technologies. This trend may also offer a preview of what is to come for enacted AI regulation in 2026: increased sector-specific regulation, heightened attention to sensitive populations such as minors, and a growing emphasis on substantive requirements.

Red Lines under the EU AI Act: Understanding ‘Prohibited AI Practices’ and their Interplay with the GDPR, DSA

Blog 1/ Red Lines under the EU AI Act Series  

The EU AI Act prohibits certain AI practices in the European Union (hereinafter also “the Union” or “the EU”), at the top of the pyramid of its layered approach: harmful manipulation and deception, social scoring, individual risk assessment, untargeted scraping of facial images, emotion recognition, biometric categorization, and real-time remote biometric identification for law enforcement purposes. These are the “red lines” that the EU has drawn through the AI Act. “Red lines” in AI governance have been generally described as “specific boundaries that AI systems must not cross” and, in more detail, as “specific, non-negotiable prohibitions on certain AI behaviors or AI uses that are deemed too dangerous, high-risk, or unethical to permit”. Most “red lines” emerge from soft law or self-regulation; the AI Act is the first law globally to draw such lines, exemplifying the strict AI regulatory approach that the EU is pursuing. 

Prohibited AI practices are regulated by Article 5 of the AI Act, which became applicable in February 2025 (see a full timeline of when chapters of the AI Act become applicable). Starting on 2 August 2025, this provision also became enforceable by the designated authorities at Member State level or, as the case may be, by the European Data Protection Supervisor – the supervisory authority for EU institutions. Non-compliance triggers administrative fines of up to 35 million euros or up to 7% of the total worldwide annual turnover for the preceding financial year, whichever is higher. However, the supervision and enforcement landscape is highly fragmented and decentralized. 
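
The “whichever is higher” fine ceiling reduces to a simple maximum. As an illustrative sketch (the function name and input are our own, not from the AI Act, and the actual fine in any case is set by the competent authority up to this ceiling):

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Illustrative ceiling for Article 5 non-compliance fines:
    up to EUR 35 million or 7% of total worldwide annual turnover
    for the preceding financial year, whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# For a company with EUR 1 billion turnover, 7% is EUR 70 million,
# which exceeds the EUR 35 million floor.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```

The fixed EUR 35 million floor matters for smaller undertakings: any company with turnover below EUR 500 million still faces the same EUR 35 million ceiling.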

This blog is the first of a series that will explore each prohibited AI practice and its interplay with existing EU law, such as the General Data Protection Regulation (GDPR) and the Digital Services Act (DSA), starting from the Guidelines on Prohibited Artificial Intelligence Practices under the AI Act (hereinafter ‘the Guidelines’), published by the European Commission on 4 February 2025. The aim is to understand which AI systems and practices are within the scope of Article 5 of the AI Act, and to highlight potential areas of legislative overlap or lack of clarity. This is increasingly important at a time when the European Commission has prioritized addressing the interplay of the digital regulation acquis, with a view to amending parts of the AI Act and the GDPR through the Digital Omnibus initiative. While the initial proposal for the Digital Omnibus on AI does not seek to amend the AI Act’s prohibited practices requirements, multiple political groups of the European Parliament and several Member State governments are proposing amendments to expand the list of prohibited practices, particularly with regard to intimate deepfakes and Child Sexual Abuse Material. 

This blog continues with an introduction to the significance of the Guidelines and the place of the prohibited practices within the broader layered architecture of the AI Act, tailored to the severity of risks (1), details on the definitions and scope of the prohibited practices (2), and an analysis of the interplay of the prohibited AI practices with the GDPR and DSA (3), before the Conclusions (4) highlight key takeaways: 

1. Entry into force of prohibited AI practices under the AI Act: A year on 

Prohibited practices under Article 5 of the AI Act entered into force on 2 February 2025 and became enforceable on 2 August 2025. However, so far, no enforcement or other regulatory action in relation to prohibited AI practices has been announced. 

About a year ago, on 4 February 2025, the European Commission released Guidelines on Prohibited Artificial Intelligence Practices under the AI Act. The AI Act regulates the placing on the market, putting into service, and use of AI systems across the Union on the basis of harmonized rules and a tiered approach based on the severity of the risks posed by some AI systems. While there are four risk categories in the AI Act, the Guidelines provide legal explanations and practical examples on AI practices that are deemed unacceptable due to their potential risks to fundamental rights and freedoms, and are therefore prohibited. 

While the Guidelines are non-binding, they offer the Commission’s first interpretation of the Article 5 prohibitions, as well as crucial insights into its own analysis of the interplay between core requirements of the AI Act and other EU law, including (but not limited to) the GDPR and the DSA. In publishing the Guidelines, the Commission explicitly acknowledged that any authoritative interpretation of the AI Act ultimately resides with the Court of Justice of the European Union (CJEU), and noted that the Guidelines may be reviewed or amended in light of relevant future case law or enforcement actions by market surveillance authorities. However, while enforcement actions under the AI Act are yet to emerge, an analysis can already be made of the interplay between the Commission’s Guidelines and existing CJEU case law, as well as decisions by Data Protection Authorities (DPAs) under the GDPR. 

This first blog in our series on ‘Red Lines under the EU AI Act’ highlights how the Commission’s Guidelines take a scaled approach to delineating the practices which fall within and outside of the scope of prohibited practices. The Guidelines highlight the close interplay between Articles 5 (on prohibited AI practices) and 6 (on high-risk AI systems) of the AI Act, and note that where an AI system does not fulfil the requirements for prohibition under the AI Act, it may still be unlawful or prohibited under other laws such as the GDPR. 

  2. From emotion recognition to social scoring via AI systems: Overview of prohibitions under Article 5 of the AI Act

The tiered regulatory approach of the AI Act takes into account four risk categories of AI systems, on the basis of which scaled obligations are imposed: unacceptable risk, high risk, transparency risk, and minimal to no risk. This analysis zooms in on unacceptable risk, as found in Article 5 AI Act, which prohibits the placing on the EU market, putting into service or use of AI systems for manipulative, exploitative, social control or surveillance practices. Of note, Article 5 is framed such that technologies or AI systems themselves are not prohibited; rather, “practices” involving specific AI systems that pose unacceptable risks are. This framing is different from the one in Chapter III of the AI Act, which classifies and regulates systems themselves as “high-risk AI systems.”

The prohibited practices are, by their inherent nature, deemed to be especially harmful and abusive due to their contravention of fundamental rights as enshrined in the EU Charter of Fundamental Rights. The Guidelines issued by the European Commission highlight Recital 28 of the AI Act by reiterating that the impacts of prohibited AI practices are not limited to the right to personal data protection (Article 8 EU Charter) and the right to a private life (Article 7), but they also pose an unacceptable risk to the rights to non-discrimination (Article 21), equality (Article 20), and the rights of the child (Article 24). 

Prohibited AI practices under Article 5(1) of the AI Act include: (a) manipulative or deceptive techniques that materially distort behavior; (b) exploitation of vulnerabilities due to age, disability, or social or economic situation; (c) social scoring; (d) predicting the risk of criminal offences based solely on profiling or personality traits; (e) untargeted scraping of facial images to build facial recognition databases; (f) emotion recognition in the workplace and education institutions; (g) biometric categorization to infer sensitive attributes; and (h) real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, subject to narrow exceptions.

2.1. The Guidelines extend the scope of prohibited AI practices to include those related to general-purpose AI systems 

In defining the material scope of Article 5 AI Act, the Guidelines expand upon the definitions of “placing on the market, putting into service or use” of an AI system. This is important, because all prohibited practices under Article 5(1) AI Act, from letters (a) to (g), refer to “the placing on the market, the putting into service or the use of an AI system that (…)” engages in a specific practice defined under each of the letters of the provision. Therefore, understanding the definitions of these terms is essential for the application of the “prohibitions”.

“Placing on the market” is the first making available of an AI system on the Union market, for distribution or use in the course of a commercial activity, either for a fee or free of charge (see Articles 3(9) and 3(10) AI Act for full definitions). Placing an AI system on the Union market is considered as such regardless of the means of supply, whether through an API, direct downloads, via cloud or physical copies. 

“Putting into service” refers to the supply of an AI system for first use to the deployer or for own use in the Union for its intended purpose (Article 3(11)), and covers both the “supply for first use” to third parties and “in-house development or deployment”1. The inclusion of in-house development to the scope of Article 3(11) is a significant extension introduced by the Guidelines, considering the definition of “putting into service” in the AI Act only refers to “the supply of an AI system for first use directly to the deployer or for own use in the Union.” This interpretation might need further clarification, especially as Article 2(8) AI Act excludes “any research, testing or development activity regarding AI systems or AI models prior to their being placed on the market or put into service” from its scope of application.

Regarding the “use” of an AI system, which is not directly defined by the AI Act, the Guidelines specify that it should be similarly broadly understood to cover the use and deployment of AI systems at any point in their lifecycle, after having been put into service or placed on the market. Importantly, the Guidelines specify that “use” also includes any “misuse” that may amount to a prohibited practice, making deployers responsible for reasonably foreseeable harms that may arise. 

Given the scope of the prohibited practices, the Guidelines focus on both providers and deployers of AI systems and highlight that continuous compliance with the AI Act is required during all phases of the AI lifecycle. For each of the prohibitions, the roles and responsibilities of providers and deployers should be construed in a proportionate manner, “taking into account who in the value chain is best placed” to adopt a mitigating or preventive measure.

The Guidelines acknowledge that while harms may often arise from the ways AI systems are used in practice by deployers, providers also have a responsibility not to place on the market or put into service AI and GPAI systems that are “reasonably likely” to behave or be used in a manner prohibited by Article 5 AI Act. It is important to highlight that the Guidelines extend the scope of Article 5 to general-purpose AI systems as well, even though they are not specifically called out by the provision (see para. 40 of the Guidelines). 

As highlighted above, the provision is drafted so as to target “practices” of AI, which opens the possibility that not only GPAI systems are covered, but also practices of agentic AI or any new shape or form of AI system that results in a practice described by Article 5 AI Act. Indeed, the Guidelines specifically mention that the “prohibitions apply to any AI system, whether with an ‘intended purpose’ or ‘general purpose.’” It is worth noting, however, that the Guidelines address prohibitions in relation to general-purpose AI systems rather than models, recalling that such systems are indeed based on general-purpose AI models but “have the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems” (Article 3(66) AI Act). 

2.2. Purposes that do not fall within the scope of the AI Act, and practices that do

The Guidelines note that the AI Act expressly excludes from its scope AI systems used for national security, defence, and military purposes (Article 2(3)). For this exclusion to apply, the AI system must be placed on the market, put into service or used exclusively for such purposes. This means that so-called “dual use” AI systems, such as those serving both national security and civilian or law enforcement purposes, do fall within the scope of the law. A direct example from the Guidelines notes that: “if a company offers a RBI (remote biometric identification) system for various purposes, including law enforcement and national security, that company is the provider of that dual use system and must ensure its compliance” with the AI Act (emphasis added). 

In addition to judicial and law enforcement cooperation with third countries, research and development activities also fall outside the scope of the AI Act. Indeed, as also recalled above, the AI Act does not apply to “any research, testing or development activity regarding AI systems or AI models prior to their being placed on the market or put into service” (Article 2(8)). The Guidelines view this exemption as a natural continuation of the AI Act’s market-based logic, which applies to AI systems once they are placed on the market. However, this raises consistency issues with how the same Guidelines include “in-house development or deployment” of AI systems in the scope of “putting into service” (see also Section 2.1. above).

It is worth noting that the Guidelines explicitly state that the research and development exclusion does not apply to testing in real-world conditions, nor to cases where experimental systems are eventually placed on the Union market. The testing of AI systems in real-world conditions may only be carried out in AI regulatory sandboxes, and in full compliance with other Union law, including the GDPR insofar as personal data processing is concerned. 

The Guidelines also note that purely personal, non-professional activities similarly fall outside of the AI Act’s scope (Article 2(10)). This includes, for example, an individual using a facial recognition system at home. However, the Guidelines are careful to note that the facial recognition system as such remains within the scope of the AI Act as regards the obligations of providers of such systems to ensure compliance, even in full knowledge that the system is intended to be used by natural persons for purely non-professional purposes or activities. 

The Guidelines take an overall cautious approach in delineating the purposes and practices which fall outside the scope of the AI Act through consistent reference to Recitals 22 to 25. The Recitals recall and make clear that providers and deployers of AI systems which fall outside the scope of the AI Act may nevertheless have to comply with other Union laws that continue to apply. 

  3. Interplay of the AI Act’s Prohibitions with the High-Risk Designation and other Union Law

3.1. A scaled approach to the interplay between high-risk AI systems and prohibited AI practices  

The Guidelines highlight key areas of interplay between the different risk categories, showing a scaled approach in the AI Act’s risk designation. Importantly, the Guidelines note the close relationship between Article 5 on prohibited practices, and Article 6 on high-risk AI systems. They note that “the use of AI systems classified as high-risk may in some cases qualify as prohibited practices in specific circumstances” and, conversely, most AI systems that fall under an exception from a prohibition listed in Article 5 will qualify as high-risk. This approach clarifies yet again that Article 5 is not meant to prohibit a specific technology, but practices or uses of technology.

An example where Articles 5 and 6 of the AI Act should be considered in relation to each other is the case of AI-based scoring systems, such as credit scoring, which will be considered high-risk if they do not fulfil the conditions for the social scoring prohibition as outlined in Article 5(1)(c). While not specifically mentioned by the Guidelines in this context, it is worth noting that Courts and DPAs across the EU have been active in cases involving automated credit scoring practices under Article 22 GDPR on automated decision-making (ADM), as well as in cases that may amount to “profiling”. The notion of “profiling” under the GDPR is particularly relevant in the context of understanding Article 5(1)(d) AI Act. As such, in addition to taking into full account the risk designations under Articles 5 and 6 AI Act, it is also crucial to note the ADM prohibition under Article 22 GDPR, as compliance with one law may not automatically equate to compliance with the other. 

3.2. Interplay between the prohibited AI practices under the AI Act with the GDPR and DSA 

The Guidelines acknowledge the dichotomy between the AI Act and other Union law by recalling that, as a horizontal law applying across all sectors, the Act is without prejudice to legislation on the protection of fundamental rights, consumer protection, employment, the protection of workers and product safety. They also frame the goal of the AI Act and its preventive logic in the sense that it provides additional protection by addressing potential harms arising from AI practices which may not be covered by other laws, including by addressing the earlier stages of an AI system’s lifecycle. 

The Guidelines expressly highlight that where an AI system may not be prohibited under the AI Act, it may still be prohibited or unlawful under other laws because of, for example, “the failure to respect fundamental rights in a given case, such as the lack of a legal basis for the processing of personal data required under data protection law”, where, for instance, the GDPR is applicable, including extra-territorially. 

Crucially, the Guidelines acknowledge that in the context of prohibitions, the interplay between the AI Act and data protection law is particularly relevant, since AI systems often process personal data. They specify that laws including the GDPR, the Law Enforcement Directive, and the EU Data Protection Regulation applying to EU institutions (EUDPR), “remain unaffected and continue to apply alongside the AI Act”, noting the complementarity of the Act with the EU data protection acquis. 

This statement in the Guidelines seems weaker than the provision in the AI Act, which states that the AI Act “shall not affect” the GDPR, the EUDPR, the ePrivacy Directive or the Law Enforcement Directive (Article 2(7) AI Act). This technically means that the AI Act is without prejudice to the GDPR and the rest of the EU data protection acquis. This might create some complex compliance situations in practice, and will require a broad and comprehensive understanding of the EU digital rulebook as a whole, noting that its component parts cannot be read in isolation. For instance, which law prevails if a practice prohibited under the AI Act overlaps with solely automated decision-making that involves personal data, legally or significantly affects an individual, and lawfully meets one of the exceptions under Article 22 GDPR? Based on Article 2(7), the AI Act is not designated as lex specialis.

In addition to data protection law, the Digital Services Act (DSA) is similarly deemed relevant in the context of the AI Act’s prohibitions. The Guidelines highlight that these apply in conjunction with the relevant obligations on the providers of intermediary services (defined by Article 3(g) DSA) when AI systems or models are embedded in such services. Further, the AI Act and its prohibitions do not affect the application of the DSA’s provisions on the liability of such providers, as set out in Chapter II DSA, or existing or future liability legislation at Union or national levels. In the context of liability legislation, the Guidelines refer to Directive (EU) 2024/2853 on liability for defective products, and the now withdrawn AI Liability Directive. 

3.3. Notes on Enforcement of the AI Act’s Prohibitions and Penalties: Fragmentation and Decentralization 

The Guidelines recall that market surveillance authorities (MSAs), as designated by EU Member States, are responsible for enforcing the AI Act and its prohibitions. Member States had until 2 August 2025 to designate one or multiple MSAs, with some countries having already assigned the role to their national DPA with regard to certain parts of the AI Act (e.g., high-risk AI systems). Competent authorities can take enforcement actions in relation to the prohibitions on their own initiative or following a complaint by any affected person or other natural or legal person. The staggered timeline between the date of applicability of the AI Act’s provisions on prohibited uses and the deadline for designating the responsible authorities to enforce them has been causing some legal uncertainty.   

A review of Member States that have already appointed MSAs at the time of writing shows, for the most part, a decentralized approach to enforcing the AI Act’s prohibited practices. Such an approach, which assigns supervision and enforcement roles to a variety of authorities depending on the sector they regulate and their area of expertise, is typical for EU product safety legislation. 

For example, on 4 February this year, Ireland published its Regulation of Artificial Intelligence Act 2026, the national law that, once adopted, will implement the AI Act’s provisions. The enforcement approach proposed by the Act is to establish the AI Office of Ireland, on or before 2 August 2026, which will act as the central coordinator and Single Point of Contact (Article 70 AI Act). Under this umbrella, the Act also proposes to assign monitoring and enforcement powers to different existing authorities for different prohibited practices: the Central Bank of Ireland will enforce prohibited practices in respect of financial services regulated by it; the Workplace Relations Commission will enforce prohibited practices used in employment (Article 5(1)(f) AI Act); Coimisiún na Meán will be responsible for “certain” prohibited practices in respect of online platforms (as defined by the DSA); and the Irish Data Protection Commission (DPC) will also be responsible for “certain parts” of the prohibited practices. While the Act does not yet specify which “certain parts” the Irish DPC will be responsible for monitoring, the draft already gives an indication of the decentralized approach to enforcing the rules on prohibited practices at national level, with responsibility assigned to a variety of authorities. 

In France, the CNIL is responsible for monitoring compliance of the prohibited practices for predictive policing, the untargeted scraping to develop facial recognition databases, emotion recognition in the workplace and education institutions, biometric categorization, and real-time remote biometric identification (Articles 5(1)(d) – (h)). Responsibility for monitoring compliance with Articles 5(1)(a) and (b) lies with the Audiovisual and Digital Communication Regulatory Authority and the Directorate General for Competition, Consumer Affairs and Fraud Control. Here we can also see responsibility for monitoring prohibited practices being assigned to more than one regulator, depending on their existing area(s) of regulatory focus. 

Finally, the Guidelines state that non-compliance with the AI Act’s prohibitions constitutes the “most severe infringement” of the law, and is therefore subject to the highest fines. Providers and deployers engaging in prohibited AI practices can be fined up to EUR 35 000 000 or 7% of total worldwide annual turnover, whichever is higher. 

  4. Closing reflections and key takeaways

The AI Act doesn’t prohibit technology, but uses or practices of technology that pose unacceptable risk

Article 5 of the AI Act is framed such that technologies or AI systems themselves are not directly prohibited, but “practices” involving specific AI systems that pose unacceptable risk are. Such systems are, in turn, tied to certain actions, specifically the “placing on the market, putting into service or use” of an AI system. These actions are also interpreted broadly such that, for example, the “use” of an AI system includes both its intended use and potential misuse. The broad framing ensures that both providers and deployers of AI systems consider all phases of the AI lifecycle and approach compliance in a proportionate manner, taking into account “who in the value chain is best placed to adopt a mitigating or preventive measure.”

Practices of “General Purpose AI Systems” may also fall under the “prohibitions” of the EU AI Act

Equally of note is that the Guidelines extend the Article 5 prohibitions to practices related to any AI system, including general-purpose AI systems (rather than models themselves), even though such systems are not expressly mentioned in the AI Act provision. The Guidelines acknowledge that while harm often arises from the way specific AI systems are used in practice, both deployers and providers have a responsibility not to place on the market or put into service AI systems, including general-purpose AI systems, that are “reasonably likely” to behave in ways prohibited by Article 5 of the AI Act. 

“In-house development” is at the same time excluded from the application of the AI Act and included in the “putting into service” definition in the Guidelines, needing further clarification

As shown above, the Guidelines provide clarifications about what “placing on the market”, “putting into service” and “use” of an AI system mean, which reveal a broad interpretation of the legal definitions enshrined in the AI Act. Notably, “putting into service” is expanded to mean not only “supply for first use”, but also “in-house development or deployment” (see Section 2.1 above). At the same time, Article 2(8) of the AI Act excludes from the scope of application of the regulation any “testing or development activity” regarding AI systems and models “prior to their being placed on the market or put into service”. Further clarification from the European Commission about this part of the Guidelines is needed for legal certainty.

The interplay of the prohibitions under the AI Act and the GDPR needs legal certainty

The Commission’s Guidelines on the AI Act’s prohibitions adopt a scaled approach to delineating, based on the level of risk, which AI practices or uses may be outright prohibited and which may instead fall under the Article 6 high-risk designation. The logic of the scaled approach also extends beyond the AI Act, as the Guidelines caution that while an AI practice may not fall under the Article 5 prohibitions, it may still be unlawful under other Union laws, such as the GDPR and DSA. What is less clear, though, is what would happen if an AI practice potentially prohibited under the AI Act would otherwise be allowed by other legislation designated as prevailing over the AI Act, particularly the GDPR. For example, Data Protection Authorities have in the past allowed some facial recognition systems to be used, and have found remediable infringements related to the use of emotion recognition systems, suggesting that such systems could be lawful under the GDPR if all conditions highlighted in the relevant decision were met. The European Data Protection Board could support consistency of interpretation and application of the two legal regimes with dedicated guidelines.

The enforcement architecture of prohibited AI practices exhibits significant decentralization and fragmentation, including at national level

There are two layers of decentralization of the enforcement architecture for the prohibited AI practices: first, they are primarily left to national competent authorities as opposed to a centralized authority at EU level; second, at national level, multiple authorities have often been designated within one jurisdiction, as the cases of Ireland and France described above show. This level of decentralization is expected to lead to fragmentation of how the relevant provisions of the AI Act are applied. This landscape is further complicated by the interplay of the prohibitions under the AI Act and the GDPR, through the role of supervisory authorities over processing of personal data and their independence as guaranteed by Article 16(2) of the Treaty on the Functioning of the European Union and Article 8(3) of the EU Charter of Fundamental Rights. 

Finally, besides the close interaction between the various provisions of the AI Act themselves, the Guidelines also highlight the significant interplay between the Act and other Union laws. The ways in which these interactions may play out in the context of the several prohibited practices, such as emotion recognition and real-time biometric surveillance, will be explored in more detail in future blog posts in this series. Meanwhile, a deep dive into the broad framing of the AI Act’s prohibited practices reveals that a similarly broad understanding of the data protection acquis and EU digital rulebook is required in order to fully make sense of, and comply with, key obligations for the development and deployment of AI systems across the Union. 

  1.  See para. 13 of the Guidelines, p. 4. ↩︎

Paradigm Shift in the Palmetto State: A New Approach to Online Protection-by-Design

South Carolina Governor McMaster signed HB 3431, an Age-Appropriate Design Code (AADC)-style law, on February 5, adding to the growing list of new, bipartisan state frameworks fortifying online protections for minors. Although HB 3431 is dubbed an AADC, its divergence from past models and unique blend of requirements drawn from a variety of other state laws may signal that youth privacy- and safety-by-design frameworks are undergoing a paradigm shift away from “AADCs” and into a new model for online protections entirely. South Carolina’s novel approach evolves the online design code schema beyond approaches seen in other jurisdictions through its focus on both privacy and safety risks, the way covered services must address those risks, the kinds of safeguards online services should provide to users and minors, enforcement priorities, and navigating constitutional pitfalls. 

For compliance teams, the need to unpack the law’s unique provisions is urgent: the law took effect upon approval by the Governor, meaning these requirements are now in effect. Further complicating the timing of compliance considerations, NetChoice filed a lawsuit on February 9 challenging the constitutionality of the Act on First Amendment and Commerce Clause grounds. NetChoice has requested a preliminary injunction to block enforcement of the law as litigation progresses. However, with an unclear litigation timeline, several newly effective legal obligations, and significant enforcement provisions carrying personal liability for employees, compliance teams may be stuck between two high-stakes options: (1) a risk of insufficient action and consequential liability if entities come into compliance more slowly while monitoring litigation outcomes; or, (2) a risk of sunk compliance costs, which could have been invested in other important compliance and trust and safety operations, if entities invest heavily in compliance now and the law is later overturned.

This blog post covers a few key takeaways, including the law’s scope, its duty of care, mandatory tools and default settings, processing restrictions, third-party audit requirements, and enforcement.

Please see our comparison chart for a full side-by-side analysis of how South Carolina’s approach compares against other state law protections for minors online.

Scope

South Carolina’s Act applies to any legal entity that owns, operates, controls, or provides an online service reasonably likely to be accessed by minors. Whereas prior comparable state laws typically limited the scope to for-profit entities, South Carolina seemingly extends application to non-profit and other non-commercial entities. This approach mirrors the legal entity framing adopted in Vermont’s and Nebraska’s AADCs, though those laws include narrower applicability thresholds. With respect to applicability threshold criteria, South Carolina aligns with the model set out in Maryland’s AADC, applying to entities that meet any one of the following: (1) $25 million or more in gross annual revenue; (2) the buying, selling, receiving, or sharing of personal data of more than 50,000 individuals; or (3) deriving more than 50 percent of annual revenue from the sale or sharing of personal data. 

An Evolving Approach to Design Protections & Enforcement

Duty of Care

Similar to Vermont’s AADC and state comprehensive privacy laws that incorporate heightened protections for minors, such as Connecticut’s and Colorado’s, South Carolina imposes a duty of care on covered online services. Significantly, South Carolina’s duty requires entities to exercise reasonable care to prevent heightened risks of harm to minors, including compulsive use, identity theft, discrimination, and severe psychological harm, among others. The obligation to “prevent” harms to minors diverges sharply from comparable duties of care, which only require entities to “mitigate” risks–seemingly placing a higher bar on entities’ compliance efforts compared to other online protection frameworks. Moreover, South Carolina includes two disclaimers regarding the application of the duty of care: (1) clarifying that “harm” is limited to circumstances not precluded by Section 230; and, (2) clarifying that entities are not required to prevent minors from intentionally “searching for content related to the mitigation of the described harms.” 

Mandatory Tools & Default Settings

South Carolina takes a Nebraska AADC-style approach to requiring comprehensive tools and protective default settings for minors–but with a twist. Notably, South Carolina requires covered services to provide extensive tools to all users of an online service, such as tools for disabling unnecessary design features, opting out of personalized recommendation systems (except for tailoring based on explicit preferences), and limiting the amount of time spent on a service or platform. For minors, the Act requires covered services to enable all tools by default, functionally achieving the same goals as high default settings requirements in other frameworks, like Vermont’s and Maryland’s AADCs. Additionally, South Carolina includes prescriptive requirements for the kinds of parental tools businesses must build and provide for parents to monitor and further limit minors’ use of online services–seemingly inspired by the parental tools obligations proposed in the federal Kids Online Safety Act (KOSA). Importantly, businesses in scope of several minor online protection frameworks should pay close attention to South Carolina’s expansive mandatory tools and default settings requirements–and the range of users for which these tools must be available–when assessing compliance impacts. 

Processing Restrictions

South Carolina’s new law includes a common component of other minor online protection frameworks: normative processing restrictions limiting the way covered online services can collect and use minors’ data, including restrictions on profiling and geolocation data tracking and a prohibition on targeted advertising. Notably, similar to Nebraska’s AADC, South Carolina also broadly prohibits covered entities’ use of dark patterns on a service. This goes far beyond many other privacy laws that instead prohibit dark patterns only insofar as they are used in obtaining consent or collecting personal data. Although the law as a whole is subject to Attorney General enforcement, South Carolina’s Act singles out the dark patterns prohibition as a violation of the South Carolina Unfair Trade Practices Act, which includes a private right of action.

Third Party Audits

One of the key issues hampering states’ implementation of AADC frameworks has been legal challenges to requirements for service providers to perform data protection impact assessments (DPIAs). The DPIA rules typically require covered online services to assess the likelihood of harm to children. For example, California’s AADC has been subject to litigation because, among other things, it included a requirement for businesses to assess and limit the exposure of children to “potentially” harmful content. The Ninth Circuit held that assessments that require a company to opine on content-based harms are constitutionally problematic, but it did not hold that DPIAs are entirely unconstitutional–yet the litigation caused some proponents of AADC-style laws to explore alternatives to DPIAs. 

Within this dynamic constitutional landscape, South Carolina shifts away from requiring covered entities to internally assess harms through DPIAs and instead requires covered entities to undergo annual third-party audits and publicly disclose the reports. Those audits must include detailed information on various aspects of the online service as it pertains to minors, including the purpose of the online service, for what purpose the online service uses minors’ personal and sensitive data, whether the service uses “covered design features” (e.g., infinite scroll, autoplay, notifications/alerts, appearance-altering filters, etc.), and a description of algorithms (an undefined term) used by the covered online service. This shift towards public disclosure of service assessment information may cause notable compliance difficulties and raise trade secret questions for covered online services, although it is unclear whether this unique ‘third-party audits’ approach addresses the underlying constitutional concerns highlighted in state AADC litigation. 

Enforcement

South Carolina authorizes the Attorney General to enforce the Act, allowing for treble financial damages for violations. Most significantly, South Carolina also authorizes the Attorney General to hold officers and employees personally liable for “willful and wanton” violations–a novel and severe enforcement mechanism not employed in comparable frameworks. However, personal liability for employees and officers is not entirely unheard of in the broader consumer protection and digital services enforcement context. For example, in an aggressive enforcement approach advanced by the Federal Trade Commission (FTC) under Chair Lina Khan, the agency pursued personal liability against senior executives at a public company for violations of the FTC Act. In a more recent example, the Kentucky Attorney General filed a consumer protection lawsuit against Character.AI and its founders alleging the company knowingly harmed minors in the operation of its companion chatbot product, exposing minors to “sexual conduct, exploitation, and substance abuse.” 

Conclusion

By adopting its novel approach, South Carolina adds to a growing state-level experiment that seeks to establish obligations to address and disclose risks of harm in online services and afford greater protections for minors within constitutional constraints. South Carolina’s novel blend of different state-level models, unique take on service assessments, and unusual enforcement approach may signal a broader fragmentation of online youth protection frameworks into three increasingly defined models: (1) data management-oriented heightened protections for minors embedded in state privacy laws; (2) age appropriate design codes that impose a fiduciary duty to act in children’s best interests, require age-appropriate design, and mandate DPIAs to assess foreseeable harms; and (3) a “protective design” model exemplified by South Carolina that synthesizes elements observed in the first two while uniquely integrating privacy and safety obligations. It remains to be seen how the emerging protective design model may influence ongoing state legislative efforts, impact business compliance efforts, and measure up against potential constitutional scrutiny.

From Chatbot to Checkout: Who Pays When Transactional Agents Play?

Disclaimer: Please note that nothing below should be construed as legal advice. 

If 2025 was the year of agentic systems, 2026 may be the year these technologies reshape e-commerce. Agentic AI systems are defined by the ability to complete more complex, multi-step tasks, and exhibit greater autonomy over how to achieve user goals. As these systems have advanced, technology providers have been exploring the nexus between AI technologies and online commerce, with many launching purchase features and partnering with established retailers to offer shopping experiences within generative AI platforms. In doing so, these companies have also relied on developments in foundational protocols (e.g., Google’s Agent Payment Protocol) that seek to enable agentic systems to make purchases on a person’s behalf (“transactional agents”). But LLM-based systems like transactional agents can make mistakes, which raises questions about what laws apply to transactional agents and who is responsible when these systems make errors. 

This blog post examines the emerging ecosystem of transactional agents, including examples of companies that have introduced these technologies and the protocols underpinning them. Existing US laws governing online transactions, such as the Uniform Electronic Transactions Act (UETA), apply to agentic commerce, including in situations where these systems make errors. Transactional agent providers are complying with these laws and otherwise managing risks through various means, including contractual terms, error prevention features, and action logs. 

How is the Transactional Agent Ecosystem Evolving? 

Several AI and technology companies have unveiled transactional agents over the past year that enable consumers to purchase goods within their interfaces rather than having to visit individual merchants’ websites. For example, OpenAI added native checkout features into its LLM-based chatbot that hundreds of millions of consumers already use, and Perplexity introduced similar features for paid users that can find products and store payment information to enable purchases. Amazon has also released a “Buy For Me” feature, which involves an agentic system that sends payment and shipping address information to third-party merchants so that Amazon’s users can buy these merchants’ goods on Amazon’s website. 

Many of these same companies are developing frameworks and protocols (e.g., A2A, AP2, UCP, ACP, and MCP) that can combine to facilitate transactional agents across e-commerce. At the same time, merchants are modifying their experiences to ensure their goods can reach transactional agent users.  

Application of Existing Laws (such as the Uniform Electronic Transactions Act)

As consumer-facing tools for agentic commerce develop, questions will arise about who is responsible when transactional agents inevitably make mistakes. Are users responsible for erroneous purchases that a transactional agent may make on their behalf? In these cases, long-standing statutes governing electronic transactions apply. The Uniform Electronic Transactions Act (UETA), a model law adopted by 49 out of 50 U.S. states, sets forth rules governing the validity of contracts undertaken by electronic means, and suggests that consumer transactions conducted by an agentic system can be considered valid transactions.

First, the UETA has provisions that apply to “electronic agents,” which are defined as “a computer program or an electronic or other automated means used independently to initiate an action or respond to electronic records or performances in whole or in part, without review or action by an individual.” This is a broad, technology-neutral definition that is not reserved for AI. It encompasses a range of machine-to-machine and human-to-machine technologies, such as automated supply chain procurement and signing up for subscriptions online. The latest transactional agents can take an increasing set of actions on a user’s behalf without oversight, such as finding and executing purchases, so these technologies could potentially qualify as electronic agents under the UETA.

This means that transactional agents can probably enter into binding transactions on a person’s behalf. Section 14 of the UETA indicates that this can occur even without human review when two entities use agentic systems to transact on their behalf (e.g., an individual user of a system that buys goods on their behalf and an e-commerce platform whose system can negotiate order quantity and price). At a time when agentic systems representing distinct parties interacting with each other are edging closer to reality, these systems could bind the user to contracts undertaken on their behalf despite the lack of human oversight. However, a significant caveat is that the UETA also says that individuals may avoid transactions entered into by transactional agents if they were not given “an opportunity for the prevention or correction of [an] error . . . .” This is true even if the user made the error. 

Finally, even if an agentic transaction is deemed valid and a mistake is not made, other legal protections may apply in the event of consumer harm. For example, a transactional agent provider that requires third parties to pay for their goods to be listed by the agent, or gives preference to its own goods, may violate antitrust and consumer protection law. There is also a growing debate over the application of other longstanding common law protections, such as fiduciary duties and “agency law.”

What Risk Management Steps are Transactional Agent Providers Taking to Manage Responsibility?

Managing responsibility for transactional agents can take varied forms, including contractual disclaimers and limitations, protocols that signal to third parties an agentic system’s authorization to act on a user’s behalf, and design decisions that reduce the likelihood of transactions being voided when errors occur (e.g., confirmation prompts that require users to authorize purchases).
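As a rough illustration of two of these measures – confirmation prompts and action logs – here is a minimal, hypothetical sketch. Every class, method, and string in it is invented for illustration and is not drawn from any real provider’s implementation:

```python
import time

# Hypothetical sketch: names and structure are invented for illustration,
# not taken from any real transactional agent provider's API.
class PurchaseAgent:
    def __init__(self):
        self.action_log = []  # append-only record of everything the agent does

    def _log(self, event, **details):
        self.action_log.append({"time": time.time(), "event": event, **details})

    def propose(self, item, quantity, price):
        """Record a proposed purchase; nothing is executed at this stage."""
        self._log("proposed", item=item, quantity=quantity, price=price)
        return {"item": item, "quantity": quantity, "price": price}

    def execute(self, proposal, user_confirmed):
        """Execute the purchase only after an explicit user confirmation."""
        if not user_confirmed:
            self._log("blocked", item=proposal["item"])
            return "blocked: awaiting user confirmation"
        self._log("executed", item=proposal["item"], quantity=proposal["quantity"])
        return f"purchased {proposal['quantity']} x {proposal['item']}"

agent = PurchaseAgent()
proposal = agent.propose("wool socks", 1, 9.99)
print(agent.execute(proposal, user_confirmed=False))  # blocked: awaiting user confirmation
print(agent.execute(proposal, user_confirmed=True))   # purchased 1 x wool socks
```

The confirmation gate reduces the likelihood that a transaction is later voided as an uncorrectable error, while the log creates a record of what the agent proposed and when the user approved it.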

Conclusion

Organizations are increasingly rolling out features that enable agentic systems to buy goods and services. These current and near-future technologies introduce uncertainty about who is responsible for agentic system transactions, including when mistakes are made, which is leading providers to integrate error prevention features, contractual disclaimers, and other legal and technical measures to manage and allocate risks. 

Looking ahead, there will be many more privacy, data governance, and risk management challenges to address. The uptake of transactional agents raises data governance considerations. As these technologies become more autonomous, organizations must decide to what extent transactional agents proactively infer consumer preferences and adapt actions based on their impact on a user’s financial wellbeing. Publishers and retailers also face the challenge of how to let transactional agents interact with their websites. This particular issue has fed tensions over who owns the direct consumer relationship in an agentic world (e.g., is it online marketplaces and information aggregators or the agentic system’s provider?). Even with applicable laws for transactional agents, the evolution of these technologies (e.g., less human oversight) and increased investment in these technologies will create new legal challenges for practitioners to address. 

FPF Retrospective: U.S. Privacy Enforcement in 2025

The U.S. privacy law landscape continues to mature as new laws go into effect, cure periods expire, and regulators interpret the law through enforcement actions and guidance. State attorneys general and the Federal Trade Commission act as the country’s de facto privacy regulators, regularly bringing enforcement actions under legal authorities both old and new. For privacy compliance programs, this steady stream of regulatory activity both clarifies existing responsibilities and raises new questions and obligations. FPF’s U.S. Policy team has compiled a retrospective looking back at enforcement activity in 2025 and outlining key trends and insights.

Looking at both substantive areas of focus in enforcement actions and the level of activity by different enforcers, the retrospective identified four notable trends in 2025: 

  1. California and Texas Lead Growing Public Enforcement of Comprehensive Privacy Laws: Comprehensive privacy laws may finally be moving from a period of legislative activity into a new era where enforcement is shaping the laws’ meaning, as 2025 saw a significant increase in the number of public enforcement actions.
  2. States Demonstrate Increasing Concern for Kids’ and Teens’ Online Privacy and Safety: As legislators continue to consider broad youth privacy and online safety legal frameworks, enforcers too are looking at how to protect youth online. Bringing claims under existing state laws, including privacy and UDAP statutes, regulators are paying close attention to opt-in consent requirements, protections for teenagers in addition to children under 13, and the online safety practices of social media and gaming services. 
  3. U.S. Regulators Go Full Speed Ahead on Location and Driving Data Enforcement: Building on recent enforcement actions concerning data brokerage and location privacy, federal and state enforcers have expanded their consumer protection enforcement strategy to focus also on first-party data collectors and the collection of “driving data.”
  4. FTC Prioritizes Enforcement on Harms to Kids and Teens, and Deceptive AI Marketing, Under New Administration: The FTC transitioned leadership in 2025, moving into a new era under Chair Andrew Ferguson that included a shift toward targeted enforcement activity focused on ensuring children’s and teens’ privacy and safety, and “promoting innovation” by addressing deceptive claims about the capabilities of AI-enabled products and services.

There are several practical takeaways that compliance teams can draw from these trends: obtaining required consent prior to processing sensitive data, including through oversight of vendors’ consent practices, identification of known children, and awareness of laws with broader consent requirements; ensuring that consumer controls and rights mechanisms are operational; avoiding design choices that could mislead consumers; considering if and when to deploy age assurance technologies and how to do so in an effective and privacy-protective manner; and avoiding making deceptive claims about AI products.

2026: A Year at the Crossroads for Global Data Protection and Privacy

There are three forces twirling and swirling to create a perfect storm for global data protection and privacy this year: the surprise reopening of the General Data Protection Regulation (GDPR), which will largely play out in Brussels over the coming months; the complexity and velocity of AI developments; and the push and pull over the field by increasingly substantial adjacent digital and tech regulations. 

All of this will play out with geopolitics taking center stage. At the confluence of some of these developments, the protection of children online and cross-border data transfers – with their other side of the coin, data localization, in the broader context of digital sovereignty – will be two major areas of focus.

1. The GDPR reform, with an eye on global ripple effects

The gradual reopening of the GDPR last year came as a surprise, without much debate or public consultation, if any. The regulation passed its periodic evaluation in the summer of 2024 with a recommendation for more guidance, better implementation suited to SMEs, and harmonization across the EU, as opposed to re-opening or amending it. Moreover, exactly one year ago, in January 2025, at the CPDP-Data Protection Day Conference in Brussels, not one but two representatives of the European Commission, in two different panels (one of which I moderated), were very clear that the Commission had no intention to re-open the GDPR. 

Despite this, a minor intervention was first proposed in May to tweak the size of entities under the obligation to keep a register of processing activities through one of the simplification Omnibus packages of the Commission. But this proved to just crack the door open for more significant amendments to the GDPR proposed later on, under the broad umbrella of competitiveness and regulatory simplification the Commission started to pursue emphatically. Towards the end of the year, in November 2025, major interventions were introduced within another simplification Omnibus dedicated to digital regulation. 

There are two significant policy shifts the GDPR Omnibus proposes that should be expected to reverberate in data protection laws around the world in the next few years. First, it entertains the end of technology-neutral data protection law. AI – the technology – is imprinted all over the proposed amendments, from the inconspicuous ones, like the new definition proposed for “scientific research”, to the express mention of “AI systems” in new rules created to facilitate their “training and operations” – including in relation to allowing the use of sensitive data and to recognizing a specific legitimate interest for processing personal data for this purpose. 

The second policy shift – and perhaps the most consequential for the rest of the data protection world – is the narrowing of what constitutes “personal data” by adding several sentences to the existing definition to transpose what resembles the relative approach to de-identification confirmed by the Court of Justice of the EU (CJEU) in the SRB case this September. To a certain degree, the proposed changes bring the definition back to pre-GDPR days, when some data protection authorities were indeed applying a relative approach in their regulatory activity. 

The new definition technically adds that the holder of key-coded data or other information about an identifiable person, which does not have means reasonably likely to be used to identify that person, does not process personal data even if “potential subsequent recipients” can identify the person to whom the data relates. Processing of this data, including publishing it or sharing it with such recipients, would thus be outside of the scope of the GDPR and any accountability obligations that follow from it.

If the proposed language ends up in the GDPR, it would likely mark a narrowing of the scope of application of the law, leaving little room for supervisory authorities to apply the relative approach on a case-by-case basis following the test that the CJEU proposed in SRB. This is particularly notable considering that the GDPR has successfully exported the current philosophy and much of the wording of the broad definition of personal data (particularly its “identifiability” component) to most data protection laws adopted or updated around the world since 2016, from California, to Brazil, to China, to India.

The ripple effects around the world of such significant modifications of the GDPR would not be felt immediately, but in the years to come. Hence, the legislative process unfolding this year in Brussels on the GDPR Omnibus should be followed closely. 

2. The Complexity and Velocity of AI developments: Shifting from regulating data to regulating models?

There is a lot to unpack here, almost too much. And this is at the core of why AI developments have an outsized impact on data protection. There is a lot of complexity in understanding the data flows and processes underpinning the lifecycle of the various AI technologies, making it very difficult to untangle the ways in which data protection applies to them. On top of that, the speed with which AI evolves is staggering. That said, there are a couple of particularly interesting issues at the intersection of AI and data protection that should be followed closely this year, with an eye towards the following years too.  

One of them is the intriguing question of whether AI models are the new “data” in data protection. Some of you certainly remember the big debate of 2024: do Large Language Models (LLMs) process personal data within the model? While it was largely accepted that personal data is processed during training of LLMs and may be processed as output of queries done within LLMs, it was not at all clear whether any of the informational elements related to AI models post-training, like tokens, vectors, embeddings or weights, can amount by themselves or in some combination to personal data. The question was supposed to be solved by an Opinion of the European Data Protection Board (EDPB) solicited by the Irish Data Protection Commission, which was published in December 2024.

Instead, the Opinion painted a convoluted regulatory answer by offering that “AI models trained on personal data cannot, in all cases, be considered anonymous”. The EDPB then dedicated most of the Opinion to laying out criteria that can help assess whether AI models are anonymous or not. While most, if not all, of the commentary around the Opinion usually focuses on the merits of these criteria, one should perhaps stop and first reflect on the framework of the analysis – namely, assessing the nature of the model itself rather than the nature of the bits and pieces of information within the model. 

The EDPB did not offer any exploration of what non-anonymous (so, then, personal?) AI models might mean for the broader application of data protection law, such as data subject rights. But with it, the EDPB may have – intentionally or not – started a paradigm shift for data protection in the context of AI, signaling a possible move from the regulation of personal data items to the regulation of “personal” AI models. However, the Opinion seemed to be shelved throughout last year, as it has not yet surfaced in any regulatory action. I would have forgotten about it myself if not for a judgment of a Court in Munich in November 2025, in an IP case related to LLMs.  

The German Court found that song lyrics in a training dataset for an LLM were “reproducibly contained and fixed in the model weights”, with the judgment specifically referring to how the models themselves are “copies” of those lyrics within the meaning of the relevant copyright law. This is because of the “memorization” by the model of the lyrics in its training data, where weights and vectors are “physical fixations” of the lyrics. The judgment is not final, with an appeal pending. But it will be interesting to see whether this perspective of focusing on the models themselves, as opposed to bits of data within them, will gain more ground this year and in the years immediately following, pushing for legal reform, or will fizzle out due to the complexity of fitting it within current legal frameworks. 

Key AI developments that might push the limits of existing data protection and privacy frameworks to a breaking point as they descend from research to market will also be worth following closely. 

3. A concert of laws adjacent to data protection and privacy steadily becoming the digital regulation establishment 

A third force pressing on data protection for the foreseeable future is the set of novel data- and digital-adjacent regulatory efforts solidifying into a new establishment of digital regulation, with their own bureaucracies, vocabulary, and compliance infrastructure: online safety laws – including their branch of children’s online safety laws – digital markets laws, data laws focusing on data sharing or on data strategies covering both personal and non-personal data, and the proliferation of AI laws, from baseline acts to sectoral or issue-specific laws (focusing on single issues, like transparency). 

It may have started in the EU five years ago, but this is now a global phenomenon. Look, for instance, at Japan’s Mobile Software Competition Act, a law regulating competition in digital markets with a focus on mobile environments, which became effective in December 2025 and draws strong comparisons with the EU Digital Markets Act. Or at Vietnam’s Data Law, which became effective in July 2025 and is a comprehensive framework for the governance of digital data, both personal and non-personal, applying in parallel to its new Data Protection Law.

Children’s online safety is taking increasingly more space in the world of digital regulatory frameworks, and its overlap and interaction with data protection law could not be clearer than in Brazil. A comprehensive law for children’s online safety, the Digital ECA, was passed at the end of last year and it is slated to be enforced by the Brazilian Data Protection Authority starting this spring. 

It brings interesting innovations, like a novel standard for triggering such laws – “likelihood of access” of a technology service or product by minors – and “age rating” for digital services, requiring providers to maintain age rating policies and continuously assess their content against them. It also provides for “online safety by design and by default” as an obligation for digital services providers. From state-level legislation in the US on “age appropriate design”, to an executive decree in the UAE on “child digital safety” – the pace of adopting online safety laws for children is ramping up. What makes these laws more impactful is also the fact that the age limits of minors falling under these rules are growing to capture teenagers up to 16 and even 18 years old in some places, bringing vastly more service providers in scope than first-generation children’s online safety regulations.  

The overlap, intersection and even tensions of all these laws with data protection are becoming increasingly visible. See, for instance, the recent Russmedia judgment of the CJEU, which established that an online marketplace is a joint controller under the GDPR and has obligations in relation to sensitive personal data published by a user, with consequences for intermediary liability that are expected to reverberate at the intersection of the GDPR and the Digital Services Act in practice. 

The compliance infrastructure of this new generation of digital laws, and its need for resources (people, budget), is breaking into an already stretched field of “privacy programs”, “privacy professionals”, and regulators, with the visible risk of diverting attention from, and diluting, the meaningful measures and controls stemming from privacy and data protection laws. 

4. Breaking the fourth wall: Geopolitics

While all these developments play out, it is particularly important to be aware that they unfold on a geopolitical stage that is unpredictable and constantly shifting, resulting in various notions of “digital sovereignty” taking root from Europe, to Africa, to elsewhere around the world. From a data protection perspective, and in the absence of a comprehensive understanding of what “digital sovereignty” might mean, this could translate into a realignment of international data transfers rules through more data localization measures, more data transfers arrangements following trade agreements, or more regional free data flows arrangements among aligned countries. 

Ten years after the GDPR was adopted as a modern upgrade of 1980s-style data protection laws for the online era, successfully promoting fair information practice principles, data subject rights and the “privacy profession” around the world, data protection and privacy are at an inflection point: either hold the line and evolve to meet these challenges, or melt away in a sea of new digital laws and technological developments.

6 Privacy Tips for the Generative AI Era

Data Privacy Day, or Data Protection Day in Europe, is recognized annually on January 28 to mark the anniversary of Convention 108, the first binding international treaty to protect personal data. The Council of Europe initiated the day in 2006, with the first official celebration held on January 28, 2007, marking this year as the 19th anniversary of the celebration. Companies and organizations around the world often devote time to internal privacy training during this week, working to improve awareness of key data protection issues for their staff.

It’s also a good time for all of us to think about our own sharing of personal data. Nowadays, one of the most important decisions we need to make about our data is when and how we use AI-powered services. To raise awareness, we’ve partnered with Snap to create a Data Privacy Day Snapchat Lens. Check it out by scanning the Snapchat code and learn more below about privacy tips for generative AI! 


  1. Know When You’re Using Generative AI

As a first step, it’s important to know what generative AI is and when you’re using it. Generative AI is a type of artificial intelligence that creates original text, images, audio, and code in response to input. In addition to visiting dedicated generative AI platforms (such as ChatGPT), you may find that many companies’ existing products now also include generative AI capabilities. For example, a search in Google now provides answers powered by Google’s generative AI, Gemini. Other examples include Snap’s AI Lenses and AI Snaps in creative tools; Adobe’s Acrobat and Express, now powered by Firefly, Adobe’s generative AI; and X’s Grok, which assists users and answers questions. 

One of the best ways to identify when you’re using generative AI is to look for a symbol or disclaimer. Many organizations provide clues like symbols, and a range of companies, including Snap, GitHub, and many others, often use a sparkle or star icon to denote generative AI features. You might also notice labels like “AI-generated” or “Experimental” alongside results from some companies, including Meta.

  2. Think Carefully Before You Share Sensitive or Private Information

While this is a general rule of thumb for interacting with any product, it’s especially important when using generative AI because most generative AI systems use data that users provide (such as conversation text or images) to allow their models to continuously learn and improve. While your prompts, generated images, and other pieces of data can improve the technology for all users, it also means that if you share sensitive or private information, it could potentially be shared or surfaced in connection with training and developing the algorithm. 

Be especially careful when uploading files, images, or screenshots to generative AI tools. Documents, photos, or screenshots can include more information than you realize, such as metadata, background details, or information about third parties. Before uploading, consider redacting, cropping, or otherwise limiting files to include only the information necessary for your task.
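To make the redaction idea concrete, here is a rough, illustrative sketch that masks a few common identifier formats in text before it is pasted into a generative AI tool. The patterns are simplistic examples only and will not catch every format or every kind of sensitive information:

```python
import re

# Illustrative patterns only: real redaction requires far more care, and
# these regexes will not catch every identifier format.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text):
    """Mask anything matching the patterns above before sharing the text."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

print(redact("Reach me at jane.doe@example.com or 555-867-5309."))
# Reach me at [EMAIL REDACTED] or [PHONE REDACTED].
```

The same habit applies to images and documents: crop, blur, or strip metadata so that only the information needed for your task is uploaded.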

Some companies promise not to use your data for training, often if you are using the paid version of their service. Others provide an option to opt out of the use of your data for training, or versions that have special protections. For example, ChatGPT’s new health service supports the upload of health records with additional privacy and security commitments, but you need to make sure you are using the specific Health tab that is being rolled out to users.

  3. Manage Your AI’s Memory

Many generative AI tools now feature a memory function that allows them to remember details about you over time, providing more tailored responses. While this can be helpful for maintaining context in long-term projects, such as remembering your writing style, professional background, or specific project goals, it also creates a digital record of your preferences and behaviors. A recent FPF report explores these different kinds of personalization. 

Fortunately, you typically have the power to control what Generative AI platforms remember. Most have settings to view, edit, or delete specific memories or to turn the feature off entirely. For instance, in ChatGPT, you can manage these details under Settings > Personalization, and Gemini allows you to toggle off “Your past chats” within its activity settings to prevent long-term tracking. Meta also provides options for deleting all chats and images from the Meta AI app. Another option is to use “Temporary” or “Incognito” modes, so you can enjoy a personalized experience without generative AI compiling data attributed to your profile. 

In addition to managing memory features, it’s also helpful to understand how long Generative AI services keep your data. Some platforms store conversations, images, or files for only a short time, while others may keep them longer unless you choose to delete them. Taking a moment to review retention timelines can give you a clearer picture of how long your information sticks around, and help you decide what you’re comfortable sharing.

  4. Define Boundaries for Agentic AI 

Agentic AI, a form of generative AI that can complete tasks for users with greater autonomy, is becoming increasingly popular. For example, companies like Perplexity, OpenAI, and Amazon have unveiled agentic systems that can make purchases for consumers. While these systems can take on more tasks, they still require users to review purchases before they are final. As a best practice, you should look over the purchase to check that it aligns with your expectations (e.g., ordering 1 pair of socks and not 10). It is also important to keep in mind that since agentic systems can pull information from third party sources, there is a risk that the system will rely on inaccurate information about a product during purchases (e.g., that an item is in stock).

As agentic systems become more embedded in our lives, you should also be mindful about how much information you share with them. Consumers are already disclosing sensitive details about themselves to more basic chatbots, which businesses, the government, and other third parties may want to access. When interacting with agentic systems, keep this in mind and pay attention to what you disclose about yourself and others. You may similarly want to consider what type of access to provide to the agentic AI product, and rely on the principle of least privilege–only providing the minimum access needed for your use. For example, if an agentic system is going to manage your calendar, think through options for narrowing the access so your entire calendar is not shared, and that other apps connected to your calendar, like your email, are not shared unless necessary.
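The principle of least privilege can be pictured as a simple scope check: the agent holds only the scopes it was explicitly granted, and any action outside them is refused. All scope names below are hypothetical, invented for illustration rather than taken from any real product’s permission model:

```python
# Hypothetical sketch of scope-based least privilege for an agentic assistant.
# Scope names are invented, not from any real product's API.
GRANTED_SCOPES = {"calendar.read", "calendar.events.create"}
# Deliberately not granted: "email.read", "contacts.read", "calendar.share"

def agent_action(action, required_scope):
    """Run an action only if its required scope was explicitly granted."""
    if required_scope not in GRANTED_SCOPES:
        return f"denied: '{action}' requires scope '{required_scope}'"
    return f"allowed: {action}"

print(agent_action("list this week's meetings", "calendar.read"))
print(agent_action("summarize my inbox", "email.read"))
```

In this sketch the agent can read and create calendar events, but an attempt to reach into email is refused because that scope was never granted, which mirrors the advice above about narrowing what connected apps can see.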

  5. Review How Generative AI Products Handle Privacy and Safety

It’s important to regularly review the privacy and security practices of any company with which you share information, and this applies similarly to companies offering generative AI products. This can include checking what data is collected and how, as well as how that information is used and stored. 

Snap offers the Snapchat Privacy Center, where you can review and update your privacy settings and choices.

ChatGPT’s privacy controls are available in the ChatGPT display, and OpenAI has a Data Controls FAQ that outlines where to find the settings and what options are available.

Gemini has the Gemini Privacy Hub, as well as an area to read about and configure your settings for Gemini Apps, which includes options for turning your Gemini history off. 

Claude has a Privacy Settings & Controls page that outlines how long they store your data, how you can delete it, and more. 

Copilot provides an array of options for reviewing and updating your privacy settings, including how to delete specific memories and how your data is used. These settings are available on Microsoft’s website, and Microsoft provides a detailed Privacy FAQ page as well.

Keep in mind that generative AI products change quickly, and new features may introduce new data uses, defaults, or controls. Periodically revisiting privacy and safety settings can help ensure your preferences continue to reflect how the product works today, rather than how it worked when you first configured it.

  1. Explore and Have Fun!

LLMs can often provide useful data protection advice, so ask them questions about AI and privacy. Just be sure to double-check sources and accuracy, especially for important topics!

Data Privacy Day is a reminder that privacy is a shared responsibility. By bringing together FPF’s expertise in privacy research and policy with Snap’s commitment to building products with privacy and safety in mind, this collaboration aims to help people better understand how AI works and how to use it thoughtfully.

FPF Releases Updated Infographic on Age Assurance Technologies, Emerging Standards, and Risk Management

The Future of Privacy Forum is releasing an updated version of its Age Assurance: Technologies and Tradeoffs infographic, reflecting how rapidly the technical and policy landscape has evolved over the past year. As lawmakers, platforms, and regulators increasingly converge on age assurance as a governance tool, the updated infographic sharpens the focus on proportionality, privacy risk, and real-world deployment challenges.

What’s New

The updated infographic introduces several key changes that reflect the current state of age assurance technology and policy:

A Fourth Category: Inference. The original infographic outlined three approaches to age assurance: declaration, estimation, and verification. This update adds a fourth category—inference—which draws reasonable conclusions about a user’s age range based on behavioral signals, account characteristics, or financial transactions. For example, an email address linked to workplace applications, a mortgage lender, or a 401(k) provider, combined with login patterns during business hours, may support an inference that the user is an adult.

Relatedly, the updated version intentionally downplays age declaration as a standalone solution. While declaration remains useful for low-risk contexts and as an entry point in layered systems, experience and enforcement history continue to show that it is easily bypassed and insufficient where legal or safety obligations attach to age thresholds. The infographic now situates declaration primarily as an initial step within a waterfall or layered approach, rather than as a meaningful assurance mechanism on its own.

The update also highlights several new and emerging risks associated with modern age assurance systems. If not addressed properly, these could include loss of anonymity through linkage, increased breach impact from improperly secured retained assurance data, secondary use of assurance data, and circumvention risks such as presentation attacks or shared-device misuse.

In parallel, the infographic expands its coverage of risk management tools that can mitigate these concerns when age assurance is warranted. These include tokenization and zero-knowledge proofs to limit data disclosure, on-device processing and immediate deletion of source data, separation of processing across third parties, user-binding through passkeys or liveness detection, and emerging standards such as ISO/IEC 27566 and IEEE 2089.1. The emphasis is not on eliminating risk—which is rarely possible—but on aligning technical controls with the specific harms a service is attempting to address.
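As a rough sketch of how tokenization limits data disclosure, the snippet below models an issuer that signs a minimal "over 18" claim so the relying service never sees a birth date or identity document. The symmetric key, token format, and function names are simplified assumptions for illustration, not a real credential standard; a production system would typically use asymmetric signatures or zero-knowledge proofs.

```python
import hashlib
import hmac
import json

# Illustrative sketch of a tokenized age claim: the verifier issues a
# signed, minimal claim so the service never handles the underlying
# identity data. Key handling and token format are assumptions.

ISSUER_KEY = b"demo-issuer-key"  # stand-in for an issuer-held secret

def issue_age_token(over_18: bool) -> str:
    """Sign a minimal claim containing only the age assertion."""
    claim = json.dumps({"over_18": over_18}, sort_keys=True)
    sig = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return f"{claim}|{sig}"

def verify_age_token(token: str) -> bool:
    """Check the signature, then return the asserted age claim."""
    claim, sig = token.rsplit("|", 1)
    expected = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or forged token
    return json.loads(claim)["over_18"]

token = issue_age_token(True)
assert verify_age_token(token)
```

The point of the sketch is data minimization: the token carries a single boolean assertion, so a breach of the relying service exposes far less than retained identity documents would.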

As with prior versions, the updated infographic reinforces a core message: there is no one-size-fits-all age assurance solution. Effective approaches are risk-based, use-case-specific, and privacy-preserving by design, balancing assurance goals against the rights and expectations of users. By clarifying the role of inference, contextualizing declaration, and surfacing both new risks and mitigation strategies, this update aims to support more informed decision-making across policy, product, and engineering teams.

Emerging Age Assurance Concepts. The field has advanced considerably, and the updated infographic now includes a dedicated section on emerging technologies, covering Age Signals and Age Tokens, User-Binding, Zero-Knowledge Proofs (ZKP), Double-Blind Models, and One-Time vs. Reusable Credentials.

Updated Risks and Risk Management Approaches. The infographic now presents a more comprehensive view of the risks and challenges associated with age assurance—including excessive data collection and retention, secondary data use, lack of interoperability, false positives and negatives, data breaches, and user acceptance challenges. Correspondingly, the risk management section highlights both established and emerging mitigations: on-device processing, tokenization and zero-knowledge proofs, anti-circumvention measures (such as Presentation Attack Detection), standards (ISO/IEC 27566-1, IEEE 2089.1), and certification and auditing.

Practical Example: The updated infographic includes a detailed use case following “Miles,” a 16-year-old accessing an online gaming service. The scenario illustrates how multiple age assurance methods can work together in a layered “waterfall” approach—starting with low-assurance age declaration for basic access, escalating to facial age estimation for age-restricted features, and offering authoritative inference or parental consent as inclusive fallbacks when estimation results are inconclusive and formal ID is not available. The example also demonstrates token binding with passkeys, ensuring that even if Miles shares his phone with a younger friend, the age credential cannot be accessed without the correct PIN, pattern, or biometric.
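The layered “waterfall” flow in the Miles scenario can be sketched as a simple decision function. The thresholds, method names, and fallback order below are hypothetical, chosen only to illustrate escalation from declaration to estimation and then to inclusive fallbacks; they are not taken from the infographic itself.

```python
# Hypothetical sketch of a layered "waterfall" age assurance flow for a
# 16+ feature. All thresholds and method names are illustrative.

def assure_age(declared_age, estimate=None, inference=None,
               parental_consent=False):
    """Return (allowed, method) for a feature restricted to ages 16+."""
    # Step 1: low-assurance declaration screens out self-reported minors.
    if declared_age is not None and declared_age < 16:
        return False, "declaration"
    # Step 2: facial age estimation, with a buffer around the threshold.
    if estimate is not None:
        if estimate >= 18:      # clearly above 16 even with estimation error
            return True, "estimation"
        if estimate < 14:       # clearly below the threshold
            return False, "estimation"
        # 14-17 is inconclusive; fall through to the next layer.
    # Step 3: inclusive fallbacks when estimation is inconclusive.
    if inference == "adult":    # e.g., authoritative account signals
        return True, "inference"
    if parental_consent:
        return True, "parental_consent"
    return False, "inconclusive"
```

Each layer only runs when the previous one cannot decide, which keeps the most data-intensive checks as a last resort rather than a default.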