Chevron Decision Will Impact Privacy and AI Regulations
The Supreme Court has issued a 6-3 decision in two long-awaited cases – Loper Bright Enterprises v. Raimondo and Relentless, Inc. v. Department of Commerce – overturning the legal doctrine of “Chevron deference.” While the decision will impact a wide range of federal rules, it is particularly salient for ongoing privacy, data protection, and artificial intelligence regulations across the federal government.
As a resource, the Future of Privacy Forum today also released a public Issue Brief: The Role of Chevron Deference in Federal Privacy Regulation (read it here). In this Issue Brief, we highlight the current role that agency deference plays in data protection, privacy, and AI-related efforts across the federal government. These include major ongoing efforts such as the FTC’s Commercial Surveillance and Data Security Rulemaking, updates to the Children’s Online Privacy Protection Act (COPPA) Rule, and inter-agency efforts to prevent the use of discriminatory automated systems in the housing market and workplace.
For the last forty years, the Chevron doctrine (Chevron v. NRDC) has provided an analytical framework for courts to use when examining agency interpretations of ambiguous statutes or statutes that delegate interpretive authority to agencies. In situations where a statute is ambiguous or provides direction for further agency interpretation, courts have deferred to federal agency expertise. This analytical framework is now overruled. The majority opinion calls the doctrine “fundamentally misguided” and “unworkable,” emphasizing the separation of powers and the unique role of judicial review. Specifically, the decision held that the doctrine is incongruous with Article III of the U.S. Constitution, which assigns statutory interpretation to the courts, as well as with the Administrative Procedure Act (APA), which governs administrative processes and specifies that courts must decide “all relevant questions of law.”
In contrast, courts will now be expected to exercise independent legal judgment, even when the statutes are ambiguous or silent on an issue, without deferring to the agency’s interpretation in place of their own. Courts can still respect and be informed by agency expertise (a lower standard known as Skidmore deference).
In privacy and AI, fields in which technology and business practices are evolving rapidly, this decision is especially important. Statutes must contain enough flexibility to remain effective over time, and inevitable ambiguities are likely to arise. Notably, several Justices brought up AI during oral arguments, with Justice Kagan noting that AI was likely to be “the next big piece of legislation on the horizon,” and that “Congress can hardly see a week in the future with respect to this subject, let alone a year or a decade.” The dissenting opinion expresses these same reservations about long-term workability, emphasizing the highly technical, expertise-driven statutory questions that arise and the potential that courts will be ill-equipped to address them (“A rule of judicial humility gives way to a rule of judicial hubris.”). Furthermore, as Congress grapples with passing a comprehensive privacy law, the decision adds the new challenge of drafting flexible, future-proof language that simultaneously contains enough specificity to avoid as many ambiguities as possible – sure to be a unique challenge for technology regulation in years to come.
AI Forward: FPF’s Annual DC Privacy Forum Explores Intersection of Privacy and AI
The Future of Privacy Forum (FPF) hosted its inaugural DC Privacy Forum: AI Forward on Wednesday, June 5th. Industry experts, policymakers, civil society, and academics explored the intersection of data, privacy, and AI. At the InterContinental on Washington, DC’s Southwest Waterfront, participants joined in person for a full-day program of keynote panels, AI talks, and debates moderated and led by FPF experts.
AI and FPF Experts Take the Stage
Keynote Panels and AI Talks
FPF CEO Jules Polonetsky kicked off the day with welcoming remarks and announced the launch of FPF’s new Center for Artificial Intelligence, which is headed by Anne J. Flanagan, FPF’s VP for AI, and focuses on AI policy and governance. The Center is supported by a Leadership Council of experts from around the globe, consisting of members from industry, academia, civil society, and current and former policymakers.
FPF Board Chair Alan Raul joined the stage to give opening remarks and introduce keynote speaker Adam Russell, Chief Vision Officer of NIST’s AI Safety Institute. Russell presented an overview of what the US AI Safety Institute aims to achieve in AI safety, how and why it intends to do so, and its work to help build collective intelligence.
FPF’s Director of Youth and Education, David Sallay, kicked off the first AI Talk session along with Colleen McClain, Research Associate at the Pew Research Center. Sallay discussed the recent FPF report, “Vetting Generative AI Tools for Use in Schools,” which offers a checklist designed specifically for K-12 schools, outlining key considerations when incorporating generative AI into a school or district’s edtech vetting process.
McClain presented a new Pew survey analysis on youth and AI that included thought-provoking views and experiences of teenagers aged 13-17 and their parents, as well as the views of K-12 teachers in the U.S. One key insight revealed that U.S. adults view privacy as a main concern when it comes to trusting – or not trusting – the use of AI.
FPF’s Anne J. Flanagan moderated a keynote panel, “Risk Assessments: Up to the Task?” with Ed Britan, Senior Vice President, Global Privacy & Marketing Legal, Salesforce; Barbara Cosgrove, Vice President and Chief Privacy Officer, Workday; and Katherine Fick, Associate General Counsel, IBM. These leading privacy experts explored how companies can evaluate risk factors when it comes to developing or deploying AI. This included what can be learned from previous privacy risk assessments, advice for those daunted by regulations and standards, guidance for those who are new to AI governance, and what makes these AI assessments different from those that have come before.
Shifting into the afternoon, FPF Board Member Agnes Bundy Scanlan moderated the second AI Talk, “Is Algorithmic Fairness Even Possible?” with Arvind Narayanan, Professor of Computer Science at Princeton University. During his presentation, Prof. Narayanan argued that algorithmic fairness has not been particularly impactful: most AI products do not succeed, broken AI perpetuates broken institutions, and fixing algorithms will not solve systemic problems in our society. He also offered recommendations for policymakers and regulators, such as establishing standards for efficacy and managing explanation and contestability.
Next, FPF’s Policy Counsel for Data, Mobility, and Location, Adonne Washington, led the panel “AI & The Future of Work,” featuring Keith Sonderling, Commissioner, U.S. Equal Employment Opportunity Commission (EEOC), and Lael Bellamy, Partner at DLA Piper, on concerns of bias and discrimination, as well as the potential of AI-driven tools to foster inclusive workplaces. Commissioner Sonderling argued that AI can help employers make better and more transparent employment decisions; however, he stressed that it must be used properly. Bellamy added that AI tools could reinforce society’s legacy of bias, referencing how tools like ChatGPT draw their information from public sources such as Reddit and Wikipedia and can regurgitate skewed knowledge.
Global Convergence and Hyperlocal Regulation
FPF’s Tatiana Rice moderated “AI Legislation: States to the Rescue?” with Del. Michelle Maldonado (D-VA), of the 2024 Virginia House of Delegates Communications, Technology, and Innovation Committee, and Senator Robert Rodriguez, Majority Leader, Colorado General Assembly; both discussed the importance of the recent privacy laws passed in their respective states. “Technology moves at the speed of light, and legislation moves at the speed of molasses,” stated Del. Maldonado on AI governance. Senator Rodriguez discussed the Colorado AI Act (CAIA) and how looking to previous legislation, such as the Colorado Privacy Act (CPA), was helpful in writing new privacy bills.
FPF’s Senior Counsel for Global Privacy, Lee Matheson, then moderated “Global Convergence or Competition for Regulatory Leadership” with Anupam Chander, Scott K. Ginsburg Professor of Law and Technology, Georgetown University Law Center. Prof. Chander outlined the main areas of convergence on AI regulation globally.
FPF AI Debates
In one of the most energetic parts of the day, FPF’s Director for U.S. Legislation, Keir Lamont, kicked off the AI Debates, moderating the first session, “Resolved: Data minimization is compatible with the development of artificial intelligence,” featuring Omer Tene, Partner, Goodwin Procter LLP, arguing against the position, and Samir Jain, Vice President of Policy, CDT, arguing in favor of it.
Tene argued that data minimization is antithetical to the development of AI. The essence of AI is the ability to discover new trends and correlations, Tene argued, and by definition, minimizing data limits AI’s intelligence. Jain disagreed, explaining that more data is not necessarily better, nor is it necessary for the development of AI. He added that what AI derives from data depends on its quality, and certain sources are not necessarily reliable. Audience members were given the opportunity to vote on which position they agreed with. In the end, the vote split 50% opposed and 50% in favor, with Jain having swayed more attendees to his side over the course of the debate.
This was followed by the second debate, “Resolved: APRA Strikes the Right Balance For the Future,” featuring Jennifer Huddleston, Senior Fellow in Technology Policy, Cato Institute, arguing against the position, and Cameron Kerry, Ann R. and Andrew H. Tisch Distinguished Visiting Fellow, Brookings Institution, arguing in favor.
Huddleston expressed concerns with provisions of the American Privacy Rights Act (APRA), ranging from how data minimization could cause problems for AI to consequences for the next generation of innovators; she also questioned whether APRA would improve the underlying situation for consumers. Kerry argued that it is long past time to establish comprehensive privacy regulation and that APRA is an opportunity to address gaping holes in the current system, further noting that APRA would put in place civil rights protections, baseline protections for algorithms, and more. In this second AI debate, the audience vote once again ended at 50% opposed and 50% in favor.
Keynote Fireside Chat
In the late afternoon, FPF’s Senior Director for U.S. Policy, Stacey Gray, sat down in a fireside chat with Samuel Levine, Director of the Federal Trade Commission’s Bureau of Consumer Protection, who discussed how the Commission is prepared to hold companies accountable for protecting consumers’ data, including by deterring AI-enabled impersonation and working to understand how AI can be used to disguise advertising. “Trust is the foundation of any market economy,” Levine explained, adding that it is critical for the government to do better and to collaborate with industry when it comes to keeping consumers safe.
FPF Workstream Lightning Talks
To close out the day, FPF featured four Lightning Talks on the intersection of AI with various emerging technologies and/or regulations.
On AI and XR, FPF’s Senior Policy Analyst for Immersive Technologies, Jameson Spivack, delved into two parallel trends in technology development: 1) AI is being integrated into new products, and 2) AI technologies are becoming more immersive. Spivack noted that if this further integration is not done responsibly, applications of immersive technology can raise substantial risks.
On AI and Cybersecurity, FPF’s Senior Technologist for Youth & Education Privacy, Jim Siegl, talked about cybersecurity as one of the foundations of AI trust and how AI can be subject to novel security vulnerabilities alongside standard ones. He focused on confidentiality, integrity, and availability, with potential confidentiality risks including generative AI being used to enhance phishing or malware development. AI tools also raise the prospect of attackers manipulating the behavior of Large Language Models (LLMs) both directly and indirectly, and each of these risks can be reduced but not eliminated, he continued.
Regarding generative AI in the Asia-Pacific region, FPF’s APAC Managing Director, Josh Lee, explained why APAC is an emerging leader in AI regulation. Lee highlighted that AI is transnational and that the region widely uses AI tools, with most companies having a presence in the area. He noted how the APAC region is becoming a major international thought leader with respect to AI technology and AI governance, and highlighted the recent FPF report that provides a comprehensive overview of how generative AI systems work and key governance frameworks across five jurisdictions: Australia, China, Japan, Singapore, and South Korea.
Moving over to the EU, FPF’s Policy Counsel for Global Privacy, Christina Michelakaki, offered insights on initiatives from EU Data Protection Authorities (DPAs) and the UK Information Commissioner’s Office (ICO) concerning the processing of personal data in the context of AI applications. She noted that while the GDPR does not explicitly mention AI, it is a technologically neutral law and applies to any technology that involves the use of personal data, such as for training, testing, or deploying an AI system. Therefore, when personal data is used, all of the GDPR’s principles apply, with fairness, transparency, and accountability being of particular relevance.
Evening Awards and 15th Anniversary Dinner Reception
After a full and engaging day of AI policy talks, debates, and discussions, FPF closed the first DC Privacy Forum: AI Forward by presenting Christopher Wolf, FPF Founder and Founding Board President, with the Legacy of Excellence Award for his 15 years of impactful leadership. FPF Board Chair Alan Raul, FPF Board Member Dale Skivington, and FPF CEO Jules Polonetsky presented Wolf with the award.
A big thank you to all of those who participated in our inaugural DC Privacy Forum: AI Forward! We hope to see you next year. For updates on FPF work, please visit FPF.org for all our reports, publications, and infographics, follow us on Twitter/X and LinkedIn, and subscribe to our newsletter for the latest.
Comprehensive Privacy Anchors in the Ocean State
On June 25, 2024, Governor McKee allowed H 7787 and S 2500, the Rhode Island Data Transparency and Privacy Protection Act (RIDTPPA), to become law without his signature, making Rhode Island the nineteenth state overall and the seventh state in 2024 to enact a comprehensive privacy law. The law will take effect on January 1, 2026, and the majority of its substantive provisions will apply to entities that control or process the personal data of either 35,000+ Rhode Islanders, or 10,000+ Rhode Islanders if the entity derives 20% or more of its gross revenue from selling personal data. As another iteration of the Washington Privacy Act (WPA) framework, the law includes familiar terminology and core obligations, such as: controller/processor responsibilities allocated by role; the core individual data rights of access, correction, deletion, portability, and opt-out; and opt-in consent for processing sensitive data.
In this blog post, we highlight three notable aspects of the RIDTPPA: the law includes a unique, prescriptive privacy notice requirement that applies to a different set of entities than many of its other substantive provisions; in key places, the law is weaker than many other iterations of the WPA framework; and the law’s civil penalties are higher than is typical under comparable laws.
1. No General Privacy Notice Requirement, but Prescriptive Notice of “Information Sharing Practices” Obligation for a Narrow Set of Businesses
The RIDTPPA includes a unique, prescriptive privacy notice obligation, which has two subcomponents. First, any “commercial website” or internet service provider (ISP) that (1) conducts business in Rhode Island, (2) has customers in Rhode Island, or (3) is otherwise subject to Rhode Island jurisdiction must “designate a controller.” The law does not define or cross-reference existing definitions of “commercial website” or “internet service provider.” The law defines a controller as “an individual who, or legal entity that, alone or jointly with others determines the purpose and means of processing personal data.” Although this definition is typical of state comprehensive privacy laws, the law does not elaborate on what it means to “designate a controller.”
Second, the designated controller of a website or ISP that “collects, stores and sells customers’ personally identifiable information” (PII) must disclose certain information within either the controller’s customer agreement, an addendum to that agreement, or “in another conspicuous location on its website or online service platform.” The controller must provide:
“all categories of personal data that the controller collects through the website or online service about customers”;
“all third parties to whom the controller has sold or may sell customers’ personally identifiable information”; and
an active email address or other online mechanism to contact the controller.
Additionally, if a controller processes personal data for targeted advertising or sells personal data to third parties, they must “clearly and conspicuously disclose” as much.
This requirement is ambiguous in several ways. Some requirements concern personal data, whereas others, including the threshold for applicability, concern personally identifiable information, which is undefined. As identified by David Stauss, the term “personally identifiable information” could be a holdover from a prior draft, which would have defined the term more narrowly than “personal data,” implying that the two terms are intended to have distinct meanings. On the other hand, a later provision regarding how to construe the law states, “This chapter is intended to apply only to covered entities that choose to collect, store, and sell or otherwise transfer or disclose personally identifiable information.” Given that each section establishes the law’s applicability in terms of the processing of personal data, this could imply that the terms are synonymous.
Furthermore, the requirement to identify all third parties to whom the controller may sell PII raises operational questions, given that controllers do not have clairvoyant insight into whom they might sell PII to in the future. There is a practical question as to what happens if a controller begins selling PII to a new third party: it is currently unclear whether the controller would be categorically prohibited from selling previously collected PII to that new third party or able to do so with notice and affirmative consent. Additionally, providing a long list of current third-party recipients of personal data could make privacy notices longer and less intelligible, unless that information is provided in an addendum, which nevertheless places an additional burden on individuals to seek out that information. A contrasting approach is taken in the Oregon Consumer Privacy Act, which requires controllers to provide a list of specific third-party recipients of personal data upon request.
Notably, this is the only privacy notice requirement in the law, and it only applies to commercial websites and ISPs that collect, store, and sell personally identifiable information. This is a sharp contrast to the majority approach in state comprehensive privacy laws, which typically require all controllers who meet the applicability thresholds to provide “a reasonably accessible, clear and meaningful privacy notice” that includes information such as the categories of personal data processed, processing purposes, how to exercise consumer rights and appeal decisions, categories of personal data shared with third parties, and contact information. Despite not having a general privacy notice obligation, a later provision of the law specifies that a controller must establish a secure and reliable means for customers to exercise their individual data rights as “described to the customer in the controller’s privacy notice.”
2. Little Rhody, Little Rights
The RIDTPPA is an outlier among states adhering to the WPA framework, in that many of that framework’s privacy rights and protections are missing or weakened. Notwithstanding the novel privacy notice requirement, this law is close to the weakest iterations of the WPA framework, particularly those of Iowa and Utah. The law contains broad entity- and data-level exemptions – including for GLBA-regulated entities (twice), nonprofits, and institutions of higher education – while several common privacy protections are conspicuously absent.
No General Data Minimization Requirement: Data minimization is a common and important feature of privacy and data protection regimes. The majority of state privacy laws enacted in recent years include what can be considered a procedural data minimization rule: controllers must limit the collection and processing of personal data to what is “adequate, relevant, and reasonably necessary” to achieve the purposes that are disclosed to a data subject, and controllers must obtain affirmative, express consent for any unnecessary or incompatible secondary uses of personal data. Recently, states have begun experimenting with heightened data minimization requirements, and Maryland broke new ground this year by enacting a substantive data minimization rule that limits collection of personal data to what is reasonably necessary to provide or maintain a requested product or service. In contrast, the RIDTPPA does not include a data minimization requirement or similar restriction on secondary use of personal data. Previously, Iowa and Utah were the only state comprehensive privacy laws not to impose a data minimization requirement.
Absence of Opt-out Signal Preferences: Rhode Island bucks a trend in recent years of requiring controllers to recognize universal opt-out preference signals, which allow individuals to exercise their rights to opt-out of targeted advertising and data sales on a default-basis rather than a website-by-website basis.
The Opt-out Right Does Not Apply to Pseudonymous Data: Laws following the WPA framework typically do not require controllers to comply with some of the individual data rights (e.g., access) in situations involving pseudonymous data, which is personal data that cannot be attributed to a “specific individual” without additional information that is kept separately and subject to technical and organizational measures that ensure the personal data is not attributable to “an identified or identifiable individual.” The RIDTPPA appears to follow the Tennessee Information Protection Act in extending the pseudonymous data exception to the right to opt out of targeted advertising, sale of personal data, and profiling in furtherance of solely automated decisions that produce legal or similarly significant effects. This deprives individuals of key privacy protections in several ways. First, it weakens or even nullifies the right to opt out of targeted advertising, because the targeted advertising ecosystem largely relies on pseudonymous identifiers, such as hashed persistent identifiers or mobile advertising identifiers. Second, pseudonymous data, unlike deidentified data, is not subject to the same kind of backend technical and legal requirements to prevent reidentification through cross-referencing data sets. Controllers who disclose pseudonymous data are required to exercise reasonable oversight to monitor compliance with any contractual commitments, but the RIDTPPA does not create an underlying requirement to impose such contractual commitments in the first place.
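To make the linkability concern concrete, the following is a minimal, hypothetical sketch (the email address, field names, and hashing choice are illustrative assumptions, not drawn from the law or from any cited report). It shows how hashing an identifier yields a stable pseudonym that two unrelated parties can derive independently and then use to join their data sets, which is why pseudonymous data remains far easier to re-link than deidentified data.

```python
import hashlib

def pseudonymize(email: str) -> str:
    """Derive a stable pseudonymous token from an email address."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# Two parties that each hold the same email address independently derive the
# same token, so their records can be joined on the pseudonym even though
# neither data set contains the email in the clear.
advertiser_record = {"user_id": pseudonymize("jane.doe@example.com"), "segment": "running shoes"}
broker_record = {"user_id": pseudonymize("Jane.Doe@example.com"), "zip": "02903"}

print(advertiser_record["user_id"] == broker_record["user_id"])  # True: the records link
```

Deidentified data, by contrast, is meant to lack any such stable join key and to be protected by commitments not to attempt reidentification.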
No Guidance for Data Protection Assessments: Like most state comprehensive privacy laws, the RIDTPPA requires controllers to conduct data protection assessments for certain processing activities that present a heightened risk of harm, including targeted advertising, sale of personal data, processing sensitive data, and profiling that presents a reasonably foreseeable risk of certain substantial injuries. However, the law omits any language as to what a data protection assessment entails.
No Heightened Protections for Teens: As is typical of laws following the WPA framework, the RIDTPPA treats the personal data of a known child (under 13) as sensitive data and requires opt-in consent for processing that data. Breaking a recent trend, however, the RIDTPPA does not include any heightened protections for teenagers. In the last two years, many new comprehensive privacy laws have required controllers to get opt-in consent from individuals ages 13 to 15 or 16 for targeted advertising, sale of personal data, and profiling in furtherance of legal or similarly significant decisions.
3. Little Rhody, Big Penalties
The RIDTPPA’s substantive provisions might be weaker than many other state privacy laws, but the law’s enforcement provisions arguably are stronger than elsewhere. Like many state comprehensive privacy laws, violations of the RIDTPPA constitute violations of the state’s prohibition on deceptive trade practices, which carry a fine of up to $10,000 per violation. That figure alone is high compared to many other states, but the RIDTPPA adds an additional monetary penalty for intentional disclosures of personal data either (1) to a shell company or other entity created for the purpose of circumventing the law’s requirements or (2) in violation of any provisions of the RIDTPPA. Such intentional disclosures carry a penalty of $100-500 “for each disclosure.” However, this penalty enhancement is ambiguous in at least two critical ways. First, it does not specify whether the intent requirement applies to the disclosure itself or the unlawful nature of the disclosure. Second, it does not specify what constitutes a disclosure and how such claims accrue. It could be one violation per person, repeat violations per person, or, in the most extreme case, tied to communication of individual data points. Regardless of how these questions are resolved, this provision could generate significant fines for controllers who are improperly disclosing individuals’ personal data.
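To see how much the accrual question matters, consider a rough, hypothetical calculation (the customer count, field count, and transfer count below are illustrative assumptions, not figures from the statute or any enforcement action):

```python
# Hypothetical exposure under the $100-$500 "for each disclosure" enhancement,
# depending on how a "disclosure" is counted. All inputs are assumptions.
customers = 10_000          # individuals whose personal data was improperly disclosed
fields_per_customer = 5     # data points disclosed about each individual
transfers = 3               # separate transmissions of the same data set

per_person = customers * 100                                        # one violation per person, low end
per_person_per_transfer = customers * transfers * 500               # repeat violations per person, high end
per_data_point = customers * fields_per_customer * transfers * 500  # per data point, most extreme reading

print(f"Per person (at $100):               ${per_person:,}")               # $1,000,000
print(f"Per person, per transfer (at $500): ${per_person_per_transfer:,}")  # $15,000,000
print(f"Per data point (at $500):           ${per_data_point:,}")           # $75,000,000
```

Under the same underlying conduct, these readings differ by nearly two orders of magnitude, which is why the accrual ambiguity is so consequential for controllers.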
FPF Statement on the Revised American Privacy Rights Act (APRA)
FPF’s CEO Jules Polonetsky gives a statement on the revised American Privacy Rights Act (APRA).
Top Six Major Privacy Enforcement Trends: A U.S. Legislation Retrospective
Enforcement activity intensifies as U.S. consumer privacy laws continue to evolve and come into effect. In 2023 and 2024 alone, there have been dozens of enforcement actions at the U.S. federal and state levels, some of which reveal or touch on significant throughlines for privacy policy issues, such as what constitutes a privacy violation or the expanding regulatory interest in the risks of collecting, inferring, and using sensitive data. This Retrospective focuses on six major enforcement trends that have recently spoken to key questions or policy issues in the privacy landscape:
DoorDash: The Right to Cure Under State Law is Not Absolute: The California Privacy Protection Agency’s second enforcement action provides insight into what constitutes a “sale” under state privacy laws, as well as the limitations of businesses’ statutory ‘right to cure’ alleged violations.
GoodRx, BetterHelp, Premom: Unauthorized Disclosures of Health Information as Breaches: The FTC enforced the Health Breach Notification Rule for the first time since it was finalized in 2009, arguing that unauthorized disclosures of health data can constitute a breach.
BetterHelp and Vitagene: Health Information (and Its Sensitivity) is Contextual and Situational: When it comes to companies that process health information outside the scope of HIPAA, the FTC demonstrated that personal health information may be created based on context and situation.
Epic Games: FTC Focuses on Impact of Design Choices on Teen Privacy: The FTC is wielding its Section 5 authority to protect the privacy of teenagers as Congress continues to consider amending COPPA to establish federal privacy protections for teens.
Cothron v. White Castle: Multiple Actionable Harms from Single Privacy Violations Spur Legislative Change: In Cothron v. White Castle, the Illinois Supreme Court addressed the critical question of when privacy claims accrue under the Illinois Biometric Information Privacy Act, prompting the Illinois legislature to amend the Act’s private right of action.
FTC v. Kochava: How Location Data Sales Impact Privacy Interests: In FTC v. Kochava, the Commission argues that the collection and disclosure of location data can constitute an injury under Section 5 of the FTC Act.
As an increasing number of state comprehensive privacy laws come into effect and the right to cure sunsets in many state laws, enforcement activity will continue to intensify. The Texas Attorney General has already telegraphed a desire to strictly enforce protections regarding sensitive data. The insights we can glean from existing enforcement trends can allow privacy professionals to better understand the policy environment, prepare proactively, and build resilient privacy programs.
Reproductive Rights Have Been Privacy Rights For 50 Years
About fifty years ago, the U.S. Supreme Court decided a case that would provide the basis for federal privacy protections for reproductive health decisions. The importance of protecting reproductive information and choice, particularly where abortion was concerned, underpinned Roe v. Wade (1973) and Planned Parenthood v. Casey (1992), which gave women and pregnant individuals reason to believe that their reproductive status and choices were confidential between them and their chosen healthcare providers. Those decisions were the law of the land for the decades that followed.
Two years ago, on June 24, 2022, the Supreme Court issued its decision in Dobbs v. Jackson Women’s Health Organization, overturning Roe and Casey, removing the constitutional protections around reproductive choice and information, and catalyzing a spate of laws criminalizing the act of seeking or providing abortion. In addition to reducing medical access and kindling distrust in reproductive health technologies, the decision propelled economic disruption and sparring among legal jurisdictions, from cities to states to the federal government.
The effects have also spilled beyond the traditional healthcare and medical spaces. Suppliers of consumer-facing health and health-adjacent applications and services, from “period tracker” apps to activity loggers, have been forced to grapple with how to continue to render their core services while ensuring that individuals’ data is protected against access that could lead to prosecution or persecution. Perceptions of privacy risks around data have become a significant weight on the balancing scale between protecting reproductive privacy and developing technologies and data that advance reproductive care and health.
In the wake of Dobbs, reproductive data and inferences drawn regarding reproductive status, as well as related information, have become a significant area of inquiry by lawmakers and regulators. State and federal lawmakers and regulators have coalesced around privacy as the basis for reproductive rights, generating proposals that weigh heavily on the side of restricting sensitive data to achieve protection. These include:
Laws and rules restricting the transfer of data between adversarial jurisdictions, including ‘Shield Laws’ created by abortion-protective states to reduce data sharing for the prosecution of abortion seekers and providers. In response to Executive Order 14076 on “Securing Access to Reproductive and Other Healthcare Services,” the Department of Health and Human Services (HHS) recently issued a rule prohibiting the use of reproductive information for investigative or prosecutorial purposes where “reproductive care may be assumed to be lawful in the context it was given” and emphasizing confidentiality as integral to patient-provider trust.
Explicit and emphatic protections for reproductive and gender-affirming care in broader privacy laws such as Washington state’s ‘My Health, My Data’ Act (MHMDA), described as a comprehensive privacy law. Portions and variations of the MHMDA ‘framework’ have made cameos in other state laws, including two recent proposals that ultimately failed to become law – the arguably more expansive New York S 158E, which failed to receive the requisite votes, and Vermont H 121, which was vetoed by the governor. Vermont’s governor cited the bill’s private right of action (PRA) as a key reason for the veto. The MHMDA’s PRA has also garnered far more attention than Nevada’s ‘use-based’ framework for sensitive data – argued by privacy scholars to be a more effective approach.
Explicit reference to reproductive examples, such as location data related to abortion clinics, in Federal Trade Commission (FTC) enforcement actions throughout 2023 and 2024. In 2023, the FTC pursued cases related to “unauthorized disclosures” of “sexual and reproductive health” information in GoodRx and Easy Healthcare/Premom, as well as location data related to “reproductive health clinics” in Kochava. The treatment of unauthorized disclosures as breaches marked a paradigmatic change in the FTC’s 2024 Health Breach Notification Rule (HBNR) rulemaking, stemming from the rule’s inaugural application in GoodRx. In 2024, the FTC’s action against InMarket noted the company’s collection of location information, including where consumers “receive medical treatment,” and the X-Mode documents discussed the sensitivity of the location of “women’s reproductive health clinics.”
The framing of privacy as the protective modality for reproductive care, set in 1973, places the responsibility for sound and equitable data practices squarely in the hands of privacy professionals today. In the two years since Dobbs, the issue of reproductive care has drastically shifted privacy policies in increasingly polarized directions across jurisdictions, disrupting data flows, including those that support reproductive and gender health. These disruptions have complicated and inhibited the slow correction of representation in data for improved health outcomes. It is imperative that new privacy laws and policies simultaneously protect and facilitate reproductive and gender health access and improvement.
The World’s First Binding Treaty on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law: Regulation of AI in Broad Strokes
FPF has published a Two-Page Fact Sheet overview of the Framework Convention on AI.
While efforts to regulate the development and deployment of Artificial Intelligence (AI) systems have, for the most part, unfolded at the national or regional level, there has been increased focus on the steps taken by the international community to negotiate and design cross-border regulatory frameworks. As a result, the data protection community, technology lawyers, and AI experts now have the crucial task of looking beyond regional borders for a holistic view of the legislative frameworks aiming to regulate AI.
The Framework Convention on AI is one such significant initiative, spearheaded by the Council of Europe (CoE), an International Organization founded in 1949 with the goal of promoting and advocating for human rights, democracy, and the rule of law. Recognizing that AI systems are developed and deployed across borders, an ad-hoc intergovernmental Committee on Artificial Intelligence (CAI) was established under the auspices of the CoE in January 2022 and tasked with negotiating a binding legal framework on the development, design, and application of AI systems.
There are several key reasons as to why the treaty is a significant and influential development in the field of global AI law and governance, not only in the context of the CoE and its Member States, but around the world.
Firstly, the Framework Convention was drafted by the CAI, composed of Ministers representing not only the CoE’s 46 Member States, but also Ministers or high-level representatives from the Governments of the United States, Canada, Mexico, Japan, Israel, Ecuador, Peru, Uruguay, and Argentina. In addition to representatives of prominent human rights groups, the meetings of the CAI and the drafting of the Framework Convention included representatives of the European Commission, the European Data Protection Supervisor, and the private sector. Inter-governmental and multi-stakeholder participation in the drafting of a cross-border, binding instrument is often a critical factor in determining its impact. Crucially, the Framework Convention will also be open for ratification by countries that are not members of the CoE.
Secondly, the importance of the Framework Convention lies in its scope and content. In addition to general obligations to respect and uphold human rights, it aims to establish a risk-based approach to regulating AI and a number of common principles related to activities within the entire lifecycle of AI systems. Its general principles include, among others, respect for human dignity; transparency and oversight; accountability and responsibility; non-discrimination; and privacy and personal data protection. States Parties to the Framework Convention will have to adopt appropriate legislative and administrative measures which give effect to the provisions of this instrument in their domestic laws. In this way, the Framework Convention has the potential to affect ongoing national and regional efforts to design and adopt binding AI laws, and may be uniquely positioned to advance interoperability.
With this brief overview in mind, this blog post contextualizes the work and mandate of the CAI within the CoE and international law. It then outlines the Framework Convention, its scope, applicability, and key principles, including its risk-based approach, and highlights its approach to fostering international cooperation in the field of cross-border AI governance through the establishment of a ‘Conference of the Parties.’ The post also draws some initial points of comparison with the EU AI Act and the CoE’s Convention for the Protection of Individuals with Regard to the Processing of Personal Data, otherwise known as Convention 108.
Human Rights Are At The Center of the Council of Europe’s Work, Including the Mandate of the Committee on Artificial Intelligence (CAI)
The CoE comprises 46 Member States, 27 of which are Member States of the European Union, and includes Turkey, Ukraine and the United Kingdom. In addition to its Member States, a number of countries hold the status of “Observer States”, meaning that they can cooperate with the CoE, be a part of its Committees (including the CAI), and become Parties to its Conventions. Observer States include Canada, the United States, Japan, Mexico, and the Holy See. Through the Observer State mechanism, CoE initiatives have an increasingly broader reach well beyond the confines of European borders.
As an International Organization, the CoE has played a key role in the development of binding human rights treaties, including the European Convention on Human Rights (ECHR), and Convention 108. Leveraging its experience in advancing both human rights and a high level of personal data protection, among other issues, the CoE has been well-placed to bring members of the international community together to begin to define the parameters of an AI law that is cross-border in nature.
Since its inception in January 2022, the CAI’s work has fallen under the human rights pillar of the CoE, as part of the Programme on the Effective Implementation of the ECHR and the sub-Programme on freedom of expression and information, media, and data protection. It is therefore grounded in existing human rights obligations, including the rights to privacy and personal data protection. To grasp the possible impacts of such a treaty, it is crucial to understand how it will function under international law, while drawing a comparison between the Framework Convention on AI and Convention 108.
1.1. International Law in Action to Protect People in the Age of Computing: From Convention 108 to the Framework Convention
Traditionally, international law governs relations between States. It defines States’ legal responsibilities in their conduct with each other, within the States’ boundaries, and in their treatment of individuals. One of the ways in which international law governs the conduct and relations between States is through the drafting and ratification of international conventions or treaties. Treaties are legally binding instruments that govern the rights, duties, and obligations of participating States. Through treaties, international law encompasses many areas including human rights, world trade, economic development, and the processing of personal data.
It is on the basis of this treaty mechanism under international law that the CoE Convention 108 opened for signature on 28 January 1981 as the first legally binding, international instrument in the data protection field. Under Convention 108, States Parties to the treaty are required to take the necessary steps in their domestic legislation to apply its principles to ensure respect in their territory for the fundamental rights of all individuals with regard to the processing of their personal data.
In 2018, the CoE finalized the modernization of Convention 108 through the Amending Protocol CETS No. 223. While the principle-based Convention 108 was designed to be technology-neutral, its modernization was deemed necessary for two key reasons: 1) to address challenges resulting from the use of new information and communication technologies, and 2) to strengthen the Convention’s effective implementation.
Through the process of modernization, Convention 108 is now better known as Convention 108+, and as of January 2024 it has 55 States Parties. The modernized Convention 108+ is also better aligned with the EU General Data Protection Regulation (GDPR), particularly through the expansion of its Article 9 on the rights of the data subject, which now includes the individual right “not to be subject to a decision significantly affecting him or her based solely on automated processing of personal data” (automated decision-making).
As the only international, binding treaty on personal data protection, Convention 108 is an important reference point for the Framework Convention on AI. Already in its Preamble, the Framework Convention makes reference to the privacy rights of individuals and the protection of personal data, as applicable through Convention 108. Furthermore, both Conventions are similarly grounded in human rights and recognize the close interplay between new technologies, personal data processing, and the possible impacts of these on people’s rights.
Notably, and unlike Convention 108, the Framework Convention on AI takes the form of a so-called “framework convention”, a type of legally binding treaty which establishes broader commitments for its parties. In essence, a framework convention serves as an umbrella document which lays down principles and objectives, while leaving room for stricter and more prescriptive standards and their implementation to domestic legislation.
Framework conventions are effective in creating a coherent treaty regime, while elevating the political will for action and leaving room for consensus on the finer details for a later stage. In this way, and considering that the Framework Convention on AI will also be open for ratification to non-Member States of the CoE, the instrument may become more attractive to a greater number of countries.
The Framework Convention on AI Proposes a Risk-Based Approach and General Principles Focusing on Equality and Human Dignity
2.1. A Harmonized Definition of an AI System
One of the first challenges of international cooperation and rule-making is the need to agree on common definitions. This has been particularly relevant in the context of AI governance and policy, as national, regional and international bodies have consistently negotiated to agree on a common definition for AI. The Framework Convention on AI addresses this in its Article 2, adopting the OECD’s definition of an AI system as a “machine-based system that for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that may influence physical or virtual environments. Different artificial intelligence systems vary in their levels of autonomy and adaptiveness after deployment.”
Promoted by one of the leading International Organizations in the global AI governance conversation, the OECD’s definition of an AI system has also been relevant in regional contexts. For example, the EU’s Artificial Intelligence Act (EU AI Act), which was given the final green light on 21 May 2024, adopts a very similar definition of an AI system. Similarly, Brazil’s draft AI Bill adopts the OECD’s definition, showing the country’s intention to align its legislation with the mounting international consensus on a common definition for AI. It is also worth noting that U.S. President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence and the recently enacted Colorado AI Act also adopt AI definitions that are similar in scope to the OECD definition.
The alignment on definitions is not insignificant, as it is by first agreeing on the subject matter of rule-making that a body of specific, intentional rules and principles can emerge. Furthermore, an initial alignment on definitions can help establish common ground for facilitating interoperability between different AI governance frameworks internationally.
2.2. The Framework Convention Only Applies to Public Authorities and Private Actors Acting on Their Behalf
Before outlining the principles and obligations elaborated by the Framework Convention, it is important to establish the treaty’s scope and applicability. Its Article 3 states that the Convention covers “the activities within the lifecycle of artificial intelligence systems that have the potential to interfere with human rights, democracy and the rule of law.”
Notably, the draft of the Framework Convention on AI from 18 December 2023, which formed the basis for negotiations until its final adoption date in May 2024, made several and consistent references to the lifecycle of an AI system as including the design, development, use and decommissioning stages. However, the finalized Framework Convention on AI makes reference to these stages only once, in its Preamble. With the treaty’s signature and implementation later this year, it still remains to be seen how the lifecycle of an AI system will be interpreted by States Parties in practice, and how this will impact the scope of applicability of the Convention in different countries’ domestic laws.
Regarding scope, Article 3(1)(a) elaborates that each Party to the Framework Convention on AI will have to apply its principles and obligations within the lifecycle of AI systems undertaken by public authorities, or by private actors acting on their behalf. Private actors will only fall under the scope of the Convention if they meet two requirements: 1) the country in which they are established, or in which they develop or deploy their AI products and services, is a State Party to the Convention, and 2) they are designing, developing, or deploying artificial intelligence systems on behalf of that State Party’s public authorities.
Therefore, once ratified by States Parties, the Framework Convention does not by itself impose obligations on all private actors with a role in the lifecycle of AI systems, unless States Parties decide to extend its scope in national law.
In addition to defining what falls within its scope, the Framework Convention also defines matters that do not fall under its purview. Article 3(2) provides that a Party to the Convention shall not be required to apply its obligations to activities within the lifecycle of AI systems related to the protection of its national security interests. States Parties nevertheless remain under an obligation to comply with applicable international laws and human rights obligations, including for purposes of national security.
The Framework Convention will similarly not apply to research and development activities regarding AI systems not yet made available for use, unless their testing has the potential to interfere with human rights, democracy and the rule of law (Article 3(3)). Finally, the Framework Convention will not apply to matters relating to national defence (Article 3(4)).
2.3. General Obligations and Common Principles Include Accountability, Individual Autonomy, and Safe Innovation
Instead of imposing more prescriptive requirements, the Framework Convention on AI opts for a broader, umbrella approach to international AI law, while making specific and continued reference to existing obligations, such as those found in international human rights law.
Articles 4 and 5 of the Framework Convention on AI address the requirements to ensure that activities within the lifecycle of AI systems are consistent with obligations to protect human rights, that they are not used to undermine democratic processes, and that they respect the rule of law. This includes seeking to protect individuals’ fair access and participation in public debate, and their ability to freely form opinions.
In addition, in Articles 7 to 13, seven common principles are elaborated which would apply in relation to activities within the lifecycle of AI systems:
Respect for human dignity and individual autonomy (Article 7);
Maintain measures to ensure that adequate transparency and oversight requirements tailored to specific contexts and risks are in place (Article 8);
Adopt or maintain measures to ensure accountability and responsibility for adverse impacts on human rights, democracy and the rule of law (Article 9);
Ensure that activities within the lifecycle of AI systems respect equality, including gender equality, and the prohibition of discrimination as provided under applicable international or domestic legislation; Article 10 on equality and discrimination also goes beyond by including a positive obligation to maintain measures aimed at overcoming inequalities to achieve fair, just and equitable outcomes in relation to the lifecycle of AI systems (Article 10);
Adopt or maintain measures to ensure that the privacy of individuals and their personal data are protected, including through international laws, standards and frameworks, and that effective guarantees and safeguards are put in place (Article 11);
Take measures to promote the reliability of AI systems and trust in their outputs, which could include requirements related to adequate quality and security (Article 12);
Establish controlled environments for developing, experimenting and testing AI systems under the supervision of competent authorities (Article 13).
The agreed-upon principles attempt to strike a balance between stipulating broad yet effective principles on the one hand, and leaving the determination of more detailed requirements to Member States’ discretion within their own jurisdictions and domestic legislation on the other.
Notably, the draft of the Framework Convention from 18 December 2023 included a general principle related to adopting and maintaining measures to preserve health, with the option of a clause extending the principle to the protection of the environment. Similarly, in the same draft text, the previous iteration of the above-mentioned Article 12 also included options to specify more prescriptive requirements regarding accuracy, performance, data quality, data integrity, data security, governance, cybersecurity, and robustness. Both provisions were amended during negotiations, and these elements did not make it into the final text of the Convention.
A separate Article 21 specifically states that nothing in the Framework Convention shall be construed as limiting, derogating from or otherwise affecting human rights and obligations that may already be guaranteed under other relevant laws. Article 22 goes further to state that the Convention also does not limit the possibility of a State Party to grant wider protection in their domestic law. This is an important addition to the text, particularly at a time in which many countries and regions are drafting and adopting AI legislation.
2.4. The Risk-Based Approach Differs from That of the EU AI Act and Aims to Mitigate Adverse Impacts of AI Systems
In its Article 1 on the object and purpose of the treaty, the Framework Convention on AI elaborates that measures implemented in the lifecycle of AI systems shall be “graduated and differentiated as may be necessary in view of the severity and probability of the occurrence of adverse impacts on human rights, democracy and the rule of law” (emphasis added). In this way, the Framework Convention on AI captures the risk-based approach that has become a familiar component of regulatory discussions and frameworks for AI thus far.
Article 16(1) further outlines what the risk-based approach will entail in practice. It provides that each State Party shall adopt or maintain measures for the identification, assessment, prevention and mitigation of risks posed by AI systems by considering actual and potential harms to human rights, democracy, and the rule of law. Article 16(2) proposes a set of broad requirements for assessing and mitigating risks, including to:
Take due account of the context and intended use of an AI system (Article 16(2)(a));
Take due account of the severity and probability of potential impacts (Article 16(2)(b));
Consider, where appropriate, the perspective of all relevant stakeholders, in particular persons whose rights may be impacted (Article 16(2)(c));
Apply the risk-management requirements iteratively and throughout the lifecycle of AI systems (Article 16(2)(d));
Include monitoring for risks and adverse impacts (Article 16(2)(e));
Include documentation of risks, actual and potential impacts, and on the risk management approach (Article 16(2)(f));
Require testing of artificial intelligence systems before making them available for first use and when they are significantly modified (Article 16(2)(g)).
The risk-based approach principles adopted by the Framework Convention on AI have similarities with obligations in the EU AI Act, particularly in relation to requirements for risk monitoring, documentation, and testing. However, the Framework Convention does not take a layered approach to risk (from limited risk to high risk), and as such it does not prescribe contexts or use cases in which AI systems may be prohibited. Rather, in its Article 16(4), the Framework Convention on AI leaves it to each State Party to assess the need for a moratorium, ban, or other appropriate measures with respect to certain uses of AI that may be incompatible with human rights.
A Newly Created Body Will Promote International Cooperation on AI Governance
International cooperation and coordination in the field of AI governance has been called for by many regional and international organizations and fora. Cross-border cooperation is consistently identified as a priority in the work of the OECD, forming one of the core tenets of the OECD AI Principles. Similarly, the United Nations’ High-Level Advisory Body on Artificial Intelligence is tasked with advancing international, multi-stakeholder governance of AI, and calls for interoperability of AI frameworks and continued cooperation. The United Nations Human Rights Office of the High Commissioner recently released its Taxonomy of Human Rights Risks Connected to Generative AI, in the interest of stimulating international dialogue and agreement. At the intergovernmental level, the Group of 7 (G7) approved an international set of guiding principles on AI and a voluntary Code of Conduct for AI developers as part of the Hiroshima AI Process.
The Framework Convention on AI aims to establish its own proposal for furthering international cooperation, on the basis of a two-pronged approach: the first, encompassed in its Article 23, calls for the formation of a “Conference of the Parties”, to be composed of representatives of the Parties to the treaty; and the second, encompassed in its Article 25, through which Parties are to exchange relevant information among themselves, and to assist States that are not Parties to the Convention to act consistently with its requirements with a view to becoming Parties to it. The Preamble similarly recognizes the value of fostering cooperation and of extending such cooperation to other States that share the same values.
In this way, the Framework Convention on AI would both encourage continued cooperation and dialogue among States Parties and codify the requirement to take an inclusive stance towards countries that are not (yet) Parties to the treaty. This inclusive approach also extends to involving relevant non-State actors in the exchange of information on aspects of AI systems that may have an impact on human rights, democracy, and the rule of law, suggesting ongoing cooperation and exchange with public and private actors.
For an insight into how such continued cooperation may work in practice under the auspices of the Conference of the Parties, a useful example is the Consultative Committee established under Convention 108. The Consultative Committee is composed of representatives of Parties to the Convention and observers such as non-Member States, representatives of international organizations, and non-governmental organizations. It meets three times a year and is responsible for the interpretation of Convention 108 and for improving its implementation, ensuring that the Convention remains fit for purpose and adapts to an ever-growing set of challenges posed by new data processing systems.
Closing Reflections: Future Areas of Interplay?
As the world’s first treaty on artificial intelligence, the CoE’s Framework Convention on AI can help codify the key principles that any national or regional framework should include. Grounded in human rights law, including respect for equality and non-discrimination, human dignity and individual autonomy, and privacy and personal data protection, the Framework Convention on AI is intended to act as a foundational, umbrella treaty on top of which more prescriptive rules can be adopted at the country level.
In this way, complementarity can be achieved between, for example, the Framework Convention on AI and the EU AI Act, and between the Framework Convention on AI and Convention 108. The EU AI Act and Convention 108 are instruments that go beyond principles into prescriptive requirements for the regulation of AI systems and the processing of personal data, respectively. From 5 September 2024, when the Framework Convention will formally open for signature and ratification by States, the breadth of adoption of the treaty beyond CoE Member States should be closely monitored, as should how the mechanisms for international cooperation on AI regulation progress in practice.
FPF has published a Two-Page Fact Sheet outlining the scope, key terms, general obligations and common principles, risk-based approach requirements, and guidance on international cooperation.
FPF at CPDP.ai 2024: From Data Protection to Governance of Artificial Intelligence – A Global Perspective
Drawing inspiration from the latest developments in assessing the impacts and regulation of Artificial Intelligence (AI) technologies, the Brussels-based annual Computers, Privacy and Data Protection (CPDP) conference amended its acronym: the 17th edition, held on 22-24 May, became CPDP.ai, the Computers, Privacy, Data Protection and Artificial Intelligence conference.
To govern or to be governed, that is the question: this year, the main theme focused on the key questions of AI governance globally, and a vibrant programme explored current digital regulatory frameworks while navigating their complex interplay with privacy and data protection.
The Future of Privacy Forum (FPF) was present once again, organizing a panel on Global Approaches to AI Regulation: Towards an International Law on AI? FPF staff members also contributed to the conference as speakers in several other panels, having the opportunity to engage on key topics with a great variety of stakeholders from academia, industry, civil society, and regulatory authorities.
The CPDP.ai organizers recorded all of the sessions, which are available here.
On May 23, FPF’s Policy Manager for Global Privacy, Bianca-Ioana Marcu moderated the FPF-organized panel on Global Approaches to AI Regulation: Towards an International Law on AI? Joining the conversation were Audrey Plonk, Head of Digital Economy Policy Division at the OECD, Emma Redmond, Associate General Counsel at OpenAI, Bruno Bioni, Director and Founder at Data Privacy Brasil, and Gregory Smolynec, Deputy Commissioner Policy and Promotion at the Office of the Privacy Commissioner of Canada (OPC).
This multi-stakeholder, comparative panel explored what we can learn from regional and international approaches to AI regulation, and how these may facilitate a more global, interoperable approach to AI laws. Panelists shared key perspectives:
Bruno Bioni noted that Brazil’s approach to AI regulation is nuanced and context-specific, considering existing asymmetric cultural and power dynamics in Brazil. As such, it incorporates a stronger rights-based approach than, for example, the EU AI Act, by including specific concepts of vulnerability and clauses on the protection of vulnerable groups.
Commissioner Smolynec highlighted Canada’s approach to AI regulation through the OPC’s strategic plan, whose priorities include protecting privacy with maximum impact using existing laws, addressing and advocating for privacy in a time of rapid technological change, fostering a culture of privacy and privacy-by-design, and promoting innovation while leveraging it to protect fundamental rights.
Emma Redmond noted that regulatory alignment and the concept of global harmonization are crucial and that while each piece of regulation has its place and purpose, areas of commonality have to be found.
Audrey Plonk added that in order to talk about coherent approaches to AI regulation across countries and regions, we have to start with agreeing on definitions and terminology, such as the OECD’s definition of an AI system which can now be found in different regional AI laws.
Photo description: Panel titled Global Approaches to AI Regulation: Towards an International Law on AI? (May 23, CPDP.ai)
On May 22, Andreea Șerban, FPF’s Global Privacy and AI Analyst, contributed to a panel titled Fundamental Rights Protection and Artificial Intelligence, organized by Encrypt, a project dedicated to creating a GDPR-friendly, privacy-preserving framework for big data processing. Speakers included Marco Bassini, Assistant Professor at Tilburg Law School, Simona Demková, Assistant Professor at Universiteit Leiden, Michèle Finck, Professor of Law and Artificial Intelligence at the University of Tübingen, Andreea Șerban from the Future of Privacy Forum, and Giovanni de Gregorio, PLMJ Chair in Law and Technology at Católica Global School of Law, who moderated the panel.
The discussions focused on procedural safeguards for AI-driven decision-making as the key approach to protecting fundamental rights, the role of Fundamental Rights Impact Assessments under the EU AI Act, and lessons learned from the GDPR experience that could be leveraged for the implementation of the AI Act, further exploring the interplay between the GDPR and the AI Act from a global perspective.
Photo description: Panel titled Fundamental Rights Protection and Artificial Intelligence (May 22, CPDP.ai)
On May 23, Christina Michelakaki, Policy Counsel for Global Privacy at FPF was part of the panel organized by the Centre for IT & IP Law (CiTiP) at KU Leuven, titled Transforming GDPR into a Risk-Based Harm Tool Alongside Specific AI Regulation. Meeting Separate but Complementary Needs, together with Felix Bieker, Legal Researcher at Unabhängiges Landeszentrum für Datenschutz, Nadya Purtova, Professor of Law, Innovation, and Technology at Utrecht University, and moderated by Michiel Fierens, Doctoral researcher at Centre for IT & IP Law, KU Leuven.
The panel explored the challenges in providing legal interoperability and synergies between specific concepts from the GDPR and the EU AI Act. In the ever-developing AI governance regulatory landscape, with a particular focus on the EU AI Act, privacy and data protection norms remain the tools of choice to regulate personal data processing. In this regard, Christina Michelakaki highlighted that the EU AI Act sets a foundational standard, yet it is up to the entities developing and deploying AI technology to keep track of the national initiatives that further develop these provisions, such as Italy’s new draft AI law, as new internal frameworks could create country-specific obligations to be met by these entities.
Photo description: Panel titled Transforming GDPR into a Risk-Based Harm Tool Alongside Specific AI Regulation. Meeting Separate but Complementary Needs? (May 23, CPDP.ai)
On May 24, Rob van Eijk, FPF’s Managing Director for Europe, was part of the panel Where are we heading? Looking into the EU Strategy for Data through the Lens of AI and Data Protection, organized by Meta, together with Luca Bolognini, President of the Italian Institute for Privacy and Data Valorisation, Peter Craddock, Partner at Keller and Heckman, Patricia Vidal, Partner at Uría Menéndez and moderated by Cecilia Alvarez, EMEA Privacy Policy Director at Meta.
The panel discussed AI in the context of a data-oriented regulatory framework, focusing on how the EU could foster AI-driven innovation and competitiveness while ensuring equitable access and benefits. Rob van Eijk presented one of the latest FPF resources, a detailed EU AI Act timeline, and provided an overview of the current EU data-related legislation, the role of the EU AI Act in this framework, and its expected enforcement. The panel recording can be found here.
Photo description: Panel titled Where are we heading? Looking into the EU Strategy for Data through the Lens of AI and Data Protection (May 24, CPDP.ai)
Photo description: Presentation of the FPF EU AI Act Timeline (May 24, CPDP.ai)
Lastly, on May 20, FPF’s Bianca-Ioana Marcu moderated a panel session in the CPDP.ai pre-event on the Global Impact of the EU’s Regulations on Platform, AI and Data Governance: The Case of Brazil, organized by the Law, Science, Technology & Society (LSTS) Research Group at the Vrije Universiteit Brussel and the Fundação Getulio Vargas (FGV) Law School. The event coincided with the launch of FPF’s Issue Brief on the Regulatory Strategies and Priorities of Data Protection Authorities in Latin America: 2024 and Beyond.
Photo description: Panel moderated by FPF’s Bianca-Ioana Marcu, with Alessandro Mantelero (Polytechnic University of Turin); Laura Schertel Mendes (University of Brasilia); Frederico Oliveira da Silva (BEUC); and Marco Almada (European University Institute).
Overall, the CPDP.ai 2024 conference brought together key stakeholders in the privacy and digital field for yet another successful gathering of minds, delivering engaging and challenging discussions on the future of the regulatory landscape and on how best to address the innovative and disruptive challenges posed by technological developments, with a special highlight on AI and its interplay with data protection.
Editor: Bianca-Ioana Marcu
Future of Privacy Forum Recognizes Leading Careers in Privacy and Efforts in AI Regulation with Inaugural Global Award
June 11, 2024 – Last week, the Future of Privacy Forum (FPF) – a global non-profit focused on data protection headquartered in Washington, D.C. – presented the Government of Singapore with the inaugural Global Responsible AI Leadership Award for the country’s prominent, pragmatic, and respected work in establishing frameworks for AI regulation and governance and fostering international cooperation in this field. FPF also granted privacy experts Jim Halpert and Patrice Ettinger its Career Achievement Award and Excellence in Career Award, respectively. The awards recognize leading U.S. cybersecurity and privacy professionals for their exemplary leadership in the field of data protection. In her roles as Chief Privacy Officer at Pfizer and Avon, Patrice blazed a trail for senior privacy executives as she built global data governance programs at those companies and led efforts to support best practices across the pharma sector. Halpert served the United States as a White House cybersecurity legal advisor and for decades as trusted counsel to leading companies. Each mentored, trained, and supported numerous staff and colleagues who also went on to become leaders in data protection.
“FPF is honored to recognize Jim Halpert, Patrice Ettinger, and the Government of Singapore for their continued efforts and commitments to ensuring data protection and cybersecurity not just in the United States, but globally,” said Jules Polonetsky, FPF’s CEO. “This year, we’ve seen the ever-increasing importance of data privacy and rapid advancements of AI capabilities. Leaders such as our awardees help provide protections, frameworks, and solutions that help advance society and protect citizens.”
The awards were presented during FPF’s 15th Anniversary Advisory Board Meeting in Washington, D.C., on June 6. The award ceremony was held a day after FPF’s inaugural DC Privacy Forum, which brought together thought leaders, industry experts, and policymakers to explore the pivotal intersection of data privacy and AI, with its complex challenges and opportunities, and which also marked the launch of FPF’s Center for Artificial Intelligence.
FPF’s 2024 Achievement Award Winners include:
The Republic of Singapore, Global Responsible AI Leadership Award Winner
(Received by Singapore’s Ambassador to the United States, His Excellency Lui Tuck Yew)
The Government of the Republic of Singapore has made significant progress in the development and governance of artificial intelligence technologies over the last few years. Singapore was also ranked third in the 2023 Global AI Index, which benchmarks nations on their level of investment, innovation, and implementation of AI.
In 2019, Singapore published its first National AI Strategy, outlining plans to drive AI innovation and adoption across the economy. This was refreshed in December 2023. In 2019 and 2020, the Personal Data Protection Commission of Singapore also launched two editions of the Model AI Governance Framework, which won a UN WSIS Prize in 2019. Most recently, Singapore launched the Model AI Governance Framework for Generative AI in June 2024 – one of the first in the world to do so. The Government has also aimed to nurture an AI governance testing community by encouraging open-source engagement and collaboration on AI testing and assurance through its AI Verify Foundation. Singapore’s active contributions to multilateral platforms such as the United Nations and the OECD on global AI governance are a testament to its leading influence in this space.
Jim Halpert, Career Achievement Award Winner
Jim Halpert serves as General Counsel for the Office of the National Cyber Director at The White House. He is a renowned cybersecurity and privacy lawyer who, prior to his current appointment, worked at DLA Piper, where he was co-chair of the firm’s global Privacy & Cybersecurity practice, as well as partner of the IP & technology practice. Halpert has helped draft many state security and breach notice laws, the National Association of Corporate Directors Cyber Risk Handbook, DLA Piper’s Data Protection Laws of the World Handbook, and two major U.S. federal privacy laws.
Patrice Ettinger, Excellence in Career Award Winner
Patrice Ettinger served as the Vice President and Chief Privacy Officer at Pfizer for over a decade, where she led a global team on strategy, legal counseling, cybersecurity, compliance, and policy on privacy and data protection. While at Pfizer, she was a member of the company’s AI Council, Bioethics Advisory Council, and Digital Policy Group, and co-chaired the Pfizer Women’s Resource Group in the New York headquarters. She serves on the AI Governance Advisory Board at the International Association of Privacy Professionals (IAPP), where she is also an IAPP Westin Emeritus Fellow. Ettinger is also a senior fellow at FPF.
###
About Future of Privacy Forum (FPF)
The Future of Privacy Forum (FPF) is a global non-profit organization that brings together academics, civil society, government officials and industry to evaluate the societal, policy and legal implications of data use, identify the risks and develop appropriate protections.
FPF believes technology and data can benefit society and improve lives if the right laws, policies and rules are in place. FPF has offices in Washington D.C., Brussels, Singapore and Tel Aviv. Follow FPF on X and LinkedIn.
Newly Updated Guidance: FPF Releases Updates to the Generative AI Internal Policy Considerations Resource to Provide New Key Lessons For Practitioners
Today, the Future of Privacy Forum (FPF) Center for Artificial Intelligence is releasing a newly updated version of our Generative AI internal compliance document – Generative AI for Organizational Use: Internal Policy Considerations – with new content addressing organizations’ ongoing responsibilities, specific concerns (e.g., high-risk uses), and lessons taken from recent regulatory enforcement related to these technologies. Last year, FPF published a generative AI compliance checklist, drawing on a series of consultations with practitioners and experts from over 30 cross-sector companies and organizations. The checklist gives organizations a practical tool for revising their internal policies and procedures so that employees use generative AI in a way that mitigates data, security, and privacy risks, respects intellectual property rights, and preserves consumer trust.
Generative AI uses have proliferated since the technology’s emergence, transforming how we interact, work, and make decisions. From drafting emails and computer code to performing customer service functions, these technologies have made significant progress. However, as generative AI continues to advance and find new applications, it is essential to consider how the internal policies governing them should evolve in response to novel challenges and developments in the compliance landscape.
Key takeaways from the Considerations document include:
Privacy, data protection, and AI impact assessments are ongoing responsibilities that entail cross-team collaboration from across the organization;
Employees using generative AI systems should be aware of public policy considerations—such as those related to addressing bias and toxicity—that may warrant overriding system outputs in order to mitigate or prevent the social and ethical harms that can arise from the deployment of generative AI systems;
In addition to privacy counsel, organizations should engage with experts representing a variety of legal specialties to issue spot and identify appropriate mitigations;
Organizations that develop and use generative AI tools should follow the latest enforcement trends, such as algorithmic disgorgement, and use them to encourage internal compliance with legal requirements; and
It is important for organizations to evaluate whether certain applications of generative AI systems either qualify as high-risk uses or are prohibited under relevant laws, such as the EU AI Act, as these determinations can affect an organization’s compliance obligations and the contents of internal policies.
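To make that last takeaway more concrete, the following is a small, hypothetical Python sketch of how an organization might triage proposed generative AI use cases against an internal policy. The category names, prohibited and high-risk lists, and resulting actions are illustrative assumptions only; they are not drawn from FPF’s Considerations document and would need to be mapped to the classifications of the applicable legal framework, such as the EU AI Act, and to the organization’s own risk appetite.

```python
from dataclasses import dataclass

# Hypothetical policy categories; real classifications depend on the applicable law
PROHIBITED_USES = {"social_scoring", "untargeted_facial_scraping"}
HIGH_RISK_USES = {"employment_screening", "credit_scoring", "education_assessment"}

@dataclass
class ProposedUse:
    description: str
    category: str                       # e.g. "employment_screening"
    processes_personal_data: bool
    impact_assessment_done: bool = False

def review_proposed_use(use: ProposedUse) -> list[str]:
    """Return the policy actions required before the use case may proceed."""
    actions: list[str] = []
    if use.category in PROHIBITED_USES:
        return ["REJECT: category is treated as prohibited under internal policy"]
    if use.category in HIGH_RISK_USES and not use.impact_assessment_done:
        actions.append("Complete an AI impact assessment with cross-team review")
    if use.processes_personal_data:
        actions.append("Consult privacy counsel and other relevant legal specialists")
    actions.append("Log the use case for ongoing monitoring and periodic reassessment")
    return actions

# Example: a hiring-related chatbot triggers both the assessment and the counsel steps
print(review_proposed_use(ProposedUse(
    description="Chatbot that screens job applicants",
    category="employment_screening",
    processes_personal_data=True,
)))
```

A triage step like this would sit alongside, not replace, the cross-team impact assessments, legal review, and monitoring of enforcement trends described in the takeaways above.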
As generative AI becomes mainstream through tools such as chatbots, image generation apps, and copilot tools that help with writing and creating computer code, it introduces new and transformational use cases for AI in everyday life. However, there are also risks and ethical considerations to manage throughout the lifecycle of these systems. A better understanding of these risks and considerations is essential as practitioners devise policies to manage the benefits and risks of generative AI tools. The re-release of Generative AI for Organizational Use: Internal Policy Considerations strives to do this. Download the updated version of the Considerations document.