FPF and OneTrust Release Collaboration on Conformity Assessments under the proposed EU AI Act: A Step-by-Step Guide & Infographic
Today, the Future of Privacy Forum (FPF) and OneTrust jointly released Conformity Assessments under the proposed EU AI Act: A Step-by-Step Guide and an accompanying Infographic. Conformity Assessments are a key and overarching accountability tool introduced in the proposed EU Artificial Intelligence Act (EU AIA or AIA) for high-risk AI systems.
Conformity Assessments are expected to play a significant role in the governance of AI in the EU. The Guide and Infographic provide a step-by-step explanation of what a Conformity Assessment is, written for the individuals at organizations responsible for meeting the legal obligation to perform one, along with a roadmap outlining the series of steps for conducting a Conformity Assessment.
The Guide and Infographic can serve as an essential resource for organizations that want to prepare for compliance with the EU AIA's final text, which is expected to be adopted by the end of 2023 and to become applicable in late 2025.
Information and background about the proposed EU AI Act & Conformity Assessments
The proposed EU AIA is a risk-based regulation with enhanced obligations for high-risk AI systems, including the obligation to conduct Conformity Assessments. In the EU context, the Conformity Assessment obligation is not new: the EU AIA aims to align with the processes and requirements found in laws that fall under the New Legislative Framework (NLF), and Conformity Assessments are also part of several EU laws on product safety, such as the General Product Safety Regulation, the Machinery Regulation, or the in vitro diagnostic Medical Devices Regulation.
The Conformity Assessment applicability for AI systems
A Conformity Assessment is the process of verifying and/or demonstrating that a high-risk AI system complies with the requirements enumerated in Title III, Chapter 2 of the EU AIA. The first step in the Conformity Assessment journey is determining whether an organization's AI system falls under the Conformity Assessment legal obligation, and the Guide and Infographic include a flowchart of questions an organization can answer to determine whether it must comply with the Conformity Assessment obligation.
Conformity Assessment requirements for high-risk AI systems
The Guide describes each Conformity Assessment requirement, its meaning, and at what phase of the AI system's life cycle each requirement should be met. These requirements include Risk Management System; Data and Data Governance; Technical Documentation; Record Keeping; Transparency Obligations; Human Oversight; Accuracy, Robustness and Cybersecurity.
Overview of EU Plans for Standards & Presumption of Conformity
The European Commission is looking to obtain standards that provide "procedures and processes for conformity assessment activities related to AI systems and quality management systems of AI providers." Such standards will be crucial to developing operational guidance for the implementation of Conformity Assessments and are expected to facilitate compliance with the technical obligations prescribed by the EU AIA. Given that the EU AIA is still under negotiation, the draft standardization request that was issued by the European Commission in December 2022 may be amended when the AIA is finally adopted.
For more information about the EU AIA, Conformity Assessments, and the Guide and Infographic, please contact Katerina Demetzou at [email protected].
Click here to view the Updated Guide on Conformity Assessments under the EU AI Act.
FPF Submits Comments with the National Telecommunications and Information Administration (NTIA) on Kids Online Health and Safety
On November 15, the Future of Privacy Forum filed comments with the National Telecommunications and Information Administration (NTIA) in response to its request for comment on Kids Online Health and Safety, issued as part of the Biden-Harris Administration's Interagency Task Force on Kids Online Health & Safety.
Young people increasingly engage with their peers online, and lawmakers continue to introduce legislation to expand protections for the privacy and safety of minors beyond the existing COPPA framework. However, adopting a one-size-fits-all approach to developing policies for minors online presents challenges, as protections that are appropriate for very young children may not be suitable for older teenagers with greater agency and autonomy. While addressing minors' online experiences is a multi-faceted issue, as evidenced by the interagency task force itself, FPF has identified four of the most impactful areas for privacy that the Task Force should consider as it develops voluntary guidance, policy recommendations, and a toolkit on safety-, health-, and privacy-by-design for industry to apply in developing digital goods and services.
1. Children and teens have varying privacy needs across developmental stages, and overgeneralized restrictions may exacerbate health risks and undermine the developmental benefits of social online experiences. In particular, limitations on access to content and connecting with peers may have negative consequences on the ability of adolescents to explore and develop independence and identity.
2. While many stakeholders agree on high-level policy goals, such as extending heightened protections to both children and teens or minimizing unnecessary data collection, there is little consensus on how best to implement broadly agreed-upon policy goals. In some areas, such as age assurance, there is significant disagreement on how best to grapple with conflicting equities on privacy and safety.
3. Companies building new features to protect the privacy and safety of minors online currently take into account the varying developmental stages of minors and the interplay between minors' autonomy and parental involvement. These two considerations inform how companies balance privacy and safety when introducing new features and when reviewing existing tools as research and societal norms evolve.
4. FPF recommends additional research investigating minors using online services for educational purposes versus recreation, shifts in privacy risks at different ages and stages of development, and the relationship between privacy and safety in applying heightened protections to teens. This research is necessary to identify appropriate safeguards for minors online in both policy and practice.
FPF Statement on Biden-Harris AI Executive Order
The Biden-Harris AI plan is remarkably comprehensive, taking a whole-of-government approach with an impact that extends beyond government agencies. Although the executive order focuses on the government's use of AI, its influence on the private sector will be profound due to the extensive requirements for government vendors; the priorities on worker surveillance, education, and housing; the development of standards to conduct risk assessments and mitigate bias; the investments in privacy enhancing technologies; and more. Also important is the call for bipartisan privacy legislation, the most important precursor to protections addressing uses of AI that impact vulnerable populations.
FPF Submits Comments to the FEC on the Use of Artificial Intelligence in Campaign Ads
On October 16, 2023, the Future of Privacy Forum submitted comments to the Federal Election Commission (FEC) on the use of artificial intelligence in campaign ads. The FEC is seeking comments in response to a petition that asked the Agency to initiate a rulemaking to clarify that its regulation on “fraudulent misrepresentation” applies to deliberately deceptive AI-generated campaign ads.
FPF’s comments follow an op-ed FPF’s Vice President of U.S. Policy Amie Stepanovich and AI Policy Counsel Amber Ezzell published in The Hill on how generative AI can be used to manipulate voters and election outcomes, and the benefits to voters and candidates when generative AI tools are deployed ethically and responsibly.
With contributions from Aaron Massey, FPF Senior Policy Analyst and Technologist; Keir Lamont, Director for U.S. Legislation; and Tariq Yusuf, FPF Policy Intern.
Several technologies can help individuals configure their devices to automatically opt out of the sale or sharing of their personal information for targeted advertising. Seven state privacy laws require organizations to honor opt-out requests transmitted through these tools. This blog post discusses the legal landscape governing Universal Opt-Out Mechanisms (UOOMs), as well as key differences among the leading UOOMs in setup, default settings, and whether those settings can be configured. We then offer guidance urging policymakers to prioritize clarity and consistency when establishing, interpreting, and enforcing UOOM mandates.
The legal environment behind Universal Opt-Out Mechanisms
Online advertising continues to evolve, particularly in reaction to new regulatory requirements, as an increasing number of international jurisdictions and U.S. states enact comprehensive privacy laws. As of October 2024, twelve states grant individuals the right to opt out of businesses selling their personal information or processing that data for targeted advertising. Of these twelve state privacy laws, seven include provisions that make it easier for individuals to opt out of certain uses of personal data, including the kinds of personal and pseudonymized information routinely shared with websites, such as browser information or data sent via cookies.
Historically, a significant practical hurdle existed in the implementation of opt-out rights: users wishing to opt out of the use of this information for targeted advertising had to locate and manually click the opt-out links that businesses provide on their web pages, and they generally had to do so for every site they visited. To make opting out easier, seven states' privacy laws (California, Colorado, Connecticut, Delaware, Montana, Oregon, and Texas) require businesses to honor individuals' opt-out preferences transmitted through Universal Opt-Out Mechanisms (UOOMs) as valid means of opting out of targeted advertising and data sales. UOOMs are a range of desktop and mobile tools that allow consumers to configure their devices to automatically opt out of the sale or sharing of their personal information with the internet-based entities they interact with. These tools transmit consumers' opt-out preferences using technical specifications, chief among them the Global Privacy Control (GPC).
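As a technical illustration, the sketch below (written in TypeScript for a browser context) shows the two channels the GPC specification defines for conveying this preference: a Sec-GPC: 1 HTTP request header and a navigator.globalPrivacyControl property exposed to page scripts. It is a minimal, hypothetical example of reading the signal from a page's perspective, not a depiction of any particular UOOM's implementation.

```typescript
// Minimal sketch (TypeScript, browser context) of how a GPC preference set by a
// UOOM surfaces to a website. Per the GPC specification, the preference is
// transmitted as a `Sec-GPC: 1` request header and exposed to page scripts as
// the boolean `navigator.globalPrivacyControl` property.

// The property is not yet present in every TypeScript DOM typings release,
// so it is declared here for illustration.
declare global {
  interface Navigator {
    globalPrivacyControl?: boolean;
  }
}

export function visitorHasOptedOut(): boolean {
  // `true` means the visitor's browser or extension asserts an opt-out of the
  // sale or sharing of their personal information.
  return navigator.globalPrivacyControl === true;
}

// Hypothetical usage: a site's own tag-management code could consult this
// check before loading third-party advertising tags.
if (visitorHasOptedOut()) {
  console.log("GPC detected: suppressing sale/sharing of personal information.");
}
```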
California became the first state to give opt-out preference signals the force of law, through an Attorney General rulemaking process completed in August 2020. Specifically, businesses that do not honor the Global Privacy Control on their websites risk being found in noncompliance with the California Consumer Privacy Act (CCPA), which was the central issue in the recent enforcement action against Sephora, an online retailer. In the complaint, state authorities alleged that Sephora's website was not configured to detect or process GPC signals and, as a result, failed to honor users' opt-out preferences by not opting them out of sales of their data.
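For illustration only, the following minimal sketch shows how a website's server might detect an incoming GPC signal. It assumes a Node.js server using Express (an assumption made purely for the example) and is not intended to reflect Sephora's or any other company's actual systems.

```typescript
// Minimal server-side sketch (TypeScript with Express, chosen only for
// illustration) of detecting and recording an incoming GPC signal so that
// downstream logic can suppress sales or sharing of personal information.
import express, { NextFunction, Request, Response } from "express";

const app = express();

app.use((req: Request, res: Response, next: NextFunction) => {
  // Per the GPC specification, UOOMs set the request header `Sec-GPC: 1`.
  const gpcAsserted = req.header("Sec-GPC") === "1";
  // Record the preference so later handlers (analytics, ad-tech integrations)
  // can honor it as an opt-out, e.g. under the CCPA.
  res.locals.gpcOptOut = gpcAsserted;
  next();
});

app.get("/", (_req: Request, res: Response) => {
  res.send(
    res.locals.gpcOptOut
      ? "Opt-out signal detected; personal information will not be sold or shared."
      : "No opt-out signal detected."
  );
});

app.listen(3000);
```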
Although other UOOMs exist (and more are likely to emerge), we focus exclusively on the tools endorsed by the creators of the Global Privacy Control specification. In 2023, the FPF team downloaded and installed each tool and evaluated each tool’s installation process, whether GPC signals were sent without additional configuration, and whether those settings could be adjusted (see Figure 1 below).
| Tool | Installation | GPC Signals Sent without Additional Configuration | Can the Configuration Be Adjusted? |
| --- | --- | --- | --- |
| IronVest | Requires account sign-up | ❌ No | Yes; GPC can be enabled only on a per-site basis, not globally. |
| Brave Browser | No steps required after installation | ✅ Yes | No; GPC cannot be disabled, either globally or per-site, even when other protections in the "Shields" feature are turned off. |
| Disconnect | No steps required after installation | ❌ No | Yes; GPC can be enabled globally but not on a per-site basis using a checkbox in the main browser plugin window. |
| DuckDuckGo Privacy Browser | No steps required after installation | ✅ Yes | Yes; GPC can be disabled globally but not on a per-site basis. |
| DuckDuckGo Privacy Essentials | No steps required after installation | ✅ Yes | Yes; GPC can be disabled both globally or on a per-site basis by disabling "Site Privacy Protection." |
| Firefox | Requires technical configuration | ❌ No | Yes; GPC can be disabled globally in the browser's technical configuration but not on a per-site basis. |
| OptMeowt | No steps required after installation | ✅ Yes | Yes; GPC can be disabled both globally or on a per-site basis by disabling the "Do Not Sell" feature. |
| Privacy Badger | No steps required after installation | ✅ Yes | Yes; GPC can be disabled both globally or on a per-site basis by disabling the "Do Not Sell" feature. |

Figure 1: Observations of eight leading UOOM tools as of October 12, 2023
Our survey allows us to make four key observations about the state of these UOOMs.
Current GPC implementations are largely limited to browser plugins for desktop environments. Google Chrome, Microsoft Edge, and Safari do not natively support the GPC signal. Mozilla Firefox supports sending the GPC signal, but configuring it was the most challenging setup of all the tools we tested. Brave and DuckDuckGo are the only browsers that natively support the GPC, and they are the only desktop and mobile browsers with GPC enabled by default.
GPC tools differ significantly from one another in the user experience of both installation and use. The installation process for six of the tools was straightforward and therefore suitable for consumers with a broad range of technical knowledge. Two of the tools, IronVest and Firefox, require additional steps to enable GPC. IronVest requires the creation of an account upon downloading the tool, and through that account offers not only GPC but also a subscription-based suite of other online security services, such as password managers and email masking. By contrast, Firefox does not require an account, but it requires users to complete additional steps to enable the GPC that demand technical knowledge or experience. Specifically, users must access the about:config settings page in Firefox, which warns the user to "Proceed with Caution," and must know how to find the GPC configuration options. Users with limited experience configuring about:config settings on this browser may struggle to enable the GPC signal in Firefox. Following FPF's study on September 25, 2023, Mozilla enabled a graphical UI setting for GPC in Firefox Nightly, which provides tech-savvy users with more experimental builds of Firefox. Features typically migrate from Nightly to the more broadly available Firefox browser over time.
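For readers who want to attempt the configuration themselves, the about:config preferences below reflect our best understanding of Firefox's GPC settings at the time of writing; the exact preference names and their behavior may vary across Firefox versions and should be treated as illustrative rather than authoritative.

```
privacy.globalprivacycontrol.enabled = true
privacy.globalprivacycontrol.functionality.enabled = true
```

In our understanding, the first preference turns on sending the GPC signal, while the second enables the underlying GPC functionality, including exposing the navigator.globalPrivacyControl property to websites.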
GPC tools differ significantly in their default settings after installation, potentially creating consumer confusion when switching from one service to another. Three of the tools leave the GPC off by default following installation; five of them enable the GPC by default. Firefox, for example, does not enable GPC by default and requires the most work to enable it, whereas Brave enables GPC by default without notifying users or allowing them to disable it. Many tools include other privacy features in addition to GPC, such as Privacy Badger's ability to block surreptitious tracking mechanisms like supercookies. These additional features were not examined in this report, though they may create divergent user experiences that lead consumers to draw different conclusions about each tool's utility and effectiveness. Users installing a privacy-focused browser extension or using a privacy-focused browser may be unaware that, in certain cases, privacy features are disabled by default and require additional configuration after installation.
Finally, we observe that these tools significantly differ in configuration options for when and where to send the GPC signal. The tools collectively deploy two types of configuration: globally sending the GPC to every site and/or selectively sending the GPC on a per-site basis. None of the tools have pre-configured profiles or “allow / deny” lists for when to send the GPC, and about half of the tools allow users to set the GPC both as a global setting and on a per-site basis. IronVest only allows sending the GPC on a per-site basis, while Brave only enables the GPC on a global basis. However, given that most state laws that require compliance with a UOOM also require affirmative consent to opt back in following an opt-out, it is unclear whether disabling the GPC signal for a site after visiting it will have legal effect.
Next Steps & Policy Considerations
In 2023 alone, six states passed comprehensive privacy laws. In the years ahead, we expect that more states will be added to this list, and many are likely to include provisions regarding UOOMs. Policymakers must ensure that all UOOM requirements offer adequate clarity and consistency.
One area where greater detail from policymakers would benefit organizations seeking to comply with legal requirements is guidance directed not only at covered businesses but also at vendors of consumer-facing privacy tools. Specifically, guidance would be useful regarding how a UOOM must be configured or implemented to give assurance that the GPC signals being sent are a legally valid expression of individual intent. For example, a seemingly minor detail, such as whether a tool contains a "per-site" toggle for the GPC, may be significant in one state but not another.
Similarly, the question of "default settings" and their legal significance requires greater clarity in many jurisdictions. For example, under Colorado law, a GPC signal is a valid exercise of an individual's opt-out rights only when it reflects the individual's "affirmative, freely given, and unambiguous choice." This requirement creates an engineering ambiguity for publishers and websites over the validity of the GPC signals they receive. A signal sent by a browser extension that requires a separate, affirmative user configuration before sending the GPC will unambiguously be a valid expression of individual choice. By contrast, an individual using a browser marketed with a variety of privacy-preserving features, including the GPC, may be sending a GPC signal that does not meet the law's standards for defaults if those features are enabled by default and without notice to users; the user may have wanted a privacy feature other than GPC and been unaware that the GPC signal would be sent. Yet another user may both seek out and appreciate a default-on GPC and not want it to be legally ignored simply because they did not affirmatively enable it. Publishers and websites have no engineering mechanism to differentiate between these scenarios, which incentivizes them to use nonstandard techniques, such as fingerprinting, to discern which GPC signals are valid.
As more states implement comprehensive privacy laws, specific privacy rights may fracture across jurisdictions in ways that range from broadly compatible to irreconcilable. The current GPC specification does not support conveying users' jurisdictions, so it is unclear how organizations should differentiate between signals originating in one jurisdiction or another. The result could be that entities must choose which state's law to risk running afoul of in order to follow the requirements of a conflicting jurisdiction.
As user-facing privacy tools are developed and updated, responsible businesses will likely err on the side of over-inclusion by treating all GPC signals as valid UOOMs. However, increased user adoption and the expansion of the GPC into new sectors (such as connected TVs or vehicles) could change expectations and put more pressure on different kinds of advertising activities. In the absence of uniform federal standards that would create guidance for such mechanisms, most businesses will aim to streamline compliance across states, providing a significant opportunity for policymakers to shape the direction of consumer privacy in the coming years. Policymakers must be aware of these developments and strive for clarity and consistency in order to best inform organizations, empower individuals, and set societal expectations and standards that can be applied in future cases.
FPF Weighs In on the Responsible Use and Adoption of Artificial Intelligence Technologies in New York City Classrooms
Last week, the Future of Privacy Forum provided testimony at a joint public oversight hearing before the New York City Council Committees on Technology and Education on "The Role of Artificial Intelligence, Emerging Technology, and Computer Instruction in New York City Public Schools."
Specifically, FPF urged the Council to consider the following recommendations for the responsible adoption of artificial intelligence technologies in the classroom:
Establish a common set of principles and definitions for AI, tailored specifically to educational use cases;
Identify AI uses that pose major risks – especially tools that make decisions about students and teachers;
Create rules that combat harmful uses of AI while preserving beneficial use;
Build more transparency within the procurement process with regard to how vendors use AI; and
Take a student-driven approach that enhances the ultimate goal of serving students and improving their educational experience.
During this back-to-school season, we are observing school districts across the country wrestle with questions about how to manage the proliferation of artificial intelligence technologies in the tools and products used in K-12 classrooms. In the 2022-2023 school year, districts used an average of 2,591 different edtech tools. While there is no standard convention for indicating that a product or service uses AI, we know that the technology is embedded in many different types of edtech products and has been for some time. We encourage districts to be transparent with their school communities regarding how AI is used within the products they rely on.
But first, it is critical to ensure uniformity in how AI is defined so that it is clear what technology is covered and to avoid creating overly broad rules that may have unintended consequences. A February 2023 audit by the New York City Office of Technology and Innovation on “Artificial Intelligence Governance” found that the New York City Department of Education has not established a governance framework for the use of AI, which creates risk in this space. FPF recommends starting with a common set of principles and definitions, tailored specifically to educational use cases.
While generative AI tools such as ChatGPT have gained public attention recently, many other tools already used in schools fall under the umbrella of AI. Uses may be as commonplace as autocompleting a sentence in an email or speech-to-text tools that provide accommodations to special education students, or as complex as algorithms used to identify students at higher risk of dropping out. Effective policies governing the use of AI in schools should follow a targeted and risk-based approach to solve a particular problem or issue.
We can look to the moratorium on adopting biometric identification technology in New York schools following the 2020 passage of State Assembly Bill A6787D as an example of how an overly broad law can have unintended consequences. Although lawmakers appeared to be addressing legitimate concerns stemming from facial recognition software used for school security, a form of algorithmic decision making, the moratorium had broader implications: arguably, it could be read to ban the use or purchase of many of the computing devices used by schools. This summer, the NY Office of Information Technology Services released its report on the Use of Biometric Identifying Technology in School, and it is now likely that the moratorium on biometric identification technology in schools will be reversed or significantly modified. This will present an opportunity for the city to consider what additional steps should be taken if schools resume use of biometric technology, and it will also likely open the floodgates for new procurement.
Accordingly, this is an important moment for pausing to think through the specific use cases of AI and technology in the classroom more broadly, identify the highest risks to students, and prioritize developing policies that address those higher risks. When vetting products, we urge schools to consider whether that product will actually enhance the ultimate goal of serving students and improving their educational experience and whether the technology is indeed necessary to facilitate that experience.
We urge careful consideration of the privacy and equity concerns associated with adopting AI technologies, as AI systems may have a discriminatory impact on historically marginalized or otherwise vulnerable communities. We have already seen an example of how this can manifest in classrooms. Self-harm monitoring technology, commonly deployed in schools, works by employing algorithms that scan for and detect keywords or phrases across different student platforms. FPF research found that "using self-harm monitoring systems without strong guardrails and privacy-protective policies is likely to disproportionately harm already vulnerable student groups." Being flagged can lead to students being needlessly put in contact with law enforcement and social services or facing school disciplinary consequences. We recommend engaging the school community in conversation prior to adopting this type of technology.
It is also critical to note that using any new classroom technology typically comes with increased collection, storage, and sharing of student data, which is already subject to requirements under laws like FERPA and New York Ed Law 2-D. Districts should have a process in place to vet any new technology brought into classrooms, and we urge an emphasis on the proper storage and security of data used in AI systems to protect against breaches and privacy harms for students. School districts are already vulnerable targets for cyberattacks, and it is important to minimize risk.
Finally, we flag that there are disparities in the accuracy of decisions made by AI systems, and we caution that risks arise when low-accuracy systems are treated as gospel, especially in the context of high-impact decision making in schools. Decisions based on AI have the potential to shape a student's education in tangible ways.
We encourage you to consider these recommendations and thank you for allowing us to participate in this important discussion.
Future of Privacy Forum and Leading Companies Release Best Practices for AI in Employment Relationships
Expert Working Group Focused on AI in Employment Launches Best Practices that Promote Non-Discrimination, Human Oversight, Transparency, and Additional Protections.
Today, the Future of Privacy Forum (FPF), with ADP, Indeed, LinkedIn, and Workday — leading hiring and employment software developers — released Best Practices for AI and Workplace Assessment Technologies. The Best Practices guide makes key recommendations for organizations as they develop, deploy, or increasingly rely on artificial intelligence (AI) tools in their hiring and employment decisions.
Organizations are incorporating AI tools into their hiring and employment practices at an unprecedented rate. When guided by a framework centered on responsible and ethical use, AI hiring tools can help match candidates with relevant opportunities and inform organizations’ decisions about who to recruit, hire, and promote. However, AI tools present risks that, if not addressed, can impact job candidates and hiring organizations and pose challenges for regulators and other stakeholders.
FPF and the AI working group recommend:
Developers and deployers should have clearly defined responsibilities regarding AI hiring tools’ operation and oversight;
Organizations should not secretly use AI tools to hire, terminate, and take other actions that have consequential impacts;
AI hiring tools should be tested to ensure they are fit for their intended purposes and assessed for bias;
AI tools should not be used in a manner that harmfully discriminates, and organizations should implement anti-discrimination protections that go beyond laws and regulations as needed;
Organizations should not use facial characterization and emotion inference technologies in the hiring process absent public disclosures supporting the tools’ efficacy, fairness, and fitness for purpose;
Organizations should implement AI governance frameworks informed by the NIST AI Risk Management Framework;
Organizations should not claim that AI hiring tools are “bias-free;” and
AI hiring tools should be designed and operated with informed human oversight and engagement.
“When properly designed and utilized, AI must process vast amounts of personal data fairly and ethically, keeping in mind the legal obligations organizations have to those with disabilities and people from underrepresented, marginalized and multi-marginalized communities. This is why developers and deployers of AI in the employment context should use these Best Practices to show their commitment to ethical, responsible, and human-centered AI tools in compliance with civil rights, employment and privacy laws.”
“The intersection between hiring, employment, and AI tools presents complex opportunities and challenges for organizations, particularly concerning issues of equity and fairness in the workplace. Our Best Practices will guide U.S. companies as they create and use AI technologies that impact workers, ensuring that they address key issues regarding non-discrimination, responsible AI governance, transparency, data security and privacy, human oversight, and alternative review procedures.”
John Verdi, Senior Vice President of Policy at FPF
Leading policy frameworks, including NIST's AI Risk Management Framework (AI RMF), the Civil Rights Principles for Hiring Assessment Technologies, the Data and Trust Alliance's Algorithmic Safety: Mitigating Bias in Workforce Decisions initiative, and more, helped inform the Best Practices guide.
“AI tools can help candidates discover and describe their skills and find new opportunities that match their experience. The Best Practices assist organizations in instituting guardrails around using AI systems responsibly and ethically.”
Jack Berkowitz, ADP’s Chief Data Officer
“The use of automated technology in the workplace can result in better matches for both job seekers and employers, increased access to diverse candidates and a broader pool of applicants, and greater access to hiring tools for small to mid-sized businesses. These Best Practices provide concrete guidance for using the tools responsibly.”
Trey Causey, Indeed’s Head of Responsible AI
“We know that a responsible and principled approach to AI can lead to more transparency and better matching of job seeker skills to employer needs. The Best Practices are a real step forward and reflect the accountability needed to ensure these technologies continue to power opportunity for all members of the global workforce.”
Sue Duke, LinkedIn’s VP of Global Public Policy
“Since 2019, Workday has partnered with government officials and thought leaders like the Future of Privacy Forum to advance smart safeguards that cultivate trust and drive responsible AI. We’re proud to have co-developed these Best Practices, which offer policymakers a roadmap to responsible AI in the workplace and call on other organizations to join us in endorsing them.”
Chandler Morse, Workday’s Vice President of Public Policy
While existing anti-discrimination laws can apply to the use of AI tools for hiring, the AI governance field is still maturing. FPF’s Best Practices engages the broader AI governance field in the ethical use and development of AI for employment. The guide may also be updated to reflect developing AI regulatory requirements, frameworks, and technical standards.
Call for Nominations: 14th Annual Privacy Papers for Policymakers
The Future of Privacy Forum (FPF) invites privacy scholars and authors with an interest in privacy issues to submit finished papers to be considered for FPF’s 14th annual Privacy Papers for Policymakers (PPPM) Award. This award provides researchers with the opportunity to inject ideas into the current policy discussion, bringing relevant privacy research to the attention of the U.S. Congress, federal regulators, and international data protection agencies.
The award will be given to authors who have completed or published top privacy research and analytical work in the last year that is relevant to policymakers. The work should propose achievable short-term solutions or new means of analysis that could lead to real-world policy impact.
FPF is pleased to also offer a student paper award for students of undergraduate, graduate, and professional programs. Student submissions must follow the same guidelines as the general PPPM award.
We encourage you to share this opportunity with your peers and colleagues. Learn more about the Privacy Papers for Policymakers program and view previous years' highlights and winning papers on our website.
FPF will invite winning authors to present their work at an annual event with top policymakers and privacy leaders in spring 2024 (date TBD). FPF will also publish a printed digest of the summaries of the winning papers for distribution to policymakers in the United States and abroad.
Learn more and submit your finished paper by October 20th, 2023. Please note that the deadline for student submissions is November 3rd, 2023.
Navigating Cross-Border Data Transfers in the Asia-Pacific region (APAC): Analyzing Legal Developments from 2021 to 2023
Today, the Future of Privacy Forum (FPF) published an Issue Brief comparatively analyzing cross-border data transfer provisions in new data protection laws in the Asia-Pacific. Titled Navigating Cross-Border Data Transfers in the Asia-Pacific region (APAC): Analyzing Legal Developments from 2021 to 2023, the Issue Brief outlines key developments in cross-border data transfers in the Asia-Pacific in the last few years, and explores the potential impact on businesses operating in the APAC region.
Today, cross-border data transfers are pivotal in enabling the global digital economy and facilitating digital trade. These transfers allow businesses to provide services globally, while allowing individuals access to a wide range of digital services and platforms. Yet, cross-border data transfers also raise legitimate concerns regarding the protection of individuals’ privacy and security.
Amidst this tension, data protection laws attempt to strike a balance by requiring organizations to satisfy certain conditions to ensure that personal data is appropriately protected when it is transferred out of jurisdiction, absent special circumstances. Common conditions include:
Assessment of the level of personal data protection in the destination jurisdiction (also known as “adequacy”);
Adoption of safeguards, such as legally binding agreements or certifications or rules approved by a regulator;
Consent from data subjects; and
Necessity for various, specifically defined purposes.
The APAC region has seen a significant acceleration in data protection regulatory activity in recent years, including the enactment of new data protection laws. In particular, since 2021, China, Indonesia, Japan, South Korea, Thailand, and Vietnam have newly enacted or amended their data protection laws and regulations.
An analysis of the data protection laws and regulations in these six jurisdictions indicates that there is a degree of alignment between Indonesia, Japan, South Korea, and Thailand regarding legal bases for cross-border data transfers, but China and Vietnam appear to be outliers with their own unique requirements. Notably:
Indonesia, Japan, South Korea, and Thailand all recognize adequacy and consent as valid legal bases for cross-border data transfers. There is also some alignment on the recognition of certification schemes.
However, given that these laws were enacted or amended recently, there remains uncertainty on which jurisdictions might be recognized as mutually adequate, or which certification schemes will be ultimately recognized.
China and Vietnam differ substantially from the other jurisdictions studied. Both jurisdictions impose unique conditions for transferring personal data, such as requiring transferring organizations to file detailed assessments with the relevant regulator.
Vietnam also only recognizes a single legal basis for transferring personal data abroad, while China recognizes three.
These divergent approaches to regulating cross-border data transfers likely reflect differing policy considerations in each jurisdiction and the tension between enabling cross-border data transfers to facilitate digital trade and national considerations, such as protecting national security and sovereignty. These divergences could complicate efforts by organizations operating in multiple jurisdictions to align their regional compliance programs. Nonetheless, there are promising avenues for increasing interoperability in the region, such as standardized or model contractual clauses, the growing recognition of regional certification schemes such as the APEC Cross Border Privacy Rules and Privacy Recognition for Processors systems, and, to a more limited extent, the possibility that some jurisdictions may obtain adequacy decisions from the European Union in the future.
For deeper analysis of these points and of the cross-border data transfer provisions for each of the six jurisdictions covered, download the Issue Brief here.
For inquiries about this Issue Brief, please contact Josh Lee Kok Thong, Managing Director (APAC), at [email protected], or Dominic Paulger, Policy Manager (APAC), at [email protected].
FPF is grateful to the following contributors for their assistance in ensuring the accuracy of this report:
Kemeng Cai (In-house Privacy Counsel, China)
Iqsan Sirie (Partner, TMT, Assegaf Hamzah & Partners) and Daniar Supriyadi (Associate, Capital Markets, M&A, Assegaf Hamzah & Partners)
Takeshige Sugimoto (Managing Director and Partner, S&K Brussels LPC; Senior Fellow, Future of Privacy Forum)
Thitirat Thipsamritkul (Lecturer, Faculty of Law, Thammasat University)
Kwang Bae Park (Partner, Head of TMT, Lee & Ko)
Kat MH Hille (General Counsel, OceanCDR.Tech)
Please note that nothing in this Issue Brief should be construed as legal advice. Further reading: In November 2022, FPF's APAC office concluded a year-long project on consent and alternative legal bases for processing data in APAC that culminated in a report comparing relevant requirements in 14 APAC jurisdictions.
How Data Protection Authorities are De Facto Regulating Generative AI
The Istanbul Bar Association IT Law Commission published Dr. Gabriela Zanfir-Fortuna’s article, “How Data Protection Authorities are De Facto Regulating Generative AI,” in their August monthly AI Working Group Bulletin, “Law in the Age of Artificial Intelligence” (Yapay Zekâ Çağinda Hukuk).
Generative AI took the world by storm in the past year, with services like ChatGPT becoming "the fastest growing consumer application in history." For generative AI applications to be trained and to function, immense amounts of data, including personal data, are necessary. It should be no surprise that Data Protection Authorities ('DPAs') were the first regulators around the world to take action, from opening investigations to issuing orders suspending services where they found breaches of data protection law.
Their concerns include:
the lack of a justification (a lawful ground) for processing the personal data used to train AI models;
a lack of transparency about the personal data used for training and about how personal data collected while users interact with the AI service is used;
the absence of avenues to exercise data subject rights such as access, erasure, and objection;
the impossibility of exercising the right to correct inaccurate personal data in the output generated by such AI services;
insufficient data security measures;
the unlawful processing of sensitive personal data and children's data; and
the failure to apply data protection by design and by default.
Global Overview of DPA Investigations into Generative AI
Defined broadly, DPAs are supervisory authorities vested with the power to enforce comprehensive data protection law in their jurisdictions. In the past six months, as the popularity of generative AI grew among consumers and businesses around the world, DPAs began opening investigations into how providers of such services are complying with the legal obligations governing how personal data are collected and used under their respective national data protection laws. Their efforts currently focus on OpenAI as the provider of ChatGPT. To date, only two of the investigations, in Italy and South Korea, have resulted in official enforcement action, albeit preliminary. Here is a list of known open investigations, their timelines, and key concerns:
The Italian DPA (Garante) issued an emergency order on 30 March 2023, to block OpenAI from processing personal data of people in Italy. The Garante laid out several potential violations of provisions of the General Data Protection Regulation (‘GDPR’), including lawfulness, transparency, rights of the data subject, processing personal data of children, and data protection by design and by default. It lifted the prohibition a month later, after OpenAI announced changes as required by the DPA. An investigation on substance is still ongoing.
In the aftermath of the Italian order, the European Data Protection Board created a task force on 13 April 2023 to "foster cooperation and exchange information" in relation to the handling of complaints and investigations into OpenAI and ChatGPT at the EU level.
The federal Office of the Privacy Commissioner of Canada (OPC) announced on 4 April 2023 that it had launched an investigation into ChatGPT following a complaint that the service processes personal data without consent. On 25 May, the OPC announced that it would investigate ChatGPT jointly with the provincial privacy authorities of British Columbia, Quebec, and Alberta, expanding the investigation to also examine whether OpenAI has respected obligations related to openness and transparency, access, accuracy, and accountability, as well as purpose limitation.
The Ibero-American Network of DPAs, which brings together supervisory authorities from 21 Spanish- and Portuguese-speaking countries in Latin America and Europe, announced on 8 May 2023 that it had initiated a coordinated action in relation to ChatGPT.
Japan's Personal Information Protection Commission (PPC) published a warning issued to OpenAI on 1 June 2023, which highlighted that OpenAI should not collect sensitive personal data from users of ChatGPT or other persons without obtaining consent, and that it should give notice in Japanese of the purposes for which it collects personal data from users and non-users.
The Brazilian DPA announced on 27 July 2023 that it has started an investigation into how ChatGPT is complying with the Lei Geral de Proteção de Dados (LGPD) after receiving a complaint, and after reports in the media arguing that the service as provided is not compliant with the country’s comprehensive data protection law.
The US Federal Trade Commission (FTC) opened an investigation into ChatGPT in July 2023 to examine whether its provider has engaged in "unfair or deceptive privacy or data security practices or engaged in unfair or deceptive practices relating to risks of harm to consumers" in violation of Section 5 of the FTC Act.
The South Korean Personal Information Protection Commission (PIPC) announced on 27 July 2023 that it imposed an administrative fine of 3.6 million KRW (approximately 3,000 USD) against OpenAI for failure to notify a data breach in relation to its payment procedure. At the same time, the PIPC issued a list of instances of non-compliance with the country’s Personal Information Protection Act related to transparency, lawful grounds for processing (absence of consent), lack of clarity related to the controller-processor relationship, and issues related to the absence of parental consent for children younger than 14. The PIPC gave OpenAI a month and a half, until 15 September 2023, to bring the processing of personal data into compliance.
This survey of investigations into how a generative AI service provider is complying with data protection law in jurisdictions around the world reveals significant commonalities among those jurisdictions' legal obligations and how they apply to the processing of personal data through this new technology. There is also overlap among the concerns DPAs have about generative AI's impact on people's rights in relation to their personal data. This provides good ground for collaboration and coordination among supervisory authorities as regulators of generative AI.
G7 DPAs Issue Statement on Generative AI, Distilling Key Data Protection Concerns Across Jurisdictions
In this spirit, the DPAs of the G7 members adopted a Statement on generative AI in Tokyo on 21 June 2023, laying out their key areas of concern related to how the technology processes personal data. The Commissioners began their statement by acknowledging that "there are growing concerns that generative AI may present risks and potential harms to privacy, data protection, and other fundamental human rights if not properly developed and regulated."
The key areas of concern highlighted in the Statement relate to the use of personal data at various stages of developing and deploying AI systems, including the datasets used to train, validate, and test generative AI models; individuals' interactions with generative AI tools; and the content generated by those tools. For each of these stages, the issue of a lawful ground for processing was raised. Other key areas of concern included security safeguards against inverting a generative AI model to extract or reproduce personal data originally processed in the datasets used to train the model, as well as mitigation and monitoring measures to ensure that personal data generated through such tools are accurate, complete, and up to date, and free from discriminatory, unlawful, or otherwise unjustifiable effects.
Other areas of concern mentioned were transparency to promote openness and explainability; production of technical documentation across the AI development lifecycle; technical and organizational measures in the application of the rights of individuals such as access, erasure, correction, and the right not to be subject to solely automated decision-making that has a significant effect on the individual; accountability measures to ensure appropriate levels of responsibility across the AI supply chain; and limiting collection of personal data to what is necessary to fulfill a specified task.
A key recommendation spelled out in the Statement, but also emerging from the investigations above, is for developers and providers to embed privacy in the design, conception, operation, and management of new products and services that use generative AI technologies, and to document their choices in a Data Protection Impact Assessment.