Singapore’s PDP Week 2024: FPF highlights include a hands-on workshop on practical Generative AI governance and a panel on India’s DPDPA
From July 15 to 18, 2024, the Future of Privacy Forum (FPF) participated in Personal Data Protection Week 2024 (PDP Week), an event organized and hosted by the Personal Data Protection Commission of Singapore (PDPC) at the Marina Bay Sands Expo and Convention Centre in Singapore.
As with PDP Weeks of previous years, programming during PDP Week 2024 combined PDPC events with the International Association of Privacy Professionals (IAPP)’s annual Asia Privacy Forum. However, for the first time, the PDPC also scheduled its annual Summit on Privacy-Enhancing Technologies (PETs) in the Asia-Pacific (APAC) region during PDP Week.
Throughout the week’s events, FPF fostered robust discussions on data protection issues arising from new and emerging technologies, including generative AI. Below is a comprehensive summary of our participation and key takeaways from these significant engagements.
1. FPF, with the support of PDPC, hosted a hands-on workshop to equip regional privacy professionals with practical knowledge on the complexities of generative AI governance in the APAC region.
On July 15, 2024, with the support of PDPC, FPF hosted a hands-on workshop titled “Governance Frameworks for Generative AI: Navigating the Complexities in Practice.” This event aimed to equip members of the regional data protection community with practical knowledge on the operational and implementation complexities of generative AI governance. It drew upon the findings from FPF APAC’s year-long research project, “Navigating Governance Frameworks for Generative AI Systems in the Asia-Pacific” (FPF’s GenAI Report), which explored emerging governance frameworks for generative AI in APAC.
With a full house of 70 attendees, the workshop addressed rising concerns surrounding generative AI deployment risks, particularly in AI governance and data protection, highlighting guidelines and frameworks issued by data protection regulators across the APAC region. Participants engaged in dynamic discussions regarding AI and participated in a practical exercise, gaining invaluable insights into navigating the intricate landscape of generative AI governance.
Josh Lee Kok Thong, Managing Director of FPF APAC, hosted the entire event, which began with an introduction to FPF’s Center for AI by Anne J. Flanagan, FPF’s Vice President for AI. The event was structured in two parts: (1) an informational segment featuring presentations and a panel discussion; followed by (2) a practical, hands-on workshop.
1.1 The informational segment featured presentations by FPF and IMDA, as well as insights from industry and practice.
The informational segment included two presentations:
Dominic Paulger, Policy Manager for APAC at FPF, shared key findings and takeaways from FPF’s GenAI Report.
Darshini Ramiah, Manager (AI & Data Innovation) at the Infocomm Media Development Authority of Singapore (IMDA), provided an overview of Singapore’s Model AI Governance Framework for Generative AI, released in May 2024.
The industry sharing session that followed focused on key aspects of generative AI governance and deployment. The experts featured in this segment included:
Barbara Cosgrove, Vice President, Chief Privacy Officer at Workday;
David N. Alfred, Director and Co-Head of Data Protection, Privacy, and Cybersecurity at Drew & Napier; and
Lee Matheson, Senior Counsel for Global Privacy at FPF.
The experts discussed strategies for selecting AI service providers, emphasizing the importance of internal policies and risk assessment. The panelists argued that while AI introduces new technologies and applications, it ultimately functions similarly to other systems and services, allowing companies to leverage existing frameworks for compliance and risk management. The panelists additionally noted that many existing laws and regulations will remain applicable to AI systems, including those governing the professional liabilities of users of AI systems.
A key theme from the discussion was identifying red flags when engaging with AI service providers. A major red flag raised by one panelist was when a buyer or seller lacks a thorough understanding of the AI system they are discussing. The panelists agreed that it is crucial for both sides to be well-informed about the technology and its implications, and to be wary of potential AI vendors that cannot provide in-depth explanations of their products.
The discussion emphasized the need for transparency and communication between companies and their vendors. Companies should seek vendors willing to engage in open conversations about their practices, rather than those claiming 100% compliance without discussion. Instead of relying solely on standard certifications, companies should request detailed information, such as data sheets or labeling, to understand the specific practices of their AI service providers.
Further, panelists considered transparency and communication crucial at multiple levels within the AI ecosystem. When AI service providers purchase hardware to run AI models, both buyer and provider need to be aware of the data sources and datasets involved, as these factors could impact their liability.
For effective use of generative AI products, the panelists agreed on the importance of establishing a governance framework within an organization. This includes having clear guidelines for the responsible use of AI, such as for managing confidential and personal information. If a company has an acceptable use policy, it should ensure that its communication strategies are consistent with such a policy. Panelists also noted that managing vendor relationships can be complex, necessitating clear contractual agreements and governance structures.
Panelists highlighted early-stage considerations for companies developing or deploying AI systems. They considered that security-by-design and privacy-by-design should be starting points for AI development and deployment. Engaging legal, regulatory, and compliance teams early in the process is essential for comprehensive risk management.
The discussion highlighted the similarities between data protection principles and AI governance. Key data protection concepts, such as accuracy, minimization, and purpose limitation, are also relevant to AI data governance. Panelists emphasized that while data scientists and analysts may not always view their work through a legal lens, their activities often fall within data protection requirements.
The discussion concluded with insights on managing training data and model improvement while balancing innovation with ethical and regulatory compliance across international jurisdictions.
Photo: Industry sharing segment of the workshop on key aspects of generative AI governance and deployment, July 15, 2024. (L-R) Barbara Cosgrove, Lee Matheson and David N. Alfred.
1.2 The hands-on portion of the workshop engaged participants in a group exercise based on a realistic hypothetical scenario.
The final segment of the workshop engaged participants in a practical group exercise exploring the implementation of a hypothetical generative AI application modeled after ChatGPT by a fictitious private education services provider. Participants were divided into groups representing specific stakeholders relevant to the AI deployment lifecycle, such as the developer, deployer and user of the application, or a regulator, employee or in-house legal counsel. Each group was tasked with identifying and addressing potential concerns and risk areas from the perspective of their stakeholder. These discussions fostered a comprehensive understanding of the challenges posed by generative AI applications and provided valuable insights and a hands-on experience for organizations aiming to develop or deploy generative AI responsibly and in compliance with regulatory frameworks in the APAC region.
Photo: Participants presenting major takeaways from their table discussions, July 15, 2024.
Photo: Closing the workshop with a group photo of the FPF team, July 15, 2024. (L-R) First row: Bilal Mohamed, Anne J. Flanagan, Josh Lee, Sakshi Shivhare, Brendan Tan. (L-R) Second row: Lee Matheson and Dominic Paulger.
2. At the IAPP Asia Privacy Forum, FPF organized a panel to examine India’s landmark data protection legislation, and also participated in a panel on data sovereignty.
2.1. On July 18, FPF organized a panel titled “Demystifying India’s Digital Personal Data Protection Act”.
This panel was moderated by Bilal Mohamed, Policy Analyst for FPF’s Global Privacy Team, and featured as panelists:
Rakesh Maheshwari, formerly Senior Director and Group Coordinator (Cyber Laws and Data Governance), Ministry of Electronics and IT of India (MeitY), providing a regulator’s perspective;
Nehaa Chaudhari, Partner and head of the advisory and public policy practice at Ikigai Law, providing perspectives from the legal sector; and
Ashish Aggarwal, Vice President, Public Policy at nasscom, providing industry perspectives.
The panelists examined India’s landmark legislation, the Digital Personal Data Protection Act 2023 (DPDPA), covering familiar concepts like notice and consent, data subject rights, data breaches, and cross-border data transfers, as well as new features of the law like significant data fiduciaries and consent managers.
Rakesh Maheshwari provided insights into MeitY’s thinking behind several key provisions of the DPDPA. On children’s privacy, he explained that the Government was concerned with ensuring the safety of children who access online platforms and so set the threshold for parental consent at 18 by default. However, he also highlighted that the DPDPA’s children’s privacy provisions are flexible: if platforms demonstrate that they process children’s personal data safely, then the age threshold could potentially be lowered. Rakesh also explained that consent managers are intended to centralize management of consent across multiple, fragmented sources of data, such as health data from various sources like labs, hospitals, and clinics, while ensuring data protection and providing data subjects with control over how their data is processed. He further addressed the relationship between MeitY and the Data Protection Board, clarifying that while the Government will establish subordinate rules to the DPDPA, the Board will act independently as an adjudicator. He emphasized the importance of close cooperation and harmonized operations between the Board and the Government.
Nehaa Chaudhari discussed the industry’s proactive approach to compliance, noting that many businesses in India have already started the compliance process, focusing on data mapping and proactively obtaining consent from data subjects. She highlighted the industry’s hope for clarity on certain aspects of the DPDPA, particularly concerning children’s data and verifiable parental consent. She described two key aspects for verifying parental consent: obtaining the parent’s consent and establishing the parent-child relationship. Businesses are exploring various models and technological tools to address these requirements, such as the adequacy of using checkboxes for consent. She also pointed out that the DPDPA does not impose explicit duties on data processors and instead, allows data controllers and processors to determine their respective responsibilities through contractual arrangements. While the DPDPA provides a baseline for compliance, Nehaa emphasized that sector-specific regulations might impose heightened obligations.
Ashish Aggarwal provided insights into how ready nasscom’s 3,000+ member companies are to comply with the DPDPA. He explained that business-to-business (B2B) companies that already comply with the GDPR could become DPDPA-compliant in around six months as such companies should already have completed data mapping. However, he noted that for business-to-consumer (B2C) companies, GDPR compliance alone may not be sufficient as there are significant differences between the GDPR and DPDPA. He highlighted that some provisions of the DPDPA (especially breach notifications) still require clarification under forthcoming subordinate rules to the DPDPA. However, he did not expect that these rules would be as comprehensive as GDPR.
Overall, the panel provided substantial insights into the challenges and opportunities presented by the DPDPA, offering actionable advice for navigating this new regulatory landscape.
Photo: FPF Panel on Demystifying India’s Digital Personal Data Protection Act, July 18, 2024. (L-R) Bilal Mohamed, Ashish Aggarwal, Rakesh Maheshwari, and Nehaa Chaudhari.
2.2 On July 17, FPF APAC Managing Director Josh Lee Kok Thong contributed to a panel on “Data Sovereignty: Nebulous and Evolving, But Here to Stay in 2024?”.
This panel delved into the complexities of data residency, data sovereignty, data localization, and cross-border data transfers within APAC’s evolving governance structures. The speakers explored the impact of data and privacy laws, noting the complexities added by data localization requirements and the diverse approaches of countries like China, Indonesia, India, and Vietnam.
Josh provided an overview of cross-border data flows in the APAC region, highlighting the concept of data sovereignty. He drew a distinction between “data sovereignty” – a conceptual framework for looking at data transfers – and “data localization” – a set of requirements rooted in laws or policies.
Photo: FPF APAC represented by Josh Lee on a panel on Data Sovereignty: Nebulous and Evolving, But Here to Stay in 2024? July 17, 2024. (L-R) Charmian Aw, Josh Lee, Darren Grayson Chng, Wei Loong Siow, and Denise Wong.
3. FPF was represented in two sessions at the PETs Summit held on July 16, 2024.
3.1. FPF Vice President for AI, Anne J. Flanagan, spoke on the panel “Architecting New Real-World Products and Solutions with PETs.”
The panel discussed how companies have leveraged PETs for various use cases to innovate and create new products and solutions by participating in the IMDA’s PET Sandbox – a regulatory sandbox initiative set up by the PDPC to offer companies the opportunity to collaborate with PET digital solution providers to develop use cases and pilot PETs. Panelists offered valuable insights into the business cases for integrating PETs and how they contributed to sustained success in an increasingly data-driven business environment.
Anne discussed the integration of PETs in AI product development, highlighting their potential to balance innovation with privacy protection. She emphasized that PETs are not a one-size-fits-all solution but rather a tool to address various privacy challenges. Anne stressed the importance of incorporating PETs within a comprehensive company framework to effectively tackle these issues. She also announced the launch of FPF’s recent report on Confidential Computing. This report offers an in-depth analysis of the technology’s role in data protection policy, detailing its fundamental aspects, applications across various sectors, and crucial policy considerations.
3.2. FPF APAC Managing Director Josh Lee Kok Thong chaired a roundtable titled “Unleashing The Data Economy: Identifying Challenges, Building Use Cases & How PETs Help Address Generative AI Concerns.”
This session focused on exploring privacy challenges in specific use cases and the application of PETs to mitigate these concerns. The roundtable delved into the data economy, individual use cases, privacy challenges, and the intersection of PETs with generative AI. Key highlights included building an AI toolbox, identifying challenges and use cases, choosing and implementing PETs, and using PETs to balance innovation with privacy.
4. FPF organized exclusive side events to foster deeper engagements with key stakeholders on July 18, 2024.
4.1 FPF hosted an invite-only Privacy Leaders’ Luncheon at Marina One West Tower.
This closed-door event also provided a platform for around 30 senior stakeholders of FPF APAC to discuss pressing challenges at the intersection of AI and privacy, with a particular focus on the APAC region. During the session, FPF Vice President for Artificial Intelligence Anne J. Flanagan introduced FPF’s new Center for AI to APAC stakeholders, highlighting our ongoing commitment to advancing AI governance.
4.2 FPF co-hosted a networking cocktail event with Rajah & Tann at Marina Bay Sands Expo and Convention Centre.
Later in the evening, on July 18, FPF APAC toasted with old and new friends and discussed the challenges and opportunities in AI and privacy. At the event, we were privileged to have the following distinguished speakers share brief remarks:
Denise Wong, Deputy Commissioner, Personal Data Protection Commission of Singapore.
Steve Tan, Deputy Head, Technology, Media & Telecommunications and Partner at Rajah & Tann.
Anne J. Flanagan, Vice President for AI at FPF.
Josh Lee Kok Thong, Managing Director of FPF APAC.
This event facilitated meaningful connections and discussions among the attendees, further strengthening FPF’s partnerships and friendships within the data protection community.
5. Conclusion
FPF is proud to showcase our significant participation in PDP Week 2024, the IAPP Asia Privacy Forum 2024, and the PETs APAC Summit, driving forward discussions on data protection and AI governance in the APAC region. FPF’s workshop on generative AI governance, insightful panel discussions, and exclusive networking events underscored our commitment to fostering collaboration and knowledge-sharing among industry, academia, regulators, and civil society.
As we look ahead, FPF remains dedicated to advancing the discourse on privacy and emerging technologies, ensuring that we continue to navigate the complexities of the digital age with a balanced and informed approach. We are grateful for the support of the PDPC, IAPP, and all our members, partners and participants who contributed to the success of these events.
Consumer Health Data Privacy Notices by the Numbers
Today, FPF is releasing an infographic that provides insights into how organizations are responding to the transparency requirements of recently enacted U.S. state health privacy laws. The infographic reflects a survey of privacy notices on the websites of 180+ companies across a variety of industries and sectors, from pharmaceutical to apparel.
Two key laws that took effect on March 31, 2024 formed the basis for the survey: Washington’s My Health, My Data Act and Nevada’s SB 370. Both laws create specific obligations for online transparency notices on websites requiring detail about what health information is collected, although each law has a slightly different definition of health information (including reproductive and gender-affirming care information).
The Washington ‘My Health, My Data’ Act (“MHMDA”) establishes a duty for regulated entities to maintain and adhere to a “consumer health data privacy policy” that makes a specific set of disclosures and to “prominently publish” a link to this policy on its homepage. WA MHMDA defines health information as “personally identifiable information that is linked or reasonably capable of being linked to a consumer” and “identifies the consumer’s past, present, or future physical or mental health status.”
Chapter 603A of the Nevada Revised Statutes (“NV SB 370”) establishes a duty for regulated entities to develop and maintain a consumer health data privacy policy that “clearly and conspicuously” makes a specific set of disclosures. The law defines a use-based range of “consumer health data” that applies to information that a regulated entity “uses to identify the past, present or future health status of the consumer,” excluding certain personal information concerning consumer shopping habits and interests.
Of the 180+ companies surveyed, 40% had a consumer health data notice or policy on their website. When consulting the general privacy notice or policy, 62% of organizations provided notice that some form of health data was collected within the relevant statutory definitions. Several policies explicitly stated that no health data, as defined by the state laws, was collected, used, or sold. Although many consider WA MHMDA to require a standalone notice, 40% of the websites that had a notice bundled information related to MHMDA and NV SB 370 into the same text (e.g., MHMDA “and similar laws”).
Other findings:
All industries, when taken separately, reflected an even or nearly even split between having a notice and not having one (e.g., in a subsample of ten retailers, five would have a notice and five would not). The exception was pharmaceutical and life sciences companies, where 90% of surveyed websites had notices.
For 70% of surveyed websites that included notices, those notices were linked in the homepage footer; two websites also linked to notices from their consent or cookie banners.
15% of websites with notices had entirely separate and explicit policies for WA MHMDA and NV SB 370.
87% of companies surveyed that are headquartered in Washington State had notices on their websites.
This data provides a bird’s-eye view of the landscape of approaches to transparency around consumer health data. Privacy leaders may use these metrics to compare their approaches to publishing privacy notices against broader industry norms, or to initiate discussion in their organizations, including on decisions to create bundled or standalone notices, standalone notice webpages, or links to notices on homepages.
The data in this survey were collected April 12-17, shortly after the two relevant laws took effect. The sampled organizations represent a highly diverse range of companies, with an emphasis on companies with a health focus or a wellness component. Many thanks to Niharika Vattikonda, Angela Guo, and Jeter Sison for the tireless data work on this project!
Limitations: Data was limited to websites accessed via desktop. App interfaces were not included in the survey. No virtual private networks (VPNs) were used (e.g., a VPN based in Washington State).
Please reach out to Jordan Wrigley, Data and Policy Analyst for Health & Wellness ([email protected]) to discuss these findings or to learn more about FPF Health & Wellness projects!
CPDP LatAm 2024: What is Top of Mind in Latin American Data Protection and Privacy? From Data Sovereignty to PETs
On July 17-18, the fourth edition of the Computers, Privacy, and Data Protection Conference Latin America (CPDP LatAm) was held in Rio de Janeiro, Brazil. This year’s theme was “Data Governance: From Latin America to the G20,” highlighting Brazil’s current presidency of the international cooperation forum. As in previous years, FPF participated on the ground – this year, FPF organized a panel on the adoption and deployment of privacy-enhancing technologies in the region. This blog will cover highlights from both the plenary sessions and FPF’s panel.
During the opening plenary session, panelists discussed the relevance of data governance for informational self-determination and the sustainable development of technology. The panel argued that data sovereignty and data governance should be central values in the development and regulation of technologies in a way that empowers both nations and individuals. Panelists cautioned that in recent years some technologies have been developed without data governance frameworks and with limited accountability, leaving individuals to safeguard their own informational self-determination and undermining the sustainable development of technology. As a result, panelists agreed data governance is likely to remain a recurring theme in G20 debates, and regulators will play an increasingly critical role in monitoring the sustainable and ethical development of technology.
During the closing plenary session, panelists reminded the audience that approving laws and regulations is just the first step in the regulatory journey. For instance, while discussing Brazil’s AI Bill (PL 2338/2023), panelists commented that the proposal provides a strong framework to regulate and monitor the deployment of AI technologies. Regardless of potential amendments to the current proposal, regulators must be aware that active implementation is the most relevant aspect of the regulatory journey.
On a separate note, panelists also discussed data governance as an essential component of digital public infrastructures (DPIs).¹ For instance, they noted DPIs became relevant after India included them as a priority during its G20 presidency. Although digital public infrastructure is still an evolving concept, it can be explored as an alternative to develop and deploy technology, while keeping a critical approach and understanding the normative values embedded in this concept. The introduction of this concept offers a reminder that other jurisdictions and regions, including Latin America, can benefit from the knowledge and experience shared by other regions like the Asia-Pacific. At the same time, panelists agreed that these references should not prevent policymakers in Latin America from thinking, analyzing, and deciding standards and mechanisms for data governance in consideration of the region’s unique social, economic, and cultural dynamics.
FPF’s Panel: Exploring the Potential of PETs in Latin America
FPF’s panel focused on the potential of privacy-enhancing technologies (PETs) to advance privacy and data protection in Latin America. During the discussion, the goal was to cover three main points: i) the state of deployment of some of these technologies; ii) policymaking and regulatory priorities; and iii) opportunities and potential limitations.
First, panelists discussed the growing popularity of PETs in recent years as a result of progress in research and computational capacity. Global policy efforts for the adoption of PETs have included the release of guidance, the creation of sandboxes, and increased investment in PETs research and development. Latin America has been no exception, as regulators have begun to discuss the potential of PETs to help mitigate privacy risks and reduce the identifiability of data.
For instance, Brazil’s Autoridade Nacional de Proteção de Dados (ANPD) recently conducted technical studies on anonymization and pseudonymization as a basis for its forthcoming guidance. The ANPD also acted as an observer of OpenLoop, Meta’s global initiative connecting policymakers and companies to develop policies around emerging technologies and AI, a project developed separately in Brazil and Uruguay. One of the project’s findings in Brazil identifies a gap in most data protection laws (including the LGPD): a lack of an express provision covering PETs. In some cases, the connection between the law and these technologies relies on achieving data protection principles such as data minimization or complying with anonymization obligations. Panelists agreed that defining clear standards for anonymization is an important step toward PETs adoption.
[Photo description: Pedro Sydenstricker (Nym Technologies, Brazil); Pedro Martins (Data Privacy Brasil); Maria Badillo (FPF); Thiago Moraes (ANPD); Camila Nagano (iFood)]
Relatedly, panelists discussed use cases where PETs can help with business development while preserving the privacy and utility of the data. For instance, in the food delivery service industry, panelists discussed how different techniques help obscure or eliminate personal data retrieved from customer interactions. If properly implemented, businesses can keep relevant data for analysis and improvement of services while preserving the privacy of their customers. Panelists agreed that organizations investing time and resources to integrate these types of tools not only open up new opportunities to improve user engagement and drive strategic decision-making, but also build trust, an essential component in digital transactions.
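The panel did not detail the specific tools behind this use case, but a minimal sketch of one such technique, keyed pseudonymization, is shown below. It assumes a hypothetical delivery record and a `pseudonymize` helper that replaces a direct customer identifier with a non-reversible token before the record enters an analytics pipeline; the field names and key handling are illustrative only.

```python
import hmac
import hashlib

# Hypothetical key: in practice this would come from a secrets manager and be
# rotated under the organization's key-management policy.
SECRET_KEY = b"replace-with-a-securely-generated-key"

def pseudonymize(customer_id: str) -> str:
    """Replace a direct identifier with a keyed, non-reversible token.

    Using an HMAC (rather than a plain hash) means the mapping cannot be
    recomputed without the key, while the same customer always maps to the
    same token, so aggregate analysis (repeat orders, retention, etc.)
    remains possible.
    """
    return hmac.new(SECRET_KEY, customer_id.encode("utf-8"), hashlib.sha256).hexdigest()

# Example: strip direct identifiers from a delivery record before analytics.
order = {"customer_id": "cust-4821", "email": "[email protected]", "basket_value": 42.50}
analytics_record = {
    "customer_token": pseudonymize(order["customer_id"]),
    "basket_value": order["basket_value"],
    # email and other direct identifiers are dropped entirely
}
print(analytics_record)
```

Note that under laws such as the LGPD and GDPR, pseudonymized data of this kind generally remains personal data; the sketch simply illustrates the trade-off the panelists described, reducing identifiability while preserving analytical utility.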
Finally, panelists briefly addressed the relevance of PETs in addressing privacy risks generated by AI. Acknowledging that AI can bring new ethical and legal challenges, they agreed on the importance of exploring the potential of different tools and techniques when adopting or developing AI models. Panelists agreed that organizations should make efforts to approve internal governance programs and guidance, invest in education and training for staff, and keep track of regulation. This, however, must be complemented with more legal certainty and guidance from regulators on how to implement PETs and AI governance more generally.
To foster dialogue and collaboration around PETs and policymaking, FPF supports the Global PETs Network for Regulators, a forum that exclusively convenes regulators worldwide. If you are interested in participating in the Network, please reach out to [email protected] or [email protected]. You can also learn more about FPF’s PETs-related work here.
According to the United Nations Development Programme, there is growing consensus on defining DPIs as “a combination of (i) networked open technology standards built for public interest, (ii) enabling governance, and (iii) a community of innovative and competitive market players working to drive innovation, especially across public programmes.” Digital public infrastructure | United Nations Development Programme (visited July, 2024). ↩︎
Contextualizing the Kids Online Safety and Privacy Act: A Deep Dive into the Federal Kids Bill
Co-authored by Nick Alereza, FPF Policy Intern and student at Boston University School of Law. With contributions from Jordan Francis.
On July 30, 2024, the U.S. Senate passed the Kids Online Safety and Privacy Act (KOSPA) by a vote of 91-3. KOSPA is a legislative package that includes two bills that gained significant traction in the Senate in recent years—the Kids Online Safety Act (KOSA), which was first introduced in 2022, and the Children and Teens Online Privacy Protection Act (“COPPA 2.0”), which was first introduced in 2019. KOSPA contains new provisions and a variety of provisions that would amend, and in some cases augment, the United States’ well-established existing federal children’s privacy law, the Children’s Online Privacy Protection Act (COPPA).
KOSPA’s passage in the Senate marks the most substantial advancement in federal privacy legislation in decades. In just the last two years, the children and teens’ privacy and online safety landscape has seen a flurry of activity. The federal executive branch has been active through efforts such as significant FTC enforcement actions and a report released just two weeks ago from the Biden-Harris Administration’s interagency Task Force on Kids Online Health and Safety. Most notably, many states have passed laws providing heightened protections for kids and teens online, some of which have been the subject of litigation.
Amongst all this activity, the Kids Online Safety and Privacy Act takes a new approach that is unlike much of what we have seen before. Like other proposals, the bill would create heightened protections for teens, and new protections for design and safety. However, KOSPA also contains a novel knowledge standard, limited preemption, and a novel “duty of care,” along with requiring particular design safeguards and prohibiting targeted advertising to children and teens.
1. A novel knowledge standard
Similar to COPPA, the Kids Online Safety and Privacy Act (KOSPA) would establish a two-part threshold for when companies are required to comply with various data protection obligations, such as access, deletion, and parental consent: when a service is “directed to children” or when a service has “actual knowledge” that an individual is a child. However, KOSPA would modify the standard in a novel way: its protections for minors would apply when a business has “actual knowledge or knowledge fairly implied on the basis of objective circumstances.”
This language is based on the FTC’s trade regulation rules, which use the “knowledge fairly implied” standard to determine if a company knew it violated a trade rule. While the FTC is experienced in using this standard, it is new when applied to children’s privacy and online safety. Currently, there is little guidance or comparable laws to help understand how “knowledge fairly implied on the basis of objective circumstances” applies specifically to the narrow question of whether a user on a website is a minor. This standard is arguably closer to constructive knowledge and may even be broader than the “willful disregard” standard used in state comprehensive laws.
COPPA’s knowledge standard, or the question of what obligation a business has to figure out who on their website is a child, has long been debated. On one hand, critics of the existing standard argue that it is too narrow and that needing actual knowledge incentivizes companies to avoid evidence that might suggest children are on their websites. On the other hand, proponents of keeping the existing standard argue that broadening the threshold would require companies to engage in too much data collection, creating an unintended result of age-gating even general audience, age-appropriate websites. In recent years, most state comprehensive laws have taken the approach of using “actual knowledge or willfully disregards,” which attempts to strike a balance between the two sides of this debate.
2. Narrow preemption of state laws
Preemption, or the question of which state privacy laws will be superseded by a federal standard, is one of the biggest sticking points in federal privacy debates. Under KOSPA, preemption is narrow and would explicitly supersede only state laws that directly conflict with the Act. Additionally, the Act includes a savings clause explicitly allowing states to enact laws and regulations that provide “greater protection” to minors than those under KOSPA.
While any federal law is likely to have some uncertainty when it comes to preemption of state laws, this language bodes well for states that have enacted heightened privacy and online safety protections for children and teenagers in recent years, such as Maryland, Connecticut, and New York. Part of the rationale for a federal privacy law is that it would afford one national standard for privacy rather than a “patchwork” state-by-state approach. With KOSA and COPPA 2.0, however, these protections would be layered on top of existing state compliance obligations.
3. A novel “duty of care” to prevent and mitigate harms to children and teens
One of the most discussed new provisions in KOSPA (arising from KOSA) is its duty of care. The proposal would require covered platforms to exercise “reasonable care” in the “creation and implementation of any design feature to prevent and mitigate [harms] to minors.” Specifically, KOSPA identifies six categories of harm, including explicitly stated mental health disorders, violence and online bullying, and deceptive marketing practices. (See Table 1)
Online services owing a duty of care to minors is a novel aspect of child-focused privacy laws, a trend that has emerged in recent years – seen in the currently-enjoined California Age-Appropriate Design Code, the Maryland Age-Appropriate Design Code, and recent amendments to Colorado’s and Connecticut’s comprehensive consumer privacy laws. Design codes require an affirmative duty to act in the best interests of children, whereas KOSA, Connecticut, and Colorado require a duty to avoid harm.
Overall, KOSPA/KOSA’s approach to a duty of care is both broader in scope, and at the same time more specific in its enumeration of specific harms, compared to existing state approaches. As comprehensive consumer privacy laws, Connecticut and Colorado are focused on how processing personal data may be used to facilitate harms whereas KOSA applies broadly to preventing and mitigating harms. Connecticut and Colorado also require an assessment of any service, product, or feature, while KOSA is focused only on “design features.” Lastly, Connecticut and Colorado’s list of harms is shorter and more narrowly focused on more traditional privacy harms, while KOSA enumerates specific concrete harms related to modern kids’ and teens’ well-being, such as anxiety, bullying, and abuse.
None of the state laws with duties of care are yet in force, so it remains to be seen how these provisions will be implemented by companies or enforced by regulators. However, KOSA’s alignment with the specificity and narrower scope of Colorado and Connecticut could mitigate risks of legal challenges over restrictions on content, like those seen in the California AADC litigation.
Table 1. Comparing KOSA’s duty of care with Connecticut and Colorado’s duty of care.

KOSA’s duty of care: A covered platform shall exercise reasonable care in the creation and implementation of any design feature to prevent and mitigate the following harms to minors: (1) Consistent with evidence-informed medical information, the following mental health disorders: anxiety, depression, eating disorders, substance use disorders, and suicidal behaviors. (2) Patterns of use that indicate or encourage addiction-like behaviors by minors. (3) Physical violence, online bullying, and harassment of the minor. (4) Sexual exploitation and abuse of minors. (5) Promotion and marketing of narcotic drugs (as defined in section 102 of the Controlled Substances Act (21 U.S.C. 802)), tobacco products, gambling, or alcohol. (6) Predatory, unfair, or deceptive marketing practices, or other financial harms.

Connecticut & Colorado’s duty of care: Controllers shall use reasonable care to avoid any heightened risk of harm to minors caused by such online service, product, or feature. “Heightened risk of harm to minors” means processing minors’ personal data in a manner that presents any reasonably foreseeable risk of: (A) any unfair or deceptive treatment of, or any unlawful disparate impact on, minors; (B) any financial, physical or reputational injury to minors; (C) any physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of minors if such intrusion would be offensive to a reasonable person; or (D) unauthorized disclosure of the personal data of minors as a result of a security breach [note: this fourth harm is in CO, but not CT].
4. Changes to Verifiable Parental Consent (VPC)
KOSPA would expand the existing requirements for verifiable parental consent (VPC), requiring companies to collect it at an earlier stage than might often be obtained under COPPA. Interestingly, both provisions of KOSPA (the COPPA 2.0 and KOSA parts of the bill) address VPC separately. KOSA would require a covered platform to obtain verifiable parental consent (VPC) before a known child’s initial use of the service. While a covered platform may consolidate this process with its process to obtain VPC for COPPA, KOSA’s VPC requirement seems to still apply even if a covered platform’s personal information practices do not necessitate VPC under COPPA.
KOSA may also differ in its approach to children who already use a covered platform. Because KOSA requires VPC prior to a known child’s “initial use”, it is unclear whether a covered platform must obtain VPC from a child whose initial use happened before the bill’s effective date or when the platform knew they were a child. Comparable state social media laws include provisions that prevent a minor from holding an account they could not create: Florida’s HB 3 would require a social media service to terminate all accounts that likely belong to minors younger than 16, and Tennessee’s Social Media Act would require age-verification of an unverified account holder when they attempt to access their account.
5. Other Privacy and Safety Safeguards
KOSPA includes a number of requirements for companies to establish safeguards aimed at addressing “the frequency, time spent, or activity of minors” on platforms, including the ability to opt out of personalized recommendation systems. The proposal would also establish a flat ban on personalized advertising to kids and teens under the age of 17.
Design Safeguards for Time Spent and Recommendations
KOSPA requires covered platforms to “provide readily-accessible and easy-to-use safeguards” to any user or visitor that the platform knows is a minor. These safeguards must be on the most protective setting by default. KOSA requires a covered platform to make parental tools available, although a minor can change their own account settings without VPC.
Two of KOSPA’s safeguards have key differences compared to state social media laws with similar provisions. KOSA requires a covered platform to limit by default “design features that encourage or increase the frequency, time spent, or activity of minors.” State social media laws which regulate design features tend to do so narrowly such as Utah’s SB 196, which would prohibit the use of infinite scroll, autoplay, and push notifications for minors, or New York’s SAFE for Kids Act, which would require VPC to enable overnight notifications for minors. Once again, KOSA’s scope more closely resembles state privacy laws: Colorado and Connecticut both have a broader prohibition against the use of any “system design feature to significantly increase, sustain, or extend a minor’s use of the online service, product, or feature” without a child’s VPC or a minor’s consent. But unlike all of these laws, KOSPA would allow minors, including children, to change any of these settings without VPC.
The second notable safeguard is a requirement for a covered platform to include controls to adjust or opt-out of any personalized recommendation systems, which are suggestion or ranking algorithms that incorporate a user’s personal information as defined in COPPA. This category appears to be narrower than New York’s SAFE for Kids Act, which would limit feeds which rank or suggest content based on any information associated with a user or user’s device.
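As a purely illustrative sketch (our own, not drawn from the bill text), the snippet below shows one way a platform might wire these safeguards together: personalization is off by default for a user the platform knows is a minor, the minor can change that setting without VPC, and the recommendation function falls back to a non-personalized ranking when personalization is off. All names and fields are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class User:
    user_id: str
    is_known_minor: bool = False
    # Most protective setting by default: personalization off for known minors.
    personalization_enabled: bool = False

def rank_feed(user: User, items: list[dict]) -> list[dict]:
    """Rank feed items for a user.

    Known minors who have not enabled personalization get a ranking based only
    on global popularity (no personal information used); otherwise the ranking
    also incorporates a per-user affinity score.
    """
    if user.is_known_minor and not user.personalization_enabled:
        return sorted(items, key=lambda i: i["global_popularity"], reverse=True)
    return sorted(
        items,
        key=lambda i: i["global_popularity"] + i.get("affinity", {}).get(user.user_id, 0.0),
        reverse=True,
    )

# A known minor sees the non-personalized ranking by default...
teen = User(user_id="u1", is_known_minor=True)
items = [
    {"id": "a", "global_popularity": 10, "affinity": {"u1": 5.0}},
    {"id": "b", "global_popularity": 12},
]
print([i["id"] for i in rank_feed(teen, items)])   # ['b', 'a']

# ...and can switch personalization on without parental consent.
teen.personalization_enabled = True
print([i["id"] for i in rank_feed(teen, items)])   # ['a', 'b']
```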
Prohibition on Targeted Advertising
Finally, the COPPA 2.0 portion of the bill creates a flat prohibition on targeted advertising to children and teens 16 and under. While comparable state laws have moved in the direction of creating additional restrictions on advertising to minors, the federal approach goes the furthest by creating a ban rather than allowing for opt-in consent. Notably, the bill takes the approach of creating and defining the term “individual-specific advertising.” The combination of the targeted advertising ban and the broader, constructive knowledge standard used is likely to have significant impacts for the adtech ecosystem.
Reporting Mechanism
KOSPA requires a covered platform to incorporate a reporting mechanism, through which minors, parents, or schools can report harms to minors. The platform must have an electronic point of contact specific to these matters, and the platform must substantively respond to a report within at most 10 or 21 days, depending on the size of the platform and the imminence of harm to the minor. KOSPA’s attention to detail regarding reporting mechanisms stands out when compared to the Maryland AADC’s single requirement that a service’s reporting tools be “prominent, accessible, and responsive.”
Looking ahead
While KOSPA passed the Senate by an overwhelming vote of 91-3, its future in the House of Representatives is uncertain. The House started its August recess just days before the Senate vote, and the earliest KOSPA could be taken up in the House is September 9, just under two months before the November election. Whether that helps or hurts the bill’s chances is subject to speculation. No matter Congress’s next move, states are poised to keep forging ahead on youth privacy and online safety.
Connecting Experts to Make Privacy-Enhancing Tech and AI Work for Everyone
The Future of Privacy Forum (FPF) launched its Research Coordination Network (RCN) for Privacy-Preserving Data Sharing and Analytics on Tuesday, July 9th.
Industry experts, policymakers, civil society, and academics met to discuss the possibilities afforded by Privacy Enhancing Technologies (PETs), the inherent regulatory challenges, and how PETs interact with rapidly developing AI systems. FPF experts led participants in a workshop-style virtual meeting to direct and inform the RCN’s next three years of work. Later that day, senior representatives from companies, government, civil society, and academia met at the Eisenhower Executive Office Building to discuss how PETs can be used ethically, equitably, and responsibly. Among the major themes:
Privacy Enhancing Tech can support socially important data-driven research while protecting sensitive personal info;
In some contexts, there are hard questions about how to implement PETs while preserving data that is crucial for assessing and combating bias, especially when it comes to AI decision-making systems;
Greater clarity about how regulators apply data protection laws to information subjected to PETs safeguards could increase the use and effectiveness of Privacy Enhancing Tech;
Analysis of existing PETs implementations can yield important insights into the opportunities and challenges of particular tech and approaches.
Virtual Kickoff
FPF hosted a Virtual Kickoff event where over 40 global experts helped shape the RCN’s work for the next three years. There were three main areas of discussion: First, how can we broadly define a PET while still having a clear scope? Second, what can we learn from the opportunities and challenges encountered by existing PETs implementations? Third, what are the most important requests for policymakers?
Here’s what the experts had to say:
Broadly Defining PETs
Deciding what is and isn’t a PET is essential for making any recommendations for their use, but forming a definitive list is inherently fraught with complexity and counterexamples. Some participants suggested that building a framework and a series of questions to ask about a given use case and applied technology could be a helpful way to move forward. Participants also noted that usability is essential in defining a PET—without understanding and building for the end users, we risk PETs losing their intended value. Relatedly, participants noted the sociotechnical dimension of this work and emphasized the need to think about the human elements attached to these technologies.
PETs Possibilities
Participants identified many areas of opportunity for PETs usage, such as in the social sciences, medical research, credential verification, AI model training, behavioral advertising, and education. At the same time, there are several known issues, including balancing the tradeoff between privacy and data utility, a lack of policy clarity and economic incentives to use PETs, computational overhead, ethical considerations, and, for some, a lack of trust in the technologies. Experts advised that for more people to use PETs, the tools must become more accessible and must be accompanied by additional training and support for new users. Participants identified AI as a contributor to both the opportunities and challenges, while agreeing that AI technologies are a key part of some aspects of the PETs landscape moving forward.
Policy Asks for Regulators
The most frequent request was for more regulatory clarity around PETs. For example, experts wanted to know what legal and technical obligations organizations have using PETs, what regulators need to see to support the development of PETs as a mechanism for meeting data minimization and other requirements, and what the legal definitions of de-identification or anonymization are when using PETs. While some suggested regulators needed specific use cases to make such determinations, others indicated that no one wants to “go first” and suggested general use cases representing common PETs uses could be instructive. Regardless of how clarity is achieved, experts want lawmakers and regulators to provide specific measurements for how organizations can comply with various legal regimes, accurately estimate risk, and make informed decisions about PETs deployment.
A White House Roundtable Event
The Roundtable meeting, hosted by the White House Office of Science and Technology Policy at the Eisenhower Executive Office Building’s ornate Secretary of War Suite, marked the beginning of a collaborative effort to advance Privacy Enhancing Technologies and their use in developing more ethical, fair, and representative AI. The meeting commenced with an overview of the project’s goals. Hal Finkel, Program Manager for Computer Science and Advanced Scientific Computing Research at the Department of Energy, and Greg Hager, Head of the Directorate for Computer and Information Science and Engineering at the National Science Foundation, expressed their agencies’ commitment to ensuring technology benefits every member of the public, emphasizing the critical role of PETs in maintaining data privacy, especially in AI applications that require extensive data collection.
Participants discussed the global momentum behind PETs driven by new data protection laws from the local to international levels. They highlighted the necessity of creating robust governance frameworks alongside technological innovations to ensure ethical use. Additionally, they articulated the complexities of studying AI’s societal impacts, particularly involving vulnerable populations, highlighting the need for governance frameworks to accompany technological solutions to privacy preservation.
Artificial Intelligence
The group also dove into some of the challenges and opportunities posed by foundation models: machine unlearning, balancing privacy with utility in personalized assistants, and identity/personhood verification. These issues underscore the necessity for advanced PETs that can adapt to evolving AI capabilities. Several people shared practical insights from the deployment of PETs in large-scale projects, such as the U.S. Census, conveying the importance of starting with a clear use case and ensuring equal footing for PETs teams to ensure success.
Specific opportunities for PETs in AI system testing were outlined, such as enabling organizations to disaggregate existing data internally and facilitating private measurement. Challenges included the need to relate metrics to life outcomes without extensive data sharing and understanding the impact of AI systems on individuals. Participants noted coordination challenges in setting up technical elements at this early stage and the gap from theory to practice.
Business Cases
Attendees also focused on the role of government in supporting business cases for PETs and the need for broader dissemination of PETs expertise beyond academia and big tech. Many people underscored the importance of public trust and consumer advocacy regarding PETs. As consumer sentiment shifts towards greater awareness of privacy issues, a unique opportunity exists to root efforts in democratic consensus and ensure that marginalized groups are adequately represented and protected.
The discussion also touched on the economic and other forms of feasibility of PETs, noting that deployment and operational costs can be prohibitive. Several people reaffirmed the need for public trust in PETs, highlighting that consumers are increasingly aware of privacy stakes and expect technologies to protect their data. They also reiterated the importance of centering public trust and consumer advocacy in these efforts.
Supporting Additional Deployment
The meeting concluded with a focus on the FPF RCN’s future direction, maintaining the need for ongoing collaboration to accelerate progress toward a privacy-preserving data-sharing and analytics ecosystem that advances democratic values. By bringing together a diverse group of experts, the RCN will foster convergence, address persistent differences, and support the broad deployment of PETs. Based on expert input such as this Roundtable, FPF will explore various mechanisms for deployment, including new technology, legal and regulatory frameworks, and standards and certifications, particularly in use cases that support privacy-preserving machine learning and the use of AI by U.S. federal agencies.
As the meeting wrapped up, participants expressed optimism and a shared commitment to ongoing collaboration. The future of AI and privacy lies in the collective ability to innovate responsibly, govern wisely, and earn the public’s trust, paving the way for a new era of privacy-preserving technologies.
Next Steps for The RCN
FPF is gathering all of the participants’ feedback, suggestions, and ideas, and we’ll send out a roadmap for the first year shortly. The two main groups (Experts and Regulators) will meet regularly to provide substantive feedback on our progress. About 18 months from the RCN launch, we’ll bring both groups together for an in-person event in Washington, D.C., for an in-depth working session.
Want to Contribute?
If you’re a subject matter expert on PETs or use PETs and want to contribute to their future use and regulation, we want to hear from you!
Sign up here to be considered for the Expert or Regulator Sub-Groups. For questions about the RCN, email [email protected].
The Research Coordination Network (RCN) for Privacy-Preserving Data Sharing and Analytics is supported by the U.S. National Science Foundation under Award #2413978 and the U.S. Department of Energy, Office of Science under Award #DE-SC0024884.
Reflections on California’s Age-Appropriate Design Code in Advance of Oral Arguments
Co-authored with Isaiah Hinton, Policy Intern for the Youth and Education Team
Update: On Wednesday, July 17th, the U.S. 9th Circuit Court of Appeals heard oral arguments for an appeal of the District Court’s preliminary injunction of the California Age-Appropriate Design Code Act (AADC). Judges Milan Smith Jr., Mark Bennett, and Anthony Johnstone appeared interested in questions about severability and the implications of the recent NetChoice/CCIA v. Moody decision for this case. The panel seemed skeptical of the State’s argument that the California AADC does not regulate content, particularly through the DPIA provisions concerning whether the design of a service could expose children to “harmful, or potentially harmful content” or lead to children “experiencing or being targeted by harmful, or potentially harmful, contacts.” While NetChoice conceded that they did not challenge four provisions, including those regarding geolocation information, NetChoice argued that the entirety of the law must be struck because the DPIA requirements are unconstitutional and interrelated to the rest of the law. However, it was noted that severability is a question of state law, while constitutionality under the First Amendment is a federal question, and the idea of certifying the severability question to the California Supreme Court was raised.
The California AADC was the first of its kind in the U.S. and marked a significant development in youth privacy policy debates by mandating privacy by design and default for children under 18. Ahead of the oral arguments, this blog post provides an overview of how the California AADC’s enactment and subsequent constitutional challenge continue to impact the regulation of young people’s online experiences in the U.S.
The Enactment
California lawmakers modeled the AADC after the United Kingdom’s Age-Appropriate Design Code (UK AADC) and aimed to regulate the collection, processing, storage, and transfer of children’s data. The California law’s scope extended beyond the existing framework under the federal Children’s Online Privacy Protection Act (COPPA) by covering more online services and expanding protections to all individuals under 18. The California AADC included provisions from the UK AADC that were novel to U.S. law such as mandating the implementation of age estimation techniques if an online product, service, or feature was “likely to be accessed by children” and configuring default privacy settings to a “high level of privacy.” The California law was intended to address genuine privacy and safety risks faced by young people online and sparked renewed interest in seeking policy solutions, leading to an influx in state laws and enforcement actions. The law’s novel approach also raised concerns about not only the practicality of the law’s provisions but also their constitutionality.
The California AADC was introduced in the California Assembly as AB 2273 in February 2022, passed the legislature, and was signed by Governor Newsom in September 2022. The law’s intended enforcement date was July 1, 2024.
NetChoice, a trade association, filed a complaint in the U.S. District Court on December 14, 2022, alleging that the California AADC is unconstitutional, opening NetChoice v. Bonta.
NetChoice filed for a preliminary injunction in February 2023, which District Court Judge Beth Labson Freeman granted in September of that same year.
The following month, October 2023, California Attorney General Rob Bonta filed to appeal the preliminary injunction with the U.S. Court of Appeals.
The Enjoinment
The United States District Court for the Northern District of California issued a preliminary injunction preventing enforcement of the California AADC pending a ruling on the merits, based on the Court’s view that NetChoice was likely to succeed on its claim that the law violates the First Amendment. In granting the injunction, the Court considered NetChoice’s allegation that most of the California AADC is an unlawful prior restraint on protected speech. The Court raised concerns with many of the law’s provisions, including:
Provisions that required estimating user age; conducting data protection impact assessments (DPIA); configuring high default privacy settings; using age-appropriate language for policies and privacy information; enforcing published terms, policies, and community standards; and restricting the collecting, selling, sharing, and retaining of children’s personal information.
Provisions that prohibited using children’s personal information in a knowingly harmful way; profiling children by default; using a child’s personal information for any reason other than that for which it was collected; and using “dark patterns” to encourage children to provide excessive personal information or to forgo privacy protections.
The Court acknowledged that the State has a substantial interest in protecting minors, but found that NetChoice would likely succeed on its claims that the law is unconstitutionally vague and that California would struggle to satisfy the other elements of intermediate scrutiny.
Three Main Takeaways:
The California AADC Highlighted Existing Discussions About How to Protect Youth Privacy and Safety and Has Been Influential in Other States.
Most experts agree that there are real concerns about young people’s privacy and safety online, but there is uncertainty about who should address these concerns and how. There is growing interest from policymakers in new regulation that provides privacy and safety protections for minors beyond COPPA’s parental consent framework and that covers minors over the age of 12. Even in states that did not copy the California AADC exactly, concepts from the law have appeared in other state legislation. This increasingly diverse patchwork of state laws complicates compliance. Examples of concepts from the California AADC that appear in other state bills include:
Knowledge Standards: State legislation has gone beyond COPPA’s “directed to children” or “actual knowledge” standard. Recent U.S. laws include novel language such as “likely to be accessed by a child” that may be vague and difficult to interpret.
Age Appropriateness: Legislators continue to draft provisions modeled on the AADC that require “age-appropriate” standards, ask businesses to consider the “best interest of the children,” or evaluate whether features may be “harmful” to young people. However, these terms are largely undefined in the U.S. and, without additional research and guidance, pose a significant barrier to interpreting these online protections.
Age Assurance Technologies: Legislators have proposed requiring the use of technology to either infer or verify a user’s age before allowing access to certain services, which in some states include social media or adult content.
You can read more about the knowledge standards of currently enacted laws in our blog and accompanying resource. You can also read about using a risk-based approach that balances privacy and equity in our age assurance infographic and accompanying blog.
The California AADC’s Enactment, and Its Enjoinment, Influenced Subsequent Regulation.
Several states followed California’s lead by introducing copycats or variants of the AADC, and one even became law. The Maryland legislature sought to address the vulnerabilities of California’s AADC when writing its version and also passed a comprehensive privacy law during the same legislative session. See FPF’s blog on the Maryland AADC, our chart comparing it to the California AADC, and our blog on Maryland’s Online Data Privacy Act.
The District Court’s finding that the California AADC provisions are likely to be unconstitutional may have caused some legislators to hesitate to propose AADC-style bills or to diverge in ways that would address some of the litigation’s concerns. Here are two examples of laws that diverged from the AADC style.
Connecticut Data Privacy Act (CTDPA) – Read FPF’s blog about the law and our chart comparing it to the California AADC.
Florida Digital Bill of Rights (SB 262) – Read FPF’s blog about the law and chart comparing it to the California AADC.
Despite these proactive changes by state legislatures, the implications of a final constitutionality ruling are unclear. NetChoice v. Bonta raises questions about the constitutionality of laws with similar provisions; even laws beyond youth privacy contain provisions such as purpose limitations, dark pattern prohibitions, or age assurance requirements. If the District Court’s ruling stands, future legislation will need to be more narrowly tailored to the specific harms it aims to address.
The California AADC Is Now One of Several Youth Privacy and Safety Laws Facing Constitutional Challenges.
The outcomes of these cases will impact how youth privacy legislation is written, implemented, and enforced. The constitutional challenges to the California AADC address common youth privacy provisions such as data use and minimization, transparency, DPIAs, age assurance, and parental consent. Some of the laws at issue would effectively ban people under the age of 18 from using certain online services, while others could effectively require the age estimation of all users. While youth privacy and safety legislation proliferated in the states following the California AADC, many of those enacted have been constitutionally challenged. See FPF’s Overview of Contested Youth Privacy & Safety Provisions in Pending State Law Litigation.
Since the UK and California AADCs’ enactments, conversations have been taking place around the world on how best to protect youth privacy and safety online through regulation. Efforts elsewhere, like the youth provisions in India’s DPDPA, are not subject to the First Amendment concerns raised by NetChoice and are moving forward without facing the same challenges in court. These court decisions could greatly impact how kids and teens use the internet in the U.S. and may lead to a markedly different online experience for children in America than for those abroad.
Conclusion
The passage of California’s Age-Appropriate Design Code was a catalyst for conversations in America around protecting kids and teens online. As more states introduce and adopt youth privacy and safety laws, legislators and companies will continue to look to existing regulations for guidance on drafting and complying with new laws. The oral arguments in NetChoice v. Bonta will provide insight into which youth privacy and safety provisions are most constitutionally problematic for legislation and regulation and will help shape future youth privacy and safety policymaking.
Read our comments to the National Telecommunications and Information Administration (NTIA) in response to their request for comment on Kids Online Health and Safety.
NEW FPF REPORT: Confidential Computing and Privacy: Policy Implications of Trusted Execution Environments
Written by Judy Wang, FPF Communications Intern
Today, the Future of Privacy Forum (FPF) published a paper on confidential computing, a privacy-enhancing technology (PET) that marks a significant shift in the trustworthiness and verifiability of data processing for the use cases it supports, including training and use of AI models.
Confidential computing leverages two key technologies: trusted execution environments and attestation services. The technology allows organizations to restrict access to personal information, intellectual property, or sensitive or high-risk data through a secure hardware-based enclave or “trusted execution environment” (TEE). Economic sectors that have led the way in adopting confidential computing include financial services, healthcare, and advertising. As manufacturers continue to develop confidential computing technologies, policymakers and practitioners should consider a range of data protection implications discussed in the paper.
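For readers less familiar with the underlying mechanics, the short Python sketch below illustrates the attestation idea at a purely conceptual level: a data owner checks an enclave’s reported code measurement against an expected value before agreeing to release sensitive data. This is our own simplified illustration, not code from the FPF paper or any vendor’s SDK; the function names, the HMAC-based “signature,” and the measurement values are all invented for the example.

```python
import hashlib
import hmac
import os

# Hypothetical illustration only: a simplified model of remote attestation, in which
# a data owner checks an enclave's reported code "measurement" against an expected
# value before releasing sensitive data for processing inside the TEE.

EXPECTED_MEASUREMENT = hashlib.sha256(b"approved-analytics-enclave-v1").hexdigest()


def enclave_attestation_report(enclave_code: bytes, signing_key: bytes) -> dict:
    """Simulates the evidence a TEE might produce: a hash of the loaded code,
    "signed" (here, HMAC'd) with a key standing in for the hardware root of trust."""
    measurement = hashlib.sha256(enclave_code).hexdigest()
    signature = hmac.new(signing_key, measurement.encode(), hashlib.sha256).hexdigest()
    return {"measurement": measurement, "signature": signature}


def attestation_verified(report: dict, signing_key: bytes) -> bool:
    """Data owner side: accept only if the evidence is authentically signed AND the
    measurement matches the code version the owner has agreed to trust."""
    expected_sig = hmac.new(signing_key, report["measurement"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_sig, report["signature"]):
        return False  # evidence was not produced by trusted hardware / attestation service
    return report["measurement"] == EXPECTED_MEASUREMENT  # False if unapproved code is running


if __name__ == "__main__":
    shared_key = os.urandom(32)  # stands in for the hardware-backed signing key
    report = enclave_attestation_report(b"approved-analytics-enclave-v1", shared_key)
    print("Release of sensitive data permitted:", attestation_verified(report, shared_key))
```

In production systems, attestation evidence is generated and signed by the hardware itself and checked through a vendor- or cloud-operated attestation service, and data is typically encrypted to a key bound to the verified enclave rather than simply “released” as sketched here.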
The paper, titled “Confidential Computing And Privacy: Policy Implications Of Trusted Execution Environments,” expands upon the following categories:
What is Confidential Computing?
Emerging Sector Applications
Policy Considerations
In Policy Considerations, the paper explores some of the novel implications of this technology for data protection policy, including how it may impact issues like transparency, legal questions related to “de-identification,” “sale,” and “sharing” of data, cross-border data transfers, and data localization. Ultimately, the usefulness, scale of impact, and regulatory compliance benefits of confidential computing depend on the specific configuration and management of the TEE and attestation service.
Download the paper here for a more detailed discussion of confidential computing and how it differs from other PETs, as well as an in-depth analysis of its sectoral applications and policy considerations.
Interested in learning more about PETs? Read about FPF’s recently launched PETs Research Coordination Network (RCN), supported by grants from the U.S. National Science Foundation (NSF) and the U.S. Department of Energy (DoE). This project, directed by the Biden-Harris Administration’s Executive Order on AI, will analyze and promote the trustworthy adoption of PETs in the context of artificial intelligence (AI) and other technologies.
FPF will also participate in the PETs Summit during the Personal Data Protection Commission of Singapore’s (PDPC) Personal Data Protection Week, during which the new report will be distributed. FPF’s Vice President for Artificial Intelligence and head of FPF’s Center for AI, Anne J. Flanagan, will speak on the panel “Architecting real world new products and solutions with PETs.” Managing Director for FPF Asia-Pacific, Josh Lee Kok Thong, will chair the roundtable “Unleashing The Data Economy: Identifying Challenges, Building Use Cases & How PETs Help Address Generative AI Concerns.” Learn more about the events and FPF’s involvement at the PDPC PETs Summit here.
A First for AI: A Close Look at The Colorado AI Act
Colorado made history on May 17, 2024, when Governor Polis signed into law the Colorado Artificial Intelligence Act (“CAIA”), the first law in the United States to comprehensively regulate the development and deployment of high-risk artificial intelligence (“AI”) systems. The law will come into effect on February 1, 2026, preceding the March 2026 effective date of (most of) the European Union’s AI Act.
To help inform public understanding of the law, the Future of Privacy Forum released a Policy Brief summarizing and analyzing key CAIA elements, as well as identifying significant observations about the law.
In the Brief, FPF provides the following analysis and observations:
1. Broader Potential Scope of Regulated Entities: Unlike state data privacy laws, which typically apply to covered entities that meet certain thresholds, the CAIA applies to any person or entity that is a developer or deployer of a high-risk AI system. A high-risk AI system, under the Act, refers to AI systems that make or are a substantial factor in making consequential decisions, including any legal or material decision affecting an individual’s access to critical life opportunities such as education, employment, insurance, healthcare, and more. Additionally, one section of the law applies to any entity offering or deploying any consumer-facing AI system. Therefore, despite a detailed list of exclusions, including a narrow exemption for small deployers, the law has broad applicability to a variety of businesses and sectors in Colorado.
2. Role-Specific Obligations: The CAIA apportions role-specific obligations for deployers and developers, akin to controllers and processors under data privacy regimes. Deployers, who directly interact with consumers and control how the AI system is utilized, take on more responsibilities than developers, including the following:
Maintaining a Risk Management Policy & Program that governs their deployment of high-risk AI systems. It must be updated and reviewed regularly, specify the principles, processes, and personnel used to identify and mitigate algorithmic discrimination, and “be reasonable” in comparison to recognized frameworks such as the NIST Artificial Intelligence Risk Management Framework (NIST AI RMF).
Conducting Impact Assessments annually, which must include the system’s purpose and intended use cases, any known or reasonably foreseeable risks of algorithmic discrimination, risk mitigation steps taken, categories of data processed for system use, the system’s performance metrics, transparency measures, and a description of post-deployment monitoring.
Notifying Subjects about the use of high-risk AI systems, disclosing information about the system’s purpose and the data used to make decisions, and providing the relevant consumer rights (detailed below).
Publicly Disclosing on their websites the types of high-risk AI systems currently deployed, and how known or reasonably foreseeable risks of algorithmic discrimination are being managed.
Developers are primarily tasked with providing documentation to help deployers fulfill their duties. This includes high-level summaries of training data types, system limitations, purposes, performance evaluations, and risk mitigation measures for algorithmic discrimination. Additionally, developers must publicly disclose on their websites summaries of high-risk AI systems sold or shared and detail how they manage risks of algorithmic discrimination.
Both developers and deployers must notify the Attorney General of any discovered instances of algorithmic discrimination.
3. Duty of Care to Mitigate Algorithmic Discrimination: Developers and deployers are also subject to a duty to use “reasonable care” to protect consumers from “any known or reasonably foreseeable risks of algorithmic discrimination from use of the high-risk AI system.” In the Brief, FPF notes that the CAIA’s algorithmic discrimination provisions appear to cover both intentional discrimination and disparate impact. Developers and deployers enjoy a rebuttable presumption of having used reasonable care under this provision if they satisfy their role-specific obligations. Compared with a blanket prohibition on algorithmic discrimination, as seen in other legislative proposals, the duty of care approach likely means that enforcers of the CAIA will assess developer and deployer actions under a proportionality test, considering the relevant factors, circumstances, and industry standards, to determine whether they exercised reasonable care to prevent algorithmic discrimination.
4. Novel Consumer Rights: Like many proposals to regulate AI, the CAIA gives consumers the right to be notified about the use of high-risk AI systems that make decisions about them and to receive a statement disclosing the purpose of the system and the nature of its consequential decision. Because Colorado consumers already hold data privacy rights under their state privacy law, deployers must also inform consumers of their right to opt out of profiling in furtherance of solely automated decisions under the Colorado Privacy Act.
The CAIA also creates novel consumer rights where a deployer used a high-risk AI system to reach a consequential decision that is adverse to an individual. In those scenarios, the deployer must provide the individual with an explanation of the reasons for the decision, an opportunity to correct any inaccurate personal data the system processed for the decision, and an opportunity to appeal the decision for human review. However, deployers may not be required to provide the right to appeal if it is not technically feasible or it is not in the best interest of the individual, such as where delay would threaten an individual’s health or safety.
5. Attorney General Authority: Though the CAIA does not create a private right of action, it grants the Colorado Attorney General significant authority to enforce the law and implement necessary regulations. If the Attorney General brings an enforcement action, a developer, deployer, or other person may assert an affirmative defense based on their compliance with the NIST AI RMF, another recognized national or international risk management framework, or any other risk management framework designated by the Attorney General. The Attorney General also has permissive rulemaking authority in a variety of other areas, such as documentation requirements, requirements for developer and deployer notices and disclosures, and the content and requirements of deployer impact assessments.
Lastly, though the enactment of the CAIA was informed by extensive stakeholder engagement efforts led by Colorado Senate Majority Leader Rodriguez and Connecticut Senator Maroney, FPF raises several questions and considerations about the implementation and enforcement of the CAIA in the Policy Brief, such as:
Metrics: Because the CAIA does not mandate the use of particular metrics to identify and measure algorithmic discrimination, developers and deployers will have flexibility to choose how to measure and test for bias. Are there metrics or testing approaches that may be considered unreasonable or that fail to pass muster? (A toy example of one common bias-screening metric appears after this list.)
Technical Feasibility: When would it not be “technically feasible” to provide a consumer a right to appeal an adverse decision? Are burden or a lack of resources appropriate considerations? Is there, or should there be, a consumer right when a deployer inappropriately denies a consumer’s right to appeal?
Enforcement: How will the law interact with existing civil rights statutes? Although CAIA does not include a private right of action, can an individual use information disclosed under this law as a basis to exercise their existing civil rights? Conversely, if an action is brought against an entity for algorithmic discrimination under existing civil rights law, could the defendant utilize information or standards compliance under the CAIA as a defense?
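As an illustration of the flexibility noted under “Metrics” above, the hypothetical Python sketch below computes one widely used bias-screening measure, a disparate impact (selection-rate) ratio. The CAIA does not mandate this or any other metric; the function names, data, and the 0.80 reference point in the comments are invented for the example.

```python
# Hypothetical illustration only: one simple metric a deployer *might* use when testing
# a high-risk AI system for potential algorithmic discrimination. Nothing in the CAIA
# prescribes this metric, and the outcome data below is invented.

def selection_rate(decisions: list[bool]) -> float:
    """Share of people in a group who received the favorable outcome."""
    return sum(decisions) / len(decisions) if decisions else 0.0


def disparate_impact_ratio(group_a: list[bool], group_b: list[bool]) -> float:
    """Ratio of the lower group's selection rate to the higher group's.
    Values well below 1.0 can flag a disparity worth investigating further."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    low, high = sorted((rate_a, rate_b))
    return low / high if high else 1.0


if __name__ == "__main__":
    # Invented outcomes of an automated lending decision for two demographic groups.
    approved_group_a = [True, True, False, True, True, False, True, True]
    approved_group_b = [True, False, False, True, False, False, True, False]
    ratio = disparate_impact_ratio(approved_group_a, approved_group_b)
    print(f"Disparate impact ratio: {ratio:.2f}")  # ratios below ~0.80 often prompt closer review
```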
If the state legislature’s AI taskforce or the Attorney General does not address these questions in the next session, many of these issues may only be resolved through litigation.
Nonetheless, given concerns raised by the Governor, we may expect to see changes to the law that could alter the scope, substance, and allocation of responsibility. For now, though, the CAIA stands as it is currently written, and remains the first-in-the-nation law to regulate the AI industry, protect consumers, and mitigate the risks of algorithmic discrimination. FPF will continue to closely monitor updates and developments as they progress.
This blog post is for informational purposes only and should not be used or construed as legal advice.
FPF Launches Effort to Advance Privacy-Enhancing Technologies, Convenes Experts, and Meets With White House
FPF’s Research Coordination Network will support developing and deploying Privacy-Enhancing Technologies (PETs) for socially beneficial data sharing and analytics.
FPF’s RCN will bring together a multi-stakeholder community of academic researchers, industry practitioners, policymakers, and others to identify key barriers to responsible use of PETs and opportunities for PETs to enable ethical data use and sharing. Some PETs offer new anonymization tools, while others enable collaborative analysis on privately-held datasets, allowing the use of data without the need to share or disclose the data itself. Given the wide range of use cases and applications for PETs, particularly in the field of AI, the RCN will hold regular meetings to promote ethical data use, encourage responsible scientific research and innovation, and ensure that individuals and society can benefit from data sharing and analytics. The RCN will also engage with FPF’s Global PETs Network in an effort to increase regulatory clarity regarding PETs.
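To make the idea of collaborative analysis on privately-held datasets more concrete, the hypothetical Python sketch below shows a toy version of secure aggregation, one family of PETs in this space. It is our own simplified illustration rather than a protocol endorsed by the RCN: each party masks its private value with pairwise random offsets that cancel in the sum, so an aggregator learns the total without seeing any individual contribution.

```python
import secrets

# Hypothetical illustration only: a toy secure-aggregation scheme. Each party's value is
# hidden behind pairwise random masks that cancel out when all masked shares are summed,
# so the aggregator learns only the aggregate. Real protocols add dropout handling,
# authenticated channels, and careful key agreement; this is just the core intuition.

MODULUS = 2**61 - 1  # work modulo a large prime so masks wrap around cleanly


def mask_inputs(private_values: list[int]) -> list[int]:
    """Return one masked share per party; the pairwise masks sum to zero mod MODULUS."""
    n = len(private_values)
    masked = [v % MODULUS for v in private_values]
    for i in range(n):
        for j in range(i + 1, n):
            r = secrets.randbelow(MODULUS)      # pairwise secret shared by parties i and j
            masked[i] = (masked[i] + r) % MODULUS
            masked[j] = (masked[j] - r) % MODULUS
    return masked


if __name__ == "__main__":
    hospital_counts = [120, 75, 310]            # each organization's private statistic
    shares = mask_inputs(hospital_counts)       # what the aggregator actually receives
    print("Masked shares (individually meaningless):", shares)
    print("Aggregate total:", sum(shares) % MODULUS)  # equals 505 without exposing any input
```

The design point this sketch is meant to convey is the one in the paragraph above: useful aggregate analysis can proceed without the underlying data ever being pooled or disclosed.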
Today’s virtual meeting will gather subject-matter experts to focus on the broad definitions of PETs, their risks and benefits, and policy work that could unlock their use in more contexts. Following the meeting, prominent researchers and industry leaders will join a roundtable discussion with executive branch officials at the White House on the intersection of PETs, AI, and data privacy.
“Today’s event officially kicks off FPF’s three-year project,” said John Verdi, FPF’s Senior Vice President for Policy, who serves as the project’s principal investigator. “We are thrilled to play an important role in this concerted effort to advance regulatory clarity regarding PETs, AI, and emerging technologies. The diversity of perspectives in the PETs Research Coordination Network will be key to its success in developing best practices and policy recommendations.”
In addition to the main expert group, FPF will convene a regulator sub-group focused specifically on legal and regulatory mechanisms supporting the development and use of PETs. More information is available here.
The Research Coordination Network (RCN) for Privacy-Preserving Data Sharing and Analytics is supported by the U.S. National Science Foundation under Award #2413978 and the U.S. Department of Energy, Office of Science under Award #DE-SC0024884.
###
About the Future of Privacy Forum (FPF)
The Future of Privacy Forum (FPF) is a global non-profit organization that brings together academics, civil society, government officials, and industry to evaluate the societal, policy, and legal implications of data use, identify the risks, and develop appropriate protections.
FPF believes technology and data can benefit society and improve lives if the right laws, policies, and rules are in place. FPF has offices in Washington D.C., Brussels, Singapore, and Tel Aviv. Learn more at fpf.org.
We’re in this Together: Expert Speakers Explore Topics Related to Protecting Privacy, Security, and Online Safety for Young People in Australia
On June 26, the Future of Privacy Forum (FPF) and the Australian Strategic Policy Institute (ASPI) co-hosted an online discussion on Privacy, Security, and Online Safety for Young People in Australia. The event opened with welcoming remarks from John Verdi (FPF) and Bart Hogeveen (ASPI), and the panel consisted of experts across all three disciplines, including:
Mike Bareja, Deputy Director of Cyber, Technology & Security, ASPI (moderator)
Peter Leonard, Principal and Director, Data Synergies
Lizzie O’Shea, Founder and Chair, Digital Rights Watch
Amber Hawkes, Principal, Online Safety & Digital Literacy, Pearl Consulting
Dr. Susanne Lloyd-Jones, Cyber Security CRC Post Doctoral Fellow at the UNSW Allens Hub for Technology, Law and Innovation
Hayden Wells, Detective Acting Superintendent, Australian Centre to Counter Child Exploitation (ACCCE)
The discussion came just days after Australia’s eSafety Commissioner published the final pending industry standards to govern the treatment of Child Sexual Exploitation Material (CSEM) as well as pro-terror material, crime and violence material, and drug-related material (collectively, “class 1A” and “class 1B” material). These final standards address Designated Internet Services and Relevant Electronic Services, joining six other codes covering other categories of services.
In October 2023, prior to the publication of the draft industry standards, FPF hosted a roundtable conversation with expert contributors from across Australia to explore potential benefits and risks that may arise with different approaches. The final Outcomes Report from that event highlighted key takeaways relevant to regulations in this area. The Office of the eSafety Commissioner will now look to industry codes for “class 1C” and “class 2” material, to cover online pornography and “other high-impact material.”
The Australian Parliament is also currently considering updates to the Privacy Act to govern how personal information may be processed. The updates, which are expected later this year, are likely to include additional protections that would apply only to children (defined as those under 18).
Speakers at the June 26 event engaged in an educational and far-ranging conversation that raised several important topics and themes. While the panelists discussed the need to ensure that any action in this area was appropriate to Australia’s unique culture and needs, many also recognized that the approaches being implemented in Australia are serving as a model for countries around the world – including countries with fewer protections for individual rights.
Several speakers spoke to the importance of having inclusive conversations that break down the silos around related regulatory topics. As was noted, government and industry responses to questions of safety, security, and privacy often overlap and would generally benefit from greater collaboration, both where the proposed response to one interest may contravene another and where action taken in one area may complement or benefit the work being done in another.
Many speakers referenced ongoing discussions on encryption (i.e., technology applied to protect transactions from unwanted or unintended recipients) and indicated that it goes to the heart of all three topics. While encryption, and specifically end-to-end encryption, may in some cases make obtaining specific content more difficult for investigators, it is also widely considered one of the most important methods for protecting communications and interactions in the digital world, providing increased privacy, security, and safety. In addition to encryption, speakers discussed the impact that emerging technologies are having across each of these areas, from quantum cryptography and generative artificial intelligence to immersive and “embodied” technologies, all of which may drive significant benefits as well as risks for young people and may require nuanced, comprehensive responses.
Speakers also emphasized the importance of providing tailored education and resources to everyone involved in responding to material that may create risks for young people, including regulators, investigators, and civil society organizations as well as parents and children themselves. Speakers explained that resources must meet people, particularly young people, where they are. Regarding banning young people from social media, many speakers described how such action may be more likely to cause harm than provide benefit: young people need to build the skills and resilience required to interact in those spaces, and a ban would inhibit their ability to develop them. Speakers also discussed the critical importance of transparency and accountability, both for regulators and for industry.
You can watch the full discussion on FPF’s YouTube page. Please visit FPF’s website for more information on the work FPF is doing on children’s privacy and cybersecurity. FPF will host additional in-person events later this year in major Australian cities, drilling down into different topics in this space. These events will be open to the public. Stay tuned for more information, subscribe to our newsletter to receive updates about the events, and stay informed about FPF APAC news.