Understanding Body-Related Data Practices and Ensuring Legal Compliance in Immersive Technologies
Organizations are increasingly incorporating immersive technologies like extended reality (XR) and virtual worlds into their products and services, blurring the boundaries between the physical and digital worlds. Immersive technologies hold the potential to transform the way people learn, work, play, travel, and take care of their health, but may create new privacy risks as well. Many of these technologies rely on large amounts of data about individuals’ bodies, without which they would be less immersive, and in some cases couldn’t function at all.
Body-related data raises particular privacy risks, and leading organizations in the immersive technology space are adopting risk-based approaches for handling this type of data. Focusing on the risks—to the organization and to those impacted by the organization’s data practices—makes it easier not only to comply with the law but also to ensure more ethical data practices.
There are concrete steps organizations can take to ensure that body-related data is handled safely and responsibly. As part of their data protection strategies, organizations should:
Understand their data practices: mapping these practices, specifying their purposes, and identifying all relevant stakeholders.
Evaluate their legal obligations: analyzing existing legal obligations, as well as how they may change in the near future based on emerging trends.
Identify risks to individuals, communities, and society: cataloging the features of their data and data practices that create greater risks.
Implement best practices: operationalizing technical, organizational, and legal safeguards to prevent or mitigate the identified risks.
To guide organizations as they develop their body-related data practices, the Future of Privacy Forum created the Risk Framework for Body-Related Data in Immersive Technologies. This framework serves as a straightforward, practical guide for organizations to analyze the unique risks associated with body-related data, particularly in immersive environments, and to institute data practices that are capable of earning the public’s trust. Developed in consultation with privacy experts and grounded in the experiences of organizations working in the immersive technology space, the framework is also useful for organizations that handle body-related data in other contexts. This post will explore the first two stages of the risk framework: understanding an organization’s data practices, and evaluating legal obligations to ensure compliance.
I. Understanding how organizations handle personal data
The first step to handling body-related data is for organizations to understand how they handle personal data. Doing so will help them communicate these practices to their users, regulators, the general public, and other relevant stakeholders. Developing a comprehensive understanding of an organization’s data practices is also critical for identifying potential privacy risks and implementing best practices to mitigate them. Organizations should bring together experts from different teams to document how they collect, use, and onwardly transfer body-related data. The following steps help organizations conduct these processes effectively.
Create data maps of data practices, particularly in regard to body-related data
Data mapping is the process of creating an inventory of all the personal data an organization handles, including how it’s used, to whom it is transferred, and how long it is kept. While tools exist to assist organizations with data mapping, it is helpful to assign a designated person within an organization, such as a chief privacy officer or data protection officer, to be responsible for completing the data map. Data mapping also helps organizations in certain jurisdictions maintain compliance with legal obligations related to data practice documentation. Certain kinds of body-related data—such as data about people’s faces, hands, voices, and body movements—will be particularly relevant in immersive environments, and organizations operating in this space should pay special attention to them.
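To make this exercise concrete, a data map entry can be represented as a simple structured record per data type. The sketch below is a minimal, hypothetical illustration in Python; the field names (data_type, purposes, recipients, retention_period, and so on) are assumptions rather than a prescribed schema, and real inventories are typically maintained in dedicated data-mapping tools.

```python
from dataclasses import dataclass, field

@dataclass
class DataMapEntry:
    """One row in a body-related data inventory (illustrative fields only)."""
    data_type: str                  # e.g., "hand tracking", "eye gaze", "voice recording"
    source: str                     # the device, sensor, or user input that produces the data
    purposes: list[str] = field(default_factory=list)    # why the data is handled
    recipients: list[str] = field(default_factory=list)  # internal teams and third parties
    retention_period: str = "unspecified"                 # how long the data is kept
    storage_location: str = "unspecified"                 # on-device, cloud region, etc.

# Hypothetical entry for an XR hand-tracking feature
entry = DataMapEntry(
    data_type="hand and finger position",
    source="headset cameras",
    purposes=["gesture-based input", "avatar animation"],
    recipients=["rendering pipeline", "analytics vendor"],
    retention_period="session only",
    storage_location="on-device",
)
```

Even a lightweight record like this captures the questions that later stages of the framework depend on: what is collected, why, who receives it, and how long it is kept.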
Document the purpose of each data practice
In order to determine which data practices are necessary, and which may be adjusted, organizations must be able to specify what goal or purpose each practice serves. Organizations might engage in a particular data practice for a variety of purposes: enabling relevant features or products, improving a product’s technical performance, facilitating targeted advertising, or customizing a user’s experience, to name a few. This documentation will help inform an organization’s evaluations of its privacy risks and legal obligations, and generate buy-in from business stakeholders within the organization by linking their interests to privacy compliance.
Identify all relevant stakeholders impacted by data practices
Evaluating an organization’s legal obligations and privacy risks requires key organizational leaders to understand which stakeholders are implicated—both as partners in data transfer agreements and as people impacted by the organization’s data practices. Organizations must understand the kinds of entities to whom they are transferring data, and who specifically within those third parties handles the data. They should also understand who is impacted by their data practices, including data subjects or users as well as bystanders whose data may also be implicated. Special attention should be paid to individuals and communities whose data may raise additional legal or ethical considerations, such as children and teens, and people from historically marginalized or vulnerable communities.
II. Analyzing relevant legal frameworks and ensuring compliance
Once an organization has established a thorough understanding of its data practices, the next step in preparing to handle body-related data is to evaluate whether the enumerated data practices are in compliance with the law. Collecting, using, or transferring body-related data may implicate a number of issues under current U.S. privacy law. However, most existing regulations were not drafted with immersive technologies in mind. It can therefore sometimes be unclear how these rules apply to immersive technologies, and an organization’s obligations will depend on where it operates, what kind of data it handles and why, and the size and nature of the organization, among other factors.
To understand and comply with all existing obligations, organizations need to know the scope of data types covered by current laws, the requirements and rights that attach to them, and the unique considerations that may apply in immersive spaces and in regard to body-related data. Depending on the jurisdiction, existing U.S. privacy laws apply to body-related data that qualifies as personal, biometric, sensitive, health, or publicly available data, and organizations should pay special attention to the specific requirements under such laws.
Organizations dealing with these data types have certain legal obligations, including:
Granting users access, correction, and deletion rights
Providing users with opportunities to consent
Avoiding “dark patterns” and manipulative or deceptive design
Being transparent and providing notice to users
Minimizing data collection and retention to what is necessary
Conducting data protection impact assessments (DPIAs)
Instituting protections for kids and teens
2023 proved to be a significant year for state privacy laws, and new legislation and regulations will continue to impact the data privacy legal landscape. Organizations should keep an eye on the major areas for emerging legislation, such as youth privacy and safety, as well as consumer health data. They should also monitor how emerging litigation, by interpreting current legislative language, may reshape existing requirements.
For more information on what organizations can do to ensure they handle body-related data safely and responsibly, look for the next post in our series, focusing on identifying risks and implementing best practices. For a comprehensive guide to body-related data practices in immersive technologies, see FPF’s Risk Framework for Body-Related Data in Immersive Technologies.
FPF in 2023: A Year in Review
As 2023 comes to an end, we want to reflect on a year that saw the Future of Privacy Forum (FPF) continue to expand its presence globally and domestically while organizing engaging events, publishing thought-provoking analysis, providing the latest expert updates, and more. FPF continues to convene industry leaders, academics, consumer advocates, and other experts to explore the challenging issues in the data protection and privacy field.
The AI Impact
2023 was the year of AI. We saw AI technologies catapulted into the mainstream with Generative AI tools such as ChatGPT, Google Bard, and others. Countries worldwide are working to regulate the technology, while companies are scrambling to figure out how to navigate AI use among their employees and within their products and services.
To respond to the demand for understanding of AI, FPF worked with stakeholders on best practices, provided in-depth training on AI-related topics, and discussed the evolving impact of this technology with many of you at roundtable discussions, expert panels, and more.
Here are some of FPF’s biggest AI moments of 2023:
Hosted our first-ever Japan Privacy Symposium where Data Protection and Privacy Commissioners of the G7 DPAs discussed their approaches to regulating AI.
Discussed alternative solutions for processing of (personal) data with Machine Learning at CPDP Brussels and generative AI systems in Asia-Pacific during Singapore’s PDP Week.
Participated in a Capitol Hill briefing hosted by the Wilson Center and Seed AI in conjunction with the Congressional Artificial Intelligence Caucus “AI Primer: AI in the Workplace,” highlighting FPF’s Best Practices.
Provided testimony on the responsible use and adoption of AI technologies in New York City classrooms.
Published insightful op-eds in WIRED discussing the intersection of AI and immersive technologies and The Hill on generative AI and elections.
Held stakeholder workshops on the current regulation of generative AI throughout the APAC region.
Organized a session at the Global Privacy Assembly on the use of public information for LLM training.
Relaunched the FPF Training program, providing in-depth expert sessions on topics such as the EU AI Act, the fundamentals of AI and machine learning, and more.
Continuing FPF’s Global Reach
In 2023, FPF closely followed and advised upon significant developments in Asia, the European Union, Africa, and Latin America. We also discussed privacy and data protection with many of you at key conferences and events across the globe, including in Washington, DC, Brussels, Tokyo, Singapore, Bermuda, and Tel Aviv.
As India’s Digital Personal Data Protection Act sprinted through its final stages in August after several years of debates, postponements, and negotiations, FPF provided an in-depth, comprehensive explainer of its important aspects and key provisions, as well as discussed its extraterritorial effects in a LinkedIn Live conversation. The Act also focused on protections for the processing of children’s personal data and introduced the concept of “verifiably safe” measures; FPF, in partnership with The Dialogue, released a Brief containing a Catalog of Measures for “Verifiably Safe” Processing of Children’s Personal Data Under India’s Digital Personal Data Protection Act (DPDPA) 2023. In partnership with NASSCOM, FPF also hosted a webinar series on the consent regime under India’s new Digital Personal Data Protection Act of 2023.
FPF saw its presence in Asia continue to grow as the FPF Asia-Pacific office entered its third year. FPF and S&K Brussels hosted the first-ever Japan Privacy Symposium in Tokyo, providing insight into the regulatory priorities of the G7 DPAs and global thought leadership on the interaction of data protection and privacy laws with AI. During Singapore’s PDP Week, our Asia-Pacific team held a roundtable on the governance implications of generative AI systems, spoke at the Asia Privacy Forum, and hosted an in-person training on the EU AI Act.
FPF remains consistently active in the European Union, with several engaging events bringing together the European data privacy community and numerous thought-provoking blogs, reports, and analyses published in 2023. FPF launched its in-depth report on enforcement of the EU’s GDPR Data Protection by Design and by Default obligations and hosted our 7th Annual Brussels Privacy Symposium with the Brussels Privacy Hub of Vrije Universiteit Brussel, which included opening remarks by European Commissioner for Justice Didier Reynders and European Data Protection Supervisor Wojciech Wiewiórowski. We also analyzed the regulatory strategies of European DPAs for 2023 and beyond in our continuing series.
In addition, our global experts provided analysis on privacy and data protection developments in Vietnam, Nigeria, Australia, Tanzania, and the African Union and published an overview comparing three regional model contractual frameworks for cross-border data transfers.
U.S. Legislative Activity
In 2023, FPF played a key role in informing regulatory agencies and state legislatures on privacy in various emerging technologies, such as AI. Our experts testified before state legislatures, provided informative analysis, submitted regulatory comments, and more.
We provided recommendations and filed comments with the:
U.S. Department of Health and Human Services Office for Civil Rights regarding the Notice of Proposed Rulemaking on extending additional protections to reproductive health care data under the Health Insurance Portability and Accountability Act.
U.S. Federal Trade Commission regarding the Notice of Proposed Rulemaking to clarify the scope and application of the Health Breach Notification Rule, and again regarding the use of “Privacy-Protective Facial Age Estimation” as a potential mechanism for verifiable parental consent under the Children’s Online Privacy Protection Act Rule.
California Privacy Protection Agency to inform the Agency’s rulemaking to implement the California Privacy Rights Act amendments to the California Consumer Privacy Act’s provisions on cybersecurity audits, risk assessments, and automated decision-making.
National Telecommunications and Information Administration in response to their request for comment on privacy, equity, and civil rights, and again in response to their request for comment on Kids Online Health and Safety as part of the Biden-Harris Administration’s Interagency Task Force on Kids Online Health & Safety.
Consumer Financial Protection Bureau in response to their request for comment regarding data portability for financial products and services, and again in response to their Request for Information (RFI) Regarding Data Brokers and Other Business Practices Involving the Collection and Sale of Consumer Information.
2023 also saw developments in various U.S. state commercial privacy laws. We found that the number of state laws increased from five to twelve (or, arguably, thirteen), and in response provided timely analysis of the new laws in Iowa, Indiana, Montana, Tennessee, Florida, Texas, Connecticut, Oregon, Utah, and Delaware. In addition, Washington and Nevada became the first states to pass broad-based consumer health data privacy legislation. Earlier this month, our Director for U.S. Legislation Keir Lamont took a look ahead at the state privacy landscape in 2024.
For the 13th year, FPF recognized leading privacy research and analytical work with the Privacy Papers for Policymakers Award held on Capitol Hill. The winners spoke about their research in front of an audience of academic, industry, and policy professionals in the field. The event featured keynote speaker FTC Commissioner Alvaro Bedoya.
Youth & Education Privacy
Federal and state policymakers turned their attention to the protection of children online, with President Biden notably mentioning the issue for a second year in a row during this year’s State of the Union address.
In partnership with LGBT Tech, we outlined recommendations for schools and districts to balance inclusion and student safety in technology use. Our analysis builds on thorough research, including interviews with recent high school graduates who identify as LGBTQ+, to gather firsthand accounts of how student monitoring impacted their feelings of privacy and safety at school.
Over the summer, we published one of our popular infographics examining age assurance technologies. The infographic’s authors unpacked the risks and potential harms associated with attempting to discern someone’s age online and potential mitigation tools in this LinkedIn Live conversation.
Privacy by design for kids and teens also expanded globally in 2023. As policymakers, advocates, and companies grapple with the ever-changing landscape of youth privacy regulation, we hosted a well-attended webinar with a wide range of global experts discussing the current state of kids’ and teens’ privacy policy.
The Rise of Emerging Technologies, Examining the Open Banking Ecosystem, & Analysis on Research Data Sharing
As stakeholders became increasingly interested in immersive technologies, notably AR/VR/MR, we responded by releasing the Risk Framework for Body-Related Data in Immersive Technologies, which assists organizations in safely and responsibly handling body-related data. Our team also held a series of webinars exploring the intersection of immersive technology with topics like AI, advertising, education, and more.
In March, we published an infographic breaking down the complex U.S. open banking ecosystem. The infographic was supported by over a year of meetings and outreach with leaders in banking, credit management, financial data aggregation, and solution providers to comprehensively understand this developing industry, and its authors discussed its privacy implications in a LinkedIn Live conversation.
In 2023, we continued to examine privacy and research data sharing by producing Data Sharing for Research: A Compendium of Case Studies, Analysis, and Recommendations, demonstrating how, for many organizations, data-sharing partnerships are transitioning from being considered an experimental business activity to an expected business competency. We also held the 3rd Annual Award for Research Data Stewardship, honoring representatives from Optum and the Mayo Clinic for their outstanding corporate-academic research data-sharing partnership. During this virtual event, we opened with a keynote address by U.S. Congresswoman Lori Trahan.
Bringing Together Leaders in Privacy and Data Protection
On a different track, FPF also built out a wide range of peer-to-peer meetings and calls for senior executives working on data protection compliance issues. We hosted virtual meetings on key topics of interest every other month, smaller meetings for specific sector leaders, and in-person meetings in multiple cities.
The “Current State of Global Opt-Out Technology” event in March provided members with insights into regulator discussions, along with a vendor showcase featuring solution demonstrations.
Hosted 50+ in-person and virtual peer-to-peer meetings across the globe for intimate discussions among privacy executives focused on their top-of-mind issues.
Launched Privacy Metrics 2.0 to help advance industry OKRs and the underlying metrics, provide privacy leaders with the tools they need to have effective conversations with their boards, and provide useful information for ESG reporting and investor communications.
This is by no means a comprehensive list of all of FPF’s important and engaging work in 2023, but we hope it gives you a sense of our work’s impact on the privacy community and society at large. We believe our success is due to deep engagement with privacy experts in industry, academia, civil society, and government, and to our conviction that collaborating across sectors and disciplines is needed to advance practical safeguards for data uses that benefit society. Keep updated on FPF’s work by subscribing to our monthly briefing and following us on LinkedIn, Twitter/X, and Instagram.
On behalf of the FPF team, we wish you a very Happy New Year and look forward to celebrating 15 years of FPF in 2024!
FPF Publishes New Report: A Conversation on Privacy, Safety, and Security in Australia: Themes and Takeaways
Australia’s Online Safety Act of 2021 (“Online Safety Act”) mandates the development of industry codes or standards to provide appropriate community safeguards with respect to certain online content, including child sexual exploitation material, pro-terror material, crime and violence material, and drug-related material. Through September 2023, the eSafety Commissioner has registered six industry codes that cover: Social Media Services, App Distribution Services, Hosting Services, Internet Carriage Services, Equipment, and Internet Search Engine Services. In May 2023, however, the Commissioner rejected proposed codes for relevant electronic services (“RES”) and designated internet services (“DIS”) on the grounds that they “do[] not provide appropriate community safeguards.” Under the Online Safety Act, the rejection of the RES and DIS codes by the Office of the eSafety Commissioner initiated a process in which the Commissioner drafted industry standards for these sectors. A draft of the industry standards was published on November 20, 2023, and is open for public comment until December 21, 2023.
For purposes of the FPF meeting, participants were asked to assume the existence of industry standards that satisfy the Online Safety Act’s statutory requirements. As such, the goal was not to solicit arguments about any specific approach, but rather to provide an opportunity for experts to discuss underlying opportunities and challenges in regard to the creation of industry standards, particularly in regard to partially or entirely end-to-end encrypted services. While meeting participants were not in full agreement on every point, many themes came up multiple times within the conversation, and there were areas of consensus on certain points, including:
Participants agreed broadly on the goals of the Online Safety Act and the mission of the eSafety Commissioner
Several participants found deficits in the length and scope of the public consultation available throughout the process
Participants identified several potential benefits of an industry code beyond its intended scope
Participants broadly opposed any approach that would require otherwise encrypted messaging services to utilize content hashing and/or client-side scanning
Many participants discussed the need for unique treatment for different types of content based on distinctions in context
Participants flagged previous cases of mission drift in regard to certain legal authorities and warned of similar evolution
Participants flagged an important role for greater education, both for individuals as well as enforcers
Participants supported a broad public dialogue on effective responses and solutions
Participants identified a large number of unanswered questions in regard to the creation, implementation, and enforcement of industry codes that left much uncertainty
Australia has played a leadership role globally on issues related to Online Safety and is likely to continue to do so
Risk Framework for Body-Related Data in Immersive Technologies
Organizations building immersive technologies like extended reality and virtual worlds often rely on large amounts of data about individuals’ bodies and behaviors. While body-related data allows for new, positive applications in health, education, entertainment, and more, it can also raise privacy and safety risks. FPF’s risk-based framework helps organizations seeking to develop safe, responsible immersive technologies, guiding them through the process of documenting how and why they handle body-related data, complying with applicable laws, evaluating their privacy and safety risks, and implementing best practices.
While the framework is most useful for organizations working on technologies with immersive elements, it is also useful for organizations that handle body-related data in other contexts.
Stage 1: Understanding How Organizations Handle Personal Data
Understanding your organization’s data practices is the first step toward identifying potential privacy risks, ensuring legal compliance, and implementing relevant best practices to improve privacy and safety. It can also allow organizations to better communicate about those practices. To this end, organizations should:
Create data maps of their data practices, particularly in regard to body-related data types.
Document the purpose of each data practice.
Identify all relevant stakeholders impacted by data practices, including third-party recipients of personal data and data subjects.
Stage 2: Analyzing Relevant Legal Frameworks and Ensuring Compliance
Collecting, using, or transferring body-related data may implicate a number of current and emerging U.S. privacy laws. As such, organizations should:
Understand the individual rights and business obligations that apply under existing comprehensive and sectoral privacy laws.
Analyze how emerging legislation and regulations will impact body-based data practices.
Stage 3: Identifying and Assessing Risks to Individuals, Communities, and Society
Privacy harms may stem from particular types of data being used or handled in particular ways, or transferred to particular parties. In that regard, legal compliance may not be enough to mitigate risks, and organizations should:
1. Proactively identify and minimize the risks their data practices could pose to individuals, communities, and society. Factors that impact the risk of a data practice include:
Identifiability
Use for critical decisions
Sensitivity
Partners and third parties
Potential for inferences
Data retention
Data accuracy and bias
User expectations and understanding
2. Assess how fair, ethical, and responsible the organization’s data practices are based on the identified risks.
Stage 4: Implementing Relevant Best Practices
There are a number of legal, technical, and policy safeguards that can help organizations maintain statutory and regulatory compliance, minimize privacy risks, and ensure that immersive technologies are used fairly, ethically, and responsibly. Organizations should:
1. Implement best practices intentionally—adopted with consideration of an organization’s data practices and associated risks; comprehensively—touching all parts of the data lifecycle and addressing all relevant risks; and collaboratively—developed in consultation with multidisciplinary teams within an organization including stakeholders from legal, product, engineering, privacy, and trust and safety. Such practices include the following (a brief illustrative sketch appears at the end of this stage):
Data minimization
Local and on-device processing and storage
Purpose specification and limitation
Third party management
Meaningful notice and consent
Data integrity
User controls
Privacy-enhancing technologies (PETs)
2. Evaluate best practices in regard to one another, as part of a coherent strategy.
3. Assess best practices on an ongoing basis to ensure they remain effective.
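As a purely hypothetical illustration of how two of the practices above (data minimization and local, on-device processing) can work together in an immersive application, the Python sketch below reduces raw eye-gaze samples to a coarse aggregate on the device, so that per-frame coordinates never leave it. The function and field names are invented for this example and are not drawn from the framework itself.

```python
import statistics

def minimize_gaze_batch(gaze_samples: list[tuple[float, float]]) -> dict:
    """Summarize raw eye-gaze samples on-device before any transmission.

    Hypothetical sketch: the raw per-frame coordinates stay local, and only
    a coarse, less identifying aggregate is made available for upload.
    """
    xs = [x for x, _ in gaze_samples]
    ys = [y for _, y in gaze_samples]
    return {
        "mean_x": round(statistics.mean(xs), 2),
        "mean_y": round(statistics.mean(ys), 2),
        "sample_count": len(gaze_samples),
    }

# Raw samples remain on the headset; only the summary would be transmitted.
summary = minimize_gaze_batch([(0.41, 0.52), (0.43, 0.50), (0.40, 0.55)])
print(summary)  # {'mean_x': 0.41, 'mean_y': 0.52, 'sample_count': 3}
```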
Five Big Questions (and Zero Predictions) for the U.S. State Privacy Landscape in 2024
Entering 2024, the United States now stands alone as the sole G20 nation without a comprehensive, national framework governing the collection and use of personal data. With bipartisan efforts to enact federal privacy legislation once again languishing in Congress, state-level activity on privacy dramatically accelerated in 2023. As the dust from this year settles, we find that the number of states with ‘comprehensive’ commercial privacy laws swelled from five to twelve (or, arguably, thirteen), a new family of health-specific privacy laws emerged in Democratic-led states while Republican-led states increasingly adopted controversial age verification and parental consent laws, and state lawmakers took the first steps towards comprehensively regulating the development and use of Artificial Intelligence technologies.
While stakeholders are eager to know whether and how these 2023 trends will carry over into next year’s state legislative cycle, it is too early to make predictions with any confidence. So instead, this post explores five big questions about the state privacy landscape that will shape how 2024 legislative developments will impact the protection of personal information in the United States.
1. Will Any State Buck the Consensus Framework for ‘Comprehensive’ Privacy Protections?
Following the adoption of the California Consumer Privacy Act (CCPA) in 2018, many stakeholders expressed concern that U.S. states were poised to enact a deluge of divergent and conflicting state privacy laws, confusing individuals and placing onerous compliance burdens on businesses. To date, the worst case scenarios for this dreaded “patchwork” have largely not come to pass. Instead, lawmakers outside California have repeatedly rejected the convoluted and ever-shifting CCPA approach in favor of iterating around the edges of the more streamlined Washington Privacy Act framework. Alternative approaches like the ULC model bill or frameworks rooted in the federal American Data Privacy and Protection Act proposal have failed to gain any serious traction. Will this trend hold, or is any state positioned to upend the bipartisan consensus on privacy legislation and adopt an alternative regulatory framework that creates novel individual rights, covered entity obligations, or enforcement provisions?
Despite the overarching trend of regulatory convergence, there are still meaningful differences between the post-California comprehensive state privacy laws. Notable new wrinkles adopted in the 2023 legislative sessions include the Texas requirement that even small businesses obtain consent to sell sensitive personal data, Oregon creating a right-to-know the specific third parties who receive personal data from covered entities, and Delaware extending certain protections for adolescents up to the age of seventeen. However, for the most part, the new class of comprehensive commercial privacy laws adhere to the same overarching framework, definitions, and core concepts, enabling regulated entities to build out one-size-fits-most compliance strategies.
Next year, states wishing to enact protections for personal data held by businesses will have a clear blueprint with a bipartisan track record of success for doing so. However, the emerging inter-state consensus for privacy protection is not without its critics. In particular, some privacy advocacy groups have argued that the current laws place too much of the onus for protecting privacy on individuals rather than the businesses and nonprofits that are engaged in the collection, processing, and transfer of user data and have supported various models that would take a different approach.
Based on the 2023 lawmaking sessions, two states stand out as potential candidates to buck the Washington Privacy Act-paradigm by virtue of having unique privacy proposals previously clear a chamber in their state legislature. First is the Kentucky Consumer Data Protection Act (SB 15) from Senator Westerfield which passed the State Senate by a 32-2 vote in 2023. This bill included a GDPR-style ‘lawful basis’ requirement for the collection of personal data. Second, in New York State, Senator Thomas (who is now running for Congress) shepherded the New York Privacy Act (S 365) through the State Senate. The proposal included numerous distinct privacy rights and protections, particularly with respect to first-party online advertising. Could 2024 be the year that one or both of these proposals cross the finish line?
2. What will California do on Artificial Intelligence?
Recent advancements and public attention to Artificial Intelligence (AI) systems, particularly those with generative capabilities, have placed AI high on the agenda for policymakers at all levels of government. To be sure, automated decision making and profiling technologies have been in use in various forms for many years and are regulated by existing legal regimes both within and outside the privacy context. Nevertheless, lawmakers appear keen to explore new governance models that will allow the U.S. to unlock the social and economic benefits promised by AI while minimizing risks to both individuals and communities. As has been the case with commercial privacy legislation, California once again appears poised to play an important role in establishing initial, generally applicable rules-of-the-road for business use of AI systems. However, this time there are two overlapping approaches that stakeholders must track.
Of the two efforts taking place in California, the first is with the California Privacy Protection Agency (“the Agency”). The CCPA charges the Agency with establishing rules “governing access and opt-out rights with respect to businesses’ use of automated decisionmaking technology” (ADMT). The Agency interprets this provision as an authorization to create standalone individual rights to opt-out of various automated processing technologies. Agency board member Alastair Mactaggart has gone so far as to call the Agency “probably the only realistic” AI regulator in the United States on the basis of this provision. To date, the Agency has proposed draft regulations that would create individual opt-out rights with respect to ADMT in six distinct circumstances that extend far beyond existing legal regimes. These include when ADMT is used to reach significant decisions about an individual, when ADMT is used to profile an employee or student, and when ADMT is used to profile an individual in a public place.
Second, California legislators have also taken an active interest in establishing broad protections and rights with respect to the use of AI systems. In 2023, Assemblymember Bauer-Kahan’s AB331 on automated decision tools made substantial legislative progress and appears likely to be reintroduced next year. The proposal is geared toward preventing algorithmic discrimination and imports a developer-deployer distinction from global frameworks for the allocation of risk management, rights, and transparency responsibilities. While the proposal was not enacted on its first attempt, AB331 has nevertheless already proven to be influential in shaping how policymakers in other states are considering AI systems.
Critically, these two emerging Californian approaches to regulating AI systems broadly overlap and are in tension on many key issues. For example, the CCPA’s draft regulations would include systems that so much as “facilitate” human decisions, while AB 331 is focused on systems that are the “controlling factor” for decisions. Separately, AB 331 is focused toward high-risk “consequential decisions,” while the CPPA is considering several applicability thresholds based on data collection and use in certain contexts that are unmoored from any objective standard of individual harm. The manner in which these diverging California processes advance, and questions about how they would operate in conjunction, are likely to play a major role in the emergence of standards for AI governance in the United States.
3. Will 2024 (Finally) be the Year of Privacy Enforcement Actions?
As the emerging state-driven approach to regulating individual privacy in the U.S. continues to mature, the contours of personal rights and business obligations will necessarily begin to be shaped not just by laws on the books, but also their interpretation, implementation and enforcement. While five ‘comprehensive’ state privacy laws will be in effect at the start of 2024, there remains a scarcity of regulator actions enforcing this new class of law. To date, the only known enforcement action that reached a financial penalty is the California Attorney General’s 2022 settlement with the French cosmetics retailer Sephora, which was based primarily on alleged failure to allow customers to opt-out of behavioral advertising. Following a quiet 2023, could 2024 be the year that the public first experiences widespread enforcement of their new privacy rights?
One structural reason for a lack of visible enforcement actions may be that Virginia, Colorado, Connecticut, and, until recently, California all provide the ability for businesses to ‘cure’ many or all alleged violations of their privacy laws before a formal enforcement action can take place (this right to cure will sunset in both Colorado and Connecticut in 2025). Therefore, initial enforcement activity in the first wave of state privacy laws may be happening largely out of the public eye, with businesses rapidly bringing their programs into compliance in response to notices of suspected noncompliance. Furthermore, while the CCPA’s right to cure has already sunset, the ability of its regulators to fully enforce the law has been thrown into doubt until next year due to missed rulemaking deadlines and a subsequent lawsuit from the California Chamber of Commerce.
Despite what may be perceived as initial slow going, there are several indicators of regulatory interest that may foreshadow forthcoming enforcement actions. For example, the Colorado Attorney General has announced the release of a series of enforcement letters focused on educating companies about their new obligations, particularly with respect to processing sensitive personal data. Furthermore, the California Attorney General’s Office and the California Privacy Protection Agency have launched separate inquiries, with the Attorney General’s office seeking information about how businesses are applying the CCPA to employee data while the Agency is investigating the connected vehicle space. The fruits of these efforts may result in an upswing in public enforcement activity in 2024.
Separately, much of the Washington My Health, My Data Act (MHMD), the first major state privacy law to contain a broad private right of action since the adoption of the Illinois Biometric Information Privacy Act (BIPA) in 2008, will take effect in March 2024. MHMD is a far-reaching and novel commercial health data privacy framework that contains numerous ambiguous and inartfully drafted provisions which may generate both confusion and ripe grounds for litigation. In contrast to BIPA, however, MHMD’s private right of action is tied to the state’s Consumer Protection Act, which lacks statutory damages and requires a showing of injury to ‘business or property’ to recover damages – a requirement that may temper the trial bar’s enthusiasm for lawsuits. The forthcoming litigation landscape around the MHMD and its perceived success or failure for advancing individual privacy protection may shape the state privacy enforcement landscape in 2024 and significantly influence whether private enforcement mechanisms are considered for inclusion in future privacy laws.
4. Which States will Tinker with their Existing Laws?
Despite the purported ‘comprehensiveness’ of the new state privacy laws, enacting a commercial privacy regime has been shown to often be just the start of a state’s legislative engagement on privacy matters. In 2023 alone, four of the initial five movers on state privacy took meaningful further steps on commercial privacy legislation. First, California lawmakers amended the CCPA to expand the definition of sensitive personal data and create protections for reproductive care information while also passing a first-of-its-kind law to establish a one-stop-shop mechanism to enable people to delete personal information held by data brokers. Second, before the Connecticut Data Privacy Act even took effect, its original sponsors successfully adopted amendments to dramatically expand its terms to include novel protections for health and child data. Third, Utah enacted new legislation creating far-reaching restrictions and age verification requirements for social media and adult content websites. Finally, Virginia came close to adopting a Governor-sponsored amendment to the landmark VCDPA which would have created verifiable parental consent requirements for the collection of personal information from children under age 18.
With a dozen comprehensive privacy laws now on the books that mostly share a similar framework, perhaps the question stakeholders should be asking is not ‘who is the next domino to fall’ but, ‘which existing law will be the first to be substantially revised?’
5. Is Any of this Constitutional Anyway?
Certain observers, particularly those more skeptical of government regulation, have long argued that wide-reaching state privacy laws are constitutionally suspect under the Dormant Commerce Clause and the First Amendment, particularly in light of the Sorrell v. IMS Health (2011) precedent. Such concerns and objections have been a long-simmering feature of the conversation around the evolving state privacy landscape; however, they gained new life in September when an Obama-appointed federal judge enjoined California’s novel California Age Appropriate Design Code Act (AADC) from taking effect. What impact will this injunction and ongoing litigation involving the AADC have on the broader U.S. privacy landscape?
Adopted in 2022, the California Age-Appropriate Design Code Act was always an odd fit for the American legal context. The statute is directly rooted in a United Kingdom Code of Practice designed to implement aspects of the General Data Protection Regulation with respect to children. Certain non-privacy focused AADC business requirements – like conducting age estimation of users, limiting access to “potentially” harmful content, and granting the state Attorney General power to second guess whether organizations’ content moderation decisions conform with their posted policies – are in clear tension with longstanding U.S. precedent.
It was therefore expected when the trade association NetChoice initiated litigation against the AADC in December 2022. However, in a surprise to many observers, the Court’s subsequent injunction systematically assessed and determined that essentially every affirmative obligation of the AADC is unlikely to survive commercial speech scrutiny, including privacy-focused requirements for conducting data protection impact assessments (DPIAs), setting high default privacy settings, minimizing data collection and processing, and restrictions on so-called ‘dark patterns.’ Many of these provisions are common features (at least conceptually) of both comprehensive and sectoral U.S. commercial privacy laws. Should the full scope of the District Court’s holding survive the state’s appeal intact, it will raise significant questions about the continued constitutional integrity of privacy laws across the country while providing a blueprint for subsequent legal challenges.
Conclusion
This commentary has noted several jurisdictions where impactful privacy legislation, regulation, enforcement, and litigation are a near certainty in the new year. However, the pace of state privacy activity has accelerated each year since 2018, and observers should expect a new barrage of privacy proposals beginning when state sessions formally convene in January. There are many questions, but perhaps only one clear forecast: another turbulent and exciting year in the ongoing state-level efforts to advance and secure new privacy rights and protections for personal data is on the close horizon. Interested stakeholders can follow The Patchwork Dispatch for industry-leading updates and analysis tracking emerging trends and key developments throughout the year.
The PrivaSeer Project in 2023: Access to 1.4 million privacy policies in one searchable body of documents
In the summer of 2021, FPF announced our participation in a collaborative project with researchers from the Pennsylvania State University and the University of Michigan to develop and build a searchable database of privacy policies and other privacy-related documents, with the support of the National Science Foundation. This project, PrivaSeer, has since become an evolving, publicly available search engine of more than 1.4 million privacy policies.
PrivaSeer is designed to make privacy policies transparent, discoverable, and searchable, for use by researchers in the privacy field as well as privacy practitioners in the marketplace. PrivaSeer supports searches of a corpus of privacy policies collected from the web at distinct points in time (currently four crawls). Search results can be filtered by a wide variety of parameters, including the date of the crawl, the publisher’s industry, the use of particular tracking technologies, references to relevant regulations, Flesch-Kincaid reading level, and more. This high level of customizable searchability is made possible by NLP techniques designed and implemented by researchers at the Pennsylvania State University and the University of Michigan. The project will continue to add new tranches of policies to the existing corpus on a periodic basis.
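One of those filter dimensions, reading level, is conventionally computed with the standard Flesch-Kincaid grade-level formula. The snippet below shows that standard formula for illustration only; how PrivaSeer actually computes reading level is not described in this post, and the word, sentence, and syllable counts are assumed to come from a separate tokenizer.

```python
def flesch_kincaid_grade(total_words: int, total_sentences: int, total_syllables: int) -> float:
    """Standard Flesch-Kincaid grade-level formula (illustrative only).

    The counts are assumed to be produced by an upstream tokenizer;
    syllable counting in particular varies by implementation.
    """
    return (
        0.39 * (total_words / total_sentences)
        + 11.8 * (total_syllables / total_words)
        - 15.59
    )

# A policy with 2,400 words, 96 sentences, and 4,080 syllables scores
# roughly grade 14, i.e., college-level reading.
print(round(flesch_kincaid_grade(2400, 96, 4080), 1))  # 14.2
```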
Two Project-Related Publications Received “Best Student Paper” Awards This Year
In addition to building the eponymous online tool, the PrivaSeer project grant has supported the publication of a number of papers by researchers involved in the privacy field. First, an effort to systematically identify and discuss issues within the privacy research community titled “Researchers’ Experiences in Analyzing Privacy Policies: Challenges and Opportunities” was presented at the 2023 Privacy Enhancing Technologies Symposium held in Lausanne, Switzerland by lead author Abraham Mhaidli, one of PrivaSeer’s graduate researchers from the University of Michigan. The paper was selected as one of the winners of the Symposium’s Andreas Pfitzmann Best Student Paper Award.
The paper was based on semi-structured interviews conducted with 26 researchers from a variety of academic disciplines working in the privacy space, and investigated what common research practices and pitfalls might exist in the privacy research space. The co-authors identified a lack of consistent, re-usable, well-maintained tools as one of the major obstacles to ongoing privacy research, resulting in significant duplication of effort among the research community, and noted the difficulty in fostering interdisciplinary collaboration.
A second paper, “Privacy Now or Never: Large-Scale Extraction and Analysis of Dates in Privacy Policy Text,” was accepted at the 23rd Symposium on Document Engineering (DocEng), hosted in Limerick, Ireland. The paper was presented by PrivaSeer graduate researcher and lead author Mukund Srinath from the Pennsylvania State University, and investigated the degree to which online privacy disclosures comply with annual update requirements across a set of large-scale web crawls containing several million distinct policies. Using a newly developed method for extracting dates from plain-text documents, the researchers discovered that under 40% of public privacy notices contain readable dates, and that updates correlated heavily with major changes in the data protection legal landscape, with a significant percentage likely dating to 2018 without subsequent change. The paper’s conclusions point to the significant compliance problem of ensuring that privacy notices are actually kept up to date, and suggest that for many data controllers this is not the case, although more recent updates were associated with URLs that saw greater amounts of online traffic.
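The paper’s date-extraction method itself is not reproduced here. As a rough, hypothetical illustration of the general task, a naive approach might scan for common “last updated” phrasings with a regular expression, as in the sketch below; real-world policies express dates in many formats and languages, which is part of what makes only a minority of them machine-readable.

```python
import re

# Hypothetical, simplified illustration of the date-extraction task.
# This is NOT the method described in the paper; it only catches a few
# common "last updated" phrasings in English-language policies.
UPDATE_PATTERN = re.compile(
    r"(?:last\s+(?:updated|modified|revised)|effective\s+date)[:\s]*"
    r"((?:January|February|March|April|May|June|July|August|"
    r"September|October|November|December)\s+\d{1,2},\s+\d{4})",
    re.IGNORECASE,
)

def find_update_dates(policy_text: str) -> list[str]:
    """Return candidate 'last updated' dates found in a policy's plain text."""
    return UPDATE_PATTERN.findall(policy_text)

sample = "This Privacy Policy was last updated: May 25, 2018."
print(find_update_dates(sample))  # ['May 25, 2018']
```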
A third paper, “Privacy Lost and Found: An Investigation at Scale of Web Privacy Policy Availability,” was also accepted at DocEng, and was further selected as the winner of the Best Student Paper Award. This paper presented a large-scale investigation of the availability of privacy policies, seeking to identify and analyze potential reasons for policy unavailability such as dead links, documents with empty content, documents that consist solely of placeholder text, and documents unavailable in the specific languages offered by their respective websites. The paper was also able to offer critical analysis and conclusions regarding privacy notices generally, based on a number of statistical methodologies. Overall, the researchers found that privacy policy URLs were available on only 34% of websites examined, and were able to estimate population parameters for both the total number of English-language privacy documents on the web and their likely distribution across different commercial sectors. The study furthered the privacy research community’s understanding of the overall status of English-language privacy policies worldwide, and provided valuable information about the rate and likelihood of users encountering various difficulties in accessing them.
2023 Stakeholders Workshop Provided Valuable Input Into Refining the PrivaSeer Search Engine and Tools
In addition to the publications associated with the PrivaSeer project, on July 25, 2023, the Future of Privacy Forum hosted an interdisciplinary workshop with key stakeholders to present the project to members of the privacy research community in industry and civil society.
July’s workshop featured presentations from FPF’s Vice President for Global Privacy Dr. Gabriela Zanfir-Fortuna, as well as project co-leads Dr. Shomir Wilson, Assistant Professor in the College of Information Sciences and Technology at the Pennsylvania State University, and Dr. Florian Schaub, Associate Professor of Information and of Electrical Engineering and Computer Science at the University of Michigan. Dr. Zanfir-Fortuna provided a practical demonstration of the PrivaSeer tool in action, while Professors Wilson and Schaub provided an overview of PrivaSeer’s development and current functionality.
Presentations by the project’s co-leads were followed by a discussion of how the tool may be used and improved as a future resource for researchers and industry professionals with various key FPF stakeholders. Discussants raised the prospect of using PrivaSeer to research the emergence of specific terms relating to the use of AI/ML technologies in privacy notices, conduct comparative studies of privacy policies presented in multiple languages, and examine how required disclosures related to cross-border data transfers may be changing over time. Participants also discussed how the tool might be useful in assessing privacy-adjacent disclosures such as cookie notices and terms of service, and provided the research team with a wide array of useful feedback as the project progresses into its third year.
PrivaSeer is now a functional, public-facing tool available to the privacy community, both for researchers and for privacy professionals working in public or private-sector compliance. FPF will continue to support the development of new functionality in the tool, and our team looks forward to contributing however we can to the scholarship in this area.
A Blueprint for the Future: White House and States Issue Guidelines on AI and Generative AI
Since July 2023, eight U.S. states (California, Kansas, New Jersey, Oklahoma, Oregon, Pennsylvania, Virginia, and Wisconsin) and the White House have published executive orders (EOs) to support the responsible and ethical use of artificial intelligence (AI) systems, including generative AI. Issued in response to the evolving AI landscape, these directives signal a growing recognition of the rapid pace of AI development and the need to manage potential risks to individuals’ data and to mitigate algorithmic discrimination against marginalized communities.
FPF has released a new comparison chart that summarizes and compares U.S. state and federal EOs and discusses how they fit into the broader context of AI and privacy.
In addition to the state governments, several cities (e.g., Boston, San Jose, and Seattle) have also issued guidelines on generative AI use that seek to recognize the opportunities of AI while mitigating bias, privacy, and cybersecurity risks. In contrast, other jurisdictions, such as Maine, have issued a moratorium on state generative AI use while they perform a holistic risk assessment.
Although each of the state and federal EOs on AI and generative AI has a different scope, at minimum, most charge agencies with the creation of a task force to study AI and offer recommendations.
Here are some overarching takeaways from our analysis of all of the EOs:
1. The White House and California Issued the Most Prescriptive EOs
Of the U.S. state and federal EOs analyzed, the White House EO requires the heaviest lift. It mandates dozens of reports and next steps for federal agencies, including the creation of guidance and standards for AI auditing, generative AI authentication, and privacy-enhancing technologies (PETs).
Similarly, of the state EOs, California is the most prescriptive and includes a number of specific mandates and reports tailored to different agencies, such as the creation of procurement guidelines, assessments on the effect of generative AI on infrastructure, and research on the impact of generative AI on marginalized communities.
2. Most State EOs Focus on “Generative AI”
Several state governments, such as California, Kansas, New Jersey, Pennsylvania, and Wisconsin, focus only on generative AI – how the technology should be used by state agencies, the risks it carries, and how it may affect their state industries and workforce. Oklahoma, Oregon, and Virginia take a broader stance and cover generative AI as well as broader types of AI systems in their EOs. Kansas and Pennsylvania are the only two states to explicitly define generative AI.
The White House EO represents an amalgam of the state EOs, as it defines generative AI (similar to Kansas and Pennsylvania) and also broadly covers different types of AI systems (similar to Oklahoma and Virginia).
3. Varying Approaches to Agencies’ Roles
The White House EO charges certain agencies with authority to create binding guidelines and standards for government actors. In contrast, rather than creating new task forces or boards, Kansas and Virginia charge state agencies with studying AI technology and providing general recommendations. New Jersey and Wisconsin, two states with less rigorous EOs, emphasize that their task forces serve solely advisory roles. The Oklahoma and White House EOs are the only ones to require each agency to appoint an individual on its team to become an AI and generative AI expert.
4. Impact on Industry
While these EOs are primarily focused on government use of emerging AI systems, there are major requirements contained in many of them that may have consequential effects on industry.
Procurement Requirements: Companies selling certain AI products and services to government entities will need to satisfy new baseline procurement standards.
Enforcement: Agency-created standards and policies may inform government regulators’ perspectives on AI compliance with data privacy, security, civil rights, and consumer protection laws, particularly given the forthcoming standard setting activity directed by the White House EO.
Influence on Legislation: As mentioned in California’s EO and the White House EO’s accompanying fact sheet, key actors in state and federal executive agencies will work with policymakers to pursue legislative approaches to support the development of responsible AI by the private sector.
These EOs represent a watershed moment for AI system users, developers, and regulators alike. Over the next few years, increased government action in this area will lead to new requirements and opportunities that will have lasting implications for both the public and private sector.
FPF and The Dialogue Release Collaboration on a Catalog of Measures for “Verifiably safe” Processing of Children’s Personal Data under India’s DPDPA 2023
When India’s DPDPA passed in August, it created heightened protections for the processing of personal data of children up to 18. When the law goes into effect, entities that determine the purpose and means of processing data, known as “data fiduciaries,” will need to apply these heightened protections to children’s data. Under the DPDPA, there is no further distinction between age groups of children, and all protections, such as obtaining parental consent before processing a child’s data, will apply to all children up to 18. However, the DPDPA stipulates that if the processing of personal data of children is done “in a manner that is verifiably safe,” the Indian government has the competence to lower the age above which data fiduciaries may be exempt from certain obligations.
In partnership with The Dialogue, an emerging research and public-policy think-tank based in New Delhi with a vision to drive a progressive narrative in India’s policy discourse, FPF prepared a Brief compiling a catalog of measures that may be deemed “verifiably safe” when processing children’s personal data. The Brief was informed by best practices and accepted approaches from key jurisdictions with experience in implementing data protection legal obligations geared towards children. Not all of these measures may immediately apply to all industry stakeholders.
While the concept of “verifiably safe” processing of children’s personal data is unique to the DPDPA and not found in other data protection regimes, the Brief’s catalog of measures can aid practitioners and policymakers across the globe.
The Brief outlines the following measures that can amount to “verifiably safe” processing of personal data of children, proposing additional context and actionable criteria for each item:
1. Ensure enhanced transparency and digital literacy for children.
2. Ensure enhanced transparency and digital literacy for parents and lawful guardians of very young users.
3. Opt for informative push notifications and provide tools for children concerning privacy settings and reporting mechanisms.
4. Provide parents or lawful guardians with tools to view, and in some cases set, children’s privacy settings and exercise privacy rights.
5. Set account settings as “privacy friendly” by default.
6. Limit advertising to children.
7. Maintain the functionality of a service at all times, considering the best interests of children.
8. Adopt policies to limit the collection and sharing of children’s data.
9. Consider, through thorough assessments, all risks that processing personal data poses to children and their best interests.
10. Ensure the accuracy of the personal data of children held.
11. Use and retain personal data of children considering their best interests.
12. Adopt policies regarding how children’s data may be safely shared.
13. Give children options in an objective and neutral way, avoiding deceptive language or design.
14. Put in place robust internal policies and procedures for processing personal data of children and prioritize staff training.
15. Enhance accountability for data breaches by notifying parents or lawful guardians and adopting internal policies, such as a Voluntary Undertaking, if a data breach occurs.
16. Conduct specific due diligence with regard to children’s personal data when engaging processors.
We encourage further conversation between government, industry, privacy experts, and representatives of children, parents, and lawful guardians to identify which practices and measures may suit specific types of services and industries, or specific categories of data fiduciaries.
ICYMI: FPF Webinar Discussed The Current State of Kids’ and Teens’ Privacy
Privacy by design for kids and teens has expanded across the globe. As policymakers, advocates, and companies grapple with the ever-changing landscape of youth privacy regulation, the Future of Privacy Forum recently hosted a webinar discussing the current state of kids’ and teens’ privacy policy. The webinar explored the current frameworks that are influential worldwide, the variations in youth privacy approaches, and the nuances of several emerging trends.
The virtual conversation, moderated by FPF’s Chloe Altieri, included a discussion about the industry’s work on compliance with a variety of regulations across jurisdictions. The webinar began with presentations from the panelists, setting the stage with their current work on youth privacy issues. The panelists were Phyllis H. Marcus, Partner at Hunton Andrews Kurth LLP, Pascale Raulin-Serrier, Senior Advisor in Digital Education and Coordinator of the DEWG at the French CNIL, Michael Murray, Head of Regulatory Policy at the U.K. ICO, and Shanna Pearce, Managing Counsel, Family Experience at Epic Games.
Phyllis led the audience through the current U.S. legislative and regulatory landscape. U.S. states have been incredibly active in children's and teens' privacy legislation, with 11 states having enacted bills and an additional 35 having considered legislation for youth online safety. The online safety legislation under consideration falls into four primary categories: Platform Accountability Laws, Age Verification Laws, Social Media Metering Laws, and the California Age-Appropriate Design Code (CA AADC). In recent actions, the Federal Trade Commission has stepped up enforcement of the Children's Online Privacy Protection Act (COPPA). Finally, the U.S. Congress has taken an interest in enhancing youth online safety measures through the introduction of COPPA 2.0 by Sen. Markey and Sen. Cassidy and the Kids Online Safety Act (KOSA) by Sen. Blumenthal and Sen. Blackburn.
Pascale discussed the CNIL's work, both nationally and internationally, to make privacy and youth online safety an effective initiative. Key international initiatives include the International Resolution on Children's Digital Rights, adopted in 2021, which harmonized a regulatory vision and a set of core principles around youth online safety among data protection authorities worldwide. Additionally, the CNIL is working to improve digital education as a way to address online safety issues. In 2021, the CNIL published eight recommendations to enhance the protection of children online, with the aim of providing practical advice to a range of stakeholders. These recommendations include strengthening youth awareness of online risks and privacy rights, as well as encouraging youth to exercise their privacy rights in order to stay safe. Youth education efforts are being undertaken in conjunction with digital literacy efforts that empower parents and caretakers to have meaningful conversations about online safety with their children.
Michael discussed the United Kingdom's Age Appropriate Design Code (U.K. Children's Code), a statutory code of practice under the U.K. General Data Protection Regulation (GDPR). The Children's Code is grounded in the principles established by the United Nations Convention on the Rights of the Child (UNCRC). Michael explained that the Code sets out 15 interlinked standards of age-appropriate design for online services that are likely to be accessed by children. The U.K. Information Commissioner's Office (ICO) has undertaken a wide-ranging effort to supervise the Code, exploring the possible ways the ICO could provide guidance for online services to meet the Code's objectives. The ICO looks not only to submitted complaints but also to industry engagement efforts and engagement with other regulators and government offices to inform its guidance. According to recent industry surveys, this method of supervising the Code's implementation has been effective.
Shanna discussed the safety and privacy considerations that are necessary when building online experiences for kids and teens, specifically on gaming platforms. Shanna spoke about the balance and thoughtful product design required to create a positive experience for younger players and their guardians in compliance with global regulations. One product of this balancing act for Epic Games was the deployment of a suite of protections across its ecosystem of games, including age-appropriate default settings, parental controls, and Cabined Accounts, an Epic account designed to create a safe and inclusive space for younger players. Players with Cabined Accounts can still play Fortnite, Rocket League, or Fall Guys but won’t be able to access certain features, such as voice chat, until their parent or guardian provides verifiable parental consent (“VPC”).
After the panelists’ presentations, we launched into a discussion on several of the most salient issues unsettled in youth privacy policy, such as online safety, age assurance, parental consent, new regulatory requirements, and guidance for the industry. We have summarized the discussion and included key takeaways.
How are policymakers and industry members working to resolve points of tension between privacy and safety? How is this tension and its resolution approached differently across the globe?
Michael Murray gave a three-fold answer on the differences between U.S. and U.K. child privacy and safety policy. The first key difference is that in the U.K., the ICO and the U.K.'s Office of Communications (Ofcom) lead a dual effort, with the ICO focusing on privacy and Ofcom focusing on safety and content regulation; in the U.S., there is no defined, systemic privacy and safety regulatory effort. The second is that the U.K. is a signatory to the UNCRC, which defines children as anyone under the age of 18, and that definition is reflected in U.K. law. In contrast, the U.S. is not a UNCRC signatory, and current U.S. federal protections define children as individuals under the age of 13. Third, the U.S. operates largely under an actual knowledge standard for whether an online site or service is directed to children, whereas the U.K. Code and, more recently, the California AADC operate under a "likely to be accessed" by children standard.
Phyllis H. Marcus elaborated on some of the points Michael raised, describing the knowledge standard and the tensions between privacy and safety in the U.S. COPPA is a privacy regime rather than a safety regime, though safety is one of its statutory underpinnings. It is important to note that this current regime is being somewhat upended by the patchwork of state laws being passed. Additionally, the regime may change if COPPA 2.0 and KOSA make their way through Congress, and especially if they go into effect, in which case the two combined would regulate privacy and safety in tandem.
Pascale provided insight into the approach taken by France and the European Union (EU), where safety and privacy are not opposing or differentiated regulatory efforts; safety is an element of privacy regulation and is addressed within data protection legislation.
Age assurance is a large part of the policy conversation on youth privacy and safety online, especially as privacy protections for teens are expanding. What are your thoughts on the current issues around age assurance?
According to Michael, the area of age assurance and age verification is rapidly evolving, but there is "no silver bullet" for establishing or verifying user age on digital platforms and services. The U.K. Code takes a risk-based, proportional approach to age assurance: the lower the risk of processing a child's data, the less intensive the age assurance or verification mechanisms need to be. For example, self-declaration might be a suitable mechanism for a lower-risk service. Where risk is higher, however, more robust assurance or verification is needed to avoid processing children's data in ways that would violate the GDPR. For higher-risk services, age estimation software with a buffer, a form of cabined accounts, or age verification through mobile contracts and digital IDs could be employed. These methods are all still immature, and there is no clear one-size-fits-all solution.
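Michael's proportionality point lends itself to a simple decision rule. The short Python sketch below is purely illustrative and is not drawn from ICO guidance or the webinar; the risk tiers, two-year buffer, and age-18 threshold are assumptions chosen only for the example.

def assurance_method(service_risk: str) -> str:
    """Map a service's data-processing risk tier to an age assurance method (illustrative tiers only)."""
    methods = {
        "low": "self-declaration",
        "medium": "age estimation with a safety buffer",
        "high": "age verification (e.g., mobile contract or digital ID)",
    }
    # Unknown tiers default to the strictest treatment.
    return methods.get(service_risk, methods["high"])

def treat_as_child(estimated_age: float, buffer_years: float = 2.0, threshold: int = 18) -> bool:
    """Apply child protections whenever the estimate, less the buffer, falls below the threshold."""
    return (estimated_age - buffer_years) < threshold

print(assurance_method("medium"))  # age estimation with a safety buffer
print(treat_as_child(19.0))        # True: a 19-year-old estimate with a 2-year buffer is still treated as a child

The point of the buffer in this sketch is that an estimate just above the threshold is still treated cautiously, reflecting the risk-based framing described above.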
Pascale expressed that there are no clear solutions for age assurance and verification at this time. The CNIL, along with working groups such as the DEWG, is still experimenting with age assurance methods. The CNIL is working to develop a technical approach among different stakeholders in both the government and private sectors to harmonize methods across the chain of actors concerned. It is clear, however, that the solution developed will not rely on biometric data collection for age verification, though scans and estimations of face shape may be permitted. The end goal is to find a strong mechanism for verification while balancing privacy concerns.
What are the important considerations when trying to strike a balance between data minimization, collecting information for age assurance, the level of accuracy that is appropriate, and the evolving landscape of age assurance technologies?
Shanna discussed the challenges the industry faces with respect to the state of age assurance technology in certain scenarios. Those challenges include the imprecision of some age assurance technology, as well as methods that cannot be used across all types of devices where users access online services, such as gaming consoles. These challenges are reduced when several methods are offered for verifying adults providing parental consent, but may be significant where a single method serves as the gate for users to access services. While alternative age assurance technologies continue to develop, Shanna observed that the industry can find creative ways to improve the reliability of existing methods like self-reported age gates, such as providing child experiences that reduce the incentive to misstate age (Epic's Cabined Accounts are one example) and using trusted data intermediaries to reduce friction and privacy risk for parents providing verification.
Michael echoed concerns about data minimization complicating age verification techniques. He added that, when given a choice, many parents prefer age estimation mechanisms over providing hard identifiers or personal information, such as ID numbers or credit cards, for age verification purposes.
These questions around age assurance are sometimes linked to discourse about parental consent. Can you speak to these two topics and share a bit about the emerging methods for each?
Phyllis provided more insight into age assurance and parental consent practices in the U.S., noting that, at least in the U.S., age assurance and parental consent "are really two different things." When it comes to children's use of online services under federal law, there is no requirement to verify the age of users. Rather, the U.S. requirement is to obtain consent from someone whom the company has reasonable assurance to be the parent of a child requesting access to a service. Some parental consent mechanisms have been whitelisted by the FTC and have been in place for decades, while others are still under FTC review. The idea of age assurance, on the other hand, is relatively new in the U.S., and a number of actors are considering deploying age assurance methods in the states. Key considerations for exploring the use of age assurance technology in the U.S. include looking at less intrusive, less risky verification methods and making data minimization a priority. Finally, Phyllis made clear that age estimation systems used for age assurance can be risky if their over- and under-estimates are not adequately tested: when a user's age is estimated with a margin of error of just a few years, that margin can be the difference between compliance and non-compliance.
There is a lot of work being done globally by lawmakers, regulators, advocates, industry leaders, and researchers to answer these policy questions we have discussed today. How are recommendations created, and how is this guidance impactful for remedying noncompliance or figuring out solutions to protect youth privacy online?
Pascale noted that, in addition to the CNIL's eight recommendations previously mentioned, global IT experts are exploring age verification technologies in order to create recommendations for compliance and enforcement. Work is also being done on digital education for parents as a way to increase awareness and understanding of child privacy and safety online. There is a balance between allowing parents to be involved and requiring online services to add protections, and there are important nuances around teen autonomy, developmental stages, and parents oversharing their children's information. More recommendations are forthcoming, covering topics for discussion with parents as well as voluntary cooperation with industry.
Michael agreed that digital education is a vitally important part of the solution, and research shows that parents want to have a say in the online services their kids use. Still, it cannot be the entire solution, and parents will not always be able to make informed decisions. Children’s design codes are placing an emphasis on the design of online services to avoid placing an overwhelming burden on the shoulders of parents. This emphasis works productively in tandem with developing resources for parents to have productive conversations with their children.
Recent youth privacy legislation has included a variety of standards for the level of knowledge an online service must have of its audience's age. These variations have forced companies to consider youth privacy issues, like age assurance, that they previously did not. How is this impacting emerging technology and the practical implementation of new products?
Phyllis responded that the development of new standards for determining which services do or do not fall under the scope of regulatory scrutiny and age assurance requirements is one of the most hotly contested and highly discussed issues in the evolving U.S. landscape. Under COPPA, the requirements fall into clearly defined buckets, which in turn define the scope of the law. The current federal standard in the U.S. is actual knowledge that a service is directed to children. Phyllis cautioned that most new initiatives change this paradigm, and the jury is still out on what the new standard will ultimately entail under COPPA 2.0 and KOSA.
Shanna noted that while many services were developed and deployed prior to legislation going into effect, those services may be brought into scope later. Retrofitting an existing service to address things like parental consent and default settings may require significant design and technical effort, and the process is complicated further because the age of digital consent differs across regions. Shanna stated that engagement by regulatory bodies and the issuance of guidance are invaluable to companies trying to comply with these evolving requirements in the tech space.
Pascale added that this is not the first time that the industry has faced technical difficulties like the ones we see today in age assurance and verification. According to Pascale, innovation is a key element to prioritize in each company’s approach because big innovations can guide smaller ones.
We asked a final question to all of our panelists: What do you foresee as the near future of youth privacy policy? What issue should companies or policymakers have top of mind right now?
Phyllis observed that there is a lot on the horizon and that it will be easy for actors to fall behind if they are not intentionally keeping up with youth privacy. It is clear that developments in the U.K. have had an effect on U.S. policy at both the state and federal levels. These initiatives will continue to build momentum worth keeping an eye on.
Pascale opined that privacy by design is one of the best policy options. While digital education is important to aid in solving these issues, integrating privacy by design at the conception of tech innovation will help to distribute the pressure of protecting youth online.
Michael noted that age assurance is an obvious answer. Additionally, the resolution of the First Amendment questions presented in the litigation over the California Age-Appropriate Design Code will be critical. The suit raises fundamental issues around how to protect data without impinging on U.S. constitutional rights, which will be an important debate.
Shanna is interested in seeing how companies balance privacy with uses of emerging technologies that improve online safety. She also observed that a variety of laws are currently taking shape around the globe, and there’s an opportunity to improve consistency and clarity of forthcoming guidance so companies can comply effectively.
Each of the panelists shared helpful resources, which we have listed and linked below, along with a few of our own. You can also find the panelists' presentation slides and additional resources here.
Coming up soon! You won’t want to miss FPF’s final session in our virtual Immersive Tech Panel Series on December 6 at 11 am ET. The December session will dive into designing immersive spaces with kids and teens in mind. You can register for this event here.
For more information or to learn how to become involved with FPF’s youth privacy analysis and initiatives, please contact Chloe at [email protected]. Subscribe here to receive monthly newsletters from the Youth and Education Team.
FPF Offers Input on Massachusetts Student Data Privacy Proposal
On October 30, FPF provided testimony before a hearing of the Massachusetts Joint Committee on Education regarding H.532/S.280, an Act Relative to Student and Educator Data Privacy.
FPF's Director of Youth & Education Privacy, David Sallay, discussed his previous experience as chief privacy officer for the Utah State Board of Education and applauded policymakers for calling for the creation of a similar role in H.532/S.280. During the hearing, he also highlighted how the role could help address several of the other bills that were discussed, including providing support to rural schools and to Massachusetts' educator-to-career data center. By designating privacy-focused personnel and requiring training, Massachusetts has an opportunity to improve structure, transparency, and consistency for schools, districts, and parents.
FPF's testimony also included several recommendations and improvements, including expanding the bill's student data privacy training requirements beyond educators, as procurement, IT, and other administrative staff often also have access to covered data. One component of the bill that will likely prove controversial is the broad private right of action included in the enforcement provisions; we expect this to be the subject of continued discussion and debate in the legislature. Citing his experience in Utah, David noted that granting the chief privacy officer role the authority to investigate alleged violations of student privacy laws could help streamline and simplify enforcement.