Vermont and Nebraska: Diverging Experiments in State Age-Appropriate Design Codes

In May 2025, Nebraska and Vermont passed Age-Appropriate Design Code Acts (AADCs), continuing the bipartisan trend of states advancing protections for youth online. While these new bills arrived within the same week and share both a common name and general purpose, their scope, applicability, and substance take two very different approaches to a common goal: crafting a design code that can withstand First Amendment scrutiny. 

Much like the divergence in “The Road Not Taken,” each state has taken its version of the path less traveled in crafting an AADC, informed by different assumptions about risks to minors online, risks of constitutional challenges, and enforcement priorities. As states grapple with legal challenges to earlier AADCs (California’s law remains blocked, and a lawsuit was filed against Maryland’s law earlier this year), Nebraska and Vermont demonstrate how policymakers are experimenting with divergent frameworks in hopes of creating constitutionally sound models for youth online privacy and safety.

See our comparison chart for a full side-by-side comparison between the Nebraska Age-Appropriate Design Code Act (LB 504) and Vermont Age-Appropriate Design Code Act (S.69).

Two visions of scope

Each AADC’s scope turns on two key provisions – business thresholds tied to revenue and number of affected users, and an applicability standard based on either audience composition or “knowledge” of minor users on the service.

Business thresholds

Both the Nebraska and Vermont AADCs have narrower applicability than prior child online safety bills, though adopt different approaches to determining in-scope businesses.

Nebraska’s law applies only to businesses that derive more than half their revenue from selling or sharing personal data. This is an unusually high bar that could exclude many common services used by minors, including many platforms and services primarily supported by advertising revenue and subscriptions. Additionally, Nebraska includes a carveout for services that can demonstrate fewer than 2% of their users are minors. In contrast, the Vermont AADC likely has broader applicability, but it still only applies to businesses that derive a majority of their revenue from online services generally, regardless of how they monetize.

When a service must apply protections for minors

Another major divergence between the two AADCs lies in the circumstances under which covered businesses are deemed to know that a user is a child and required to provide heightened protections and controls.

Nebraska adopts an “actual knowledge” standard. However, the law defines “actual knowledge” as all information and inferences known to the covered business, including marketing data. Given that marketing segmentation can be as broad as “Gen Z,” covering anyone born from the late 90s to early 2010s, Nebraska’s law demonstrates an intent to construe actual knowledge broadly. Nevertheless, the law explicitly states that businesses are not required to collect age data to comply. Mandatory age data collection has been hotly contested under other state laws, as age verification requirements are historically not the least restrictive means of protecting children online and often impact the protected speech of adults.

Vermont takes a different path, triggering obligations when a service is “reasonably likely” to be accessed by minors, establishing a multifactor test that includes internal research and overall audience composition. Vermont’s approach is more akin to an audience assessment like COPPA’s “directed to children” standard for children under age 13. From a practical standpoint, though, most websites are likely to be accessed by at least some minors under the age of 18 who would be in scope of the Vermont AADC. Vermont’s Attorney General is also tasked with developing age assurance rules, including privacy-preserving techniques and guardrails; however, it is not clear whether the AG may seek to compel businesses to affirmatively conduct age assurance through this rulemaking, and when questioned, the AG’s office said the answer would turn on legislative intent.

In short, Nebraska seeks to explicitly avoid requiring age verification altogether, while Vermont seems to set the stage for proactive assessment and regulation on age estimation.

Designing around harm without regulating content

Vermont’s AADC contains a duty of care to protect minors in the design of online products but adds important disclaimers in a nod to First Amendment concerns that have plagued similar requirements in other state laws. Covered businesses must design services to avoid reasonably foreseeable emotional distress, compulsive use, or discrimination. However, the bill clarifies that the mere content that a minor views cannot, by itself, constitute harm. Nebraska, by contrast, does not create a duty of care. 

To date, most Age-Appropriate Design Code bills have exclusively focused on tools and protections for covered minors. Nebraska breaks from this mold by requiring businesses to build tools for parents to help them monitor and limit their child’s use of online services. This section likely draws inspiration from the federal Kids Online Safety Act, which earlier versions of the Nebraska framework more closely resembled. 

Both states require covered services to set strong default privacy settings, but Vermont takes a more granular approach. It explicitly prohibits providing users with a single “less protective” setting that would override others, effectively limiting the use of all-in-one privacy toggles. Furthermore, a number of its default setting requirements apply only to social media platforms, a divergence from prior AADCs, whose requirements have generally been agnostic to the type of online service. For example, Vermont prohibits allowing known adults to like, comment, or otherwise provide feedback on a covered minor’s media on social media; non-social media platforms with similar functionality would not be subject to this restriction.

In contrast to Vermont’s default settings approach to safer design, Nebraska requires covered businesses to develop various tools for minors. In some instances, these tools overlap with the default settings called for in Vermont and are simply a different statutory route to the same goal, such as tools for restricting the collection of geolocation data or communicating with unknown adults. Other tools are unique and novel to Nebraska, such as a tool that allows a minor to “opt out of all unnecessary features.” Businesses in scope of both frameworks will need to do a close read to determine what new features, settings, and tools must be implemented.

Both frameworks omit requirements for businesses to complete data protection impact assessments, which emerged as one of the key issues with the California AADC due to that law’s requirement to assess and limit the exposure of children to “potentially” harmful content. While the Ninth Circuit did not hold that risk assessments are per se unconstitutional, and the primary issue in California lay with requiring companies to opine on content-based harms, both Nebraska and Vermont steer away from this issue altogether. Instead, Vermont’s framework would require businesses to issue detailed public transparency reports, including on their use of algorithmic recommendation systems, with disclosure of inputs and how they influence results.

When it comes to targeted advertising, Nebraska is explicit: it prohibits facilitating targeted ads to minors, while allowing exceptions for first-party and contextual advertising. Vermont is less direct, but forbids the use of personal data to prioritize media for viewing unless requested by the minor, which may effectively ban both personalized advertising and certain practices for organizing content based on user interests (though the framework’s algorithmic disclosure requirements suggest an intent that many such systems may remain in use).

Nebraska prohibits the use of so-called “dark patterns” outright – an unusually broad ban that goes beyond previous state privacy laws, which have focused on manipulative practices in obtaining consent or collecting personal information. Instead, Nebraska seeks to prohibit any user interface with the effect of subverting or impairing autonomy, decision-making, or choice. A strict reading of this provision could arguably impact a broad range of design choices, including a video game that restricts access to certain areas until you defeat a boss, a button asking you if you’d like to continue, or the content of advertisements (though remember – the set of businesses subject to Nebraska’s law appears quite narrow). In contrast, Vermont defers to future rulemaking, authorizing its Attorney General to define and prohibit manipulative design practices by 2027.

Effective dates and next steps

Governor Pillen signed the Nebraska AADC within days of its passage and the law is slated to go into effect on January 1, 2026. However, the Act gives companies some leeway, as the Attorney General is not able to bring actions to recover civil penalties until July 1, 2026. The Vermont AADC would establish a longer onramp for coming into compliance, with an effective date of January 1, 2027. Governor Scott is still considering the bill, though he vetoed a similar effort last year that was included as part of a broader comprehensive privacy package. Assuming the Vermont AADC is enacted, the Attorney General is expected to complete rulemaking on manipulative design practices and methods for conducting age estimation by the effective date. 

Conclusion

With courts signaling that speech-based online safety rules are unlikely to survive First Amendment scrutiny, Nebraska and Vermont are two distinct experiments in how to try to achieve the goal of protecting children online in constitutionally resilient ways. NetChoice, the litigant challenging the California and Maryland AADCs, has already raised First Amendment concerns with both the Nebraska and Vermont frameworks.

Each legislature has taken its own “road less traveled” to children’s online safety. Nebraska has opted for a limited scope, feature-driven approach with no rulemaking and an emphasis on actual knowledge. Vermont has chosen a broader duty-of-care model, backed by a robust rulemaking directive and novel transparency requirements. Both paths attempt to avoid the pitfalls of California’s and Maryland’s laws, but take radically diverging routes in doing so. Which, if either, road “has made all the difference” will ultimately depend on courts, compliance practices, and the experience of minors navigating these services in the years to come.

FPF Experts Take The Stage at the 2025 IAPP Global Privacy Summit

By FPF Communications Intern Celeste Valentino

Earlier this month, FPF participated in the IAPP’s annual Global Privacy Summit (GPS) at the Convention Center in Washington, D.C. The Summit convened top privacy professionals for a week of expert workshops, engaging panel discussions, and exciting networking opportunities on issues ranging from understanding U.S. state and global privacy governance to the future of technological innovation, policy, and professions.


FPF started out the festivities by hosting its annual Spring Social with a night full of great company, engaging discussions, and new connections. A special thank you to our sponsors FTI Consulting, Perkins Coie, Qohash, Transcend, and TrustArc!

The IAPP conference started with FPF Senior Director for U.S. Legislation Keir Lamont, who led an informative workshop, “US State Privacy Crash Course – What Is New and What Is Next” with Lothar Determann (Partner, Baker McKenzie) and David Stauss (Partner, Husch Blackwell). The workshop provided an overview of recent U.S. state privacy legislation developments and a lens into how these laws fit into the existing landscape.


The next day, FPF Senior Fellow Doug Miller hosted an insightful discussion with Jocelyn Aqua (Principal, PwC), providing guidance and tools for privacy professionals to avoid workplace burnout. Both began the discussion by arguing that because privacy professionals face different organizational and positional pressures from other business professionals, they experience varying types of burnout that require alternative remedies. The experts then detailed each kind of burnout and provided solutions for how individuals, teams, and leaders can provide support to avoid them. “Giving your team transparency about a decision gives them control, and feeling better about a decision,” Doug explained, highlighting leaders’ vital role in mitigating workplace burnout. You can find additional resources from Doug’s full presentation here.


Next, FPF Vice President for Global Privacy Gabriela Zanfir-Fortuna moderated a compelling conversation among European legislators and regulators, including Brando Benifei (Member of European Parliament, co-Rapporteur of the AI Act), John Edwards (Information Commissioner, U.K. Information Commissioner’s Office), and Louisa Specht-Riemenschneider (Federal Commissioner for Data Protection and Freedom of Information, Germany), on Cross-regulatory Cooperation Between Digital Regulators.

Their panel began by painting a detailed portrait of how the proliferation of digital regulations has created a necessity for cross-regulatory collaboration between differing authorities. Using the EU Artificial Intelligence (AI) Act as an example, the panelists argued that the success of cross-regulation hinges on cooperation and knowledge sharing between data protection agencies of different countries. “It’s important to see how the authority of the data protection authority remains relevant and at the center of regulation around AI. One interesting point in the AI Act is that in the Netherlands, there were around 20 authorities appointed as having competence to enforce and regulate to a certain extent under the AI Act; this speaks to how complex the landscape is,” Gabriela noted.

The panel also dissected concrete ways regulators can work together to enable cross-regulation, including a mandatory collaboration mechanism, supervisory authorities, and a more unified approach from governments and regulators alike. 


FPF CEO Jules Polonetsky served as a moderator of a timely dialogue among high-ranking leaders, including Kate Charlet (Director, Privacy, Safety, and Security; Government Affairs and Public Policy, Google), Kate Goodloe (Managing Director, Policy, BSA, The Software Alliance), and Amanda Kane Rapp (Head of Legal, U.S. Government, Palantir Technologies), covering tech in an evolving political era. 

The panel highlighted recent and expected shifts in technology, cybersecurity, privacy, AI governance, and online safety within a new U.S. executive administration. Jules opened the panel by posing, “We’ve seen increasing clashes between privacy and competition, privacy and kids’ issues, etc. Has anything changed in the current environment?” The panelists agreed that, regardless of government dynamics, privacy issues remain relevant for technology companies to address to protect and foster consumer trust in the digital ecosystem. The panel also offered an expert perspective on how tech leaders approach digital governance now and in the future through promoting interoperability, model transparency, and government experimentation with and implementation of IT tools and procurement.


On the second day of the conference, FPF Managing Director for Asia-Pacific (APAC) Josh Lee Kok Thong spoke on a panel with Darren Grayson Chng (Regional Data Protection Director, Asia Pacific, Middle East, and Africa, Electrolux), Haksoo Ko (Chairperson, Personal Information Protection Commission, Republic of Korea), and Angela Xu (Senior Privacy Counsel, APAC Head, Google) exploring the nuanced landscape of AI regulation in Asia-Pacific.

Through the panel, the discussants highlighted the differing AI regulatory approaches across the Asia-Pacific region, noting that most APAC jurisdictions have preferred not to enact hard AI laws. Instead, these jurisdictions focus on regulating elements of AI systems, such as the use of personal data (Singapore), addressing risk in AI systems (Australia), promoting industry development (South Korea), fostering international cooperation and responsible AI practices (Japan), government oversight of the deployment of AI systems (India), and regulating misinformation and personal information protection (China). “The APAC region is like a huge experimental lens for AI regulation, with different jurisdictions trying out different approaches, so do pay attention to this region because it will be very influential going forward. There will be increasing diversity and regulation,” Josh noted, providing valuable insider insight about where audience members should focus their attention.


Throughout the week, FPF’s booth in the Exhibition Hall was a popular stop for IAPP GPS attendees. Policymakers, industry leaders, and privacy scholars stopped by to learn more about FPF memberships, connect with FPF staff, and explore FPF’s ongoing work on issues ranging from the future of regulating AI agents to helping schools defend against deepfakes in the classroom. Visitors left with a collection of infographics, membership resources, and an “I Love Privacy” sticker.


FPF hosted two roundtable discussions early in the week, with Vice President for Global Privacy Gabriela Zanfir-Fortuna leading conversations on “Navigating Transatlantic Affairs and the EU-US Digital Regulatory Landscape” and “India’s new Data Protection law and what to expect from its implementation phase.” FPF’s U.S. Legislation team also hosted an event at our D.C. office for members to connect with the team and each other to discuss the U.S. legislative landscape.


FPF also hosted two Privacy Executives Network breakfasts and a lunch during the Summit week featuring peer-to-peer discussions on top-of-mind issues in data protection, privacy, and AI governance. We discussed the current EU privacy landscape with Commissioner for Data Protection and Chairperson of the Irish Data Protection Commission Des Hogan, and we spoke with Stevie DeGroff, First Assistant Attorney General in the Colorado Attorney General Office’s Technology & Privacy Protection Unit. These roundtable discussions allowed our members to discuss critical topics with one another in a private and dynamic setting.

In partnership with the Mozilla Foundation, we also hosted a PETs Workshop featuring short, expert panels exploring new and emerging Privacy Enhancing Technology (PETs) applications. Technology and policy experts presented several leading PETs use cases, analyzed how PETs work with other privacy protections, and discussed how PETs may intersect with data protection rules. This workshop was the first time that several of the use cases were shared in detail with independent experts.

We hope you enjoyed this year’s IAPP Global Privacy Summit as much as we did! If you missed us at our booth, visit FPF.org for all our reports, publications, and infographics. Follow us on X, LinkedIn, Instagram, and YouTube, and subscribe to our newsletter for the latest.

Lessons Learned from FPF “Deploying AI Systems” Workshop

On May 7, 2025, the Future of Privacy Forum (FPF) hosted a “Deploying AI Systems” workshop at the Privacy + Security Academy’s Spring Academy, which took place at The George Washington University in Washington, DC. Workshop participants included students and privacy lawyers from firms, companies, data protection authorities, and regulatory agencies around the world.

Pictured left to right: Daniel Berrick, Anne Bradley, Bret Cohen, Brenda Leong, and Amber Ezzell

The two-part workshop explored the emerging U.S. and global legal requirements for AI deployers, and attendees engaged in exercises involving case studies and demos on managing third-party vendors, agentic AI, and red teaming. The workshop was facilitated by FPF’s Amber Ezzell, Policy Counsel for Artificial Intelligence, who was joined by Anne Bradley (Luminos.AI), Brenda Leong (ZwillGen), Bret Cohen (Hogan Lovells), and Daniel Berrick (FPF).

From the workshop, a few key takeaways emerged:

As organizations, policymakers, and regulators grapple with the rapidly evolving landscape of AI development and deployment, FPF will continue to explore a range of issues at the intersection of AI governance.

If you have any questions, comments, or wish to discuss any of the topics related to the Deploying AI Systems workshop, please do not hesitate to reach out to FPF’s Center for Artificial Intelligence at [email protected].

Amendments to the Montana Consumer Data Privacy Act Bring Big Changes to Big Sky Country

On May 8, Montana Governor Gianforte signed SB 297, amending the Montana Consumer Data Privacy Act (MCDPA). This amendment was sponsored by Senator Zolnikov, who also championed the underlying law’s enactment in 2023. Much has changed in the state privacy law landscape since the MCDPA was first enacted, and SB 297 incorporates elements of further-reaching state laws into the MCDPA while declining to break new ground. For example, SB 297 adopts heightened protections for minors like those in Connecticut and Colorado, as well as privacy notice requirements and a narrowed right of access like those in Minnesota’s law. The bill does not include an effective date for these new provisions, so by default the amendments should take effect on October 1, 2025.

This blog post highlights the important changes made by SB 297 and some key takeaways about what this means for the comprehensive consumer privacy landscape. Changes to the law include (1) a duty of care with respect to minors, (2) new requirements for processing minors’ personal data, (3) a disclaimer that the law does not require age verification, (4) lowered applicability thresholds and narrowed exemptions, (5) a narrowed right of access that prohibits controllers from disclosing certain sensitive information, (6) expanded privacy notice requirements, and (7) modifications to the law’s enforcement provisions. With these changes, Montana yet again reminds us that privacy remains a bipartisan issue as SB 297, like its underlying law, was passed with overwhelmingly bipartisan votes.

1.  New Connecticut- and Colorado-style duty of care with respect to minors. 

The biggest changes to the MCDPA concern protections for children and teenagers. Like legislation enacted by Connecticut in 2023 and Colorado in 2024, SB 297 amends the MCDPA to add privacy protections for consumers under the age of 18 (“minors”). These new provisions apply more broadly than the rest of the law, covering entities that conduct business in Montana without any small business exceptions (i.e., there are no numerical applicability thresholds, although the law’s entity-level and data-level exemptions still apply). 

Under these new provisions, any controller that offers an online service, product, or feature to a consumer whom the controller actually knows or willfully disregards is a minor must use “reasonable care” to avoid a “heightened risk of harm to minors” caused by the online service, product, or feature (“online service”). Heightened risk of harm to minors is defined as processing a minor’s personal data in a manner that presents a “reasonably foreseeable risk” of: (a) Unfair or deceptive treatment of, or unlawful disparate impact on, a minor; (b) financial, physical, or reputational injury; (c) unauthorized disclosure of personal data as a result of a security breach (as described in Mont. Code Ann. § 30-14-1704); or (d) intrusion upon the solitude or seclusion or private affairs or concerns of a minor, whether physical or otherwise, that would be offensive to a reasonable person. This definition largely aligns with some of the existing triggers for conducting a data protection assessment under the MCDPA.

At a time when many youth privacy and online safety bills, such as the California Age-Appropriate Design Code (AADC), are mired in litigation over their constitutionality, it is notable that three states—Connecticut, Colorado, and Montana—have now opted for this framework. Given that neither Connecticut’s nor Colorado’s law has been subject to a constitutional challenge as of yet, this approach could be a more constitutionally resilient way than the AADC model to impose a duty of care with respect to minors. Specifically, the duties of care in Connecticut’s, Colorado’s, and now Montana’s laws are rooted in traditional privacy harms and torts (e.g., intrusion upon seclusion), whereas other frameworks that have been challenged rest on more amorphous concepts of harm that are more likely to implicate protected speech (e.g., the enjoined California AADC requires addressing whether an online service’s design could harm children by exposing them to “harmful, or potentially harmful, content”).

2.  Controllers are entitled to a rebuttable presumption of having exercised reasonable care if they comply with statutory requirements.

Under Montana’s new duty of care to minors, a controller is entitled to a rebuttable presumption that it used reasonable care if it complies with certain statutory requirements related to design and personal data processing. With respect to design, controllers are prohibited from using consent mechanisms that are designed to impair user autonomy, they are required to establish easy-to-use safeguards to limit unsolicited communications from unknown adults, and they must provide a signal indicating when they are collecting precise geolocation data. For processing, controllers must obtain a minor’s consent before: (a) Processing a minor’s data for targeted advertising, sale, and profiling in furtherance of decisions that produce legal or similarly significant effects; (b) “us[ing] a system design feature to significantly increase, sustain, or extend a minor’s use of the online service, product, or feature”; or (c) collecting precise geolocation data, unless doing so is “reasonably necessary” to provide the online service, or retaining that data for longer than “necessary” to provide the online service.

Controllers subject to these provisions must also conduct data protection assessments for an online service “if there is a heightened risk of harm to minors.” These data protection assessments must comply with all existing requirements under the MCDPA and must provide additional information such as the online service’s purpose, the categories of personal data processed, and the processing purposes. Data protection assessments should be reviewed “as necessary” to account for material changes, and documentation should be retained until either three years after the processing operations cease or the date on which the controller ceases offering the online service, whichever is later. If a controller conducts an assessment and determines that a heightened risk of harm to minors exists, it must “establish and implement a plan to mitigate or eliminate the heightened risk.”

Although the substantive requirements of the minor protections are similar across Connecticut’s, Colorado’s, and Montana’s laws, these states are not fully aligned with respect to the rebuttable presumption of reasonable care. Montana follows Colorado’s approach, whereby a controller is entitled to the rebuttable presumption if it complies with the processing and design restrictions described above. Connecticut’s law, in contrast, provides that a controller is entitled to the rebuttable presumption of having used reasonable care if the controller complies with the data protection assessment requirements.

3.  The bill clarifies that Montana’s privacy law does not require age verification. 

In addition to adding a duty of care and design and processing restrictions with respect to minors, SB 297 makes a small change to existing adolescent privacy protections. The existing requirement that a controller obtain a consumer’s consent before engaging in targeted advertising or selling personal data for consumers aged 13–15 now applies when a controller willfully disregards the consumer’s age, not just when the controller has actual knowledge of their age. This knowledge standard aligns with that in similar opt-in requirements for adolescents in California, Connecticut, Delaware, New Hampshire, New Jersey, and Oregon. It also aligns with the broader duty of care protections in SB 297, which apply when a controller “actually knows or willfully disregards” that a consumer is a minor. This change may be negligible, however, as the amendment already requires any controller that offers an online service, product, or feature to a consumer whom the controller actually knows or willfully disregards is a minor (under 18) to obtain consent before processing the minor’s data for targeted advertising, sale, and profiling in furtherance of decisions that produce legal or similarly significant effects.

These new protections and the introduction of a “willfully disregards” knowledge standard for minors implicate a broad, contentious policy debate over age verification, the process by which an entity affirmatively determines the age of individual users, often through the collection of personal data. Across the country, courts are litigating the constitutionality of such requirements under other laws. Presumably to head off any such constitutional challenges, SB 297 explicitly provides that nothing in the law shall require a controller to engage in age-verification or age-gating. However, it also provides that if a controller chooses to conduct commercially reasonable age estimation to determine which consumers are minors, then the controller is not liable for erroneous age estimation.

Such a clarification is arguably necessary if “willfully disregards” is implied to require some level of affirmative action on a controller’s part to estimate users’ ages under certain circumstances. For example, the Florida Digital Bill of Rights regulations provide that a controller willfully disregards a consumer’s age if it “should reasonably have been aroused to question whether a consumer was a child and thereafter failed to perform reasonable age verification,” and it incentivizes age verification by providing that a controller will not be found to have willfully disregarded a consumer’s age if it used “a reasonable age verification method with respect to all of its consumers” and determined that the consumer was not a child. Montana takes a different approach, explicitly disclaiming any requirement to engage in age verification, but still incentivizing age estimation. 

4.  Changed applicability requirements expand the law’s reach. 

Owing to Montana’s relatively low population, the MCDPA had the lowest numerical applicability thresholds of any state comprehensive privacy law when it was enacted in 2023. At that time, prior comprehensive privacy laws in Virginia, Colorado, Utah, Connecticut, Iowa, and Indiana all applied to controllers that either (1) control or process the personal data of at least 100,000 consumers (“the general threshold”), or (2) control or process the personal data of at least 25,000 consumers if the controller derived a certain percentage of its gross revenue from the sale of personal data. Montana broke that mold by lowering the general threshold to 50,000 affected consumers. Several states—Delaware, New Hampshire, Maryland, and Rhode Island—have since surpassed Montana’s low-water mark. Accordingly, SB 297 lowers the law’s applicability thresholds. The law will now apply to controllers that either (1) control or process the personal data of at least 25,000 consumers, or (2) control or process the personal data of at least 15,000 consumers (down from 25,000) if the controller derives at least 25% of gross revenue from the sale of personal data.

Following a broader legislative trend in recent years, this bill also narrows or eliminates several entity-level exemptions. Most notably, the entity-level exemption for financial institutions and affiliates governed by the Gramm-Leach-Bliley Act has been narrowed to a data-level exemption, aligning with the approach taken by Oregon and Minnesota. To counterbalance this change, SB 297 adds new entity-level exemptions for certain chartered banks, credit unions, insurers, and third-party administrators of self-insurance engaged in financial activities. SB 297 also narrows the non-profit exemption to apply only to non-profits that are “established to detect and prevent fraudulent acts in connection with insurance.” Thus, Montana’s law now joins those of Colorado, Oregon, Delaware, New Jersey, Maryland, and Minnesota in broadly applying to non-profits. 

5.  The newly narrowed right to access now prohibits controllers from disclosing certain types of highly-sensitive information, such as social security numbers.

The consumer right to access one’s personal data carries a tension between the ability to access the specific data that an entity has collected concerning oneself and the risk that one’s data, especially one’s sensitive data, could be either erroneously or surreptitiously disclosed to a third party or even a bad actor. Responsive to that risk, SB 297 follows Minnesota’s approach by narrowing the right to access to prohibit disclosure of certain types of sensitive data. As amended, a controller now may not, in response to a consumer exercising their right to access their personal data, disclose the following information: social security number; government issued identification number (including driver’s license number); financial account number; health insurance account number or medical identification number; account password, security questions, or answer; or biometric data. If a controller has collected this information, rather than disclosing it, the controller must inform the consumer “with sufficient particularity” that it has collected the information. 

SB 297 also slightly expands one of the law’s opt-out rights. Consumers can now opt out of profiling in furtherance of “automated decisions” that produce legal or similarly significant effects, rather than only “solely automated decisions.”

6.  The MCDPA now includes more prescriptive privacy notice requirements.

SB 297 significantly expands the requirements for privacy notices and related disclosures, largely aligning with the more prescriptive provisions in Minnesota’s law. Changes made by SB 297 include—

The law provides that controllers do not need to provide a separate, Montana-specific privacy notice or section of a privacy notice so long as the controller’s general privacy notice includes all information required by the MCDPA. 

7.  The Attorney General now has increased investigatory power.

Finally, SB 297 reworks the law’s enforcement provisions. The amendments build out the Attorney General’s (AG) investigatory powers by allowing the AG to exercise powers provided by the Montana Consumer Protection Act and Unfair Trade Practices laws, to issue civil investigative demands, and to request that controllers disclose any data protection assessments that are relevant to an investigation. Furthermore, the AG is no longer required to offer an opportunity to cure before bringing an enforcement action, in effect closing the cure period six months before its previously scheduled expiration date. The statute of limitations is five years after a cause of action accrues.

* * *

Looking to get up to speed on the existing state comprehensive consumer privacy laws? Check out FPF’s 2024 report, Anatomy of State Comprehensive Privacy Law: Surveying the State Privacy Law Landscape and Recent Legislative Trends.


Consent for Processing Personal Data in the Age of AI: Key Updates Across Asia-Pacific

This Issue Brief summarizes key developments in data protection laws across the Asia-Pacific region since 2022, when the Future of Privacy Forum (FPF) and the Asian Business Law Institute (ABLI) published a series of reports examining 14 jurisdictions in the region. We found that while many offer alternative legal bases for data processing, consent remains the most widely used, often due to its familiarity, despite known limitations.

This Issue Brief provides an updated view of evolving consent requirements and alternative legal bases for data processing across key APAC jurisdictions: India, Vietnam, Indonesia, the Philippines, South Korea, and Malaysia.

In August 2023, India passed the Digital Personal Data Protection Act (DPDPA). Once in force, the DPDPA will provide a comprehensive framework for processing personal data. It affirms consent as the primary basis for processing but introduces structured obligations around notice, purpose limitation, and consent withdrawal, while enabling future flexibility for alternative legal bases.

Vietnam’s Decree on Personal Data Protection took effect in July 2023. It sets clearer standards for consent while formally recognizing alternative legal bases, including for contractual necessity and legal obligations. This marks a key step in broadening lawful processing options for businesses.

Indonesia’s Personal Data Protection Law (PDPL), enacted in October 2022, introduces a unified national privacy law with an extended transition period. It affirms consent but also allows processing based on legitimate interest, public duties, and contract performance, bringing Indonesia closer to global privacy frameworks.

In November 2023, the Philippines’ National Privacy Commission issued a Circular on Consent, clarifying valid consent standards and promoting transparency. The guidance aims to reduce consent fatigue by encouraging layered, contextual consent interfaces and outlines when consent may not be strictly necessary.

South Korea’s amended PIPA (in force since September 2023) and related guidelines promote easy-to-understand consent practices and recognize additional legal grounds, especially in the context of AI. A 2025 bill under consideration would expand the use of non-consent bases for AI-related processing.

In Malaysia, the Personal Data Protection (Amendment) Act 2024, published in October 2024, introduces stronger enforcement tools and administrative penalties. While the amendments do not change the legal bases for processing, they enhance the compliance environment and signal stricter oversight.

The Issue Brief also explores how the rise of AI is shaping lawmaking and policymaking across the region when it comes to lawful grounds for processing personal data.

As the APAC region shifts from fragmented, sector-specific rules to unified legal frameworks, understanding the evolving role of consent and the growing adoption of alternative legal bases is essential. From improving user-friendly consent mechanisms to strengthening enforcement and expanding lawful processing grounds, these changes highlight a more flexible and accountable approach to data protection across the region.

The Curse of Dimensionality: De-identification Challenges in the Sharing of Highly Dimensional Datasets

The 2006 release by AOL of search queries linked to individual users and the re-identification of some of those users is one of the best known privacy disasters in internet history. Less well known is that AOL had released the data to meet intense demand from academic researchers who saw this valuable data set as essential to understanding a wide range of human behavior. 

As the executive appointed as AOL’s first Chief Privacy Officer as part of a strategy to help prevent further privacy lapses, I made the benefits as well as the risks of sharing data a priority in my work. At FPF, our teams have worked on every aspect of enabling privacy-safe data sharing for research and social utility, including de-identification1, the ethics of data sharing, privacy-enhancing technologies2 and more3. Despite the skepticism of critics who maintain that reliable de-identification is a myth4, I maintain that it is hard, but for many data sets it is feasible with the application of significant technical, legal, and organizational controls. However, for highly dimensional data sets, or complex data sets that are made public or shared with multiple parties, providing strong guarantees at scale or without extensive impact on utility is far less feasible.

1. Introduction

The Value and Risk of Search Query Data

Search query logs constitute an unparalleled repository of collective human interest, intent, behavior, and knowledge-seeking activities. As one of the most common activities on the web, searching generates data streams that paint intimate portraits of individual lives, revealing interests, needs, concerns, and plans over time5. This data holds immense potential value for a wide range of applications, including improving search relevance and functionality, understanding societal trends, advancing scientific research (e.g., in public health surveillance or social sciences), developing new products and services, and fueling the digital advertising ecosystem. 

However, the very richness that makes search data valuable also makes it exceptionally sensitive and fraught with privacy risks. Search queries frequently contain explicit personal information such as names, addresses, phone numbers, or passwords, often entered inadvertently by users. Beyond direct identifiers, queries are laden with quasi-identifiers (QIs) – pieces of information that, while not identifying in isolation, can be combined with other data points or external information to single out individuals. These can include searches related to specific locations, niche hobbies, medical conditions, product interests, or unique combinations of terms searched over time. Furthermore, the integration of search engines with advertising networks, user accounts, and other online services creates opportunities for linking search behavior with other extensive user profiles, amplifying the potential for privacy intrusions. The longitudinal nature of search logs, capturing behavior over extended periods, adds another layer of sensitivity, as sequences of queries can reveal evolving life circumstances, intentions, and vulnerabilities. The database reconstruction theorem, referred to as the fundamental law of information reconstruction, posits that publishing too much data derived from a confidential data source at a high degree of accuracy will, after a finite number of queries, result in the reconstruction of the confidential data6. Extensive and extended releases of search data are a model example of this problem.
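
To make the reconstruction concern concrete, consider a simple differencing attack. The sketch below uses an invented five-user dataset and is only an illustration of the underlying principle, not a description of any actual release.

```python
# Differencing attack: two accurate aggregate statistics, released separately,
# pin down one person's sensitive value. All data here is invented.

records = {
    "alice": True, "bob": False, "carol": True, "dave": False, "erin": True,
}  # True = user searched for a sensitive medical term

def count_sensitive(exclude=None):
    """Exact count of users with the sensitive attribute, optionally excluding one user."""
    return sum(v for k, v in records.items() if k != exclude)

total = count_sensitive()                  # released statistic #1 -> 3
all_but_alice = count_sensitive("alice")   # released statistic #2 -> 2

# The difference reveals Alice's value exactly, even though neither statistic
# referred to her attribute directly.
print(total - all_but_alice)  # 1 -> Alice searched the sensitive term
```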

The De-identification Imperative and Its Inherent Challenges

Faced with the dual imperatives of leveraging valuable data and protecting user privacy, organizations rely heavily on data de-identification. De-identification encompasses a range of techniques aimed at removing or obscuring identifying information from datasets, thereby reducing the risk that the data can be linked back to specific individuals. The goal is to enable data analysis, research, and sharing while mitigating privacy harms and complying with legal and ethical obligations.

Despite its widespread use and appeal, de-identification is far from a perfected solution. Decades of research and numerous real-world incidents have demonstrated that supposedly “de-identified” or “anonymized” data can be re-identified, sometimes with surprising ease. This re-identification potential stems from several factors: the residual information left in the data after processing, the increasing availability of external datasets (auxiliary information) that can be linked to the de-identified data, and the continuous development of sophisticated analytical techniques. In some of these cases, a more rigorous de-identification process could have provided more effective protections, albeit with an impact on the availability of the data needed. In other cases, the impact of re-identification might “only” be a threat to public figures7. In my experience, expert technical and legal teams can collaborate to support reasonable de-identification efforts for data that is well structured or closely held, but for complex, high-dimensional datasets or data shared broadly, the risks multiply.

Furthermore, the terminology itself is fraught with ambiguity. “De-identification” is often used as a catch-all term, but it can range from simple masking of direct identifiers (which offers weak protection) to more rigorous attempts at achieving true anonymity, where the risk of re-identification is negligible. This ambiguity can foster a false sense of security, as techniques that merely remove names or obvious identifiers have too often been labeled as “de-identified” while still leaving individuals vulnerable. Achieving a state where individuals genuinely cannot be reasonably identified is significantly harder, especially given the inherent trade-off between privacy protection and data utility: more aggressive de-identification techniques reduce re-identification risk but also diminish the data’s value for analysis. The concept of true, irreversible anonymization, where re-identification is effectively impossible, represents a high standard that is particularly challenging to meet for rich behavioral datasets, especially when data is shared with additional parties or made public. For more limited data sets that can be kept private and secure, or shared with extensive controls and legal and technical oversight, effective de-identification that maintains utility while reasonably managing risk can be feasible. This gap between the promise of de-identification and the persistent reality of re-identification risk for rich data sets that are shared lies at the heart of the privacy challenges discussed in this article.

Report Objectives and Structure

This article provides an analysis of the challenges associated with de-identifying massive datasets of search queries. It aims to review the technical, practical, legal, and ethical complexities involved. The analysis will cover:

  1. General De-identification Concepts and Techniques: Defining the spectrum of data protection methods and outlining common technical approaches.
  2. Unique Characteristics of Search Data: Examining the properties of search logs (dimensionality, sparsity, embedded identifiers, longitudinal nature) that make de-identification particularly difficult.
  3. The Re-identification Threat: Reviewing the mechanisms of re-identification attacks and landmark case studies (AOL, Netflix, etc.) where de-identification failed.
  4. Limitations of Techniques: Assessing the vulnerabilities and shortcomings of various de-identification methods when applied to search data.
  5. Harms and Ethics: Identifying the potential negative consequences of re-identification and exploring the ethical considerations surrounding user expectations, transparency, and consent.

The report concludes by synthesizing these findings to summarize the core privacy challenges, risks, and ongoing debates surrounding the de-identification of massive search query datasets.

2. Understanding Data De-identification

To analyze the challenges of de-identifying search queries, it is essential first to establish a clear understanding of the terminology and techniques involved in de-identification. The landscape includes various related but distinct concepts, each carrying different technical implications and legal weight.

Defining the Spectrum: De-identification, Anonymization, Pseudonymization8

The terms used to describe processes that reduce the linkability of data to individuals are often employed inconsistently, leading to confusion. 

Key De-identification Techniques and Mechanisms

A variety of techniques can be employed, often in combination, to achieve different levels of de-identification or anonymization. Each has distinct mechanisms, strengths, and weaknesses:

The following table provides a comparative overview of these techniques:

Table 1: Comparison of Common De-identification Techniques

| Technique Name | Mechanism Description | Primary Goal | Key Strengths | Key Weaknesses/Limitations | Applicability to Search Logs |
| --- | --- | --- | --- | --- | --- |
| Suppression/Redaction | Remove specific values or records | Remove specific identifiers/sensitive data | Simple; Effective for targeted removal | High utility loss if applied broadly; Doesn’t address linkage via remaining data | Low (Insufficient alone; high utility loss for QIs) |
| Masking | Obscure parts of data values (e.g., XXXX) | Obscure direct identifiers | Simple; Preserves format | Limited privacy protection; Can reduce utility; Hard for free text | Low (Insufficient for QIs in queries) |
| Generalization | Replace specific values with broader categories | Reduce identifiability via QIs | Basis for k-anonymity | Significant utility loss, especially in high dimensions (“curse of dimensionality”) | Low (Requires extreme generalization, destroying query meaning) |
| Aggregation | Combine data into summary statistics | Hide individual records | Simple; Useful for high-level trends | Loses individual detail; Vulnerable to differencing attacks; Low utility for user-level analysis | Low (Loses essential query sequence/context) |
| Noise Addition | Add random values to data/results | Obscure true values; Enable DP | Basis for DP; Provable guarantees possible | Reduces accuracy/utility; Requires careful calibration | Low (Core of DP, but utility trade-off is key challenge; application to non-numeric fields like query text uncertain) |
| Swapping | Exchange values between records | Preserve aggregates while perturbing records | Maintains marginal distributions | Introduces record-level inaccuracies; Complex implementation; Limited privacy guarantee | Low (Disrupts relationships within user history) |
| Hashing (Salted) | Apply one-way function with unique salt per record | Create non-reversible identifiers | Can prevent simple lookups if salted properly | Vulnerable if salt/key compromised; Doesn’t prevent linkage if hash is used as QI | Low (Hash of query text loses semantics; hash of user ID is just pseudonymization) |
| Pseudonymization | Replace identifiers with artificial codes | Allow tracking/linking without direct IDs | Enables longitudinal analysis; Reversible | Still personal data; High risk of pseudonym reversal/linkage; QIs remaining in data set create major risks | Low (Allows user tracking, but privacy relies on pseudonym security/unlinkability) |
| k-Anonymity | Ensure record indistinguishable among k based on QIs | Prevent linkage via QIs | Intuitive concept | Fails in high dimensions; High utility loss; Vulnerable to homogeneity/background attacks; Not compositional | Medium (Impractical due to data characteristics) |
| l-Diversity / t-Closeness | k-Anonymity variants adding sensitive attribute constraints | Prevent attribute disclosure within k-groups | Stronger attribute protection than k-anonymity | Inherits k-anonymity issues; Adds complexity; Further utility reduction | Low (Impractical due to k-anonymity’s base failure) |
| Differential Privacy (DP) | Mathematical framework limiting inference about individuals via noise | Provable privacy guarantee against inference/linkage | Strongest theoretical guarantees; Composable; Robust to auxiliary info | Utility/accuracy trade-off; Implementation complexity; Can be hard for complex queries | Low (Theoretically strongest, but practical utility for granular search data is a major hurdle) |
| Synthetic Data | Generate artificial data mimicking original statistics | Provide utility without real records | Can avoid direct disclosure of real data | Hard to ensure utility & privacy simultaneously; Risk of memorization/inference if model overfits; Bias amplification | Medium (Promising but technically demanding for complex behavioral data like search; future potential, but research still early) |
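
As a rough illustration of the noise addition and differential privacy rows above, the sketch below releases a query count through the Laplace mechanism. The epsilon values and the count are hypothetical, and real deployments involve considerably more engineering (privacy budget accounting, clamping, and handling of non-numeric fields such as query text).

```python
import numpy as np

def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with epsilon-differential privacy via the Laplace mechanism.

    A counting query changes by at most 1 when any single user is added or
    removed, so sensitivity = 1 and the noise scale is sensitivity / epsilon.
    """
    return true_count + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# Hypothetical statistic: number of users who issued a given query yesterday.
true_count = 1234
print(dp_count(true_count, epsilon=0.5))  # more noise, stronger privacy guarantee
print(dp_count(true_count, epsilon=5.0))  # less noise, weaker privacy guarantee
```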

3. The Unique Nature and Privacy Sensitivity of Search Query Data

Search query data possesses several intrinsic characteristics that make it particularly challenging to de-identify effectively while preserving its analytical value. These properties distinguish it from simpler, structured datasets often considered in introductory anonymization examples.

High Dimensionality, Sparsity, and the “Curse of Dimensionality”

Search logs are inherently high-dimensional datasets. Each interaction potentially captures a multitude of attributes associated with a user or session: the query terms themselves, the timestamp of the query, the user’s IP address (providing approximate location), browser type and version, operating system, language settings, cookies or other identifiers linking sessions, the rank of clicked results, the URL or domain of clicked results, and potentially other contextual signals. When viewed longitudinally, the sequence of these interactions adds further dimensions representing temporal patterns and evolving interests.

Simultaneously, individual user data within this high-dimensional space is typically very sparse. Any single user searches for only a tiny fraction of all possible topics or keywords, clicks on a minuscule subset of the web’s pages, and exhibits specific patterns of activity at particular times17.

This combination of high dimensionality and sparsity poses a fundamental challenge known as the “curse of dimensionality18” in the context of data privacy. In high-dimensional spaces, data points tend to become isolated; the concept of a “neighbor” or “similar record” becomes less meaningful because points are likely to differ across many dimensions19. Consequently, even without explicit identifiers, the unique combination of attributes and behaviors across many dimensions can act as a distinct “fingerprint” for an individual user. This uniqueness makes re-identification through linkage or inference significantly easier.
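
A small simulation with entirely synthetic data illustrates how quickly sparse, high-dimensional behavior becomes a fingerprint: as the number of observed topics per user grows, the share of users whose combination is unique in the population rises sharply. The vocabulary size and user count below are arbitrary assumptions.

```python
import random
from collections import Counter

random.seed(0)

NUM_USERS = 100_000
VOCAB = 50_000  # possible query topics; each user touches only a handful (sparsity)

def unique_fraction(topics_per_user: int) -> float:
    """Fraction of users whose set of observed topics is unique in the population."""
    fingerprints = [
        frozenset(random.sample(range(VOCAB), topics_per_user))
        for _ in range(NUM_USERS)
    ]
    counts = Counter(fingerprints)
    return sum(1 for fp in fingerprints if counts[fp] == 1) / NUM_USERS

for k in (1, 2, 3):
    print(k, round(unique_fraction(k), 3))
# With a 50,000-topic vocabulary, two or three observed topics already make
# most synthetic users unique; real search histories contain hundreds of topics.
```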

The curse of dimensionality challenges traditional anonymization techniques like k-anonymity20. Since k-anonymity relies on finding groups of at least k individuals who are identical across all quasi-identifying attributes, the sparsity and uniqueness inherent in high-dimensional search data make finding such groups highly improbable without resorting to extreme measures. To force records into equivalence classes, one would need to apply such broad generalization (e.g., reducing detailed query topics to very high-level categories) or suppress so much data that the resulting dataset loses significant analytical value. 
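
A minimal sketch of a k-anonymity check makes the failure mode visible: grouping records by their quasi-identifier combination and reporting the smallest group size shows how each added dimension shrinks equivalence classes toward size one. The rows and column names below are hypothetical.

```python
from collections import Counter

def anonymity_level(records, qi_columns):
    """Return k such that the dataset is k-anonymous over the given quasi-identifiers."""
    groups = Counter(tuple(r[c] for c in qi_columns) for r in records)
    return min(groups.values())

# Hypothetical log rows with direct identifiers already removed.
rows = [
    {"city": "Lincoln", "browser": "Firefox", "topic": "rare-disease-x", "hour": 2},
    {"city": "Lincoln", "browser": "Firefox", "topic": "weather",        "hour": 8},
    {"city": "Omaha",   "browser": "Chrome",  "topic": "weather",        "hour": 8},
    {"city": "Omaha",   "browser": "Chrome",  "topic": "weather",        "hour": 9},
]

print(anonymity_level(rows, ["city"]))                      # 2: each city covers two rows
print(anonymity_level(rows, ["city", "browser", "topic"]))  # 1: one row sits alone in its class
# Each added dimension shrinks the equivalence classes; with hundreds of
# behavioral dimensions, nearly every record ends up in a class of size one.
```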

Implicit Personal Identifiers and Quasi-Identifiers in Queries

Beyond the metadata associated with a search (IP, timestamp, etc.), the content of the search queries themselves is a major source of privacy risk.  Firstly, users frequently, though often unintentionally, include direct personal information within their search queries. This could be their own name, address, phone number, email address, social security number, account numbers, or similar details about others. The infamous AOL search log incident provided stark evidence of this, where queries directly contained names and location information that facilitated re-identification.  Secondly, and perhaps more pervasively, search queries are rich with quasi-identifiers (QIs). These are terms, phrases, or concepts that, while not uniquely identifying on their own, become identifying when combined with each other or with external auxiliary information. Examples abound in the search context:

The challenge lies in the unstructured, free-text nature of search queries. Unlike structured databases where QIs like date of birth, gender, and ZIP code often reside in well-defined columns, the QIs in search queries are embedded within the semantic meaning and contextual background of the text string itself. Identifying and removing or generalizing all such potential QIs automatically is an extremely difficult task, particularly if done at large scale and by automated means. Standard natural language processing techniques might identify common entities like names or locations, but would struggle with the vast range of potentially identifying combinations and context-dependent sensitivities. Passwords or coded unique URLs of private documents may be entered by users and may be impossible to recognize for automated redaction. This inherent difficulty in scrubbing QIs from unstructured query text makes search data significantly harder to de-identify reliably compared to structured data.
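
A toy scrubbing pass illustrates the gap: pattern matching can reliably redact well-formed direct identifiers, but it leaves contextual quasi-identifiers untouched. The regexes and example queries below are invented for illustration and are far simpler than production redaction pipelines.

```python
import re

# Pattern-based scrubbing catches well-formed direct identifiers...
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(query: str) -> str:
    """Replace matched identifiers with a bracketed label."""
    for label, pattern in PATTERNS.items():
        query = pattern.sub(f"[{label}]", query)
    return query

print(scrub("reset password jane.doe@example.com 402-555-1234"))
# -> "reset password [EMAIL] [PHONE]"

# ...but leaves contextual quasi-identifiers untouched:
print(scrub("divorce lawyer near 14th and vine lincoln ne night shift nurse"))
# The location, occupation, and life event survive intact, and in combination
# they may still single out one individual.
```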

Temporal Dynamics and Longitudinal Linkability

Search logs are not static snapshots; they are longitudinal records capturing user behavior as it unfolds over time. A user’s search history represents a sequence of actions, reflecting evolving interests, ongoing tasks, changes in location, and shifts in life circumstances. This temporal dimension adds significant identifying power beyond that of individual, isolated queries.

Even if session-specific identifiers like cookies are removed or periodically changed, the continuity of a user’s behavior can allow for linking queries across different sessions or time periods. Consistent patterns (e.g., regularly searching for specific technical terms related to one’s profession), evolving interests (e.g., searches related to pregnancy progressing over months), or recurring needs (e.g., checking commute times) can serve as anchors to connect seemingly disparate query records back to the same individual. The sequence itself becomes a quasi-identifier.  This poses a significant challenge for de-identification. Techniques applied cross-sectionally—treating each query or session independently—may fail to protect against longitudinal linkage attacks that exploit these behavioral trails. Effective de-identification of longitudinal data requires considering the entire user history, or at least sufficiently long windows of activity, to assess and mitigate the risk of temporal linkage. This inherently increases the complexity of the de-identification process and potentially necessitates even greater data perturbation or suppression to break these temporal links, further impacting utility. Anonymization techniques that completely sever links between records over time would prevent valuable longitudinal analysis altogether.
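
The following Python sketch (invented sessions, and a deliberately crude rare-term profile) illustrates the idea: even after session identifiers are rotated, recurring distinctive terms let an analyst score which pseudonymous sessions likely belong to the same person.

```python
# Link pseudonymous sessions via overlap of their rarer query terms.
from collections import Counter

sessions = {
    "session_A": ["commute time decatur to midtown", "python dataframe groupby",
                  "lisinopril side effects", "weather decatur"],
    "session_B": ["best tapas near me", "movie times"],
    "session_C": ["lisinopril dosage", "commute decatur midtown traffic",
                  "python pivot table"],
}

# Document frequency: in how many sessions does each term appear?
doc_freq = Counter(
    term
    for queries in sessions.values()
    for term in {w for q in queries for w in q.split()}
)

def rare_profile(queries, max_sessions=2):
    """Terms appearing in few sessions; ubiquitous terms carry no linking signal."""
    terms = {w for q in queries for w in q.split()}
    return {t for t in terms if doc_freq[t] <= max_sessions}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

profiles = {sid: rare_profile(qs) for sid, qs in sessions.items()}
for s1, s2 in [("session_A", "session_B"), ("session_A", "session_C"),
               ("session_B", "session_C")]:
    print(f"{s1} vs {s2}: similarity = {jaccard(profiles[s1], profiles[s2]):.2f}")
```

In this toy example, sessions A and C share medication, commute, and profession-related anchors and score far higher than any pairing with session B, mirroring the linkage risk described above.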

The Uniqueness and Re-identifiability Potential of Search Histories

The combined effect of high dimensionality, sparsity, embedded quasi-identifiers, and temporal dynamics results in search histories that are often highly unique to individual users. Research has repeatedly shown that even limited sets of behavioral data points can uniquely identify individuals within large populations. Latanya Sweeney’s seminal work demonstrated that 87% of the US population could be uniquely identified using just three quasi-identifiers: 5-digit ZIP code, gender, and full date of birth21. Search histories contain far more dimensions and potentially identifying attributes than this minimal set.

Studies on analogous high-dimensional behavioral datasets confirm this potential for uniqueness and re-identification. The successful de-anonymization of Netflix users based on a small number of movie ratings linked to public IMDb profiles is a prime example. Similarly, research has shown high re-identification rates for mobile phone location data and credit card transactions, purely based on the patterns of activity. Su and colleagues showed that de-identified web browsing histories can be linked to social media profiles using only publicly available data22. Given that search histories encapsulate a similarly rich and diverse set of user actions and interests over time, it is highly probable that many users possess unique or near-unique search “fingerprints” even after standard de-identification techniques (like removing IP addresses and user IDs) are applied. This inherent uniqueness makes search logs exceptionally vulnerable to re-identification, particularly through linkage attacks that correlate the de-identified search patterns with other available data sources. The simple assumption that removing direct identifiers is sufficient to protect privacy is demonstrably false for this type of rich, behavioral data. The very detail that makes search logs valuable for understanding behavior also makes them inherently difficult to anonymize effectively.
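
A simple simulation conveys this uniqueness intuition, loosely in the spirit of the unicity analyses referenced above. The Python sketch below uses entirely synthetic histories and arbitrary parameters, not real search data; it estimates how often a handful of randomly observed items from one user's history matches exactly one user in the dataset.

```python
# Unicity check on synthetic histories: how many known items single out a user?
import random

random.seed(1)
NUM_USERS, NUM_ITEMS, ITEMS_PER_USER = 5_000, 2_000, 40   # hypothetical scale

histories = [frozenset(random.sample(range(NUM_ITEMS), ITEMS_PER_USER))
             for _ in range(NUM_USERS)]

def unicity(known_points: int, trials: int = 300) -> float:
    """Fraction of trials in which `known_points` observed items match one user only."""
    hits = 0
    for _ in range(trials):
        target = random.choice(histories)
        observed = set(random.sample(sorted(target), known_points))
        matches = sum(1 for h in histories if observed <= h)
        hits += (matches == 1)
    return hits / trials

for p in (1, 2, 3, 4):
    print(f"{p} known item(s): {unicity(p):.0%} of users uniquely identified")
```

Even in this small synthetic population, a few observed items are usually enough to single out one history, which is why removing direct identifiers alone provides so little protection.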

4. The Re-identification Threat: Theory and Practice

The potential for re-identification is not merely theoretical; it is a practical threat demonstrated through various attack methodologies and real-world incidents. Understanding these mechanisms is crucial for appreciating the limitations of de-identification for search query data.

Mechanisms of Re-identification: Linkage, Inference, and Reconstruction Attacks

Re-identification attacks exploit residual information in de-identified data or leverage external knowledge to uncover identities or sensitive attributes. Key mechanisms include:

  1. Linkage attacks, which match de-identified records to an external, identified dataset using attributes the two sources share.
  2. Inference attacks, which deduce sensitive attributes or identities from patterns, correlations, or released statistics, without needing a direct record match.
  3. Reconstruction attacks, which recover individual-level records from aggregate releases by combining many statistics computed over the same underlying data.

The threat landscape for re-identification is diverse and evolving. While linkage attacks relying on external data remain a primary concern, inference and reconstruction attacks, potentially powered by advanced AI/ML techniques, pose growing risks even to datasets processed with sophisticated methods. This necessitates robust privacy protections that anticipate a wide range of potential attack vectors.

Landmark Case Study: The AOL Search Log Release (2006)

In August 2006, AOL publicly released a dataset containing approximately 20 million search queries made by over 650,000 users during a three-month period. The data was intended for research purposes and was presented as “anonymized.” The primary anonymization step involved replacing the actual user identifiers with arbitrary numerical IDs. However, the dataset retained the raw query text, query timestamps, and information about clicked results (rank and domain URL). Later statements suggest IP address and cookie information were also altered, though potentially insufficiently.

The attempt at anonymization failed dramatically and rapidly. Within days, reporters Michael Barbaro and Tom Zeller Jr. of The New York Times were able to re-identify one specific user, designated “AOL user No. 4417749,” as Thelma Arnold, a 62-year-old widow living in Lilburn, Georgia23. They achieved this by analyzing the sequence of queries associated with her user number. The queries contained a potent mix of quasi-identifiers, including searches for “landscapers in Lilburn, Ga,” searches for individuals with the surname “Arnold,” and searches for “homes sold in shadow lake subdivision gwinnett county georgia,” alongside other personally revealing (though not directly identifying) queries like “numb fingers,” “60 single men,” and “dog that urinates on everything.” The combination of these queries created a unique pattern easily traceable to Ms. Arnold through publicly available information.

The AOL incident became a watershed moment in data privacy. It starkly demonstrated several critical points relevant to search data de-identification:

  1. Removing explicit user IDs is fundamentally insufficient when the underlying data itself contains rich identifying information.
  2. Search queries, even seemingly innocuous ones, are laden with Personally Identifiable Information (PII) and powerful quasi-identifiers embedded in the text.
  3. The temporal sequence of queries provides crucial context and significantly increases identifiability.
  4. Linkage attacks using query content combined with publicly available information are feasible and effective.
  5. Simple anonymization techniques fail to account for the identifying power of combined attributes and behavioral patterns.

The incident led to significant public backlash, the resignation of AOL’s CTO, and a class-action lawsuit. It remains a canonical example of the pitfalls of naive de-identification and the unique sensitivity of search query data.

Landmark Case Study: The Netflix Prize De-anonymization (2007-2008)

In 2006, Netflix launched a public competition, the “Netflix Prize,” offering $1 million to researchers who could significantly improve the accuracy of its movie recommendation system. To facilitate this, Netflix released a large dataset containing approximately 100 million movie ratings (1-5 stars, plus date) from nearly 500,000 anonymous subscribers, collected between 1998 and 2005. User identifiers were replaced with random numbers, and any other explicit PII was removed.

In 2007, researchers Arvind Narayanan and Vitaly Shmatikov published a groundbreaking paper demonstrating how this supposedly anonymized dataset could be effectively de-anonymized24. Their attack relied on linking the Netflix data with a publicly available auxiliary dataset: movie ratings posted by users on the Internet Movie Database (IMDb).

They developed statistical algorithms that could match users across the two datasets based on shared movie ratings and the approximate dates of those ratings. Their key insight was that while many users might rate popular movies similarly, the combination of ratings for less common movies, along with the timing, created unique signatures. They showed that an adversary knowing only a small subset (as few as 2, but more reliably 6-8) of a target individual’s movie ratings and approximate dates could, with high probability, uniquely identify that individual’s complete record within the massive Netflix dataset. Their algorithm was robust to noise, meaning the adversary’s knowledge didn’t need to be perfectly accurate (e.g., dates could be off by weeks, ratings could be slightly different).
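
The sketch below is a highly simplified, illustrative rendering of this style of linkage, not the authors' actual algorithm: it scores each pseudonymous record against an adversary's small, noisy set of auxiliary observations (item, approximate rating, approximate date) and accepts the top candidate only when it clearly stands out from the runner-up. All records and thresholds are invented.

```python
# Toy linkage scoring in the spirit of auxiliary-information matching.
from datetime import date, timedelta

# De-identified release: pseudonym -> {movie_id: (rating, date)}
released = {
    "user_1017": {11: (5, date(2004, 3, 2)), 42: (2, date(2004, 5, 9)),
                  73: (4, date(2005, 1, 20))},
    "user_2203": {11: (3, date(2003, 7, 1)), 58: (5, date(2004, 8, 14))},
    "user_3390": {42: (1, date(2005, 2, 2)), 73: (4, date(2005, 6, 6)),
                  99: (5, date(2005, 7, 7))},
}

# Adversary's auxiliary knowledge about one identified person (e.g., public reviews):
# one rating is slightly wrong, dates are only approximate.
aux = {11: (5, date(2004, 3, 10)), 73: (3, date(2005, 1, 25))}

def score(record, aux, date_slack=timedelta(days=30), rating_slack=1):
    """Count auxiliary observations that approximately match the released record."""
    s = 0
    for movie, (rating, when) in aux.items():
        if movie in record:
            r, d = record[movie]
            if abs(r - rating) <= rating_slack and abs((d - when).days) <= date_slack.days:
                s += 1
    return s

scores = sorted(((score(rec, aux), pid) for pid, rec in released.items()), reverse=True)
best, runner_up = scores[0], scores[1]
if best[0] >= 2 and best[0] - runner_up[0] >= 2:
    print("Likely match:", best[1])
else:
    print("No confident match")
```

Tolerating noise in the adversary's knowledge is the key point: the match succeeds even though the auxiliary rating and dates are imperfect, just as the study found.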

Narayanan and Shmatikov successfully identified the Netflix records corresponding to several non-anonymous IMDb users, thereby revealing their potentially private Netflix viewing histories, including ratings for sensitive or politically charged films that were not part of their public IMDb profiles.

The Netflix Prize de-anonymization study had significant implications:

  1. It demonstrated the vulnerability of high-dimensional, sparse datasets (characteristic of much behavioral data, including search logs) to linkage attacks.
  2. It proved that even seemingly non-sensitive data (movie ratings) can become identifying when combined with auxiliary information.
  3. It highlighted the inadequacy of simply removing direct identifiers and replacing them with pseudonyms when dealing with rich datasets.
  4. It underscored the power of publicly available auxiliary data in undermining anonymization efforts.

The research led to a class-action lawsuit against Netflix alleging privacy violations and the subsequent cancellation of a planned second Netflix Prize competition due to privacy concerns raised by the Federal Trade Commission (FTC). It remains a pivotal case study illustrating the fragility of anonymization for behavioral data.

Other Demonstrations of Re-identification Across Data Types

The AOL and Netflix incidents are not isolated cases. Numerous studies and breaches have demonstrated the feasibility of re-identifying individuals from various types of supposedly de-identified data, reinforcing the systemic nature of the challenge, especially for rich, individual-level records.

The following table summarizes some of these key incidents:

Table 2: Summary of Notable Re-identification Incidents

| Incident Name/Year | Data Type | “Anonymization” Method Used | Re-identification Method | Auxiliary Data Used | Key Finding/Significance |
| --- | --- | --- | --- | --- | --- |
| MA Governor Weld (1990s) | Hospital Discharge Data | Removal of direct identifiers (name, address, SSN) | Linkage Attack | Public Voter Registration List (ZIP, DoB, Gender) | Early demonstration that QIs in supposedly de-identified data allow linkage to identified data. |
| AOL Search Logs (2006) | Search Queries | User ID replaced with number; query text, timestamps retained | Linkage/Inference from Query Content | Public knowledge, location directories | Search queries themselves contain rich PII/QIs enabling re-identification. Simple ID removal is insufficient. |
| Netflix Prize (2007-8) | Movie Ratings (user, movie, rating, date) | User ID replaced with number | Linkage Attack | Public IMDb User Ratings | High-dimensional, sparse behavioral data is vulnerable. Small amounts of auxiliary data can enable re-id. |
| NYC Taxis (2014) | Taxi Trip Records (incl. hashed medallion/license) | Weak (MD5) hashing of identifiers | Pseudonym Reversal (Hash cracking) | Knowledge of hashing algorithm | Poorly chosen pseudonymization (weak hashing) is easily reversible. |
| Australian Health Records (MBS/PBS) (2016) | Medical Billing Data | Claimed de-identification (details unclear) | Linkage Attack | Publicly available information (e.g., birth year, surgery dates) | Government-released health data, claimed anonymous, was re-identifiable. |
| Browsing History / Social Media | Web Browsing History | Assumed de-identified (focus on linking) | Linkage Attack | Social Media Feeds (e.g., Twitter) | Unique patterns of link clicking in browsing history mirror unique social feeds, enabling linkage. |
| Genomic Beacons (Various studies) | Aggregate Genomic Data (allele presence/absence) | Query interface limits information release | Membership Inference Attack (repeated queries, linkage) | Individual’s genome sequence, genealogical databases | Even aggregate or restricted-query genomic data can leak membership information. |
| Credit Card Data (de Montjoye et al. 2015) | Transaction Records (merchant, time, amount) | Assumed de-identified | Uniqueness Analysis / Linkage | (Implicit) External knowledge correlating purchases/locations | Sparse transaction data is highly unique; few points needed for re-identification. |
| Location Data (Various studies) | Mobile Phone Location Traces | Various (often simple ID removal or aggregation) | Uniqueness Analysis / Linkage Attack | Maps, Points of Interest, Public Records | Human mobility patterns are highly unique; location data is easily re-identifiable. |

These examples collectively illustrate that re-identification is not a niche problem confined to specific data types but a systemic risk inherent in sharing or releasing granular data about individuals, especially when that data captures complex behaviors over time or across multiple dimensions. Search query logs share many characteristics with these vulnerable datasets (high dimensionality, sparsity, behavioral patterns, embedded QIs, longitudinal nature), strongly suggesting they face similar, if not greater, re-identification risks.

The Critical Role of Auxiliary Information

A recurring theme across nearly all successful re-identification demonstrations is the crucial role played by auxiliary information. This refers to any external data source or background knowledge an attacker possesses or can obtain about individuals, which can then be used to bridge the gap between a de-identified record and a real-world identity.

The sources of auxiliary information are vast and continuously expanding in the era of Big Data: public records such as voter registration lists, social media profiles and feeds, publicly posted reviews and ratings, genealogical databases, data broker files, and previously breached or otherwise leaked datasets.

The critical implication is that the privacy risk associated with a de-identified dataset cannot be assessed in isolation. Its vulnerability depends heavily on the external data ecosystem and what information might be available for linkage. De-identification performed today might be broken tomorrow as new auxiliary data sets become available or linkage techniques improve. This makes robust anonymization a moving target. Any assessment of re-identification risk must therefore be contextual, considering the specific data being released, the intended recipients or release environment, and the types of auxiliary information reasonably available to potential adversaries. Relying solely on removing identifiers without considering this broader context creates a fragile and likely inadequate privacy protection strategy.

5. Limitations of De-identification Techniques on Search Data

Given the unique characteristics of search query data and the demonstrated power of re-identification attacks, it is essential to critically evaluate the limitations of specific de-identification techniques when applied to this context.

The Fragility of k-Anonymity in High-Dimensional, Sparse Data

As established in Section 3.1, k-anonymity aims to protect privacy by ensuring that any individual record in a dataset is indistinguishable from at least k-1 other records based on their quasi-identifier (QI) values. This is typically achieved through generalization (making QI values less specific) and suppression (removing records or values).

However, k-anonymity proves fundamentally ill-suited for high-dimensional and sparse datasets like search logs. The core problem lies in the “curse of dimensionality”:

  1. Uniqueness: In datasets with many attributes (dimensions), individual records tend to be unique or nearly unique across the combination of those attributes. Finding k search users who have matching patterns across numerous QIs (specific query terms, timestamps, locations, click behavior, etc.) is highly improbable.
  2. Utility Destruction: To force records into equivalence classes of size k, massive amounts of generalization or suppression are required. Generalizing query terms might mean reducing specific searches like “side effects of lisinopril” to a broad category like “health query,” destroying the semantic richness crucial for analysis. Suppressing unique or hard-to-group records could eliminate vast portions of the dataset. This results in an unacceptable level of information loss, potentially rendering the data useless for its intended purpose.
  3. Vulnerability to Attacks: Even if k-anonymity is technically achieved, it remains vulnerable. The homogeneity attack occurs if all k records in a group share the same sensitive attribute (e.g., all searched for the same sensitive topic), revealing that attribute for anyone linked to the group. Background knowledge attacks can allow adversaries to further narrow down possibilities within a group.

Refinements like l-diversity and t-closeness attempt to address attribute disclosure vulnerabilities by requiring diversity or specific distributional properties for sensitive attributes within each group. However, they inherit the fundamental problems of k-anonymity regarding high dimensionality and utility loss, while adding implementation complexity. Furthermore, k-anonymity lacks robust compositionality; combining multiple k-anonymous releases does not guarantee privacy. Therefore, k-anonymity and its derivatives face challenges when used for de-identifying massive, complex search logs. They force difficult choices between retaining minimal utility or providing inadequate privacy protection against linkage and inference attacks.

Differential Privacy: The Utility-Privacy Trade-off and Implementation Hurdles

Differential Privacy (DP) offers a fundamentally different approach, providing mathematically rigorous, provable privacy guarantees29. Instead of modifying data records directly to achieve indistinguishability, DP focuses on the output of computations (queries, analyses, models) performed on the data. It ensures that the result of any computation is statistically similar whether or not any single individual’s data is included in the input dataset. This is typically achieved by adding carefully calibrated random noise to the computation’s output.
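
For a single counting query with sensitivity 1, the core mechanism can be sketched in a few lines. The Python example below is illustrative only, with a made-up count; it shows how the noise scale grows as ε shrinks, previewing the utility-privacy trade-off discussed below.

```python
# Laplace mechanism for a single counting query (sensitivity 1).
import numpy as np

rng = np.random.default_rng(42)
TRUE_COUNT = 1_280   # hypothetical: users who issued a given query today

def dp_count(true_count: int, epsilon: float) -> float:
    """Release the count with Laplace noise scaled to sensitivity / epsilon."""
    sensitivity = 1.0   # adding or removing one user changes the count by at most 1
    return true_count + rng.laplace(0.0, sensitivity / epsilon)

for eps in (0.01, 0.1, 1.0, 10.0):
    answers = [dp_count(TRUE_COUNT, eps) for _ in range(5)]
    print(f"epsilon={eps:<5} -> " + ", ".join(f"{a:8.1f}" for a in answers))
```

At ε = 10 the released counts are essentially exact; at ε = 0.01 they can be off by hundreds, which is the price of the stronger guarantee.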

DP’s strengths are significant: its guarantees hold regardless of an attacker’s auxiliary knowledge, and privacy loss (quantified by ε and δ) composes predictably across multiple analyses. However, applying DP effectively to massive search logs presents substantial challenges:

  1. Applicability to Complex Queries and Data Types: DP is well-understood for basic aggregate queries (counts, sums, averages, histograms) on numerical or categorical data. Applying it effectively to the complex structures and query types relevant to search logs—such as analyzing free-text query semantics, mining sequential patterns in user sessions, building complex machine learning models (e.g., for ranking or recommendations), or analyzing graph structures (e.g., click graphs)—is more challenging and an active area of research. Standard DP mechanisms might require excessive noise or simplification for such tasks. Techniques like DP-SGD (Differentially Private Stochastic Gradient Descent) exist for training models, but again involve utility trade-offs30.
  2. The Utility-Privacy Trade-off31: This is the most fundamental challenge. The strength of the privacy guarantee is governed by ε: a smaller ε means stronger privacy but requires more noise, and more noise reduces the accuracy and utility of the results. For the complex, granular analyses often desired from search logs (e.g., understanding rare query patterns, analyzing specific user journeys, training accurate prediction models), the amount of noise required to achieve a meaningful level of privacy (a small ε) might overwhelm the signal, rendering the results unusable. While DP performs better on larger datasets where individual contributions are smaller, the sensitivity of queries on sparse, high-dimensional data can still necessitate significant noise. Finding an acceptable balance between privacy and utility for diverse use cases remains a major hurdle.
  3. Implementation Complexity and Correctness: Implementing DP correctly requires significant expertise in both the theory and the practical nuances of noise calibration, sensitivity analysis (bounding how much one individual can affect the output), and privacy budget management. Errors in implementation, such as underestimating sensitivity or mismanaging the privacy budget across multiple queries (due to composition rules), can silently undermine the promised privacy guarantees. Defining the “privacy unit” (e.g., user, query, session) appropriately is critical; misclassification can lead to unintended disclosures. Auditing DP implementations for correctness is also non-trivial.
  4. Local vs. Central Models: DP can be implemented in two main models. In the central model, a trusted curator collects raw data and then applies DP before releasing results. This generally allows for higher accuracy (less noise for a given ε) but requires users to trust the curator with their raw data. In the local model (LDP), noise is added on the user’s device before data is sent to the collector. This offers stronger privacy guarantees, as the collector never sees raw data, but typically requires significantly more noise to achieve the same level of privacy, often leading to much lower utility. The choice of model impacts both trust assumptions and achievable utility; a randomized-response sketch illustrating the local model follows this list.
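
As a concrete contrast with the central-model count sketched above, the following example shows classic randomized response, a simple local-DP mechanism for a yes/no attribute. The population size and true rate are synthetic; the point is that estimates recovered from locally randomized reports are far noisier, at a comparable ε, than centrally noised counts.

```python
# Local DP via randomized response for "did you search for X?".
import math, random

random.seed(7)
N, TRUE_RATE = 100_000, 0.03     # hypothetical population and true rate
epsilon = 1.0
p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1)  # prob. of reporting truthfully

def randomize(truth: bool) -> bool:
    """Each user flips their answer with probability 1 - p_truth, on-device."""
    return truth if random.random() < p_truth else not truth

truths = [random.random() < TRUE_RATE for _ in range(N)]
reports = [randomize(t) for t in truths]

observed = sum(reports) / N
# Unbiased estimator: invert the known randomization.
estimate = (observed - (1 - p_truth)) / (2 * p_truth - 1)
print(f"raw reported rate {observed:.3f}, LDP estimate {estimate:.3f}, "
      f"true rate {TRUE_RATE:.3f}")
```

The collector never sees a truthful individual answer, but the recovered population estimate carries substantially more error than the central-model equivalent, illustrating the utility gap between the two models.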

In essence, while DP provides the gold standard in theoretical privacy guarantees, its practical application to the scale and complexity of  search logs involves significant compromises in data utility and faces non-trivial implementation hurdles. It is not a simple “plug-and-play” solution for making granular search data both private and fully useful.

Inadequacies of Aggregation, Masking, and Generalization for Search Logs

Simpler, traditional de-identification techniques prove largely insufficient for protecting privacy in search logs while preserving meaningful utility:

  1. Aggregation: Releasing only summary statistics removes individual-level records but destroys the granular, per-user detail that makes search logs analytically valuable, and overlapping aggregates can still be combined or "differenced" to reveal information about individuals (a short example appears after the next paragraph).
  2. Masking and redaction: Removing or hashing tokens that look like identifiers misses the contextual quasi-identifiers embedded in free-text queries, and weak pseudonymization (such as simple hashing) can be reversed, as the NYC taxi incident showed.
  3. Generalization: Coarsening query terms into broad categories (e.g., reducing a specific medication search to "health query") blunts re-identification only at the cost of the semantic detail most analyses require.

These foundational techniques, while potentially useful as components within a more sophisticated strategy (e.g., aggregation combined with differential privacy), are individually incapable of addressing the complex privacy challenges posed by massive search query datasets without sacrificing the data’s core value.  As we discuss further, even combined they fall short.
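
One reason plain aggregation fails on its own is the differencing problem: two individually innocuous aggregate counts can isolate a single person. The toy Python example below (invented records) makes the point.

```python
# Differencing attack on plain aggregation: two legitimate-looking counts
# that differ by one individual reveal that individual's sensitive value.
records = [
    {"user": "u1", "dept": "sales", "tenure_years": 0.5, "searched_topic_x": True},
    {"user": "u2", "dept": "sales", "tenure_years": 4.0, "searched_topic_x": False},
    {"user": "u3", "dept": "sales", "tenure_years": 7.0, "searched_topic_x": False},
]

def count(pred):
    """Aggregate count of people matching `pred` who searched for topic X."""
    return sum(1 for r in records if pred(r) and r["searched_topic_x"])

all_sales = count(lambda r: r["dept"] == "sales")
veterans  = count(lambda r: r["dept"] == "sales" and r["tenure_years"] >= 1)

# u1 is the only new hire, so the difference is exactly u1's sensitive bit.
print("u1 searched topic X:", bool(all_sales - veterans))
```

Differentially private releases resist this kind of subtraction because each answer carries calibrated noise and draws on a shared privacy budget; unprotected aggregates do not.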

Challenges with Synthetic Data Generation for Complex Behavioral Data

Generating synthetic data—artificial data designed to mirror the statistical properties of real data without containing actual individual records—has emerged as a promising privacy-enhancing technology. It offers the potential to share data insights without sharing real user information. However, creating high-quality, privacy-preserving synthetic search logs faces significant hurdles32:

  1. Utility Preservation: Search logs capture complex patterns: semantic relationships between query terms, sequential dependencies in user sessions, temporal trends, correlations between queries and clicks, and vast individual variability. Training a generative model (e.g., a statistical model or a deep learning model like an LLM) to accurately capture all these nuances without access to the original data is extremely challenging. If the synthetic data fails to replicate these properties faithfully, it will have limited utility for downstream tasks like training accurate machine learning models or conducting reliable behavioral research. Generating realistic sequences of queries that maintain semantic coherence and plausible user intent is particularly difficult.
  2. Privacy Risks (Memorization and Inference): Generative models, especially large and complex ones like LLMs, run the risk of “memorizing” or “overfitting” to their training data. If this happens, the model might generate synthetic examples that are identical or very close to actual records from the sensitive training dataset, thereby leaking private information. This risk is often higher for unique or rare records (outliers) in the original data. Even if exact records aren’t replicated, the synthetic data might still be vulnerable to membership inference attacks, where an attacker tries to determine if a specific person’s data was used to train the generative model. Ensuring the generation process itself is privacy-preserving, for example by using DP during model training, is crucial but adds complexity and can impact the fidelity (utility) of the generated data. Evaluating the actual privacy level achieved by synthetic data is also a complex task.
  3. Bias Amplification: Generative models learn patterns from the data they are trained on. If the original search log data contains societal biases (e.g., stereotypical associations, skewed representation of demographic groups), the synthetic data generated is likely to replicate, and potentially even amplify, these biases. This can lead to unfair or discriminatory outcomes if the synthetic data is used for training downstream applications.

Therefore, while synthetic data holds promise, generating truly useful and private synthetic search logs is a frontier research problem. The very complexity that makes search data valuable also makes it incredibly difficult to synthesize accurately without inadvertently leaking information or perpetuating biases. It requires sophisticated modeling techniques combined with robust privacy-preserving methods like DP integrated directly into the generation workflow.
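
Even a basic memorization audit illustrates part of this evaluation burden. The sketch below (invented queries) flags synthetic records that reproduce training records verbatim or near-verbatim; passing such a check does not establish privacy, since membership inference can still succeed, but failing it is a clear warning sign.

```python
# Simple memorization audit: flag synthetic queries that closely match training data.
from difflib import SequenceMatcher

training = [
    "landscapers in lilburn ga",
    "homes sold in shadow lake subdivision",
    "cheap flights to denver",
]
synthetic = [
    "cheap flights to boston",
    "homes sold in shadow lake subdivision",   # verbatim copy -> leakage
    "dog groomers near me",
]

def too_similar(a: str, b: str, threshold: float = 0.9) -> bool:
    """Crude character-level similarity; real audits would use stronger matching."""
    return SequenceMatcher(None, a, b).ratio() >= threshold

for s in synthetic:
    leaks = [t for t in training if too_similar(s, t)]
    if leaks:
        print(f"possible memorization: {s!r} ~ {leaks}")
```

In practice such checks are combined with membership-inference testing and, ideally, DP guarantees applied during generator training rather than relied upon alone.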

6. Harms, Ethics, and Societal Implications

The challenges of de-identifying search query data are not merely technical or legal; they extend into architectural and organizational domains that fundamentally shape privacy outcomes. How data is released—through what mechanisms, under what controls, and with what oversight—represents an architectural problem bound by organizational principles and norms. A key architectural building block is the design of APIs (Application Programming Interfaces), which can act as critical shields between raw data and external access. Re-identification attempts can be partially mitigated at the API level through strict query limits, access controls, auditing mechanisms, and purpose restrictions, complementing the privacy-enhancing technologies discussed throughout this paper. These architectural choices embed ethical values and reflect organizational commitments to privacy beyond mere technical implementation, and they carry significant weight and potential for real-world harm if privacy is compromised. Such controls can perhaps be observed and managed at the level of an individual organization, with extensive oversight and a data protection legal regime (including enforcement) in place, but they are difficult to envision for ongoing, large-scale access to data by multiple unrelated independent parties. Once data is released, it is beyond the control of the API; cutting off future API access when multiple releases create a re-identification risk may not be feasible, and it is difficult to know whether multiple API users collaborate or combine data.
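
The following sketch illustrates the flavor of such API-level gatekeeping. It is a hypothetical design, not any particular provider's interface, and the minimum group size and budget are arbitrary; note that it enforces per-client limits only and, as discussed above, cannot detect collusion across clients or control data once released.

```python
# Hypothetical aggregate-stats gateway with small-count suppression and per-client budgets.
from collections import defaultdict

MIN_GROUP_SIZE = 50   # refuse answers about very small cohorts
QUERY_BUDGET = 100    # per-client cap; collusion across clients is invisible here

class StatsGateway:
    def __init__(self, counts: dict):
        self._counts = counts          # topic -> number of users who searched it
        self._spent = defaultdict(int) # client_id -> queries used

    def count(self, client_id: str, topic: str):
        if self._spent[client_id] >= QUERY_BUDGET:
            raise PermissionError("query budget exhausted")
        self._spent[client_id] += 1
        n = self._counts.get(topic, 0)
        return n if n >= MIN_GROUP_SIZE else None   # suppress small counts

gateway = StatsGateway({"flu symptoms": 18_450, "rare disease x": 12})
print(gateway.count("research_lab_a", "flu symptoms"))    # 18450
print(gateway.count("research_lab_a", "rare disease x"))  # None (suppressed)
```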

Potential Harms from Re-identified Search Data: From Embarrassment to Discrimination

If supposedly de-identified search query data is successfully re-linked to individuals, the consequences can range from personal discomfort to severe, tangible harms. Search histories can reveal extremely sensitive aspects of a person’s life, including health concerns and medications, pregnancy and family matters, relationship status, financial circumstances, religious or political views, legal troubles, and precise details about where a person lives and works.

The exposure of such information through re-identification can lead to a spectrum of harms: embarrassment and reputational damage, discrimination (for example in employment, insurance, or credit), financial loss, psychological distress, and chilling effects on free expression and inquiry.

These potential harms underscore the high stakes involved in handling search query data. The impact extends beyond individual privacy violations to potential societal harms, such as reinforcing existing inequalities through discriminatory profiling or undermining trust in digital services. Critically, legal systems often struggle to recognize and provide remedies for many of these harms, particularly those that are non-financial, cumulative, or relate to future risks.

7. Conclusion: Synthesizing the Challenges and Risks

The de-identification of massive search query datasets presents a complex and formidable challenge, sitting at the intersection of immense data value and profound privacy risk. While the potential benefits of analyzing search behavior for societal good, service improvement, and innovation are undeniable, the inherent nature of this data makes achieving meaningful privacy protection through de-identification exceptionally difficult.

The Core Privacy Paradox of Search Data De-identification

The fundamental paradox lies in the richness of the data itself. Search logs capture a high-dimensional, sparse, and longitudinal record of human intent and behavior. This richness, containing myriad explicit and implicit identifiers and quasi-identifiers embedded within unstructured query text and temporal patterns, creates unique individual fingerprints. Consequently, techniques designed to obscure identity often face a stark trade-off: either they fail to adequately protect against re-identification attacks (especially linkage attacks leveraging the vast ecosystem of auxiliary data), or they must apply such aggressive generalization, suppression, or noise addition that the data’s analytical utility is severely compromised.

Traditional methods like k-anonymity are fundamentally crippled by the “curse of dimensionality” inherent in this data type. More advanced techniques like differential privacy offer stronger theoretical guarantees but introduce significant practical challenges related to the privacy-utility balance, implementation complexity, and applicability to the diverse analyses required for search data. Synthetic data generation, while promising, faces similar difficulties in capturing complex behavioral nuances without leaking information or amplifying bias.

Summary of Key Risks and Vulnerabilities

The analysis presented in this report highlights several critical risks associated with attempts to de-identify  search query data:

  1. High Re-identification Risk: Due to the data’s uniqueness and the power of linkage attacks using auxiliary information, the risk of re-identifying individuals from processed search logs remains substantial. Landmark failures like the AOL and Netflix incidents serve as potent warnings.
  2. Inadequacy of Simple Techniques: Basic methods like removing direct identifiers, masking, simple aggregation, or naive generalization are insufficient to protect against sophisticated attacks on this type of data.
  3. Limitations of Advanced Techniques: Even state-of-the-art methods like differential privacy and synthetic data generation face significant hurdles in balancing provable privacy with practical utility for complex, granular search data analysis.
  4. Evolving Threat Landscape: The continuous growth of available data and the increasing sophistication of analytical techniques, including AI/ML-driven attacks, mean that re-identification risks are dynamic and likely increasing over time.
  5. Potential for Serious Harm: Re-identification can lead to tangible harms, including discrimination, financial loss, reputational damage, psychological distress, and chilling effects on free expression and inquiry.

The Ongoing Debate

The challenges outlined fuel an ongoing debate about the viability and appropriate role of de-identification in the context of large-scale behavioral data. While organizations invest in Privacy Enhancing Technologies (PETs) and implement policies aimed at protecting user privacy, the demonstrable risks and technical limitations suggest that achieving true, robust anonymity for granular search query data, while maintaining high utility, remains an elusive goal.

During the preparation of this work the author used ChatGPT to reword and rephrase text and for a first draft of the two charts in the document. After using this tool/service, the author reviewed and edited the content as needed and takes full responsibility for the content of the publication.

  1. https://fpf.org/issue/deid/ ↩︎
  2. https://fpf.org/tag/privacy-enhancing-technologies/ ↩︎
  3.  https://fpf.org/issue/research-and-ethics/ ↩︎
  4. Ohm: https://heinonline.org/HOL/LandingPage?handle=hein.journals/uclalr57&div=48&id=&page= ↩︎
  5. Cooper: https://citeseerx.ist.psu.edu/document? ↩︎
  6. Dinur, Nissim: https://weizmann.elsevierpure.com/en/publications/revealing-information-while-preserving-privacy ↩︎
  7. Barth-Jones: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2076397 ↩︎
  8. Polonetsky, Tene and Finch: https://digitalcommons.law.scu.edu/cgi/viewcontent.cgi?article=2827&context=lawreview ↩︎
  9. We note the European Court of Justice Breyer decision and subsequent EU court decisions that may open up a legal argument that it may be possible to consider a party that does not reasonably have potential access to the additional data to be in possession of non-personal data. https://curia.europa.eu/juris/document/document.jsf?docid=184668&doclang=EN ↩︎
  10. Sweeney: https://www.hks.harvard.edu/publications/k-anonymity-model-protecting-privacy ↩︎
  11. Aggarwal, Charu C. (2005). “On k-Anonymity and the Curse of Dimensionality”. VLDB ’05 – Proceedings of the 31st International Conference on Very Large Data Bases. Trondheim, Norway. CiteSeerX 10.1.1.60.3155 ↩︎
  12. Marcus Olsson: https://marcusolsson.dev/k-anonymity-and-l-diversity/ ↩︎
  13. Ninghui Li, Tiancheng Li, and Suresh Venkatasubramanian, “t-Closeness: Privacy Beyond k-Anonymity and ℓ-Diversity,” Proceedings of the 23rd IEEE International Conference on Data Engineering (2007) ↩︎
  14. Dwork, C. (2006). Differential Privacy. In: Bugliesi, M., Preneel, B., Sassone, V., Wegener, I. (eds) Automata, Languages and Programming. ICALP 2006. Lecture Notes in Computer Science, vol 4052. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11787006_1 ↩︎
  15. Simson Garfinkel NIST SP 800 ↩︎
  16. https://research.google/blog/protecting-users-with-differentially-private-synthetic-training-data/ ↩︎
  17. https://sparktoro.com/blog/who-sends-traffic-on-the-web-and-how-much-new-research-from-datos-sparktoro/ ↩︎
  18. Mitigating the Curse of Dimensionality in Data Anonymization – CRISES / URV, https://crises-deim.urv.cat/web/docs/publications/lncs/1084.pdf ↩︎
  19. Bellman: https://link.springer.com/referenceworkentry/10.1007/978-0-387-39940-9_133 ↩︎
  20. On k-anonymity and the curse of dimensionality, https://www.vldb.org/archives/website/2005/program/slides/fri/s901-aggarwal.pdf ↩︎
  21. Latanya Sweeney, “Uniqueness of Simple Demographics in the U.S. Population,” Carnegie Mellon University, Data Privacy Working Paper 3, 2000 ↩︎
  22. Su, Goel, Shukla, Narayana https://www.cs.princeton.edu/~arvindn/publications/browsing-history-deanonymization.pdf ↩︎
  23. Michael Barbaro and Tom Zeller Jr., “A Face Is Exposed for AOL Searcher No. 4417749,” The New York Times, August 9, 2006 ↩︎
  24. Narayanan, Arvind and Vitaly Shmatikov, “How To Break Anonymity of the Netflix Prize Dataset,” arXiv cs/0610105 ↩︎
  25. Systematic Review of Re-Identification Attacks on Health Data – PMC, https://pmc.ncbi.nlm.nih.gov/articles/PMC3229505/ ↩︎
  26. https://medium.com/vijay-pandurangan/of-taxis-and-rainbows-f6bc289679a1 ↩︎
  27. https://dspace.mit.edu/handle/1721.1/96321 ↩︎
  28. https://www.cs.princeton.edu/~arvindn/publications/browsing-history-deanonymization.pdf ↩︎
  29. Cynthia Dwork, “Differential Privacy,” in Automata, Languages and Programming, 33rd International Colloquium, ICALP 2006, Proceedings, Part II, ed. Michele Bugliesi et al., Lecture Notes in Computer Science 4052 (Berlin: Springer, 2006) ↩︎
  30. https://research.google/blog/generating-synthetic-data-with-differentially-private-llm-inference/ ↩︎
  31. Guidelines for Evaluating Differential Privacy Guarantees – NIST Technical Series Publications, https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-226.pdf ↩︎
  32. Privacy Tech-Know blog: When what is old is new again – The reality of synthetic data, https://www.priv.gc.ca/en/blog/20221012/ ↩︎

FPF Launches Major Initiative to Study Economic and Policy Implications of AgeTech

FPF and University of Arizona Eller College of Management Awarded Grant by Alfred P. Sloan Foundation to Address Privacy Implications, and Data Uses of Technologies Aimed at Aging At Home

The Future of Privacy Forum (FPF), a global non-profit focused on data protection, AI, and emerging technologies, has been awarded a grant from the Alfred P. Sloan Foundation to lead a two-year research project entitled Aging at Home: Caregiving, Privacy, and Technology, in partnership with the University of Arizona Eller College of Management. The project, which launched on April 1, will explore the complex intersection of privacy, economics, and the use of emerging technologies designed to support aging populations (“AgeTech”). AgeTech includes a wide range of applications and technologies, from fall detection devices and health monitoring apps to artificial intelligence (AI)-powered assistants.

As of 2024, older adults outnumber children in almost half of U.S. counties, with projections that about one in five Americans will be age 65 or older by 2034 (a year sooner than originally estimated). This rapidly aging population presents complex challenges and opportunities, particularly in the increased demand for resources necessary for senior care and the use of AgeTech to promote improved autonomy and independence.

FPF will lead rigorous, independent research into these issues, with a particular focus on the privacy expectations of seniors and caregivers, cost barriers to adoption, and the policy gaps surrounding AgeTech. The research will include experimental surveys, roundtables with industry and policy leaders, and a systematic review of economic and privacy challenges facing AgeTech solutions.

The project will be led by co-principals Jules Polonetsky, CEO of FPF, and Dr. Laura Brandimarte, Associate Professor of Management Information Systems at the University of Arizona Eller College of Management. Polonetsky is an internationally recognized privacy expert and co-editor of the Cambridge Handbook on Consumer Privacy. Brandimarte’s work, focused on the ethics of technology with an emphasis on privacy and security, uses quantitative methods including survey and experimental design and econometric data analysis.

Jordan Wrigley, a data and policy analyst who leads FPF health data research, will play a lead role for FPF along with members of FPF’s U.S., Global, and AI Policy teams.  Jordan is a recognized and awarded health meta-analytic methodologist and researcher, whose work has informed medical care guidelines and AI data practices.

“The privacy aspects of AgeTech, such as consent and authorization, data sensitivity, and cost, need to be studied and considered holistically to create sustainable policies and build trust with seniors and caregivers as the future of aging becomes the present,” said Wrigley. “This research will seek to do just that.”

“At FPF, we believe that technology and data can benefit society and improve lives when the right laws, policies, and safeguards are in place,” added Polonetsky. “The goal of AgeTech – to assist seniors in living independently while reducing healthcare costs and caregiving burdens – impacts us all. As this field grows, it’s essential that we have the right rules in place to protect privacy and preserve dignity.”

“Technology has the potential to increase the autonomy and overall wellbeing of an ageing population, but for that to happen there has to be trust on the part of users – both that the technology will effectively be of assistance and that it will not constitute another source of data privacy and security intrusions,” added Brandimarte. “We currently know very little about the level of trust the elderly place in AgingTech and the specific needs of this at-risk population when they interact with it, including data accessibility by family members or caregivers.”

Dr. Daniel Goroff, Vice President and Program Director for Sloan, agrees, “As AgeTech evolves, it brings enormous promise—along with pressing questions about equity, access, and privacy. This initiative will provide insights about how innovations can ethically and responsibly enhance the autonomy and dignity of older adults. We’re excited to see FPF and the University of Arizona leading the way on this timely research.”

Key project outputs will include:

Sign up for our mailing list to stay informed about future progress, and reach out to Jordan Wrigley ([email protected]) if you are interested in learning more about the project.

Aging at Home: Caregiving, Privacy, and Technology is supported by the Alfred P. Sloan Foundation under Grant No. G-2025-25191.

About The Alfred P. Sloan Foundation

The ALFRED P. SLOAN FOUNDATION is a not-for-profit, mission-driven grantmaking institution dedicated to improving the welfare of all through the advancement of scientific knowledge. Established in 1934 by Alfred Pritchard Sloan Jr., then-President and Chief Executive Officer of the General Motors Corporation, the Foundation makes grants in four broad areas: direct support of research in science, technology, engineering, mathematics, and economics; initiatives to increase the quality, equity, diversity, and inclusiveness of scientific institutions and the science workforce; projects to develop or leverage technology to empower research; and efforts to enhance and deepen public engagement with science and scientists.
sloan.org | @SloanFoundation

About Future of Privacy Forum (FPF)

FPF is a global non-profit organization that brings together academics, civil society, government officials, and industry to evaluate the societal, policy, and legal implications of data use, identify the risks, and develop appropriate protections. FPF believes technology and data can benefit society and improve lives if the right laws, policies, and rules are in place. FPF has offices in Washington D.C., Brussels, Singapore, and Tel Aviv. Follow FPF on X and LinkedIn.

About the University of Arizona Eller College of Management

The Eller College of Management at The University of Arizona offers highly ranked undergraduate (BSBA and BSPA), MBA, MPA, master’s, and doctoral (Ph.D.) degrees in accounting, economics, entrepreneurship, finance, marketing, management and organizations, management information systems (MIS), and public administration and policy in Tucson and Phoenix, Arizona.

FPF and OneTrust publish the Updated Guide on Conformity Assessments under the EU AI Act

The Future of Privacy Forum (FPF) and OneTrust have published an updated version of their Conformity Assessments under the EU AI Act: A Step-by-Step Guide, along with an accompanying Infographic. This updated Guide reflects the text of the EU Artificial Intelligence Act (EU AIA), adopted in 2024.  

Conformity Assessments (CAs) play a significant role in the EU AIA’s accountability and compliance framework for high-risk AI systems. The updated Guide and Infographic provide a step-by-step roadmap for organizations seeking to understand whether they must conduct a CA. Both resources are designed to support organizations as they navigate their obligations under the AIA and build internal processes that reflect the Act’s overarching accountability. However, they do not constitute legal advice for any specific compliance situation. 

Key highlights from the Updated Guide and Infographic:

You can also view the previous version of the Conformity Assessment Guide here.

South Korea’s New AI Framework Act: A Balancing Act Between Innovation and Regulation

On 21 January 2025, South Korea became the first jurisdiction in the Asia-Pacific (APAC) region to adopt comprehensive artificial intelligence (AI) legislation. Taking effect on 22 January 2026, the Framework Act on Artificial Intelligence Development and Establishment of a Foundation for Trustworthiness (AI Framework Act or simply, Act) introduces specific obligations for “high-impact” AI systems in critical sectors, including healthcare, energy, and public services, and mandatory labeling requirements for certain applications of generative AI. The Act also includes substantial public support for private sector AI development and innovation through its support for AI data centers, as well as projects that create and provide access to training data, and encouragement of technological standardization to support SMEs and start-ups in fostering AI innovation. 

In the broader context of public policies in South Korea that are designed to allow the advancement of AI, the Act is notable for its layered, transparency-focused approach to regulation, moderate enforcement approach compared to the EU AI Act, and significant public support intended to foster AI innovation and development. We cover these in Parts 2 to 4 below. 

Key features of the law include:

In Part 5, we provide a comparison below to the European Union (EU)’s AI Act (EU AI Act). We note that while the AI Framework Act shares some common elements with the EU AI Act, including tiered classification and transparency mandates, South Korea’s regulatory approach differs in its simplified risk categorization, including absence of prohibited AI practices, comparatively lower financial penalties, and the establishment of initiatives and government bodies aimed at promoting the development and use of AI technologies. The intent of this comparison is to assist practitioners in understanding and analyzing key commonalities and differences between both laws.

Finally, Part 6 of this article places the Act within South Korea’s broader AI innovation strategy and discusses the challenges of regulatory alignment between the Ministry of Science and IT (MSIT) and South Korea’s data protection authority, the Personal Information Protection Commission (PIPC) in South Korea’s evolving AI governance landscape.

1. Background 

On 26 December 2024, South Korea’s National Assembly passed the Framework Act on Artificial Intelligence Development and Establishment of a Foundation for Trustworthiness (AI Framework Act or Act). 

The AI Framework Act was officially promulgated on 21 January 2025 and will take effect on 22 January 2026, following a one-year transition period to prepare for compliance. During this period, MSIT will assist with the issuance of Presidential Decrees and other sub-regulations and guidelines to clarify implementation details.

South Korea was the first country in the Asia-Pacific region to introduce a comprehensive AI law in 2021: the Bill on Fostering Artificial Intelligence and Creating a Foundation of Trust. However, the legislative process faced significant hurdles, including political uncertainty surrounding the April 2024 general elections, raising concerns that the bill could be scrapped entirely.

However, by November 2024, South Korea’s AI policy landscape had grown increasingly complex, with 20 separate AI governance bills introduced since the National Assembly began its new term in June 2024, each independently proposed by different members. In November 2024, the Information and Communication Broadcasting Bill Review Subcommittee conducted a comprehensive review of these AI-related bills and consolidated them into a single framework, leading to the passage of the AI Framework Act.

At its core, the AI Framework Act adopts a risk-based approach to AI regulation. In particular, it introduces specific obligations for high-impact AI systems and generative AI applications. The AI Framework Act also has extraterritorial reach: it applies to AI activities that impact South Korea’s domestic market or users.

This blog post examines the key provisions of the Act, including its scope, regulatory requirements, and implications for organizations developing or deploying AI systems.

2. The Act establishes a layered approach to AI regulation

2.1 Definitions lay the foundation for how different AI systems will be regulated under the Act

Article 2 of the Act provides three AI-related definitions. 

At the core of the Act’s layered approach is its definition of “high-impact AI” (which is subject to more stringent requirements). “High-impact AI” refers to AI systems “that may have a significant impact on or pose a risk to human life, physical safety, and basic rights,” and is utilized in critical sectors identified under the AI Framework Act, including energy, healthcare, nuclear operations, biometric data analysis, public decision-making, education, or other areas that have a significant impact on the safety of human life and body and the protection of basic rights as prescribed by Presidential Decree.

The Act also introduces specific provisions for “generative AI.” The Act defines generative AI as AI systems that create text, sounds, images, videos, or other outputs by imitating the structure and characteristics of the input data. 

The Act also defines an “AI Business Operator” as corporations, organizations, government agencies, or individuals conducting business related to the AI industry. The Act subdivides AI Business Operators into two sub-categories (which effectively reflect a developer-deployer distinction): 

Currently, as will be covered in more detail below, the obligations under the Act apply to both categories of AI Business Operators, regardless of their specific roles in the AI lifecycle. For example, transparency-related obligations apply to all AI Business Operators, regardless of whether they are involved in the development and/or deployment phases of AI systems. It remains to be seen if forthcoming Presidential Decrees to implement the Act will introduce more differentiated obligations for each type of entity.

While the Act expressly excludes AI used solely for national defense and security from its scope, the Act applies to both government agencies and public bodies when they are involved in the development, provision, or use of AI technology in a business-related context. More broadly, the Act also assigns the government a significant role in shaping AI policy, providing support, and overseeing the development and use of AI.

2.2. The AI Framework Act has broad extraterritorial reach 

Under Article 4(1), the Act applies not only to acts conducted within South Korea but also to those conducted abroad that impact South Korea’s domestic market, or users in South Korea. This means that foreign companies providing AI systems or services to users in South Korea will be subject to the Act’s requirements, even if they lack a physical presence in the country. 

However, Article 4(2) of the Act introduces a notable exemption for AI systems developed and deployed exclusively for national defense or security purposes. These systems, which will be designated by Presidential Decree, fall outside the Act’s regulatory framework.

For global organizations, the Act’s jurisdictional scope raises key compliance considerations. Companies will likely need to assess whether their AI activities fall under South Korea’s regulatory reach, particularly if they:

This last criterion appears to be a novel policy proposition and differentiates the AI Framework Act from the EU AI Act, potentially making it broader in reach. This is because it does not seem necessary for an AI system to be placed on the South Korean market for the condition to be triggered, but simply for the AI-related activity of a covered entity to “indirectly impact” the South Korean market. 

2.3. The Act establishes a multi-layered approach to AI safety and trustworthiness requirements

(i) The Act emphasizes oversight of high-impact AI but does not prohibit particular AI uses 

For most AI Business Operators, compliance obligations under the AI Framework Act are minimal. There are, however, noteworthy obligations – relating to transparency, safety, risk management and accountability – that apply to AI Business Operators deploying high-impact AI systems. 

Under Article 33, AI Business Operators providing AI products and services must “review in advance” (this presumably means before the relevant product or service is released into a live environment or goes to market) whether their AI system is considered “high-impact AI.” Businesses may request confirmation from the MSIT on whether their AI system is to be considered “high-impact AI.”

Under Article 34, organizations that offer high-impact AI, or products or services using high-impact AI, must meet much stricter requirements, including:

1. Establishing and operating a risk management plan.

2. Establishing and operating a plan to provide explanation for AI-generated results within technical limits, including key decision criteria and an overview of training data.

3. Establishing and operating “user protection measures.”

4. Ensuring human oversight and supervision of high-impact AI.

5. Preserving and storing documents that demonstrate measures taken to ensure AI safety and reliability.

6. Following any additional requirements imposed by the National AI Committee (established under the Act) to enhance AI safety and reliability.

Under Article 35, AI Business Operators are also encouraged to conduct impact assessments for high-impact AI systems to evaluate their potential effects on fundamental rights. While the language of the Act (i.e., “shall endeavor to conduct an impact assessment”) suggests that these assessments are not mandatory, the Act introduces an incentive: where a government agency intends to use a product or service using high-impact AI, the agency is to prioritize AI products or services that have undergone impact assessments in public procurement decisions. Legislatively stipulating the use of public procurement processes to incentivize businesses to conduct impact assessments appears to be a relatively novel move and arguably reflects the innovation-risk duality seen across the Act.

(ii) The Act prioritizes user awareness and transparency for generative AI products and services 

The AI Framework Act introduces specific transparency obligations for generative AI providers. Under Article 31(1), AI Business Operators offering high-impact or generative AI-powered products or services must notify users in advance that the product or service utilizes AI. Further, under Article 31(2), AI Business Operators providing generative AI as a product or service must also indicate that the output was generated by generative AI. 

Beyond general disclosure, Article 31(3) of the Act mandates that where an AI Business Operator uses an AI system to provide virtual sounds, images, video or other content that are “difficult to distinguish from reality,” the AI Business Operator must “notify or display the fact that the result was generated by an (AI) system in a manner that allows users to clearly recognize it.” 

However, the provision also provides flexibility for artistic and creative expressions. It permits notifications or labelling to be displayed in ways intended to not hinder creative expression or appreciation. This approach appears aimed at balancing the creative utility of generative AI with transparency requirements. Technical details, such as how notification or labelling should be implemented, will be prescribed by Presidential Decree.

(iii) The Act establishes other requirements that apply when certain thresholds are met

The following requirements focus on safety measures and operational oversight, including specific provisions for foreign AI providers.

Under Article 32, AI Business Operators that operate AI systems whose computational learning capacity exceeds prescribed thresholds are required to identify, assess, and mitigate risks throughout the AI lifecycle, and establish a risk management system to monitor and respond to AI-related safety incidents. AI Business Operators must document and submit their findings to the MSIT. 

For accountability, Article 36 provides that AI Business Operators without a domestic address or place of business that cross certain user number or revenue thresholds (to be prescribed) must appoint a “domestic representative” with an address or place of business in South Korea. The details of the domestic representative must be provided to the MSIT. 

These domestic representatives take on significant responsibilities, including:

3. The Act grants the MSIT significant investigative and enforcement powers

3.1 The legislation empowers the MSIT with broad authority to investigate potential violations of the Act 

Under Article 40 of the Act, the MSIT is empowered to investigate businesses that it suspects of breaching any of the following requirements under the Act:

When potential breaches are identified, the MSIT may carry out necessary investigations, including the authority to conduct on-site investigations and to compel AI Business Operators to submit relevant data. During these inspections, authorized officials can examine business records, operational documents, and other critical materials, following established administrative investigation protocols.

If violations are confirmed, the MSIT can issue corrective orders, requiring businesses to immediately halt non-compliant practices and implement necessary remediation measures. 

3.2 The Act takes a relatively moderate approach to penalties compared to other global AI regulations 

Under Article 43 of the Act, administrative fines of up to KRW 30 million (approximately USD 20,707) may be imposed for:

This enforcement structure caps fines at lower amounts than those found in other global AI regulations.

4. The Act promotes the development of AI technologies through strategic support for data infrastructure and learning resources

The MSIT is responsible for developing comprehensive policies to support the entire lifecycle of AI training data, ensuring that businesses have access to high-quality datasets essential for AI development. To achieve this, the Act mandates government-led initiatives to:

A key initiative under the Act can be found in Article 25, which provides for the promotion of policies to establish and operate AI Data Centers. Under Article 25(2), the South Korean government may provide administrative and financial support to facilitate the construction and operation of data centers. These centers will provide infrastructure for AI model training and development, ensuring that businesses of all sizes – including small and medium-sized enterprises (SMEs) – have access to these resources.

The Act also promotes the advancement and safe use of AI by encouraging technological standardization (Articles 13 and 14), supporting SMEs and start-ups, and fostering AI-driven innovation. It also facilitates international collaboration and market expansion while establishing a framework for AI testing and verification (Articles 13 and 14). Together, these measures aim to strengthen South Korea’s broader AI ecosystem and ensure its responsible development and deployment.

5. Comparing the approaches of South Korea’s AI Framework Act and the EU’s AI Act reveals both convergences and divergences

As South Korea is only the second jurisdiction globally to enact comprehensive national AI regulation, comparing its AI Framework Act with the EU AI Act helps illuminate both its distinctive features and its place in the emerging landscape of global AI governance. As many companies will need to navigate both frameworks, an understanding of their similarities and differences is essential for global compliance strategies.

Table 1. Comparison of Key Aspects of the South Korea AI Framework Act and EU AI Act

6. Looking ahead

South Korea’s AI Framework Act is the first omnibus AI regulation in the APAC region. The South Korean model is notable for establishing an alternative approach to AI regulation: one that seeks to balance the promotion of AI innovation, development, and use with safeguards for high-impact AI.

6.1 Though the Act establishes a framework for direct regulation of AI, several critical areas require further definition through Presidential Decree

The areas that are expected to be clarified through Presidential Decree include:

The interpretation and implementation of these provisions will significantly shape compliance expectations, influencing how AI businesses—both domestic and international—navigate the regulatory landscape.

6.2 The Act must also be considered in the context of South Korea’s broader efforts to position the country as a leader in AI innovation 

The first – and arguably most significant – of these efforts is a bill recently introduced by members of the National Assembly that seeks to amend the Personal Information Protection Act (PIPA) by creating a new legal basis for the processing of personal information specifically for the development and use of AI. The bill introduces a new Article 28-12, which would permit the use of personal information beyond its original purpose of collection, specifically for the development and improvement of AI systems. This amendment would allow such processing provided that:

Second, South Korea’s government is also reportedly exploring other legal reforms to its data protection law to facilitate the development of AI. According to a recent interview PIPC Chairman Haksoo Ko gave to a global regulatory news outlet, these reforms could include changes to the “legitimate interests” basis for processing personal information under the PIPA.

South Korea’s Minister for Science and ICT Yoo Sang-im has also reportedly urged the National Assembly to swiftly pass a law on the management and use of government-funded research data to advance scientific and technological development in the AI era.

Third, while creating these pathways for innovation, the PIPC has simultaneously been developing mechanisms to provide oversight of AI systems. For instance, the PIPC’s comprehensive policy roadmap for 2025 (Policy Roadmap), announced in January 2025, outlines an ambitious regulatory framework for AI governance and data protection. In particular, the Policy Roadmap envisions the implementation of specialized regulatory and oversight provisions for the use of unmodified personal data in AI development.

The Policy Roadmap is supplemented by the PIPC’s Work Direction for Investigations in 2025 (Work Direction). Published in January 2025, the Work Direction includes measures intended to provide additional oversight of AI services, including conducting preliminary onsite inspections of AI-powered services, such as AI agents, and reviewing the use of personal information in AI-based legal and human resources services.

A possible instance of this heightened oversight arose in February 2025, when the PIPC announced a temporary suspension of new downloads of the Chinese generative AI application Deepseek over concerns about potential breaches of the PIPA.

Fourth, South Korea is seeking to strengthen the accountability of foreign organizations. The PIPC expressed its support for a bill amending the PIPA’s domestic representative system for foreign organizations, which was subsequently amended and took effect on April 1, 2025. The amendment addresses a significant gap in the prior system, which allowed foreign companies to designate unrelated third parties as their domestic agents in South Korea, often resulting in what one lawmaker described as “formal” compliance without meaningful accountability.

The new requirements mandate that foreign companies with established business units in South Korea designate those local entities as their representatives, while imposing explicit obligations on foreign headquarters to properly manage and supervise these domestic agents. The amendment also establishes sanctions for violations of these requirements, including fines of up to KRW 20 million (approximately USD 14,000).

Fifth, South Korea is seeking to position itself as a global leader in privacy and AI governance through international cooperation and thought leadership. As South Korea prepares to host the annual Global Privacy Assembly in September 2025 – an event involving participants from 95 countries – the PIPC is positioning itself as a bridge between different regional approaches to data protection and AI governance.

6.3 However, these efforts highlight a persistent challenge of ensuring clear alignment between key regulatory authorities in South Korea’s AI governance landscape

While the MSIT was working to finalize the AI Framework Act, the PIPC, like its counterparts in many other jurisdictions globally, was already assuming a de facto regulatory role for AI applications involving personal data.

However, while the AI Framework Act assigns primary responsibility for AI governance to the MSIT, it does not appear to address or acknowledge the PIPC’s role in the regulatory landscape. This creates a potential situation where two parallel AI regulators – one de jure and the other de facto – will likely continue to operate: the MSIT overseeing general AI system safety and trustworthiness under the AI Framework Act, and the PIPC maintaining its oversight of personal data processing in AI systems under the PIPA.

As a result, organizations developing or deploying AI systems in South Korea may need to navigate compliance requirements from both authorities, particularly when their AI systems process personal data. How this dual regulatory structure evolves and whether a more unified governance approach emerges will be a critical factor in determining the success of South Korea’s ambitious AI strategy in the coming years.

Despite these practical challenges, South Korea’s approach to AI regulation offers a potential governance model for other APAC jurisdictions. The Act’s success will ultimately depend on how effectively it balances its dual objectives — fostering AI innovation while ensuring responsible deployment. As AI governance evolves globally, the South Korean experience will provide valuable insights for policymakers, regulators, and industry stakeholders worldwide.

Note: Please note that the summary of the AI Framework Act above is based on an English machine translation, which may contain inaccuracies. Additionally, the information should not be considered legal advice. For specific legal guidance, kindly consult a qualified lawyer practicing in South Korea.

The authors would like to thank Josh Lee Kok Thong, Dominic Paulger, and Vincenzo Tiani for their contributions to this post.

Little Rock, Minor Rights: Arkansas Leads with COPPA 2.0-Inspired Law

With thanks to Daniel Hales and Keir Lamont for their contributions.

Shortly before the close of its 2025 session, the Arkansas legislature passed HB 1717, the Arkansas Children and Teens’ Online Privacy Protection Act, with unanimous votes. As the name suggests, Arkansas modeled this legislation after Senator Markey’s federal “COPPA 2.0” proposal, which passed the U.S. Senate as part of a broad child online safety package last year. Presuming enactment by Governor Sarah Huckabee Sanders, HB 1717 will take effect on July 1, 2026. The Arkansas law, or “Arkansas COPPA 2.0,” establishes privacy protections for teens aged 13 to 16, introduces substantive data minimization requirements including prohibitions on targeted advertising, and provides teens with new rights to access, delete, and correct their personal information. The legislature also considered an Arkansas version of the federal Kids Online Safety Act, but that proposal ultimately failed, with the bill’s sponsor noting some uncertainties about its constitutionality.

What to know about Arkansas HB 1717: 

The substantive data minimization trend continues

While the federal COPPA framework is largely focused on consent, former Commissioner Slaughter noted in 2022 that people “may be surprised to know that COPPA provides for perhaps the strongest, though under-enforced, data minimization rule in US privacy law.” Arkansas builds on these requirements and follows the recent shift towards substantive data minimization with a complex web of layered requirements that operators must satisfy to use both child and teen data:

In practice, the interaction between these distinct requirements may raise difficult questions of statutory interpretation.

Differences from federal COPPA 2.0

As originally introduced, Arkansas’s bill was nearly identical to last year’s federal COPPA 2.0 bill. Arkansas’s framework went through various, largely business-friendly amendments (and one bill number switch) during its legislative journey. Though HB 1717 maintains the same general framework as COPPA 2.0, it includes several important divergences:

Could COPPA preempt the Arkansas law?

One question likely to emerge from Arkansas COPPA 2.0 is whether certain provisions, or the entire law, may be subject to federal preemption under the existing COPPA statute. COPPA includes an express preemption clause that prohibits state laws from imposing requirements that are inconsistent with COPPA. This is relevant in two ways as the Arkansas law will both (1) extend protections to teens and (2) introduce new substantive limitations on the use of children’s and teens’ data, such as limits on targeted advertising and strict data minimization requirements, that go beyond COPPA’s scope. 

The question of COPPA preemption was recently explored in Jones v. Google, with the FTC filing an amicus brief arguing that state laws that “supplement” or “require the same thing” as COPPA are not inconsistent. The FTC references the Congressional record from when COPPA was contemplated, arguing that “Congress viewed ‘the States as partners’. . . rather than as potential intruders on an exclusively federal arena,” and that “the state law protections at issue ‘complement–rather than obstruct–Congress’ ‘full purposes and objectives in enacting the statute.’” It is also worth keeping in mind that the FTC has been in the process of finalizing an update to the COPPA Rule, which could introduce additional inconsistencies, or at least compliance confusion, between the new final Rule and Arkansas COPPA 2.0 on key terms such as the definition of personal information or whether targeted advertising is permitted with consent.

A trend to watch?

The passage of Arkansas COPPA 2.0 may signal an emerging trend towards a potentially more constitutionally resilient approach to protecting children and teens online. Unlike age-appropriate design codes or social media age verification mandates, which have faced significant First Amendment challenges, Arkansas COPPA 2.0 takes a more targeted approach focused on privacy and data governance, rather than access, online safety, or content. Questions of preemption and drafting quirks aside, this approach may be on firmer ground by focusing on data protection practices and building on a longstanding federal privacy framework. As states explore new ways to safeguard youth online without triggering constitutional pitfalls, privacy-focused legislation modeled on COPPA standards could become a popular path forward.