Five Big Questions (and Zero Predictions) for the U.S. Privacy and AI Landscape in 2026
Introduction
For better or worse, the U.S. is heading into 2026 against a familiar backdrop: no comprehensive federal privacy law, plenty of federal rumblings, and state legislators showing no signs of slowing down. What has changed is just how intertwined privacy, youth, and AI policy debates have become, whether the issue is sensitive data, data-driven pricing, or the increasingly spirited debate over youth online safety. And with a new administration reshuffling federal priorities, the balance of power between Washington and the states may shift yet again.
In a landscape this fluid, it’s far too early to make predictions (and unwise to pretend otherwise). Instead, this post highlights five key questions that will influence how legislators and regulators navigate the evolving intersection of privacy and AI policy in the year ahead.
1. No new comprehensive privacy laws in 2025: a sign of stability, or will 2026 deepen legal fragmentation?
One of the major privacy storylines of 2025 is that no new state comprehensive privacy laws were enacted this year. Although that is a significant departure from the pace set in prior years, it is not due to an overall decrease in legislative activity on privacy and related issues. FPF’s U.S. Legislation team tracked hundreds of privacy bills, nine states amended their existing comprehensive privacy laws, and many more enacted notable sectoral laws dealing with artificial intelligence, health, and youth privacy and online safety. Nevertheless, the number of comprehensive privacy laws remains fixed for now at 19 (or 20, for those who count Florida).
Reading between the lines, there are several things this could mean for 2026. Perhaps the lack of new laws this year was more due to chance than anything else, and next year will return to business as usual. After all, Alabama, Arkansas, Georgia, Massachusetts, Oklahoma, Pennsylvania, Vermont, and West Virginia all had bills reach a floor vote or cross over to the other chamber, and some of those bills have been carried over into the 2026 legislative session. Or perhaps the number of states with comprehensive privacy laws has reached a saturation point, and we should expect stability, at least in terms of which states do and do not have comprehensive privacy laws.
A third possibility is that next year brings something different. Although the landscape has come to be dominated by the “Connecticut model” for privacy, a growing bloc of other New England states is experimenting with bolder, more restrictive frameworks. Vermont, Maine, and Massachusetts all have live bills going into the 2026 legislative session that would, if enacted, rank among the strictest state privacy laws on the books, many drawing heavily from Maryland’s substantive data minimization requirements. Vermont’s proposal would also include a private right of action, and Massachusetts’ proposals, S.2619 and H.4746, would ban the sale of sensitive data and targeted advertising to minors. State privacy law is clearly at an inflection point, and what these states do in 2026, including whether they move in lockstep, could prove influential for the broader state privacy landscape.
— Jordan Francis
2. Are age signals the future of youth online protections in 2026?
As states have ramped up youth online privacy and safety legislation in recent years, a perennial question emerges each legislative session: how can entities apply protections to minors if they don’t know who is a minor? Historically, some legislatures have tried to solve this riddle with knowledge standards that define when entities know, or should know, that a user is a minor, while others have tested age assurance requirements placed at the point of access to covered services. In 2025, however, that experimentation took a notable turn with the emergence of novel “age signals” frameworks.
Unlike earlier models that focused on service-level age assurance, age signals frameworks seek to shift age determination responsibilities upstream in the technology stack, relying on app stores or operating system providers to generate and transmit age signals to developers. In 2025, lawmakers enacted two distinct versions of this approach: the App Store Accountability Act (ASAA) model in Utah, Texas, and Louisiana; and the California AB 1043 model.
While both frameworks rely on age signaling concepts, they diverge significantly in scope and regulatory ambition. The ASAA model assigns app stores responsibility for age verification and parental consent, and requires them to send developers age signals that indicate (1) users’ ages and (2), for minors, whether parental consent has been obtained. These obligations introduce new and potentially significant technical challenges for companies, which must integrate age-signaling systems while reconciling these obligations with requirements under COPPA and state privacy laws. Meanwhile, Texas’s ASAA law is facing two First Amendment challenges in federal court, with plaintiffs seeking preliminary injunctions before the law’s January 1 effective date.
California’s AB 1043 represents a different approach. The law requires operating system (OS) providers to collect age information during device setup and share this information with developers via the app store. This law does not require parental consent or additional substantive protections for minors; its sole purpose is to enable age data sharing to support compliance with laws like the CCPA and COPPA. The AB 1043 model, while still mandating novel age signaling dynamics between operating system providers, app stores, and developers, could be simpler to implement, and it received notable support from industry stakeholders prior to enactment.
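To make the difference between the two signal flows concrete, below is a minimal, purely illustrative Python sketch of what each signal might carry and how a developer might consume it. The dataclass shapes, field names, age brackets, and helper functions are hypothetical assumptions for illustration; neither the ASAA laws nor AB 1043 prescribes a particular payload format or API.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical payloads for illustration only; the statutes do not prescribe
# a specific schema, field names, or transport mechanism.

@dataclass
class ASAASignal:
    """Sketch of an ASAA-style signal an app store might send a developer."""
    age_category: str                  # e.g., "under_13", "13_to_17", "adult" (assumed brackets)
    parental_consent: Optional[bool]   # for minors, whether consent was obtained; None for adults

@dataclass
class AB1043Signal:
    """Sketch of an AB 1043-style signal relayed from the OS via the app store."""
    age_bracket: str                   # age information collected at device setup

def handle_asaa_signal(signal: ASAASignal) -> None:
    """Illustrative developer-side handling of an incoming ASAA-style signal."""
    if signal.age_category != "adult":
        if not signal.parental_consent:
            raise PermissionError("minor without verified parental consent")
        apply_minor_defaults()  # hypothetical helper: e.g., COPPA/state-law default settings

def apply_minor_defaults() -> None:
    # Placeholder for minor-specific defaults (limiting data collection, adjusting features, etc.).
    pass
```

The key contrast is that an ASAA-style payload couples age with parental consent status and downstream obligations, while an AB 1043-style payload simply passes age information along for developers to use in complying with existing laws like COPPA and the CCPA.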
So what might one ponder, but not dare predict, about the future of age signals in 2026? Two developments bear watching. First, the highly anticipated decision on the plaintiffs’ requests for preliminary injunctions against the Texas law may set the direction for how aggressively states replicate this model, though momentum may continue regardless, particularly given federal interest reflected in the House Energy & Commerce Committee’s introduction of H.R. 3149 to nationalize the ASAA framework. Second, the California AB 1043 model, which has not yet been challenged in court, may gain traction in 2026 as a more constitutionally durable option. In states that have already established robust protections for minors in existing privacy law, the AB 1043 model may prove an attractive way to facilitate compliance with those obligations.
— Daniel Hales
3. Is 2026 shaping up to be another “Year of the Chatbots,” or is a legislative plot twist on the horizon?
If 2025 taught us anything, it’s that chatbots have stepped out of the supporting cast and into the starring role in AI policy debates. This year marked the first time multiple states (including Utah, New York, California, and Maine) enacted laws that explicitly address AI chatbots. Much of that momentum followed a wave of high-profile incidents involving “companion chatbots,” systems designed to simulate emotional relationships. Several families alleged that these tools encouraged their children to self-harm, sparking litigation, congressional testimony, and inquiries from both the Federal Trade Commission (FTC) and Congress, and carrying chatbots to the forefront of policymakers’ minds.
States responded quickly. California (SB 243) and New York (S-3008C) enacted disclosure-based laws requiring companion chatbot operators to maintain safety protocols and clearly tell users when they are interacting with AI, with California adding extra protections for minors. Importantly, neither state opted for a ban on chatbot use, focusing on transparency and notice rather than prohibition.
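As a rough illustration of what a disclosure-first approach (as opposed to a prohibition) can look like in practice, the sketch below shows an operator surfacing a “you are chatting with AI” notice at the start of a session and repeating it periodically for users flagged as minors. The session wrapper, field names, and reminder interval are hypothetical assumptions; the statutes define the disclosure obligations, not this implementation.

```python
import time
from dataclasses import dataclass, field
from typing import Optional

AI_DISCLOSURE = "Reminder: you are chatting with an AI assistant, not a human."

@dataclass
class ChatSession:
    """Hypothetical session wrapper illustrating a disclosure-first design."""
    is_minor: bool
    reminder_interval_s: float = 3600.0  # assumed cadence for illustration, not a statutory value
    _last_disclosure: Optional[float] = field(default=None, init=False)

    def outgoing(self, reply: str) -> str:
        """Attach the AI disclosure to the first reply, and periodically thereafter for minors."""
        now = time.monotonic()
        first_message = self._last_disclosure is None
        due_again = (
            self.is_minor
            and self._last_disclosure is not None
            and now - self._last_disclosure >= self.reminder_interval_s
        )
        if first_message or due_again:
            self._last_disclosure = now
            return f"{AI_DISCLOSURE}\n{reply}"
        return reply

# Example: the first reply carries the notice; for minors, later replies repeat it
# once the (assumed) interval has elapsed.
session = ChatSession(is_minor=True)
print(session.outgoing("Hi! How can I help today?"))
```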
And the story isn’t slowing down in 2026. Several states have already pre-filed chatbot bills, most centering once again on youth safety and mental health. Some may build on California’s SB 243 with stronger youth-specific requirements or tighter ties to age assurance frameworks. Other states may broaden the conversation, for example by looking at chatbot use among older adults, in education, or in employment, as well as diving deeper into questions of sensitive data.
The big question for the year ahead: Will policymakers stick with disclosure-first models, or pivot toward outright use restrictions on chatbots, especially for minors? Congress is now weighing in with three bipartisan proposals (the GUARD Act, the CHAT Act, and the SAFE Act), ranging from disclosure-forward approaches to full restrictions on minors’ access to companion chatbots. With public attention high and lawmakers increasingly interested in action, 2026 may be the year Congress steps in, potentially reshaping, or even preempting, state frameworks adopted in 2025.
— Justine Gluck
4. Will health and location data continue to dominate conversations around sensitive data in 2026?
While 2025 did not produce the hoped-for holiday gift of compliance clarity for sensitive or health data, the year did supply flurries, storms, light dustings, and drifts of legislative and enforcement activity. In 2025, states focused heavily on health inferences, neural data, and location data, often targeting the sale and sharing of this information.
For health, the proposed New York Health Information Privacy Act captured headlines and left us waiting. That bill (still active at the time of writing) broadly defines “regulated health information” to include data such as location and payment information. It includes a “strictly necessary” standard for the use of regulated health information and unique, heightened consent requirements. Health data also remains a topic of interest at the federal level. Senator Cassidy (R-LA) recently introduced the Health Information Privacy Reform Act (HIPRA / S. 3097), which would expand federal health privacy protections to cover new technologies such as smartwatches and health apps. Enforcers, too, got in on the action: the California DOJ completed a settlement concerning the disclosure of consumers’ viewing history for web pages that could create sensitive health inferences.
Location was another sensitive data category singled out by lawmakers and enforcers in 2025. In Oregon, HB 2008 amended the Oregon Consumer Privacy Act to ban the sale of precise location data (as well as the personal data of individuals under the age of 16). Colorado also amended its comprehensive privacy law to add precise location data (defined as location within 1,850 feet) to the definition of sensitive data, subjecting it to opt-in consent requirements. Other states, such as California, Illinois, Massachusetts, and Rhode Island, also introduced bills restricting the collection and use of location data, often by requiring heightened consent for companies to sell or share such data (if not banning those practices outright). As with health data, enforcers were also looking at location data practices. In Texas, we saw the first lawsuit brought under a state comprehensive privacy law, and it focused on the collection and use of location data (namely, inadequate notice and failure to obtain consent). The FTC was likewise examining location data practices throughout the year.
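For teams mapping which location data now counts as sensitive, a simple classification check can make thresholds like Colorado’s concrete. The sketch below flags a record as precise location data when its accuracy radius falls within 1,850 feet and gates processing on opt-in consent; the record shape, field names, and consent gate are hypothetical, and other states define and regulate location data differently.

```python
from dataclasses import dataclass

PRECISE_LOCATION_RADIUS_FEET = 1850  # threshold from the Colorado amendment described above

@dataclass
class LocationRecord:
    """Hypothetical record shape for a data inventory; field names are assumptions."""
    latitude: float
    longitude: float
    accuracy_radius_feet: float
    consumer_opted_in: bool = False

def is_precise_location(record: LocationRecord) -> bool:
    """True when the record locates a consumer within the 1,850-foot radius."""
    return record.accuracy_radius_feet <= PRECISE_LOCATION_RADIUS_FEET

def may_process(record: LocationRecord) -> bool:
    """Illustrative gate: precise location data requires opt-in consent."""
    return (not is_precise_location(record)) or record.consumer_opted_in

# Example: a GPS fix accurate to ~30 feet is "precise" and needs opt-in consent.
fix = LocationRecord(latitude=39.74, longitude=-104.99, accuracy_radius_feet=30)
print(is_precise_location(fix), may_process(fix))  # True False
```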
Sensitive data—health, location, or otherwise—is unlikely to get less complex in 2026. New laws are being enacted and enforcement activity is heating up. The regulatory climate is shifting—freezing out old certainties and piling fresh drifts onto high-risk categories like health inferences, location data, and neural data. In light of drifting definitions, fractal requirements, technologist-driven investigations, and slippery contours, robust data governance may offer a way to glissade through the changing landscape. Accurately mapping data flows and keeping documentation ready seem like essential equipment for unfavorable regulatory weather.
— Jordan Wrigley, Beth Do & Jordan Francis
5. Will a federal moratorium steer the AI policy conversation in 2026?
If there was one recurring plot point in 2025, it was the interest at the White House and among some congressional leaders in hitting the pause button on state AI regulation. The year opened with lawmakers attempting to tuck a 10-year moratorium on state AI laws into the “One Big Beautiful Bill,” a move that would have frozen enforcement of a wide swath of state frameworks. That effort fizzled due to pushback from a range of Republican and Democratic leaders, but the idea didn’t: similar language resurfaced during negotiations over the annual defense spending bill (NDAA). Ultimately, in December, President Trump signed an executive order, “Ensuring a National Policy Framework for Artificial Intelligence,” aimed at curbing state AI regulations deemed excessive through an AI Litigation Task Force and restrictions on funding for states that enforce AI laws conflicting with the principles outlined in the EO. The EO tees up a moment where states, agencies, and industry may soon be navigating not just compliance with new laws, but also federal challenges to how those laws operate (as well as federal challenges to the EO itself).
A core challenge of the EO is the question of what, exactly, qualifies as an “AI law.” While standalone statutes such as Colorado’s AI Act (SB 205) are explicit targets of the EO’s efforts, many state measures are not written as AI-specific laws at all; instead, they are embedded within broader privacy, safety, or consumer protection frameworks. Depending on how “AI law” is construed, a wide range of existing state requirements could fall within scope and potentially face challenge, including: AI-related updates to existing civil rights or anti-discrimination statutes; privacy law provisions governing automated decisionmaking, profiling, and the use of personal data for AI training; and criminal statutes addressing deepfakes and non-consensual intimate images.
Notably, however, the EO also identifies specific areas where future federal action would not preempt state laws, including child safety protections, AI compute and data-center infrastructure, state government procurement and use of AI, and (more open-endedly) “other topics as shall be determined.” That last carveout leaves plenty of room for interpretation and makes clear that the ultimate boundaries of federal preemption are still very much in flux. In practice, what ends up in or out of scope will hinge on how the EO’s text is interpreted and implemented. Technologies like chatbots highlight this ambiguity, as they can simultaneously trigger child safety regimes and AI governance requirements that the administration may seek to constrain.
That breadth raises another big question for 2026: As the federal government steps in to limit state AI activity, will a substantive federal framework emerge in its place? Federal action on AI has been limited so far, which means a pause on state laws could arrive without a national baseline to fill the gaps, a notable departure from traditional preemption, where federal standards typically replace state ones outright. At the same time, Section 8(a) of the EO signals the Administration’s commitment to work with Congress to develop a federal legislative framework, while the growing divergence in state approaches has created a compliance patchwork that organizations operating nationwide must navigate.
With this EO, the role of state versus federal law in technology policy is likely to be the defining issue of 2026, with the potential to reshape not only state AI laws but the broader architecture of U.S. privacy regulation.
— Tatiana Rice & Justine Gluck